* [PATCH v6 1/7] mm/sparse-vmemmap: Fix vmemmap accounting underflow
From: Muchun Song @ 2026-04-24 2:55 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Muchun Song, Oscar Salvador,
Michael Ellerman, Madhavan Srinivasan
Cc: Lorenzo Stoakes, Liam R . Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Nicholas Piggin,
Christophe Leroy, aneesh.kumar, joao.m.martins, linux-mm,
linuxppc-dev, linux-kernel, Muchun Song, stable
In section_activate(), if populate_section_memmap() fails, the error
handling path calls section_deactivate() to roll back the state. This
causes a vmemmap accounting imbalance.
Since commit c3576889d87b ("mm: fix accounting of memmap pages"),
memmap pages are accounted for only after populate_section_memmap()
succeeds. However, the failure path unconditionally calls
section_deactivate(), which decreases the vmemmap count. Consequently,
a failure in populate_section_memmap() leads to an accounting underflow,
incorrectly reducing the system's tracked vmemmap usage.
Fix this more thoroughly by moving all accounting calls into the
lower-level functions that actually perform the vmemmap allocation and
freeing:
- populate_section_memmap() accounts for newly allocated vmemmap pages
- depopulate_section_memmap() drops the accounting when vmemmap is freed
- free_map_bootmem() drops the boot memmap accounting when it is freed
This ensures proper accounting in all code paths, including the error
handling and early section cases.
Fixes: c3576889d87b ("mm: fix accounting of memmap pages")
Cc: stable@vger.kernel.org
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Acked-by: Oscar Salvador <osalvador@suse.de>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
---
mm/sparse-vmemmap.c | 20 ++++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 6eadb9d116e4..a7b11248b989 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -656,7 +656,12 @@ static struct page * __meminit populate_section_memmap(unsigned long pfn,
unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
struct dev_pagemap *pgmap)
{
- return __populate_section_memmap(pfn, nr_pages, nid, altmap, pgmap);
+ struct page *page = __populate_section_memmap(pfn, nr_pages, nid, altmap,
+ pgmap);
+
+ memmap_pages_add(DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE));
+
+ return page;
}
static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
@@ -665,13 +670,17 @@ static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
unsigned long start = (unsigned long) pfn_to_page(pfn);
unsigned long end = start + nr_pages * sizeof(struct page);
+ memmap_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE)));
vmemmap_free(start, end, altmap);
}
+
static void free_map_bootmem(struct page *memmap)
{
unsigned long start = (unsigned long)memmap;
unsigned long end = (unsigned long)(memmap + PAGES_PER_SECTION);
+ memmap_boot_pages_add(-1L * (DIV_ROUND_UP(PAGES_PER_SECTION * sizeof(struct page),
+ PAGE_SIZE)));
vmemmap_free(start, end, NULL);
}
@@ -774,14 +783,10 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
* The memmap of early sections is always fully populated. See
* section_activate() and pfn_valid() .
*/
- if (!section_is_early) {
- memmap_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE)));
+ if (!section_is_early)
depopulate_section_memmap(pfn, nr_pages, altmap);
- } else if (memmap) {
- memmap_boot_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page),
- PAGE_SIZE)));
+ else if (memmap)
free_map_bootmem(memmap);
- }
if (empty)
ms->section_mem_map = (unsigned long)NULL;
@@ -826,7 +831,6 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn,
section_deactivate(pfn, nr_pages, altmap);
return ERR_PTR(-ENOMEM);
}
- memmap_pages_add(DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE));
return memmap;
}
--
2.20.1
^ permalink raw reply related [flat|nested] 13+ messages in thread
* [PATCH v6 2/7] mm/memory_hotplug: Fix incorrect altmap passing in error path
From: Muchun Song @ 2026-04-24 2:55 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Muchun Song, Oscar Salvador,
Michael Ellerman, Madhavan Srinivasan
Cc: Lorenzo Stoakes, Liam R . Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Nicholas Piggin,
Christophe Leroy, aneesh.kumar, joao.m.martins, linux-mm,
linuxppc-dev, linux-kernel, Muchun Song, stable
In create_altmaps_and_memory_blocks(), when arch_add_memory() succeeds
with memmap_on_memory enabled, the vmemmap pages are allocated from
params.altmap. If create_memory_block_devices() subsequently fails, the
error path calls arch_remove_memory() with a NULL altmap instead of
params.altmap.
This is a bug that could lead to memory corruption. Since altmap is
NULL, vmemmap_free() falls back to freeing the vmemmap pages into the
system buddy allocator via free_pages() instead of the altmap.
arch_remove_memory() then immediately destroys the physical linear
mapping for this memory. This injects unowned pages into the buddy
allocator, causing machine checks or memory corruption if the system
later attempts to allocate and use those freed pages.
Fix this by passing params.altmap to arch_remove_memory() in the error
path.
Fixes: 6b8f0798b85a ("mm/memory_hotplug: split memmap_on_memory requests across memblocks")
Cc: stable@vger.kernel.org
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
---
mm/memory_hotplug.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 2a943ec57c85..0bad2aed2bde 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1468,7 +1468,7 @@ static int create_altmaps_and_memory_blocks(int nid, struct memory_group *group,
ret = create_memory_block_devices(cur_start, memblock_size, nid,
params.altmap, group);
if (ret) {
- arch_remove_memory(cur_start, memblock_size, NULL);
+ arch_remove_memory(cur_start, memblock_size, params.altmap);
kfree(params.altmap);
goto out;
}
--
2.20.1
* [PATCH v6 4/7] mm/sparse-vmemmap: Fix DAX vmemmap accounting with optimization
From: Muchun Song @ 2026-04-24 2:55 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Muchun Song, Oscar Salvador,
Michael Ellerman, Madhavan Srinivasan
Cc: Lorenzo Stoakes, Liam R . Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Nicholas Piggin,
Christophe Leroy, aneesh.kumar, joao.m.martins, linux-mm,
linuxppc-dev, linux-kernel, Muchun Song, stable
When vmemmap optimization is enabled for DAX, the nr_memmap_pages
counter in /proc/vmstat is incorrect. The current code always accounts
for the full, non-optimized vmemmap size, but vmemmap optimization
reduces the actual number of vmemmap pages by reusing tail pages. This
causes the system to overcount vmemmap usage, leading to inaccurate
page statistics in /proc/vmstat.
Fix this by introducing section_nr_vmemmap_pages(), which returns the
exact vmemmap page count for a given pfn range based on whether
optimization is in effect.
Fixes: 15995a352474 ("mm: report per-page metadata information")
Cc: stable@vger.kernel.org
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Acked-by: Oscar Salvador <osalvador@suse.de>
---
mm/sparse-vmemmap.c | 31 +++++++++++++++++++++++++++----
1 file changed, 27 insertions(+), 4 deletions(-)
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 3340f6d30b01..2e642c5ff3f2 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -652,6 +652,28 @@ void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
}
}
+static int __meminit section_nr_vmemmap_pages(unsigned long pfn, unsigned long nr_pages,
+ struct vmem_altmap *altmap, struct dev_pagemap *pgmap)
+{
+ const unsigned int order = pgmap ? pgmap->vmemmap_shift : 0;
+ const unsigned long pages_per_compound = 1UL << order;
+
+ VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages,
+ min(pages_per_compound, PAGES_PER_SECTION)));
+ VM_WARN_ON_ONCE(pfn_to_section_nr(pfn) != pfn_to_section_nr(pfn + nr_pages - 1));
+
+ if (!vmemmap_can_optimize(altmap, pgmap))
+ return DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE);
+
+ if (order < PFN_SECTION_SHIFT)
+ return VMEMMAP_RESERVE_NR * nr_pages / pages_per_compound;
+
+ if (IS_ALIGNED(pfn, pages_per_compound))
+ return VMEMMAP_RESERVE_NR;
+
+ return 0;
+}
+
static struct page * __meminit populate_section_memmap(unsigned long pfn,
unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
struct dev_pagemap *pgmap)
@@ -659,7 +681,7 @@ static struct page * __meminit populate_section_memmap(unsigned long pfn,
struct page *page = __populate_section_memmap(pfn, nr_pages, nid, altmap,
pgmap);
- memmap_pages_add(DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE));
+ memmap_pages_add(section_nr_vmemmap_pages(pfn, nr_pages, altmap, pgmap));
return page;
}
@@ -670,7 +692,7 @@ static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
unsigned long start = (unsigned long) pfn_to_page(pfn);
unsigned long end = start + nr_pages * sizeof(struct page);
- memmap_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE)));
+ memmap_pages_add(-section_nr_vmemmap_pages(pfn, nr_pages, altmap, pgmap));
vmemmap_free(start, end, altmap);
}
@@ -678,9 +700,10 @@ static void free_map_bootmem(struct page *memmap)
{
unsigned long start = (unsigned long)memmap;
unsigned long end = (unsigned long)(memmap + PAGES_PER_SECTION);
+ unsigned long pfn = page_to_pfn(memmap);
- memmap_boot_pages_add(-1L * (DIV_ROUND_UP(PAGES_PER_SECTION * sizeof(struct page),
- PAGE_SIZE)));
+ memmap_boot_pages_add(-section_nr_vmemmap_pages(pfn, PAGES_PER_SECTION,
+ NULL, NULL));
vmemmap_free(start, end, NULL);
}
--
2.20.1
* [PATCH v6 5/7] mm/mm_init: Fix pageblock migratetype for ZONE_DEVICE compound pages
From: Muchun Song @ 2026-04-24 2:55 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Muchun Song, Oscar Salvador,
Michael Ellerman, Madhavan Srinivasan
Cc: Lorenzo Stoakes, Liam R . Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Nicholas Piggin,
Christophe Leroy, aneesh.kumar, joao.m.martins, linux-mm,
linuxppc-dev, linux-kernel, Muchun Song, stable
The memmap_init_zone_device() function only initializes the migratetype
of the first pageblock of a compound page. If the compound page size
exceeds pageblock_nr_pages (e.g., 1GB hugepages with 2MB pageblocks),
subsequent pageblocks in the compound page remain uninitialized.
Move the migratetype initialization out of __init_zone_device_page()
and into a separate pageblock_migratetype_init_range() function, which
iterates over the entire PFN range of the memory and ensures that all
pageblocks are correctly initialized.
Also remove the stale confusing comment about MEMINIT_HOTPLUG above
the migratetype setting since it is an obsolete relic from commit
966cf44f637e ("mm: defer ZONE_DEVICE page initialization to the point
where we init pgmap") and no longer makes sense here.
Fixes: c4386bd8ee3a ("mm/memremap: add ZONE_DEVICE support for compound pages")
Cc: stable@vger.kernel.org
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
---
mm/mm_init.c | 34 +++++++++++++++++++---------------
1 file changed, 19 insertions(+), 15 deletions(-)
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f9f8e1af921c..cfc76953e249 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -674,6 +674,20 @@ static inline void fixup_hashdist(void)
static inline void fixup_hashdist(void) {}
#endif /* CONFIG_NUMA */
+#ifdef CONFIG_ZONE_DEVICE
+static __meminit void pageblock_migratetype_init_range(unsigned long pfn,
+ unsigned long nr_pages, int migratetype)
+{
+ const unsigned long end = pfn + nr_pages;
+
+ for (pfn = pageblock_align(pfn); pfn < end; pfn += pageblock_nr_pages) {
+ init_pageblock_migratetype(pfn_to_page(pfn), migratetype, false);
+ if (IS_ALIGNED(pfn, PAGES_PER_SECTION))
+ cond_resched();
+ }
+}
+#endif
+
/*
* Initialize a reserved page unconditionally, finding its zone first.
*/
@@ -1011,21 +1025,6 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
page_folio(page)->pgmap = pgmap;
page->zone_device_data = NULL;
- /*
- * Mark the block movable so that blocks are reserved for
- * movable at startup. This will force kernel allocations
- * to reserve their blocks rather than leaking throughout
- * the address space during boot when many long-lived
- * kernel allocations are made.
- *
- * Please note that MEMINIT_HOTPLUG path doesn't clear memmap
- * because this is done early in section_activate()
- */
- if (pageblock_aligned(pfn)) {
- init_pageblock_migratetype(page, MIGRATE_MOVABLE, false);
- cond_resched();
- }
-
/*
* ZONE_DEVICE pages other than MEMORY_TYPE_GENERIC are released
* directly to the driver page allocator which will set the page count
@@ -1122,6 +1121,9 @@ void __ref memmap_init_zone_device(struct zone *zone,
__init_zone_device_page(page, pfn, zone_idx, nid, pgmap);
+ if (IS_ALIGNED(pfn, PAGES_PER_SECTION))
+ cond_resched();
+
if (pfns_per_compound == 1)
continue;
@@ -1129,6 +1131,8 @@ void __ref memmap_init_zone_device(struct zone *zone,
compound_nr_pages(altmap, pgmap));
}
+ pageblock_migratetype_init_range(start_pfn, nr_pages, MIGRATE_MOVABLE);
+
pr_debug("%s initialised %lu pages in %ums\n", __func__,
nr_pages, jiffies_to_msecs(jiffies - start));
}
--
2.20.1
* [PATCH v6 6/7] mm/mm_init: Fix uninitialized struct pages for ZONE_DEVICE
From: Muchun Song @ 2026-04-24 2:55 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Muchun Song, Oscar Salvador,
Michael Ellerman, Madhavan Srinivasan
Cc: Lorenzo Stoakes, Liam R . Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Nicholas Piggin,
Christophe Leroy, aneesh.kumar, joao.m.martins, linux-mm,
linuxppc-dev, linux-kernel, Muchun Song, stable
If DAX memory is hotplugged into an unoccupied subsection of an early
section, section_activate() reuses the unoptimized boot memmap.
However, compound_nr_pages() still assumes that vmemmap optimization is
in effect and initializes only the reduced number of struct pages. As a
result, the remaining tail struct pages are left uninitialized, which
can later lead to unexpected behavior or crashes.
Fix this by treating early sections as unoptimized when calculating how
many struct pages to initialize.
Fixes: 6fd3620b3428 ("mm/page_alloc: reuse tail struct pages for compound devmaps")
Cc: stable@vger.kernel.org
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
---
mm/mm_init.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/mm/mm_init.c b/mm/mm_init.c
index cfc76953e249..bd466a3c10c8 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1055,10 +1055,17 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
* of how the sparse_vmemmap internals handle compound pages in the lack
* of an altmap. See vmemmap_populate_compound_pages().
*/
-static inline unsigned long compound_nr_pages(struct vmem_altmap *altmap,
+static inline unsigned long compound_nr_pages(unsigned long pfn,
+ struct vmem_altmap *altmap,
struct dev_pagemap *pgmap)
{
- if (!vmemmap_can_optimize(altmap, pgmap))
+ /*
+ * If DAX memory is hot-plugged into an unoccupied subsection
+ * of an early section, the unoptimized boot memmap is reused.
+ * See section_activate().
+ */
+ if (early_section(__pfn_to_section(pfn)) ||
+ !vmemmap_can_optimize(altmap, pgmap))
return pgmap_vmemmap_nr(pgmap);
return VMEMMAP_RESERVE_NR * (PAGE_SIZE / sizeof(struct page));
@@ -1128,7 +1135,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
continue;
memmap_init_compound(page, pfn, zone_idx, nid, pgmap,
- compound_nr_pages(altmap, pgmap));
+ compound_nr_pages(pfn, altmap, pgmap));
}
pageblock_migratetype_init_range(start_pfn, nr_pages, MIGRATE_MOVABLE);
--
2.20.1
* Re: [PATCH v6 4/7] mm/sparse-vmemmap: Fix DAX vmemmap accounting with optimization
From: David Hildenbrand (Arm) @ 2026-04-24 7:33 UTC (permalink / raw)
To: Muchun Song, Andrew Morton, Muchun Song, Oscar Salvador,
Michael Ellerman, Madhavan Srinivasan
Cc: Lorenzo Stoakes, Liam R . Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Nicholas Piggin,
Christophe Leroy, aneesh.kumar, joao.m.martins, linux-mm,
linuxppc-dev, linux-kernel, stable
On 4/24/26 04:55, Muchun Song wrote:
> When vmemmap optimization is enabled for DAX, the nr_memmap_pages
> counter in /proc/vmstat is incorrect. The current code always accounts
> for the full, non-optimized vmemmap size, but vmemmap optimization
> reduces the actual number of vmemmap pages by reusing tail pages. This
> causes the system to overcount vmemmap usage, leading to inaccurate
> page statistics in /proc/vmstat.
>
> Fix this by introducing section_nr_vmemmap_pages(), which returns the
> exact vmemmap page count for a given pfn range based on whether
> optimization is in effect.
>
> Fixes: 15995a352474 ("mm: report per-page metadata information")
> Cc: stable@vger.kernel.org
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> Acked-by: Oscar Salvador <osalvador@suse.de>
> ---
> mm/sparse-vmemmap.c | 31 +++++++++++++++++++++++++++----
> 1 file changed, 27 insertions(+), 4 deletions(-)
>
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 3340f6d30b01..2e642c5ff3f2 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -652,6 +652,28 @@ void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
> }
> }
>
> +static int __meminit section_nr_vmemmap_pages(unsigned long pfn, unsigned long nr_pages,
> + struct vmem_altmap *altmap, struct dev_pagemap *pgmap)
> +{
> + const unsigned int order = pgmap ? pgmap->vmemmap_shift : 0;
> + const unsigned long pages_per_compound = 1UL << order;
> +
> + VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages,
> + min(pages_per_compound, PAGES_PER_SECTION)));
FWIW, I thought the right thing to do here would be:
VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, pages_per_compound));
VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SUBSECTION));
I don't really see how PAGES_PER_SECTION makes sense given that
PAGES_PER_SUBSECTION is the smallest granularity we allow adding/removing.
Also, the "min()" implies that there is a connection between both properties,
but there isn't to that degree.
If order == 0, then you'd only ever check alignment for ... 1, not
PAGES_PER_SUBSECTION, which already looks weird.
So you really want to check "max(pages_per_compound, PAGES_PER_SUBSECTION)", but
just having two statements is clearer.
Or am I getting something very wrong here? :)
--
Cheers,
David
* Re: [PATCH v6 4/7] mm/sparse-vmemmap: Fix DAX vmemmap accounting with optimization
From: Muchun Song @ 2026-04-24 7:48 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Muchun Song, Andrew Morton, Oscar Salvador, Michael Ellerman,
Madhavan Srinivasan, Lorenzo Stoakes, Liam R . Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Nicholas Piggin, Christophe Leroy, aneesh.kumar, joao.m.martins,
linux-mm, linuxppc-dev, linux-kernel, stable
> On Apr 24, 2026, at 15:33, David Hildenbrand (Arm) <david@kernel.org> wrote:
>
> On 4/24/26 04:55, Muchun Song wrote:
>> When vmemmap optimization is enabled for DAX, the nr_memmap_pages
>> counter in /proc/vmstat is incorrect. The current code always accounts
>> for the full, non-optimized vmemmap size, but vmemmap optimization
>> reduces the actual number of vmemmap pages by reusing tail pages. This
>> causes the system to overcount vmemmap usage, leading to inaccurate
>> page statistics in /proc/vmstat.
>>
>> Fix this by introducing section_nr_vmemmap_pages(), which returns the
>> exact vmemmap page count for a given pfn range based on whether
>> optimization is in effect.
>>
>> Fixes: 15995a352474 ("mm: report per-page metadata information")
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>> Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
>> Acked-by: Oscar Salvador <osalvador@suse.de>
>> ---
>> mm/sparse-vmemmap.c | 31 +++++++++++++++++++++++++++----
>> 1 file changed, 27 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
>> index 3340f6d30b01..2e642c5ff3f2 100644
>> --- a/mm/sparse-vmemmap.c
>> +++ b/mm/sparse-vmemmap.c
>> @@ -652,6 +652,28 @@ void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
>> }
>> }
>>
>> +static int __meminit section_nr_vmemmap_pages(unsigned long pfn, unsigned long nr_pages,
>> + struct vmem_altmap *altmap, struct dev_pagemap *pgmap)
>> +{
>> + const unsigned int order = pgmap ? pgmap->vmemmap_shift : 0;
>> + const unsigned long pages_per_compound = 1UL << order;
>> +
>> + VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages,
>> + min(pages_per_compound, PAGES_PER_SECTION)));
>
> FWIW, I thought the right thing to do here would be:
>
> VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, pages_per_compound));
> VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SUBSECTION));
>
> I don't really see how PAGES_PER_SECTION makes sense given that
> PAGES_PER_SUBSECTION is the smallest granularity we allow adding/removing.
>
> Also, the "min()" implies that there is a connection between both properties,
> but there isn't to that degree.
>
> If order == 0, then you'd only ever check alignment for ... 1, not
> PAGES_PER_SUBSECTION, which already looks weird.
>
> So you really want to check "max(pages_per_compound, PAGES_PER_SUBSECTION)", but
> just having two statements is clearer.
>
> Or am I getting something very wrong here? :)
>
You are absolutely right. I misread it earlier. I mistakenly read
PAGES_PER_SUBSECTION as PAGES_PER_SECTION, which is why I still used
PAGES_PER_SECTION in v5. That was my mistake and obviously not what
you originally meant.
I completely agree with your suggestion to use two statements here,
as it makes the alignment requirements much clearer. I'll fix this in
the next version. Thanks for pointing this out!
Thanks.

Muchun
>
> --
> Cheers,
>
> David
* Re: [PATCH v6 6/7] mm/mm_init: Fix uninitialized struct pages for ZONE_DEVICE
From: Mike Rapoport @ 2026-04-24 8:20 UTC (permalink / raw)
To: Muchun Song
Cc: Andrew Morton, David Hildenbrand, Muchun Song, Oscar Salvador,
Michael Ellerman, Madhavan Srinivasan, Lorenzo Stoakes,
Liam R . Howlett, Vlastimil Babka, Suren Baghdasaryan,
Michal Hocko, Nicholas Piggin, Christophe Leroy, aneesh.kumar,
joao.m.martins, linux-mm, linuxppc-dev, linux-kernel, stable
On Fri, Apr 24, 2026 at 10:55:46AM +0800, Muchun Song wrote:
> If DAX memory is hotplugged into an unoccupied subsection of an early
> section, section_activate() reuses the unoptimized boot memmap.
> However, compound_nr_pages() still assumes that vmemmap optimization is
> in effect and initializes only the reduced number of struct pages. As a
> result, the remaining tail struct pages are left uninitialized, which
> can later lead to unexpected behavior or crashes.
>
> Fix this by treating early sections as unoptimized when calculating how
> many struct pages to initialize.
>
> Fixes: 6fd3620b3428 ("mm/page_alloc: reuse tail struct pages for compound devmaps")
> Cc: stable@vger.kernel.org
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> ---
> mm/mm_init.c | 13 ++++++++++---
> 1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index cfc76953e249..bd466a3c10c8 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -1055,10 +1055,17 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
> * of how the sparse_vmemmap internals handle compound pages in the lack
> * of an altmap. See vmemmap_populate_compound_pages().
> */
> -static inline unsigned long compound_nr_pages(struct vmem_altmap *altmap,
> +static inline unsigned long compound_nr_pages(unsigned long pfn,
> + struct vmem_altmap *altmap,
> struct dev_pagemap *pgmap)
> {
> - if (!vmemmap_can_optimize(altmap, pgmap))
> + /*
> + * If DAX memory is hot-plugged into an unoccupied subsection
> + * of an early section, the unoptimized boot memmap is reused.
> + * See section_activate().
> + */
> + if (early_section(__pfn_to_section(pfn)) ||
> + !vmemmap_can_optimize(altmap, pgmap))
> return pgmap_vmemmap_nr(pgmap);
>
> return VMEMMAP_RESERVE_NR * (PAGE_SIZE / sizeof(struct page));
> @@ -1128,7 +1135,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
> continue;
>
> memmap_init_compound(page, pfn, zone_idx, nid, pgmap,
> - compound_nr_pages(altmap, pgmap));
> + compound_nr_pages(pfn, altmap, pgmap));
> }
>
> pageblock_migratetype_init_range(start_pfn, nr_pages, MIGRATE_MOVABLE);
> --
> 2.20.1
>
--
Sincerely yours,
Mike.
* Re: [PATCH v6 4/7] mm/sparse-vmemmap: Fix DAX vmemmap accounting with optimization
From: Muchun Song @ 2026-04-25 3:05 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Muchun Song, Andrew Morton, Oscar Salvador, Michael Ellerman,
Madhavan Srinivasan, Lorenzo Stoakes, Liam R . Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Nicholas Piggin, Christophe Leroy, aneesh.kumar, joao.m.martins,
linux-mm, linuxppc-dev, linux-kernel, stable
> On Apr 24, 2026, at 15:33, David Hildenbrand (Arm) <david@kernel.org> wrote:
>
> On 4/24/26 04:55, Muchun Song wrote:
>> When vmemmap optimization is enabled for DAX, the nr_memmap_pages
>> counter in /proc/vmstat is incorrect. The current code always accounts
>> for the full, non-optimized vmemmap size, but vmemmap optimization
>> reduces the actual number of vmemmap pages by reusing tail pages. This
>> causes the system to overcount vmemmap usage, leading to inaccurate
>> page statistics in /proc/vmstat.
>>
>> Fix this by introducing section_nr_vmemmap_pages(), which returns the
>> exact vmemmap page count for a given pfn range based on whether
>> optimization is in effect.
>>
>> Fixes: 15995a352474 ("mm: report per-page metadata information")
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>> Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
>> Acked-by: Oscar Salvador <osalvador@suse.de>
>> ---
>> mm/sparse-vmemmap.c | 31 +++++++++++++++++++++++++++----
>> 1 file changed, 27 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
>> index 3340f6d30b01..2e642c5ff3f2 100644
>> --- a/mm/sparse-vmemmap.c
>> +++ b/mm/sparse-vmemmap.c
>> @@ -652,6 +652,28 @@ void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
>> }
>> }
>>
>> +static int __meminit section_nr_vmemmap_pages(unsigned long pfn, unsigned long nr_pages,
>> + struct vmem_altmap *altmap, struct dev_pagemap *pgmap)
>> +{
>> + const unsigned int order = pgmap ? pgmap->vmemmap_shift : 0;
>> + const unsigned long pages_per_compound = 1UL << order;
>> +
>> + VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages,
>> + min(pages_per_compound, PAGES_PER_SECTION)));
>
> FWIW, I thought the right thing to do here would be:
>
> VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, pages_per_compound));
> VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SUBSECTION));
>
> I don't really see how PAGES_PER_SECTION makes sense given that
> PAGES_PER_SUBSECTION is the smallest granularity we allow adding/removing.
>
> Also, the "min()" implies that there is a connection between both properties,
> but there isn't to that degree.
>
> If order == 0, then you'd only ever check alignment for ... 1, not
> PAGES_PER_SUBSECTION, which already looks weird.
>
> So you really want to check "max(pages_per_compound, PAGES_PER_SUBSECTION)", but
> just having two statements is clearer.
>
> Or am I getting something very wrong here? :)
Hi David,
Sorry, I missed the 1GB hugepage scenario earlier. Since sparse_add_section()
operates at granularities between PAGES_PER_SUBSECTION and PAGES_PER_SECTION,
the pfn and nr_pages parameters wouldn't be aligned to the hugepage size
(pages_per_compound), but rather to the PAGES_PER_SECTION boundary. Does that
make the requirement clearer? In the interest of code clarity, do you think
the modification below is easier to follow?
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 2e642c5ff3f2..ce675c5fb94d 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -658,15 +658,18 @@ static int __meminit section_nr_vmemmap_pages(unsigned long pfn, unsigned long n
 	const unsigned int order = pgmap ? pgmap->vmemmap_shift : 0;
 	const unsigned long pages_per_compound = 1UL << order;
 
-	VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages,
-			min(pages_per_compound, PAGES_PER_SECTION)));
+	VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SUBSECTION));
 	VM_WARN_ON_ONCE(pfn_to_section_nr(pfn) != pfn_to_section_nr(pfn + nr_pages - 1));
 
 	if (!vmemmap_can_optimize(altmap, pgmap))
 		return DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE);
 
-	if (order < PFN_SECTION_SHIFT)
+	if (order < PFN_SECTION_SHIFT) {
+		VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, pages_per_compound));
 		return VMEMMAP_RESERVE_NR * nr_pages / pages_per_compound;
+	}
+
+	VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SECTION));
 	if (IS_ALIGNED(pfn, pages_per_compound))
 		return VMEMMAP_RESERVE_NR;
Thanks.
>
>
> --
> Cheers,
>
> David
^ permalink raw reply related [flat|nested] 13+ messages in thread
* Re: [PATCH v6 4/7] mm/sparse-vmemmap: Fix DAX vmemmap accounting with optimization
2026-04-25 3:05 ` Muchun Song
@ 2026-04-25 5:48 ` David Hildenbrand (Arm)
2026-04-25 6:20 ` Muchun Song
0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-25 5:48 UTC (permalink / raw)
To: Muchun Song
Cc: Muchun Song, Andrew Morton, Oscar Salvador, Michael Ellerman,
Madhavan Srinivasan, Lorenzo Stoakes, Liam R . Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Nicholas Piggin, Christophe Leroy, aneesh.kumar, joao.m.martins,
linux-mm, linuxppc-dev, linux-kernel, stable
>
> Hi David,
>
> Sorry, I missed the 1GB hugepage scenario earlier. Given that sparse_add_section()
> operates on a scale between PAGES_PER_SUBSECTION and PAGES_PER_SECTION, the pfn and
> nr_pages parameters wouldn't be aligned with the hugepage size (pages_per_compound),
> but rather with the PAGES_PER_SECTION boundary. Do you think this explanation makes
> it clearer? In the interest of code clarity, do you think the modification below
> makes it easier to follow?
>
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 2e642c5ff3f2..ce675c5fb94d 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -658,15 +658,18 @@ static int __meminit section_nr_vmemmap_pages(unsigned long pfn, unsigned long n
>  	const unsigned int order = pgmap ? pgmap->vmemmap_shift : 0;
>  	const unsigned long pages_per_compound = 1UL << order;
>  
> -	VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages,
> -			min(pages_per_compound, PAGES_PER_SECTION)));
> +	VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SUBSECTION));
That makes sense here. We can only add/remove in multiples of PAGES_PER_SUBSECTION.
I think what we are saying is that we want that check in addition to the
existing min() check.
>  	VM_WARN_ON_ONCE(pfn_to_section_nr(pfn) != pfn_to_section_nr(pfn + nr_pages - 1));
>  
>  	if (!vmemmap_can_optimize(altmap, pgmap))
>  		return DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE);
>  
> -	if (order < PFN_SECTION_SHIFT)
> +	if (order < PFN_SECTION_SHIFT) {
> +		VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, pages_per_compound));
>  		return VMEMMAP_RESERVE_NR * nr_pages / pages_per_compound;
That makes sense as well, within a section, we expect that we always add/remove
entire "compound"-managed chunks.
> +	}
> +
> +	VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SECTION));
And this is then for the case where a 1G page spans multiple sections, where we
expect to add/remove an entire section.
So here, indeed the "min" makes sense. I guess we also assume:
VM_WARN_ON_ONCE(nr_pages > PAGES_PER_SECTION);
Looks better to me!
--
Cheers,
David
* Re: [PATCH v6 4/7] mm/sparse-vmemmap: Fix DAX vmemmap accounting with optimization
2026-04-25 5:48 ` David Hildenbrand (Arm)
@ 2026-04-25 6:20 ` Muchun Song
2026-04-25 6:47 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 13+ messages in thread
From: Muchun Song @ 2026-04-25 6:20 UTC (permalink / raw)
To: David Hildenbrand
Cc: Muchun Song, Andrew Morton, Oscar Salvador, Michael Ellerman,
Madhavan Srinivasan, Lorenzo Stoakes, Liam R Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Nicholas Piggin, Christophe Leroy, aneesh.kumar, joao.m.martins,
linux-mm, linuxppc-dev, linux-kernel, stable
> On Apr 25, 2026, at 13:48, David Hildenbrand (Arm) <david@kernel.org> wrote:
>
>
>>
>>
>> Hi David,
>>
>> Sorry, I missed the 1GB hugepage scenario earlier. Given that sparse_add_section()
>> operates on a scale between PAGES_PER_SUBSECTION and PAGES_PER_SECTION, the pfn and
>> nr_pages parameters wouldn't be aligned with the hugepage size (pages_per_compound),
>> but rather with the PAGES_PER_SECTION boundary. Do you think this explanation makes
>> it clearer? In the interest of code clarity, do you think the modification below
>> makes it easier to follow?
>>
>> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
>> index 2e642c5ff3f2..ce675c5fb94d 100644
>> --- a/mm/sparse-vmemmap.c
>> +++ b/mm/sparse-vmemmap.c
>> @@ -658,15 +658,18 @@ static int __meminit section_nr_vmemmap_pages(unsigned long pfn, unsigned long n
>> const unsigned int order = pgmap ? pgmap->vmemmap_shift : 0;
>> const unsigned long pages_per_compound = 1UL << order;
>>
>> - VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages,
>> - min(pages_per_compound, PAGES_PER_SECTION)));
>> + VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SUBSECTION));
>
> That makes sense here. We can only add/remove in multiples of PAGES_PER_SUBSECTION.
> I think what we are saying is that we want that check in addition to the
> existing min() check.
Right.
>
>> VM_WARN_ON_ONCE(pfn_to_section_nr(pfn) != pfn_to_section_nr(pfn + nr_pages - 1));
>>
>> if (!vmemmap_can_optimize(altmap, pgmap))
>> return DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE);
>>
>> - if (order < PFN_SECTION_SHIFT)
>> + if (order < PFN_SECTION_SHIFT) {
>> + VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, pages_per_compound));
>> return VMEMMAP_RESERVE_NR * nr_pages / pages_per_compound;
>
> That makes sense as well, within a section, we expect that we always add/remove
> entire "compound"-managed chunks.
>
>> + }
>> +
>> + VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SECTION));
>
> And this is then for the case where a 1G page spans multiple sections, where we
> expect to add/remove an entire section.
>
> So here, indeed the "min" makes sense. I guess we also assume:
>
> VM_WARN_ON_ONCE(nr_pages > PAGES_PER_SECTION);
Yes. But we do not need to assert it explicitly, since at the front of
this function we already have
VM_WARN_ON_ONCE(pfn_to_section_nr(pfn) != pfn_to_section_nr(pfn + nr_pages - 1));
to make sure the passed range belongs to a single section.
Thanks.
>
> Looks better to me!
>
> --
> Cheers,
>
> David
* Re: [PATCH v6 4/7] mm/sparse-vmemmap: Fix DAX vmemmap accounting with optimization
2026-04-25 6:20 ` Muchun Song
@ 2026-04-25 6:47 ` David Hildenbrand (Arm)
2026-04-25 6:56 ` Muchun Song
0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-25 6:47 UTC (permalink / raw)
To: Muchun Song
Cc: Muchun Song, Andrew Morton, Oscar Salvador, Michael Ellerman,
Madhavan Srinivasan, Lorenzo Stoakes, Liam R Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Nicholas Piggin, Christophe Leroy, aneesh.kumar, joao.m.martins,
linux-mm, linuxppc-dev, linux-kernel, stable
On 4/25/26 08:20, Muchun Song wrote:
>
>
>> On Apr 25, 2026, at 13:48, David Hildenbrand (Arm) <david@kernel.org> wrote:
>>
>>
>>>
>>>
>>> Hi David,
>>>
>>> Sorry, I missed the 1GB hugepage scenario earlier. Given that sparse_add_section()
>>> operates on a scale between PAGES_PER_SUBSECTION and PAGES_PER_SECTION, the pfn and
>>> nr_pages parameters wouldn't be aligned with the hugepage size (pages_per_compound),
>>> but rather with the PAGES_PER_SECTION boundary. Do you think this explanation makes
>>> it clearer? In the interest of code clarity, do you think the modification below
>>> makes it easier to follow?
>>>
>>> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
>>> index 2e642c5ff3f2..ce675c5fb94d 100644
>>> --- a/mm/sparse-vmemmap.c
>>> +++ b/mm/sparse-vmemmap.c
>>> @@ -658,15 +658,18 @@ static int __meminit section_nr_vmemmap_pages(unsigned long pfn, unsigned long n
>>> const unsigned int order = pgmap ? pgmap->vmemmap_shift : 0;
>>> const unsigned long pages_per_compound = 1UL << order;
>>>
>>> - VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages,
>>> - min(pages_per_compound, PAGES_PER_SECTION)));
>>> + VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SUBSECTION));
>>
>> That makes sense here. We can only add/remove in multiples of PAGES_PER_SUBSECTION.
>> I think what we are saying is that we want that check in addition to the
>> existing min() check.
>
> Right.
>
>>
>>> VM_WARN_ON_ONCE(pfn_to_section_nr(pfn) != pfn_to_section_nr(pfn + nr_pages - 1));
>>>
>>> if (!vmemmap_can_optimize(altmap, pgmap))
>>> return DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE);
>>>
>>> - if (order < PFN_SECTION_SHIFT)
>>> + if (order < PFN_SECTION_SHIFT) {
>>> + VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, pages_per_compound));
>>> return VMEMMAP_RESERVE_NR * nr_pages / pages_per_compound;
>>
>> That makes sense as well, within a section, we expect that we always add/remove
>> entire "compound"-managed chunks.
>>
>>> + }
>>> +
>>> + VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SECTION));
>>
>> And this is then for the case where a 1G page spans multiple sections, where we
>> expect to add/remove an entire section.
>>
>> So here, indeed the "min" makes sense. I guess we also assume:
>>
>> VM_WARN_ON_ONCE(nr_pages > PAGES_PER_SECTION);
>
> Yes. But we do not need to assert it explicitly, since at the front of
> this function we already have
>
> VM_WARN_ON_ONCE(pfn_to_section_nr(pfn) != pfn_to_section_nr(pfn + nr_pages - 1));
Ah, yes. However, the alignment checks plus
VM_WARN_ON_ONCE(nr_pages > PAGES_PER_SECTION); would imply that as well.
So you could simplify by using that check instead of the pfn_to_section_nr() check.
But it's still early here ... so whatever you prefer :)
--
Cheers,
David
* Re: [PATCH v6 4/7] mm/sparse-vmemmap: Fix DAX vmemmap accounting with optimization
2026-04-25 6:47 ` David Hildenbrand (Arm)
@ 2026-04-25 6:56 ` Muchun Song
0 siblings, 0 replies; 13+ messages in thread
From: Muchun Song @ 2026-04-25 6:56 UTC (permalink / raw)
To: David Hildenbrand
Cc: Muchun Song, Andrew Morton, Oscar Salvador, Michael Ellerman,
Madhavan Srinivasan, Lorenzo Stoakes, Liam R Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Nicholas Piggin, Christophe Leroy, aneesh.kumar, joao.m.martins,
linux-mm, linuxppc-dev, linux-kernel, stable
> On Apr 25, 2026, at 14:47, David Hildenbrand (Arm) <david@kernel.org> wrote:
>
> On 4/25/26 08:20, Muchun Song wrote:
>>
>>
>>>> On Apr 25, 2026, at 13:48, David Hildenbrand (Arm) <david@kernel.org> wrote:
>>>
>>>
>>>>
>>>>
>>>> Hi David,
>>>>
>>>> Sorry, I missed the 1GB hugepage scenario earlier. Given that sparse_add_section()
>>>> operates on a scale between PAGES_PER_SUBSECTION and PAGES_PER_SECTION, the pfn and
>>>> nr_pages parameters wouldn't be aligned with the hugepage size (pages_per_compound),
>>>> but rather with the PAGES_PER_SECTION boundary. Do you think this explanation makes
>>>> it clearer? In the interest of code clarity, do you think the modification below
>>>> makes it easier to follow?
>>>>
>>>> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
>>>> index 2e642c5ff3f2..ce675c5fb94d 100644
>>>> --- a/mm/sparse-vmemmap.c
>>>> +++ b/mm/sparse-vmemmap.c
>>>> @@ -658,15 +658,18 @@ static int __meminit section_nr_vmemmap_pages(unsigned long pfn, unsigned long n
>>>> const unsigned int order = pgmap ? pgmap->vmemmap_shift : 0;
>>>> const unsigned long pages_per_compound = 1UL << order;
>>>>
>>>> - VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages,
>>>> - min(pages_per_compound, PAGES_PER_SECTION)));
>>>> + VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SUBSECTION));
>>>
>>> That makes sense here. We can only add/remove in multiples of PAGES_PER_SUBSECTION.
>>> I think what we are saying is that we want that check in addition to the
>>> existing min() check.
>>
>> Right.
>>
>>>
>>>> VM_WARN_ON_ONCE(pfn_to_section_nr(pfn) != pfn_to_section_nr(pfn + nr_pages - 1));
>>>>
>>>> if (!vmemmap_can_optimize(altmap, pgmap))
>>>> return DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE);
>>>>
>>>> - if (order < PFN_SECTION_SHIFT)
>>>> + if (order < PFN_SECTION_SHIFT) {
>>>> + VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, pages_per_compound));
>>>> return VMEMMAP_RESERVE_NR * nr_pages / pages_per_compound;
>>>
>>> That makes sense as well, within a section, we expect that we always add/remove
>>> entire "compound"-managed chunks.
>>>
>>>> + }
>>>> +
>>>> + VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SECTION));
>>>
>>> And this is then for the case where a 1G page spans multiple sections, where we
>>> expect to add/remove an entire section.
>>>
>>> So here, indeed the "min" makes sense. I guess we also assume:
>>>
>>> VM_WARN_ON_ONCE(nr_pages > PAGES_PER_SECTION);
>>
>> Yes. But we do not need to assert it explicitly, since at the front of
>> this function we already have
>>
>> VM_WARN_ON_ONCE(pfn_to_section_nr(pfn) != pfn_to_section_nr(pfn + nr_pages - 1));
>
> Ah, yes. However, the alignment checks plus
> VM_WARN_ON_ONCE(nr_pages > PAGES_PER_SECTION); would imply that as well.
>
> So you could simplify by using that check instead of the pfn_to_section_nr() check.
>
> But it's still early here ... so whatever you prefer :)
Thanks for the suggestion. I think your approach is also
good — at least it looks shorter and cleaner. I'll switch to
using VM_WARN_ON_ONCE(nr_pages > PAGES_PER_SECTION) instead.
Thanks.
>
> --
> Cheers,
>
> David
end of thread, other threads:[~2026-04-25 6:56 UTC | newest]
Thread overview: 13+ messages
[not found] <20260424025547.3806072-1-songmuchun@bytedance.com>
2026-04-24 2:55 ` [PATCH v6 1/7] mm/sparse-vmemmap: Fix vmemmap accounting underflow Muchun Song
2026-04-24 2:55 ` [PATCH v6 2/7] mm/memory_hotplug: Fix incorrect altmap passing in error path Muchun Song
2026-04-24 2:55 ` [PATCH v6 4/7] mm/sparse-vmemmap: Fix DAX vmemmap accounting with optimization Muchun Song
2026-04-24 7:33 ` David Hildenbrand (Arm)
2026-04-24 7:48 ` Muchun Song
2026-04-25 3:05 ` Muchun Song
2026-04-25 5:48 ` David Hildenbrand (Arm)
2026-04-25 6:20 ` Muchun Song
2026-04-25 6:47 ` David Hildenbrand (Arm)
2026-04-25 6:56 ` Muchun Song
2026-04-24 2:55 ` [PATCH v6 5/7] mm/mm_init: Fix pageblock migratetype for ZONE_DEVICE compound pages Muchun Song
2026-04-24 2:55 ` [PATCH v6 6/7] mm/mm_init: Fix uninitialized struct pages for ZONE_DEVICE Muchun Song
2026-04-24 8:20 ` Mike Rapoport