public inbox for linux-mm@kvack.org
 help / color / mirror / Atom feed
* [PATCH] mm/sparse: fix BUILD_BUG_ON check for section map alignment
@ 2026-03-31 11:30 Muchun Song
  2026-03-31 19:55 ` Andrew Morton
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Muchun Song @ 2026-03-31 11:30 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand
  Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Petr Tesarik, linux-mm,
	linux-kernel, muchun.song, Muchun Song

The comment in mmzone.h states that the guaranteed alignment is the
minimum of PAGE_SHIFT and PFN_SECTION_SHIFT. However,
(mem_map - section_nr_to_pfn(pnum)) is pointer arithmetic on
struct page *, so the subtracted pfn is scaled by
sizeof(struct page). The actual alignment provided by the second
term is therefore PFN_SECTION_SHIFT + __ffs(sizeof(struct page))
bits.

Update the compile-time check and the mmzone.h comment to reflect
this mathematically guaranteed alignment by taking the minimum of
PAGE_SHIFT and PFN_SECTION_SHIFT + __ffs(sizeof(struct page)).
This stops the check from being needlessly restrictive on
architectures such as powerpc with 256K pages, where
PFN_SECTION_SHIFT alone is as small as 6.

Also, remove the exhaustive per-architecture bit-width list from the
comment; such details easily go stale, and the existing BUILD_BUG_ON
already provides compile-time verification of the constraint.

No functional change: SECTION_MAP_LAST_BIT happens to fit within the
old, stricter limit on all existing architectures.

Fixes: def9b71ee651 ("include/linux/mmzone.h: fix explanation of lower bits in the SPARSEMEM mem_map pointer")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/mmzone.h | 24 +++++++++---------------
 mm/sparse.c            |  3 ++-
 2 files changed, 11 insertions(+), 16 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 7bd0134c241c..584fa598ad75 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -2073,21 +2073,15 @@ static inline struct mem_section *__nr_to_section(unsigned long nr)
 extern size_t mem_section_usage_size(void);
 
 /*
- * We use the lower bits of the mem_map pointer to store
- * a little bit of information.  The pointer is calculated
- * as mem_map - section_nr_to_pfn(pnum).  The result is
- * aligned to the minimum alignment of the two values:
- *   1. All mem_map arrays are page-aligned.
- *   2. section_nr_to_pfn() always clears PFN_SECTION_SHIFT
- *      lowest bits.  PFN_SECTION_SHIFT is arch-specific
- *      (equal SECTION_SIZE_BITS - PAGE_SHIFT), and the
- *      worst combination is powerpc with 256k pages,
- *      which results in PFN_SECTION_SHIFT equal 6.
- * To sum it up, at least 6 bits are available on all architectures.
- * However, we can exceed 6 bits on some other architectures except
- * powerpc (e.g. 15 bits are available on x86_64, 13 bits are available
- * with the worst case of 64K pages on arm64) if we make sure the
- * exceeded bit is not applicable to powerpc.
+ * We use the lower bits of the mem_map pointer to store a little bit of
+ * information. The pointer is calculated as mem_map - section_nr_to_pfn().
+ * The result is aligned to the minimum alignment of the two values:
+ *
+ * 1. All mem_map arrays are page-aligned.
+ * 2. section_nr_to_pfn() always clears the PFN_SECTION_SHIFT lowest bits.
+ *    Because the pfn is subtracted from a struct page pointer, the byte
+ *    offset is scaled by sizeof(struct page), yielding PFN_SECTION_SHIFT +
+ *    __ffs(sizeof(struct page)) zero low bits.
  */
 enum {
 	SECTION_MARKED_PRESENT_BIT,
diff --git a/mm/sparse.c b/mm/sparse.c
index dfabe554adf8..c2eb36bfb86d 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -269,7 +269,8 @@ static unsigned long sparse_encode_mem_map(struct page *mem_map, unsigned long p
 {
 	unsigned long coded_mem_map =
 		(unsigned long)(mem_map - (section_nr_to_pfn(pnum)));
-	BUILD_BUG_ON(SECTION_MAP_LAST_BIT > PFN_SECTION_SHIFT);
+	BUILD_BUG_ON(SECTION_MAP_LAST_BIT > min(PFN_SECTION_SHIFT + __ffs(sizeof(struct page)),
+						PAGE_SHIFT));
 	BUG_ON(coded_mem_map & ~SECTION_MAP_MASK);
 	return coded_mem_map;
 }
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2026-04-01 16:33 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-31 11:30 [PATCH] mm/sparse: fix BUILD_BUG_ON check for section map alignment Muchun Song
2026-03-31 19:55 ` Andrew Morton
2026-03-31 20:04   ` David Hildenbrand (Arm)
2026-04-01  2:47     ` Muchun Song
2026-03-31 20:07 ` Andrew Morton
2026-04-01  2:47   ` Muchun Song
2026-03-31 20:29 ` David Hildenbrand (Arm)
2026-04-01  2:57   ` Muchun Song
2026-04-01  2:59     ` Muchun Song
2026-04-01  4:01     ` Muchun Song
2026-04-01  7:08       ` David Hildenbrand (Arm)
2026-04-01  7:23         ` Muchun Song
2026-04-01  7:26           ` David Hildenbrand (Arm)
2026-04-01  7:28             ` Muchun Song
2026-04-01 16:33             ` Andrew Morton

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox