public inbox for linux-mm@kvack.org
* [PATCH] mm/hugetlb_cma: round up per_node before logging it
@ 2026-04-22 14:33 Sang-Heon Jeon
  2026-04-23  2:11 ` Muchun Song
  0 siblings, 1 reply; 2+ messages in thread
From: Sang-Heon Jeon @ 2026-04-22 14:33 UTC (permalink / raw)
  To: muchun.song, osalvador, david, akpm; +Cc: linux-mm, Sang-Heon Jeon

When the user requests a total hugetlb CMA size without a per-node
specification, hugetlb_cma_reserve() computes per_node from
hugetlb_cma_size and the number of nodes that have memory:

        per_node = DIV_ROUND_UP(hugetlb_cma_size,
                                nodes_weight(hugetlb_bootmem_nodes));

The reservation loop later computes:

        size = round_up(min(per_node, hugetlb_cma_size - reserved),
                          PAGE_SIZE << order);

So the per-node size actually reserved is a multiple of (PAGE_SIZE <<
order), but the logged per_node is not rounded up, so it may be smaller
than the size actually reserved.

For example, as the existing comment describes, if a 3 GB area is
requested on a machine with 4 NUMA nodes that have memory, 1 GB is
allocated on each of the first three nodes, but the printed log is

        hugetlb_cma: reserve 3072 MiB, up to 768 MiB per node

Round per_node up to (PAGE_SIZE << order) before logging so that the
printed value always matches the size actually reserved. There is no
functional change to the actual reservation size, as the following case
analysis shows:

1. remaining (hugetlb_cma_size - reserved) >= rounded per_node
 - AS-IS: min() picks unrounded per_node;
    round_up() returns rounded per_node
 - TO-BE: min() picks rounded per_node;
    round_up() returns rounded per_node (no-op)
2. remaining < unrounded per_node
 - AS-IS: min() picks remaining;
    round_up() returns round_up(remaining)
 - TO-BE: min() picks remaining;
    round_up() returns round_up(remaining)
3. unrounded per_node <= remaining < rounded per_node
 - AS-IS: min() picks unrounded per_node;
    round_up() returns rounded per_node
 - TO-BE: min() picks remaining;
    round_up() returns round_up(remaining) equals rounded per_node

Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma") # 5.7
Signed-off-by: Sang-Heon Jeon <ekffu200098@gmail.com>
---
Changes from RFC v1 [1]

- Fix missing semicolon
- Add Fixes tag
- Remove RFC tag

[1] https://lore.kernel.org/all/20260421230220.4122996-1-ekffu200098@gmail.com/
---
QEMU-based test results

- arm64, 4GB per 4 NUMA nodes
- cmdline: hugetlb_cma=3G

1) AS-IS (before fix)
[    0.000000] psci: SMC Calling Convention v1.0
[    0.000000] hugetlb_cma: reserve 3072 MiB, up to 768 MiB per node
[    0.000000] cma: Reserved 1024 MiB in 1 range
[    0.000000] hugetlb_cma: reserved 1024 MiB on node 0
[    0.000000] cma: Reserved 1024 MiB in 1 range
[    0.000000] hugetlb_cma: reserved 1024 MiB on node 1
[    0.000000] cma: Reserved 1024 MiB in 1 range
[    0.000000] hugetlb_cma: reserved 1024 MiB on node 2
[    0.000000] Zone ranges:

2) TO-BE (after fix)
[    0.000000] psci: SMC Calling Convention v1.0
[    0.000000] hugetlb_cma: reserve 3072 MiB, up to 1024 MiB per node
[    0.000000] cma: Reserved 1024 MiB in 1 range
[    0.000000] hugetlb_cma: reserved 1024 MiB on node 0
[    0.000000] cma: Reserved 1024 MiB in 1 range
[    0.000000] hugetlb_cma: reserved 1024 MiB on node 1
[    0.000000] cma: Reserved 1024 MiB in 1 range
[    0.000000] hugetlb_cma: reserved 1024 MiB on node 2
[    0.000000] Zone ranges:

---
 mm/hugetlb_cma.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/hugetlb_cma.c b/mm/hugetlb_cma.c
index f83ae4998990..7693ccefd0c6 100644
--- a/mm/hugetlb_cma.c
+++ b/mm/hugetlb_cma.c
@@ -204,6 +204,7 @@ void __init hugetlb_cma_reserve(void)
 		 */
 		per_node = DIV_ROUND_UP(hugetlb_cma_size,
 					nodes_weight(hugetlb_bootmem_nodes));
+		per_node = round_up(per_node, PAGE_SIZE << order);
 		pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n",
 			hugetlb_cma_size / SZ_1M, per_node / SZ_1M);
 	}
-- 
2.43.0


