public inbox for linux-mm@kvack.org
From: Sang-Heon Jeon <ekffu200098@gmail.com>
To: muchun.song@linux.dev, osalvador@suse.de, david@kernel.org,
	akpm@linux-foundation.org
Cc: linux-mm@kvack.org, Sang-Heon Jeon <ekffu200098@gmail.com>
Subject: [PATCH] mm/hugetlb_cma: round up per_node before logging it
Date: Wed, 22 Apr 2026 23:33:53 +0900	[thread overview]
Message-ID: <20260422143353.852257-1-ekffu200098@gmail.com> (raw)

When the user requests a total hugetlb CMA size without per-node
specification, hugetlb_cma_reserve() computes per_node from
hugetlb_cma_size and the number of nodes that have memory:

        per_node = DIV_ROUND_UP(hugetlb_cma_size,
                                nodes_weight(hugetlb_bootmem_nodes));

The reservation loop later computes

        size = round_up(min(per_node, hugetlb_cma_size - reserved),
                          PAGE_SIZE << order);

So the per-node size actually reserved is a multiple of (PAGE_SIZE <<
order), but the logged per_node is not rounded up and may therefore be
smaller than the actual reserved size.

For example, as the existing comment describes, if a 3 GB area is
requested on a machine with 4 NUMA nodes that have memory, 1 GB is
allocated on the first three nodes, but the printed log is

        hugetlb_cma: reserve 3072 MiB, up to 768 MiB per node

Round per_node up to (PAGE_SIZE << order) before logging so that the
printed value always matches the actual reserved size. There is no
functional change to the reservation size itself, as the following case
analysis shows:

1. remaining (hugetlb_cma_size - reserved) >= rounded per_node
 - AS-IS: min() picks unrounded per_node;
    round_up() returns rounded per_node
 - TO-BE: min() picks rounded per_node;
    round_up() returns rounded per_node (no-op)
2. remaining < unrounded per_node
 - AS-IS: min() picks remaining;
    round_up() returns round_up(remaining)
 - TO-BE: min() picks remaining;
    round_up() returns round_up(remaining)
3. unrounded per_node <= remaining < rounded per_node
 - AS-IS: min() picks unrounded per_node;
    round_up() returns rounded per_node
 - TO-BE: min() picks remaining;
    round_up() returns round_up(remaining), which equals rounded per_node

Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma") # 5.7
Signed-off-by: Sang-Heon Jeon <ekffu200098@gmail.com>
---
Changes from RFC v1 [1]

- Fix missing semicolon
- Add Fixes tag
- Remove RFC tag 

[1] https://lore.kernel.org/all/20260421230220.4122996-1-ekffu200098@gmail.com/
---
QEMU-based test results

- arm64, 4GB per 4 NUMA nodes
- cmdline: hugetlb_cma=3G

1) AS-IS (before fix)
[    0.000000] psci: SMC Calling Convention v1.0
[    0.000000] hugetlb_cma: reserve 3072 MiB, up to 768 MiB per node
[    0.000000] cma: Reserved 1024 MiB in 1 range
[    0.000000] hugetlb_cma: reserved 1024 MiB on node 0
[    0.000000] cma: Reserved 1024 MiB in 1 range
[    0.000000] hugetlb_cma: reserved 1024 MiB on node 1
[    0.000000] cma: Reserved 1024 MiB in 1 range
[    0.000000] hugetlb_cma: reserved 1024 MiB on node 2
[    0.000000] Zone ranges:

2) TO-BE (after fix)
[    0.000000] psci: SMC Calling Convention v1.0
[    0.000000] hugetlb_cma: reserve 3072 MiB, up to 1024 MiB per node
[    0.000000] cma: Reserved 1024 MiB in 1 range
[    0.000000] hugetlb_cma: reserved 1024 MiB on node 0
[    0.000000] cma: Reserved 1024 MiB in 1 range
[    0.000000] hugetlb_cma: reserved 1024 MiB on node 1
[    0.000000] cma: Reserved 1024 MiB in 1 range
[    0.000000] hugetlb_cma: reserved 1024 MiB on node 2
[    0.000000] Zone ranges:

---
 mm/hugetlb_cma.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/hugetlb_cma.c b/mm/hugetlb_cma.c
index f83ae4998990..7693ccefd0c6 100644
--- a/mm/hugetlb_cma.c
+++ b/mm/hugetlb_cma.c
@@ -204,6 +204,7 @@ void __init hugetlb_cma_reserve(void)
 		 */
 		per_node = DIV_ROUND_UP(hugetlb_cma_size,
 					nodes_weight(hugetlb_bootmem_nodes));
+		per_node = round_up(per_node, PAGE_SIZE << order);
 		pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n",
 			hugetlb_cma_size / SZ_1M, per_node / SZ_1M);
 	}
-- 
2.43.0




Thread overview: 2+ messages
2026-04-22 14:33 Sang-Heon Jeon [this message]
2026-04-23  2:11 ` [PATCH] mm/hugetlb_cma: round up per_node before logging it Muchun Song
