* [PATCH] mm/hugetlb_cma: restrict hugetlb_cma parameter to gigantic-page alignment
@ 2026-05-03 8:42 Sang-Heon Jeon
From: Sang-Heon Jeon @ 2026-05-03 8:42 UTC (permalink / raw)
To: muchun.song, osalvador, david, akpm; +Cc: linux-mm, Sang-Heon Jeon
Existing hugetlb_cma parameter handling logic rejects sizes smaller than
one gigantic page, but rounds up larger sizes that are not a multiple
of it. The two behaviors are inconsistent and neither is documented.
To remove this inconsistent and undocumented behavior, restrict the
hugetlb_cma parameter to accept only multiples of the gigantic page size.
After this restriction, the redundant round_up() in the allocation loop
can be removed.
The new restriction is also documented in kernel-parameters.txt.
Also include other minor changes that improve readability, with no
functional change.
Suggested-by: Muchun Song <muchun.song@linux.dev>
Signed-off-by: Sang-Heon Jeon <ekffu200098@gmail.com>
---
QEMU based test result
1) hugetlb_cma=1300M
a) AS-IS : total 2G reserved
[ 0.000000] hugetlb_cma: reserve 1300 MiB, up to 1024 MiB per node
[ 0.000000] cma: Reserved 1024 MiB in 1 range
[ 0.000000] hugetlb_cma: reserved 1024 MiB on node 0
[ 0.000000] cma: Reserved 1024 MiB in 1 range
[ 0.000000] hugetlb_cma: reserved 1024 MiB on node 1
b) TO-BE : rejected
[ 0.000000] hugetlb_cma: cma area must be a multiple of 1024 MiB
2) hugetlb_cma=0:1300M,1:1G
a) AS-IS : 2G reserved on node 0, 1G reserved on node 1
[ 0.000000] cma: Reserved 2048 MiB in 1 range
[ 0.000000] hugetlb_cma: reserved 2048 MiB on node 0
[ 0.000000] cma: Reserved 1024 MiB in 1 range
[ 0.000000] hugetlb_cma: reserved 1024 MiB on node 1
b) TO-BE : 1G reserved on node 1
[ 0.000000] hugetlb_cma: cma area of node 0 must be a multiple of 1024 MiB
[ 0.000000] cma: Reserved 1024 MiB in 1 range
[ 0.000000] hugetlb_cma: reserved 1024 MiB on node 1
---
Hello,
This patch implements a new restriction on the hugetlb_cma parameter that
was suggested during the review of the previous patch [1].
Thanks to Muchun for the guidance.
[1] https://lore.kernel.org/all/BFB1F124-B599-4832-A9DA-F4931999BAF9@linux.dev/
Best Regards,
Sang-Heon Jeon
---
.../admin-guide/kernel-parameters.txt | 4 +++
mm/hugetlb_cma.c | 35 +++++++++----------
2 files changed, 20 insertions(+), 19 deletions(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 4d0f545fb3ec..23be2f64439c 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2100,6 +2100,10 @@ Kernel parameters
Format: nn[KMGTPE] or (node format)
<node>:nn[KMGTPE][,<node>:nn[KMGTPE]]
+ The size must be a multiple of the gigantic page size.
+ When using node format, this applies to each per-node size.
+ Misaligned values are dropped with a warning.
+
Reserve a CMA area of given size and allocate gigantic
hugepages using the CMA allocator. If enabled, the
boot-time allocation of gigantic hugepages is skipped.
diff --git a/mm/hugetlb_cma.c b/mm/hugetlb_cma.c
index 7693ccefd0c6..39344d6c78d8 100644
--- a/mm/hugetlb_cma.c
+++ b/mm/hugetlb_cma.c
@@ -142,7 +142,7 @@ unsigned int __weak arch_hugetlb_cma_order(void)
void __init hugetlb_cma_reserve(void)
{
- unsigned long size, reserved, per_node, order;
+ unsigned long size, reserved, per_node, order, gigantic_page_size;
bool node_specific_cma_alloc = false;
int nid;
@@ -162,37 +162,36 @@ void __init hugetlb_cma_reserve(void)
* breaking this assumption.
*/
VM_WARN_ON(order <= MAX_PAGE_ORDER);
+ gigantic_page_size = PAGE_SIZE << order;
hugetlb_bootmem_set_nodes();
for (nid = 0; nid < MAX_NUMNODES; nid++) {
- if (hugetlb_cma_size_in_node[nid] == 0)
+ size = hugetlb_cma_size_in_node[nid];
+ if (size == 0)
continue;
if (!node_isset(nid, hugetlb_bootmem_nodes)) {
pr_warn("hugetlb_cma: invalid node %d specified\n", nid);
- hugetlb_cma_size -= hugetlb_cma_size_in_node[nid];
- hugetlb_cma_size_in_node[nid] = 0;
- continue;
- }
-
- if (hugetlb_cma_size_in_node[nid] < (PAGE_SIZE << order)) {
- pr_warn("hugetlb_cma: cma area of node %d should be at least %lu MiB\n",
- nid, (PAGE_SIZE << order) / SZ_1M);
- hugetlb_cma_size -= hugetlb_cma_size_in_node[nid];
- hugetlb_cma_size_in_node[nid] = 0;
+ } else if (!IS_ALIGNED(size, gigantic_page_size)) {
+ pr_warn("hugetlb_cma: cma area of node %d must be a multiple of %lu MiB\n",
+ nid, gigantic_page_size / SZ_1M);
} else {
node_specific_cma_alloc = true;
+ continue;
}
+
+ hugetlb_cma_size -= size;
+ hugetlb_cma_size_in_node[nid] = 0;
}
/* Validate the CMA size again in case some invalid nodes specified. */
if (!hugetlb_cma_size)
return;
- if (hugetlb_cma_size < (PAGE_SIZE << order)) {
- pr_warn("hugetlb_cma: cma area should be at least %lu MiB\n",
- (PAGE_SIZE << order) / SZ_1M);
+ if (!IS_ALIGNED(hugetlb_cma_size, gigantic_page_size)) {
+ pr_warn("hugetlb_cma: cma area must be a multiple of %lu MiB\n",
+ gigantic_page_size / SZ_1M);
hugetlb_cma_size = 0;
return;
}
@@ -204,7 +203,7 @@ void __init hugetlb_cma_reserve(void)
*/
per_node = DIV_ROUND_UP(hugetlb_cma_size,
nodes_weight(hugetlb_bootmem_nodes));
- per_node = round_up(per_node, PAGE_SIZE << order);
+ per_node = round_up(per_node, gigantic_page_size);
pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n",
hugetlb_cma_size / SZ_1M, per_node / SZ_1M);
}
@@ -223,15 +222,13 @@ void __init hugetlb_cma_reserve(void)
size = min(per_node, hugetlb_cma_size - reserved);
}
- size = round_up(size, PAGE_SIZE << order);
-
snprintf(name, sizeof(name), "hugetlb%d", nid);
/*
* Note that 'order per bit' is based on smallest size that
* may be returned to CMA allocator in the case of
* huge page demotion.
*/
- res = cma_declare_contiguous_multi(size, PAGE_SIZE << order,
+ res = cma_declare_contiguous_multi(size, gigantic_page_size,
HUGETLB_PAGE_ORDER, name,
&hugetlb_cma[nid], nid);
if (res) {
--
2.43.0
* Re: [PATCH] mm/hugetlb_cma: restrict hugetlb_cma parameter to gigantic-page alignment
From: Muchun Song @ 2026-05-11 2:52 UTC (permalink / raw)
To: Sang-Heon Jeon; +Cc: osalvador, david, akpm, linux-mm
> On May 3, 2026, at 16:42, Sang-Heon Jeon <ekffu200098@gmail.com> wrote:
>
> [...]
> + The size must be a multiple of the gigantic page size.
> + When using node format, this applies to each per-node size.
> + Missaligned values are dropped with a warning.
^
Misaligned
Acked-by: Muchun Song <muchun.song@linux.dev>
Thanks.
* Re: [PATCH] mm/hugetlb_cma: restrict hugetlb_cma parameter to gigantic-page alignment
From: Oscar Salvador @ 2026-05-11 8:23 UTC (permalink / raw)
To: Sang-Heon Jeon; +Cc: muchun.song, david, akpm, linux-mm
On Sun, May 03, 2026 at 05:42:25PM +0900, Sang-Heon Jeon wrote:
> Existing hugetlb_cma parameter handling logic rejects sizes smaller than
> one gigantic page, but rounds up larger sizes that are not a multiple
> of it. The two behaviors are inconsistent and neither is documented.
>
> To remove this inconsistent and undocumented behavior, restrict the
> hugetlb_cma parameter to accept only multiples of the gigantic page size.
>
> After this restriction, the redundant round_up() in the allocation loop
> can be removed.
>
> The new restriction is also documented in kernel-parameters.txt.
>
> Also include other minor changes that improve readability, with no
> functional change.
>
> Suggested-by: Muchun Song <muchun.song@linux.dev>
> Signed-off-by: Sang-Heon Jeon <ekffu200098@gmail.com>
Acked-by: Oscar Salvador <osalvador@suse.de>
--
Oscar Salvador
SUSE Labs