* FAILED: patch "[PATCH] mm/hugetlb_cma: round up per_node before logging it" failed to apply to 5.15-stable tree
@ 2026-05-12 13:17 gregkh
2026-05-14 15:52 ` [PATCH 5.15.y] mm/hugetlb_cma: round up per_node before logging it Sasha Levin
0 siblings, 1 reply; 2+ messages in thread
From: gregkh @ 2026-05-12 13:17 UTC (permalink / raw)
To: ekffu200098, akpm, david, muchun.song, osalvador, stable; +Cc: stable
The patch below does not apply to the 5.15-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable@vger.kernel.org>.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-5.15.y
git checkout FETCH_HEAD
git cherry-pick -x 8f5ce56b76303c55b78a87af996e2e0f8535f979
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable@vger.kernel.org>' --in-reply-to '2026051242-tanned-unlatch-32ed@gregkh' --subject-prefix 'PATCH 5.15.y' HEAD^..
Possible dependencies:
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 8f5ce56b76303c55b78a87af996e2e0f8535f979 Mon Sep 17 00:00:00 2001
From: Sang-Heon Jeon <ekffu200098@gmail.com>
Date: Wed, 22 Apr 2026 23:33:53 +0900
Subject: [PATCH] mm/hugetlb_cma: round up per_node before logging it
When the user requests a total hugetlb CMA size without per-node
specification, hugetlb_cma_reserve() computes per_node from
hugetlb_cma_size and the number of nodes that have memory
per_node = DIV_ROUND_UP(hugetlb_cma_size,
nodes_weight(hugetlb_bootmem_nodes));
The reservation loop later computes
size = round_up(min(per_node, hugetlb_cma_size - reserved),
PAGE_SIZE << order);
So the actually reserved per_node size is a multiple of (PAGE_SIZE <<
order), but the logged per_node is not rounded up, so it may be smaller
than the actual reserved size.
For example, as the existing comment describes, if a 3 GB area is
requested on a machine with 4 NUMA nodes that have memory, 1 GB is
allocated on the first three nodes, but the printed log is
hugetlb_cma: reserve 3072 MiB, up to 768 MiB per node
Round per_node up to (PAGE_SIZE << order) before logging so that the
printed log always matches the actual reserved size. No functional change
to the actual reservation size, as the following case analysis shows:
1. remaining (hugetlb_cma_size - reserved) >= rounded per_node
- AS-IS: min() picks unrounded per_node;
round_up() returns rounded per_node
- TO-BE: min() picks rounded per_node;
round_up() returns rounded per_node (no-op)
2. remaining < unrounded per_node
- AS-IS: min() picks remaining;
round_up() returns round_up(remaining)
- TO-BE: min() picks remaining;
round_up() returns round_up(remaining)
3. unrounded per_node <= remaining < rounded per_node
- AS-IS: min() picks unrounded per_node;
round_up() returns rounded per_node
- TO-BE: min() picks remaining;
round_up() returns round_up(remaining) equals rounded per_node
Link: https://lore.kernel.org/20260422143353.852257-1-ekffu200098@gmail.com
Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma") # 5.7
Signed-off-by: Sang-Heon Jeon <ekffu200098@gmail.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Cc: David Hildenbrand <david@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/hugetlb_cma.c b/mm/hugetlb_cma.c
index f83ae4998990..7693ccefd0c6 100644
--- a/mm/hugetlb_cma.c
+++ b/mm/hugetlb_cma.c
@@ -204,6 +204,7 @@ void __init hugetlb_cma_reserve(void)
*/
per_node = DIV_ROUND_UP(hugetlb_cma_size,
nodes_weight(hugetlb_bootmem_nodes));
+ per_node = round_up(per_node, PAGE_SIZE << order);
pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n",
hugetlb_cma_size / SZ_1M, per_node / SZ_1M);
}
* [PATCH 5.15.y] mm/hugetlb_cma: round up per_node before logging it
2026-05-12 13:17 FAILED: patch "[PATCH] mm/hugetlb_cma: round up per_node before logging it" failed to apply to 5.15-stable tree gregkh
@ 2026-05-14 15:52 ` Sasha Levin
0 siblings, 0 replies; 2+ messages in thread
From: Sasha Levin @ 2026-05-14 15:52 UTC (permalink / raw)
To: stable
Cc: Sang-Heon Jeon, Muchun Song, David Hildenbrand, Oscar Salvador,
Andrew Morton, Sasha Levin
From: Sang-Heon Jeon <ekffu200098@gmail.com>
[ Upstream commit 8f5ce56b76303c55b78a87af996e2e0f8535f979 ]
When the user requests a total hugetlb CMA size without per-node
specification, hugetlb_cma_reserve() computes per_node from
hugetlb_cma_size and the number of nodes that have memory
per_node = DIV_ROUND_UP(hugetlb_cma_size,
nodes_weight(hugetlb_bootmem_nodes));
The reservation loop later computes
size = round_up(min(per_node, hugetlb_cma_size - reserved),
PAGE_SIZE << order);
So the actually reserved per_node size is a multiple of (PAGE_SIZE <<
order), but the logged per_node is not rounded up, so it may be smaller
than the actual reserved size.
For example, as the existing comment describes, if a 3 GB area is
requested on a machine with 4 NUMA nodes that have memory, 1 GB is
allocated on the first three nodes, but the printed log is
hugetlb_cma: reserve 3072 MiB, up to 768 MiB per node
Round per_node up to (PAGE_SIZE << order) before logging so that the
printed log always matches the actual reserved size. No functional change
to the actual reservation size, as the following case analysis shows:
1. remaining (hugetlb_cma_size - reserved) >= rounded per_node
- AS-IS: min() picks unrounded per_node;
round_up() returns rounded per_node
- TO-BE: min() picks rounded per_node;
round_up() returns rounded per_node (no-op)
2. remaining < unrounded per_node
- AS-IS: min() picks remaining;
round_up() returns round_up(remaining)
- TO-BE: min() picks remaining;
round_up() returns round_up(remaining)
3. unrounded per_node <= remaining < rounded per_node
- AS-IS: min() picks unrounded per_node;
round_up() returns rounded per_node
- TO-BE: min() picks remaining;
round_up() returns round_up(remaining) equals rounded per_node
Link: https://lore.kernel.org/20260422143353.852257-1-ekffu200098@gmail.com
Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma") # 5.7
Signed-off-by: Sang-Heon Jeon <ekffu200098@gmail.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Cc: David Hildenbrand <david@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ applied the one-line `round_up` to `mm/hugetlb.c` ]
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
mm/hugetlb.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 64dfa3fcd554b..8b4bc38e1c0a6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6493,6 +6493,7 @@ void __init hugetlb_cma_reserve(int order)
* let's allocate 1 GB on first three nodes and ignore the last one.
*/
per_node = DIV_ROUND_UP(hugetlb_cma_size, nr_online_nodes);
+ per_node = round_up(per_node, PAGE_SIZE << order);
pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n",
hugetlb_cma_size / SZ_1M, per_node / SZ_1M);
--
2.53.0