From: Sasha Levin <sashal@kernel.org>
To: stable@vger.kernel.org
Cc: Sang-Heon Jeon <ekffu200098@gmail.com>,
Muchun Song <muchun.song@linux.dev>,
David Hildenbrand <david@kernel.org>,
Oscar Salvador <osalvador@suse.de>,
Andrew Morton <akpm@linux-foundation.org>,
Sasha Levin <sashal@kernel.org>
Subject: [PATCH 6.6.y] mm/hugetlb_cma: round up per_node before logging it
Date: Thu, 14 May 2026 09:06:35 -0400
Message-ID: <20260514130635.228150-1-sashal@kernel.org>
In-Reply-To: <2026051241-flakily-uniquely-50ca@gregkh>
From: Sang-Heon Jeon <ekffu200098@gmail.com>
[ Upstream commit 8f5ce56b76303c55b78a87af996e2e0f8535f979 ]
When the user requests a total hugetlb CMA size without per-node
specification, hugetlb_cma_reserve() computes per_node from
hugetlb_cma_size and the number of nodes that have memory:

	per_node = DIV_ROUND_UP(hugetlb_cma_size,
				nodes_weight(hugetlb_bootmem_nodes));
The reservation loop later computes:

	size = round_up(min(per_node, hugetlb_cma_size - reserved),
			PAGE_SIZE << order);
So the size actually reserved per node is a multiple of
(PAGE_SIZE << order), but the logged per_node is not rounded up, so it
may be smaller than the actual reserved size.
For example, as the existing comment describes, if a 3 GB area is
requested on a machine with 4 NUMA nodes that have memory, 1 GB is
allocated on each of the first three nodes, but the printed log is:

	hugetlb_cma: reserve 3072 MiB, up to 768 MiB per node
Round per_node up to (PAGE_SIZE << order) before logging so that the
printed value always matches the actual reserved size. There is no
functional change to the actual reservation size, as the following case
analysis shows:
1. remaining (hugetlb_cma_size - reserved) >= rounded per_node
   - AS-IS: min() picks unrounded per_node;
     round_up() returns rounded per_node
   - TO-BE: min() picks rounded per_node;
     round_up() returns rounded per_node (no-op)
2. remaining < unrounded per_node
   - AS-IS: min() picks remaining;
     round_up() returns round_up(remaining)
   - TO-BE: min() picks remaining;
     round_up() returns round_up(remaining)
3. unrounded per_node <= remaining < rounded per_node
   - AS-IS: min() picks unrounded per_node;
     round_up() returns rounded per_node
   - TO-BE: min() picks remaining;
     round_up() returns round_up(remaining), which equals rounded per_node
Link: https://lore.kernel.org/20260422143353.852257-1-ekffu200098@gmail.com
Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma") # 5.7
Signed-off-by: Sang-Heon Jeon <ekffu200098@gmail.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Cc: David Hildenbrand <david@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ applied the single-line addition to mm/hugetlb.c since mm/hugetlb_cma.c didn't exist yet in 6.12 ]
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
mm/hugetlb.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index aa0ef3bc4dd65..6a1e0eefd2540 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7493,6 +7493,7 @@ void __init hugetlb_cma_reserve(int order)
* let's allocate 1 GB on first three nodes and ignore the last one.
*/
per_node = DIV_ROUND_UP(hugetlb_cma_size, nr_online_nodes);
+ per_node = round_up(per_node, PAGE_SIZE << order);
pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n",
hugetlb_cma_size / SZ_1M, per_node / SZ_1M);
}
--
2.53.0