From: Sasha Levin
To: stable@vger.kernel.org
Cc: Sang-Heon Jeon, Muchun Song, David Hildenbrand, Oscar Salvador,
	Andrew Morton, Sasha Levin
Subject: [PATCH 5.15.y] mm/hugetlb_cma: round up per_node before logging it
Date: Thu, 14 May 2026 11:52:52 -0400
Message-ID: <20260514155252.308763-1-sashal@kernel.org>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <2026051242-tanned-unlatch-32ed@gregkh>
References: <2026051242-tanned-unlatch-32ed@gregkh>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Sang-Heon Jeon

[ Upstream commit 8f5ce56b76303c55b78a87af996e2e0f8535f979 ]

When the user requests a total hugetlb CMA size without a per-node
specification, hugetlb_cma_reserve() computes per_node from
hugetlb_cma_size and the number of nodes that have memory:

  per_node = DIV_ROUND_UP(hugetlb_cma_size,
                          nodes_weight(hugetlb_bootmem_nodes));

The reservation loop later computes:

  size = round_up(min(per_node, hugetlb_cma_size - reserved),
                  PAGE_SIZE << order);

So the per-node size that is actually reserved is a multiple of
(PAGE_SIZE << order), but the logged per_node is not rounded up, so it
may be smaller than the size actually reserved.
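To make the mismatch concrete, here is a minimal standalone sketch
(plain C, not kernel code) of the two computations above, assuming a
1 GiB gigantic-page size (PAGE_SIZE << order), a 3 GiB request, and
4 nodes with memory; the DIV_ROUND_UP() and round_up() macros below are
simplified stand-ins for the kernel helpers:

  #include <stdio.h>

  #define MIB                 (1024UL * 1024)
  #define DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))
  #define round_up(x, y)      (DIV_ROUND_UP((x), (y)) * (y))

  int main(void)
  {
          unsigned long hugetlb_cma_size = 3072 * MIB; /* 3 GiB request */
          unsigned long gigantic = 1024 * MIB;         /* PAGE_SIZE << order */
          unsigned long reserved = 0;                  /* nothing reserved yet */
          int nr_nodes = 4;                            /* nodes with memory */

          /* What gets logged: the unrounded per-node share */
          unsigned long per_node = DIV_ROUND_UP(hugetlb_cma_size, nr_nodes);

          /* What the reservation loop actually uses for the first node */
          unsigned long remaining = hugetlb_cma_size - reserved;
          unsigned long size = round_up(per_node < remaining ? per_node : remaining,
                                        gigantic);

          printf("logged per node:   %lu MiB\n", per_node / MIB);
          printf("reserved per node: %lu MiB\n", size / MIB);
          return 0;
  }

Built and run as-is, this prints 768 MiB for the logged value and
1024 MiB for the size actually reserved on the first node, i.e. exactly
the discrepancy described in the example below.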
For example, as the existing comment describes, if a 3 GB area is
requested on a machine with 4 NUMA nodes that have memory, 1 GB is
allocated on the first three nodes, but the printed log is:

  hugetlb_cma: reserve 3072 MiB, up to 768 MiB per node

Round per_node up to (PAGE_SIZE << order) before logging so that the
printed log always matches the actual reserved size.

No functional change to the actual reservation size, as the following
case analysis shows:

1. remaining (hugetlb_cma_size - reserved) >= rounded per_node
   - AS-IS: min() picks unrounded per_node; round_up() returns rounded
     per_node
   - TO-BE: min() picks rounded per_node; round_up() returns rounded
     per_node (no-op)

2. remaining < unrounded per_node
   - AS-IS: min() picks remaining; round_up() returns round_up(remaining)
   - TO-BE: min() picks remaining; round_up() returns round_up(remaining)

3. unrounded per_node <= remaining < rounded per_node
   - AS-IS: min() picks unrounded per_node; round_up() returns rounded
     per_node
   - TO-BE: min() picks remaining; round_up() returns round_up(remaining),
     which equals rounded per_node

Link: https://lore.kernel.org/20260422143353.852257-1-ekffu200098@gmail.com
Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma") # 5.7
Signed-off-by: Sang-Heon Jeon
Reviewed-by: Muchun Song
Cc: David Hildenbrand
Cc: Oscar Salvador
Cc:
Signed-off-by: Andrew Morton
[ applied the one-line `round_up` to `mm/hugetlb.c` ]
Signed-off-by: Sasha Levin
---
 mm/hugetlb.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 64dfa3fcd554b..8b4bc38e1c0a6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6493,6 +6493,7 @@ void __init hugetlb_cma_reserve(int order)
 	 * let's allocate 1 GB on first three nodes and ignore the last one.
 	 */
 	per_node = DIV_ROUND_UP(hugetlb_cma_size, nr_online_nodes);
+	per_node = round_up(per_node, PAGE_SIZE << order);
 	pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n",
 		hugetlb_cma_size / SZ_1M, per_node / SZ_1M);
-- 
2.53.0