From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: stable@vger.kernel.org
Cc: Sang-Heon Jeon, Muchun Song, David Hildenbrand, Oscar Salvador, Andrew Morton, Sasha Levin
Subject: [PATCH 6.6.y] mm/hugetlb_cma: round up per_node before logging it
Date: Thu, 14 May 2026 09:06:35 -0400
Message-ID: <20260514130635.228150-1-sashal@kernel.org>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <2026051241-flakily-uniquely-50ca@gregkh>
References: <2026051241-flakily-uniquely-50ca@gregkh>
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Sang-Heon Jeon

[ Upstream commit 8f5ce56b76303c55b78a87af996e2e0f8535f979 ]

When the user requests a total hugetlb CMA size without a per-node
specification, hugetlb_cma_reserve() computes per_node from
hugetlb_cma_size and the number of nodes that have memory:

	per_node = DIV_ROUND_UP(hugetlb_cma_size,
				nodes_weight(hugetlb_bootmem_nodes));

The reservation loop later computes:

	size = round_up(min(per_node, hugetlb_cma_size - reserved),
			PAGE_SIZE << order);

So the per-node size actually reserved is a multiple of
(PAGE_SIZE << order), but the logged per_node is not rounded up, so it
may be smaller than the actual reserved size. For example, as the
existing comment describes, if a 3 GB area is requested on a machine
with 4 NUMA nodes that have memory, 1 GB is allocated on each of the
first three nodes, but the printed log is:

	hugetlb_cma: reserve 3072 MiB, up to 768 MiB per node

Round per_node up to (PAGE_SIZE << order) before logging so that the
printed value always matches the actual reserved size.

No functional change to the actual reservation size, as the following
case analysis shows:

1. remaining (hugetlb_cma_size - reserved) >= rounded per_node
   - AS-IS: min() picks unrounded per_node; round_up() returns rounded per_node
   - TO-BE: min() picks rounded per_node; round_up() returns rounded per_node (no-op)

2. remaining < unrounded per_node
   - AS-IS: min() picks remaining; round_up() returns round_up(remaining)
   - TO-BE: min() picks remaining; round_up() returns round_up(remaining)

3. unrounded per_node <= remaining < rounded per_node
   - AS-IS: min() picks unrounded per_node; round_up() returns rounded per_node
   - TO-BE: min() picks remaining; round_up() returns round_up(remaining),
     which equals rounded per_node

Link: https://lore.kernel.org/20260422143353.852257-1-ekffu200098@gmail.com
Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma") # 5.7
Signed-off-by: Sang-Heon Jeon
Reviewed-by: Muchun Song
Cc: David Hildenbrand
Cc: Oscar Salvador
Cc:
Signed-off-by: Andrew Morton
[ applied the single-line addition to mm/hugetlb.c since mm/hugetlb_cma.c
  didn't exist yet in 6.12 ]
Signed-off-by: Sasha Levin
---
 mm/hugetlb.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index aa0ef3bc4dd65..6a1e0eefd2540 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7493,6 +7493,7 @@ void __init hugetlb_cma_reserve(int order)
 		 * let's allocate 1 GB on first three nodes and ignore the last one.
 		 */
 		per_node = DIV_ROUND_UP(hugetlb_cma_size, nr_online_nodes);
+		per_node = round_up(per_node, PAGE_SIZE << order);
 		pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n",
 			hugetlb_cma_size / SZ_1M, per_node / SZ_1M);
 	}
-- 
2.53.0