* [PATCH v2 1/2] book3s64/radix: Handle error conditions properly in radix_vmemmap_populate
From: Donet Tom @ 2025-06-22 12:01 UTC
To: Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy
Cc: Ritesh Harjani, Hari Bathini, linuxppc-dev, linux-kernel,
Donet Tom
Error conditions are not handled properly when altmap is not present
and the PMD_SIZE vmemmap_alloc_block_buf() allocation fails.

With this patch, if vmemmap_alloc_block_buf() fails in the non-altmap
case, we fall back to the base (PAGE_SIZE) mapping.
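
As a rough illustration of the behavioural change described above, here is a
small, self-contained userspace toy model. This is not kernel code: the
function names old_behaviour/new_behaviour and the string labels are invented
for this sketch, and the "old" branch reflects the commit message's
description rather than a verbatim quote of the previous kernel code.

    #include <stdio.h>
    #include <stdbool.h>

    /* Toy model of the PMD_SIZE-vs-base-mapping decision in the populate path. */
    static const char *old_behaviour(bool altmap, bool pmd_alloc_ok)
    {
            if (pmd_alloc_ok)
                    return "PMD_SIZE mapping";
            if (altmap)                     /* the old "} else if (altmap) {" */
                    return "base (PAGE_SIZE) mapping";
            return "failure not handled";   /* the gap this patch closes */
    }

    static const char *new_behaviour(bool altmap, bool pmd_alloc_ok)
    {
            (void)altmap;                   /* fallback no longer depends on altmap */
            if (pmd_alloc_ok)
                    return "PMD_SIZE mapping";
            return "base (PAGE_SIZE) mapping";      /* the new plain "} else {" */
    }

    int main(void)
    {
            printf("old: no altmap, PMD alloc fails -> %s\n", old_behaviour(false, false));
            printf("new: no altmap, PMD alloc fails -> %s\n", new_behaviour(false, false));
            return 0;
    }
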
Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Signed-off-by: Donet Tom <donettom@linux.ibm.com>
---
v1 -> v2 - rebased to v6.16-rc2
v1 - https://lore.kernel.org/all/e876a700a4caa5610e994b946b84f71d0fe6f919.1746255312.git.donettom@linux.ibm.com/
---
arch/powerpc/mm/book3s64/radix_pgtable.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 9f764bc42b8c..3d67aee8c8ca 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1173,7 +1173,7 @@ int __meminit radix__vmemmap_populate(unsigned long start, unsigned long end, in
vmemmap_set_pmd(pmd, p, node, addr, next);
pr_debug("PMD_SIZE vmemmap mapping\n");
continue;
- } else if (altmap) {
+ } else {
/*
* A vmemmap block allocation can fail due to
* alignment requirements and we trying to align
--
2.47.1
* [PATCH v2 2/2] book3s64/radix: Optimize vmemmap start alignment
From: Donet Tom @ 2025-06-22 12:01 UTC
To: Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy
Cc: Ritesh Harjani, Hari Bathini, linuxppc-dev, linux-kernel,
Donet Tom
If we always align the vmemmap start to PAGE_SIZE, we may end up
allocating page-sized vmemmap backing pages from RAM in the
non-altmap case, because a PAGE_SIZE-aligned address is not
necessarily PMD_SIZE-aligned.

With this patch, the vmemmap start address is aligned to PMD_SIZE
when altmap is not present. This ensures that a PMD_SIZE page is
always allocated for the vmemmap mapping in the non-altmap case.

If altmap is present, we align the start vmemmap address to
PAGE_SIZE so that we calculate the correct start_pfn in the altmap
boundary check, which decides whether to use altmap or RAM based
backing memory allocation. The address also needs to be aligned
for the set_pte operation. If the start address is already
PMD_SIZE aligned and within the altmap boundary, we try to use a
PMD_SIZE altmap mapping; otherwise we go for a page-size mapping.

In short: if altmap is present, we try to use the maximum number
of altmap pages; otherwise, we allocate a PMD_SIZE RAM page.
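
To make the alignment argument concrete, here is a small userspace sketch.
It is illustrative only: the 64K page size, 2M PMD size and the example
address are assumptions for this toy, not values taken from the patch.

    #include <stdio.h>

    #define TOY_PAGE_SIZE   (64ULL << 10)           /* assumed 64K base pages */
    #define TOY_PMD_SIZE    (2ULL << 20)            /* assumed 2M PMD mappings */
    #define ALIGN_DOWN(x, a)        ((x) & ~((a) - 1))

    int main(void)
    {
            /* hypothetical vmemmap start: 64K-aligned but not 2M-aligned */
            unsigned long long start = 0xc00c000000010000ULL;

            printf("aligned to 64K page: 0x%llx (not 2M aligned -> page-sized backing)\n",
                   ALIGN_DOWN(start, TOY_PAGE_SIZE));
            printf("aligned to 2M PMD:   0x%llx (PMD_SIZE backing possible)\n",
                   ALIGN_DOWN(start, TOY_PMD_SIZE));
            return 0;
    }
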
Signed-off-by: Donet Tom <donettom@linux.ibm.com>
---
arch/powerpc/mm/book3s64/radix_pgtable.c | 29 +++++++++++++++---------
1 file changed, 18 insertions(+), 11 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 3d67aee8c8ca..c630cece8ed4 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1122,18 +1122,25 @@ int __meminit radix__vmemmap_populate(unsigned long start, unsigned long end, in
pte_t *pte;
/*
- * Make sure we align the start vmemmap addr so that we calculate
- * the correct start_pfn in altmap boundary check to decided whether
- * we should use altmap or RAM based backing memory allocation. Also
- * the address need to be aligned for set_pte operation.
-
- * If the start addr is already PMD_SIZE aligned we will try to use
- * a pmd mapping. We don't want to be too aggressive here beacause
- * that will cause more allocations in RAM. So only if the namespace
- * vmemmap start addr is PMD_SIZE aligned we will use PMD mapping.
+ * If altmap is present, make sure we align the start vmemmap addr
+ * to PAGE_SIZE so that we calculate the correct start_pfn in the
+ * altmap boundary check to decide whether we should use altmap or
+ * RAM based backing memory allocation. The address also needs to
+ * be aligned for the set_pte operation. If the start addr is
+ * already PMD_SIZE aligned and within the altmap boundary, we try
+ * to use a PMD_SIZE altmap mapping, else we go for a page size
+ * mapping.
+ *
+ * If altmap is not present, align the vmemmap addr to PMD_SIZE and
+ * always allocate a PMD_SIZE page for the vmemmap backing.
+ *
*/
- start = ALIGN_DOWN(start, PAGE_SIZE);
+ if (altmap)
+ start = ALIGN_DOWN(start, PAGE_SIZE);
+ else
+ start = ALIGN_DOWN(start, PMD_SIZE);
+
for (addr = start; addr < end; addr = next) {
next = pmd_addr_end(addr, end);
@@ -1159,7 +1166,7 @@ int __meminit radix__vmemmap_populate(unsigned long start, unsigned long end, in
* in altmap block allocation failures, in which case
* we fallback to RAM for vmemmap allocation.
*/
- if (!IS_ALIGNED(addr, PMD_SIZE) || (altmap &&
+ if (altmap && (!IS_ALIGNED(addr, PMD_SIZE) ||
altmap_cross_boundary(altmap, addr, PMD_SIZE))) {
/*
* make sure we don't create altmap mappings
--
2.47.1