public inbox for linux-kernel@vger.kernel.org
* [PATCH v2 0/3] x86/mm: some cleanups for pagetable setup code
@ 2026-05-03 13:04 Brendan Jackman
  2026-05-03 13:04 ` [PATCH v2 1/3] x86/mm: drop unused return from init_memory_mapping() Brendan Jackman
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Brendan Jackman @ 2026-05-03 13:04 UTC (permalink / raw)
  To: Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Andy Lutomirski, Peter Zijlstra, Thomas Gleixner
  Cc: linux-kernel, Brendan Jackman

Per discussion in [0], I'm looking for ways to refactor this code to make
ASI easier to deal with. But, while looking, I found some little things
that seem like just straightforward cleanups without any real
refactoring needed. So let's start there.

This applies to tip/master.

I'm having some infra issues, so this hasn't been through Sashiko review
yet. I've tested it on QEMU.

[0] https://lore.kernel.org/all/20250924-b4-asi-page-alloc-v1-0-2d861768041f@google.com/T/#t

Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
Changes in v2:
- Simplified the patchset: instead of trying to fix confusing code only
  to delete it in a subsequent patch, just delete it in the first place.
- Fixed the add_pfn_range_mapped() args (this bug caused a KASAN build
  to crash during boot).
- Link to v1: https://lore.kernel.org/r/20251003-x86-init-cleanup-v1-0-f2b7994c2ad6@google.com

---
Brendan Jackman (3):
      x86/mm: drop unused return from init_memory_mapping()
      x86/mm: simplify calculation of max_pfn_mapped
      x86/mm: drop unused returns from direct map setup functions

 arch/x86/include/asm/pgtable.h |  3 +-
 arch/x86/mm/init.c             | 19 ++++-----
 arch/x86/mm/init_32.c          |  5 +--
 arch/x86/mm/init_64.c          | 96 +++++++++++++++---------------------------
 arch/x86/mm/mm_internal.h      | 11 ++---
 5 files changed, 48 insertions(+), 86 deletions(-)
---
base-commit: 32b8f4c4b8650a879d15ca10f2462d1072e49381
change-id: 20251003-x86-init-cleanup-0ad754910bac

Best regards,
-- 
Brendan Jackman <jackmanb@google.com>



* [PATCH v2 1/3] x86/mm: drop unused return from init_memory_mapping()
  2026-05-03 13:04 [PATCH v2 0/3] x86/mm: some cleanups for pagetable setup code Brendan Jackman
@ 2026-05-03 13:04 ` Brendan Jackman
  2026-05-03 13:04 ` [PATCH v2 2/3] x86/mm: simplify calculation of max_pfn_mapped Brendan Jackman
  2026-05-03 13:04 ` [PATCH v2 3/3] x86/mm: drop unused returns from direct map setup functions Brendan Jackman
  2 siblings, 0 replies; 4+ messages in thread
From: Brendan Jackman @ 2026-05-03 13:04 UTC (permalink / raw)
  To: Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Andy Lutomirski, Peter Zijlstra, Thomas Gleixner
  Cc: linux-kernel, Brendan Jackman

None of the callers look at the return value.

Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
 arch/x86/include/asm/pgtable.h |  3 +--
 arch/x86/mm/init.c             | 16 +++++++---------
 2 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 2187e9cfcefa1..eb09fa7840b49 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1170,8 +1170,7 @@ extern int direct_gbpages;
 void init_mem_mapping(void);
 void early_alloc_pgt_buf(void);
 void __init poking_init(void);
-unsigned long init_memory_mapping(unsigned long start,
-				  unsigned long end, pgprot_t prot);
+void init_memory_mapping(unsigned long start, unsigned long end, pgprot_t prot);
 
 #ifdef CONFIG_X86_64
 extern pgd_t trampoline_pgd_entry;
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index fb67217fddcd3..ae3e9e0820153 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -531,11 +531,11 @@ bool pfn_range_is_mapped(unsigned long start_pfn, unsigned long end_pfn)
  * This runs before bootmem is initialized and gets pages directly from
  * the physical memory. To access them they are temporarily mapped.
  */
-unsigned long __ref init_memory_mapping(unsigned long start,
-					unsigned long end, pgprot_t prot)
+void __ref init_memory_mapping(unsigned long start,
+			       unsigned long end, pgprot_t prot)
 {
 	struct map_range mr[NR_RANGE_MR];
-	unsigned long ret = 0;
+	unsigned long paddr_last = 0;
 	int nr_range, i;
 
 	pr_debug("init_memory_mapping: [mem %#010lx-%#010lx]\n",
@@ -545,13 +545,11 @@ unsigned long __ref init_memory_mapping(unsigned long start,
 	nr_range = split_mem_range(mr, 0, start, end);
 
 	for (i = 0; i < nr_range; i++)
-		ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
-						   mr[i].page_size_mask,
-						   prot);
+		paddr_last = kernel_physical_mapping_init(mr[i].start, mr[i].end,
+							  mr[i].page_size_mask,
+							  prot);
 
-	add_pfn_range_mapped(start >> PAGE_SHIFT, ret >> PAGE_SHIFT);
-
-	return ret >> PAGE_SHIFT;
+	add_pfn_range_mapped(start >> PAGE_SHIFT, paddr_last >> PAGE_SHIFT);
 }
 
 /*

-- 
2.51.2



* [PATCH v2 2/3] x86/mm: simplify calculation of max_pfn_mapped
  2026-05-03 13:04 [PATCH v2 0/3] x86/mm: some cleanups for pagetable setup code Brendan Jackman
  2026-05-03 13:04 ` [PATCH v2 1/3] x86/mm: drop unused return from init_memory_mapping() Brendan Jackman
@ 2026-05-03 13:04 ` Brendan Jackman
  2026-05-03 13:04 ` [PATCH v2 3/3] x86/mm: drop unused returns from direct map setup functions Brendan Jackman
  2 siblings, 0 replies; 4+ messages in thread
From: Brendan Jackman @ 2026-05-03 13:04 UTC (permalink / raw)
  To: Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Andy Lutomirski, Peter Zijlstra, Thomas Gleixner
  Cc: linux-kernel, Brendan Jackman

The phys_*_init() helpers return the "last physical address mapped". The
exact definition of this is pretty fiddly, but only under these conditions:

1. There is a mismatch between the alignment of the requested range and
   the page sizes allowed by page_size_mask

2. The range ends in a region that is not mapped according to
   e820.

3. The range ends in a region that was already mapped (this case is
   particularly fiddly because the return value depends on what level
   the existing mapping is at; this is probably a bug, see [0] for
   discussion).

Luckily, init_memory_mapping() avoids all of these conditions. In that
case the return value is just paddr_end, and that value is already at
hand, so there is no need to depend on the confusing return value (see
the sketch below).

[0]: https://lore.kernel.org/all/84b2e7a3-7115-45fe-89ff-db8ee46729f2@intel.com/
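
To make this concrete, here is a hypothetical sanity-check variant of
the loop (an illustration only, not part of the patch). Absent
conditions 1-3 above, the WARN_ON() would never fire:

	for (i = 0; i < nr_range; i++) {
		paddr_last = kernel_physical_mapping_init(mr[i].start,
							  mr[i].end,
							  mr[i].page_size_mask,
							  prot);
		/* Hypothetical: these ranges always map up to mr[i].end. */
		WARN_ON(paddr_last != mr[i].end);
	}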

Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
 arch/x86/mm/init.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index ae3e9e0820153..1a6a6fc700bb5 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -544,10 +544,11 @@ void __ref init_memory_mapping(unsigned long start,
 	memset(mr, 0, sizeof(mr));
 	nr_range = split_mem_range(mr, 0, start, end);
 
-	for (i = 0; i < nr_range; i++)
-		paddr_last = kernel_physical_mapping_init(mr[i].start, mr[i].end,
-							  mr[i].page_size_mask,
-							  prot);
+	for (i = 0; i < nr_range; i++) {
+		kernel_physical_mapping_init(mr[i].start, mr[i].end,
+					     mr[i].page_size_mask, prot);
+		paddr_last = mr[i].end;
+	}
 
 	add_pfn_range_mapped(start >> PAGE_SHIFT, paddr_last >> PAGE_SHIFT);
 }

-- 
2.51.2



* [PATCH v2 3/3] x86/mm: drop unused returns from direct map setup functions
  2026-05-03 13:04 [PATCH v2 0/3] x86/mm: some cleanups for pagetable setup code Brendan Jackman
  2026-05-03 13:04 ` [PATCH v2 1/3] x86/mm: drop unused return from init_memory_mapping() Brendan Jackman
  2026-05-03 13:04 ` [PATCH v2 2/3] x86/mm: simplify calculation of max_pfn_mapped Brendan Jackman
@ 2026-05-03 13:04 ` Brendan Jackman
  2 siblings, 0 replies; 4+ messages in thread
From: Brendan Jackman @ 2026-05-03 13:04 UTC (permalink / raw)
  To: Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Andy Lutomirski, Peter Zijlstra, Thomas Gleixner
  Cc: linux-kernel, Brendan Jackman

Nothing looks at these return values. Furthermore, as discussed in [0],
it seems like in the case of a pre-existing 4K mapping, the return value
of kernel_physical_mapping_init() is wrong anyway. So, just stop
returning a value.
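
For reference, a rough sketch of the suspect path: the pre-existing-PTE
case in phys_pte_init() (as it looks before this patch) skips the entry
without advancing paddr_last:

	if (!pte_none(*pte)) {
		if (!after_bootmem)
			pages++;
		continue;	/* paddr_last is not advanced here */
	}

[0] https://lore.kernel.org/all/84b2e7a3-7115-45fe-89ff-db8ee46729f2@intel.com/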

Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
 arch/x86/mm/init_32.c     |  5 +--
 arch/x86/mm/init_64.c     | 96 ++++++++++++++++-------------------------------
 arch/x86/mm/mm_internal.h | 11 ++----
 3 files changed, 38 insertions(+), 74 deletions(-)

diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 0908c44d51e6f..05c456dc9855f 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -245,14 +245,13 @@ static inline int is_x86_32_kernel_text(unsigned long addr)
  * of max_low_pfn pages, by creating page tables starting from address
  * PAGE_OFFSET:
  */
-unsigned long __init
+void __init
 kernel_physical_mapping_init(unsigned long start,
 			     unsigned long end,
 			     unsigned long page_size_mask,
 			     pgprot_t prot)
 {
 	int use_pse = page_size_mask == (1<<PG_LEVEL_2M);
-	unsigned long last_map_addr = end;
 	unsigned long start_pfn, end_pfn;
 	pgd_t *pgd_base = swapper_pg_dir;
 	int pgd_idx, pmd_idx, pte_ofs;
@@ -356,7 +355,6 @@ kernel_physical_mapping_init(unsigned long start,
 				pages_4k++;
 				if (mapping_iter == 1) {
 					set_pte(pte, pfn_pte(pfn, init_prot));
-					last_map_addr = (pfn << PAGE_SHIFT) + PAGE_SIZE;
 				} else
 					set_pte(pte, pfn_pte(pfn, prot));
 			}
@@ -382,7 +380,6 @@ kernel_physical_mapping_init(unsigned long start,
 		mapping_iter = 2;
 		goto repeat;
 	}
-	return last_map_addr;
 }
 
 #ifdef CONFIG_HIGHMEM
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index df2261fa4f985..1a22254e9e234 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -464,16 +464,12 @@ void __init cleanup_highmap(void)
 	}
 }
 
-/*
- * Create PTE level page table mapping for physical addresses.
- * It returns the last physical address mapped.
- */
-static unsigned long __meminit
+/* Create PTE level page table mapping for physical addresses. */
+static void __meminit
 phys_pte_init(pte_t *pte_page, unsigned long paddr, unsigned long paddr_end,
 	      pgprot_t prot, bool init)
 {
 	unsigned long pages = 0, paddr_next;
-	unsigned long paddr_last = paddr_end;
 	pte_t *pte;
 	int i;
 
@@ -506,25 +502,20 @@ phys_pte_init(pte_t *pte_page, unsigned long paddr, unsigned long paddr_end,
 
 		pages++;
 		set_pte_init(pte, pfn_pte(paddr >> PAGE_SHIFT, prot), init);
-		paddr_last = (paddr & PAGE_MASK) + PAGE_SIZE;
 	}
 
 	update_page_count(PG_LEVEL_4K, pages);
-
-	return paddr_last;
 }
 
 /*
  * Create PMD level page table mapping for physical addresses. The virtual
  * and physical address have to be aligned at this level.
- * It returns the last physical address mapped.
  */
-static unsigned long __meminit
+static void __meminit
 phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 	      unsigned long page_size_mask, pgprot_t prot, bool init)
 {
 	unsigned long pages = 0, paddr_next;
-	unsigned long paddr_last = paddr_end;
 
 	int i = pmd_index(paddr);
 
@@ -548,9 +539,7 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 			if (!pmd_leaf(*pmd)) {
 				spin_lock(&init_mm.page_table_lock);
 				pte = (pte_t *)pmd_page_vaddr(*pmd);
-				paddr_last = phys_pte_init(pte, paddr,
-							   paddr_end, prot,
-							   init);
+				phys_pte_init(pte, paddr, paddr_end, prot, init);
 				spin_unlock(&init_mm.page_table_lock);
 				continue;
 			}
@@ -569,7 +558,6 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 			if (page_size_mask & (1 << PG_LEVEL_2M)) {
 				if (!after_bootmem)
 					pages++;
-				paddr_last = paddr_next;
 				continue;
 			}
 			new_prot = pte_pgprot(pte_clrhuge(*(pte_t *)pmd));
@@ -582,33 +570,29 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 				     pfn_pmd(paddr >> PAGE_SHIFT, prot_sethuge(prot)),
 				     init);
 			spin_unlock(&init_mm.page_table_lock);
-			paddr_last = paddr_next;
 			continue;
 		}
 
 		pte = alloc_low_page();
-		paddr_last = phys_pte_init(pte, paddr, paddr_end, new_prot, init);
+		phys_pte_init(pte, paddr, paddr_end, new_prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		pmd_populate_kernel_init(&init_mm, pmd, pte, init);
 		spin_unlock(&init_mm.page_table_lock);
 	}
 	update_page_count(PG_LEVEL_2M, pages);
-	return paddr_last;
 }
 
 /*
  * Create PUD level page table mapping for physical addresses. The virtual
  * and physical address do not have to be aligned at this level. KASLR can
  * randomize virtual addresses up to this level.
- * It returns the last physical address mapped.
  */
-static unsigned long __meminit
+static void __meminit
 phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 	      unsigned long page_size_mask, pgprot_t _prot, bool init)
 {
 	unsigned long pages = 0, paddr_next;
-	unsigned long paddr_last = paddr_end;
 	unsigned long vaddr = (unsigned long)__va(paddr);
 	int i = pud_index(vaddr);
 
@@ -634,10 +618,8 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 		if (!pud_none(*pud)) {
 			if (!pud_leaf(*pud)) {
 				pmd = pmd_offset(pud, 0);
-				paddr_last = phys_pmd_init(pmd, paddr,
-							   paddr_end,
-							   page_size_mask,
-							   prot, init);
+				phys_pmd_init(pmd, paddr, paddr_end,
+					      page_size_mask, prot, init);
 				continue;
 			}
 			/*
@@ -655,7 +637,6 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 			if (page_size_mask & (1 << PG_LEVEL_1G)) {
 				if (!after_bootmem)
 					pages++;
-				paddr_last = paddr_next;
 				continue;
 			}
 			prot = pte_pgprot(pte_clrhuge(*(pte_t *)pud));
@@ -668,13 +649,11 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 				     pfn_pud(paddr >> PAGE_SHIFT, prot_sethuge(prot)),
 				     init);
 			spin_unlock(&init_mm.page_table_lock);
-			paddr_last = paddr_next;
 			continue;
 		}
 
 		pmd = alloc_low_page();
-		paddr_last = phys_pmd_init(pmd, paddr, paddr_end,
-					   page_size_mask, prot, init);
+		phys_pmd_init(pmd, paddr, paddr_end, page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		pud_populate_init(&init_mm, pud, pmd, init);
@@ -682,23 +661,22 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 	}
 
 	update_page_count(PG_LEVEL_1G, pages);
-
-	return paddr_last;
 }
 
-static unsigned long __meminit
+static void __meminit
 phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 	      unsigned long page_size_mask, pgprot_t prot, bool init)
 {
-	unsigned long vaddr, vaddr_end, vaddr_next, paddr_next, paddr_last;
+	unsigned long vaddr, vaddr_end, vaddr_next, paddr_next;
 
-	paddr_last = paddr_end;
 	vaddr = (unsigned long)__va(paddr);
 	vaddr_end = (unsigned long)__va(paddr_end);
 
-	if (!pgtable_l5_enabled())
-		return phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
-				     page_size_mask, prot, init);
+	if (!pgtable_l5_enabled()) {
+		phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
+			      page_size_mask, prot, init);
+		return;
+	}
 
 	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
 		p4d_t *p4d = p4d_page + p4d_index(vaddr);
@@ -720,33 +698,30 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 
 		if (!p4d_none(*p4d)) {
 			pud = pud_offset(p4d, 0);
-			paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
-					page_size_mask, prot, init);
+			phys_pud_init(pud, paddr, __pa(vaddr_end),
+				      page_size_mask, prot, init);
 			continue;
 		}
 
 		pud = alloc_low_page();
-		paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
-					   page_size_mask, prot, init);
+		phys_pud_init(pud, paddr, __pa(vaddr_end),
+			      page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		p4d_populate_init(&init_mm, p4d, pud, init);
 		spin_unlock(&init_mm.page_table_lock);
 	}
-
-	return paddr_last;
 }
 
-static unsigned long __meminit
+static void __meminit
 __kernel_physical_mapping_init(unsigned long paddr_start,
 			       unsigned long paddr_end,
 			       unsigned long page_size_mask,
 			       pgprot_t prot, bool init)
 {
 	bool pgd_changed = false;
-	unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next, paddr_last;
+	unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next;
 
-	paddr_last = paddr_end;
 	vaddr = (unsigned long)__va(paddr_start);
 	vaddr_end = (unsigned long)__va(paddr_end);
 	vaddr_start = vaddr;
@@ -759,16 +734,14 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 
 		if (pgd_val(*pgd)) {
 			p4d = (p4d_t *)pgd_page_vaddr(*pgd);
-			paddr_last = phys_p4d_init(p4d, __pa(vaddr),
-						   __pa(vaddr_end),
-						   page_size_mask,
-						   prot, init);
+			phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+				      page_size_mask, prot, init);
 			continue;
 		}
 
 		p4d = alloc_low_page();
-		paddr_last = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
-					   page_size_mask, prot, init);
+		phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+			      page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		if (pgtable_l5_enabled())
@@ -783,8 +756,6 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 
 	if (pgd_changed)
 		sync_global_pgds(vaddr_start, vaddr_end - 1);
-
-	return paddr_last;
 }
 
 
@@ -792,15 +763,15 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
  * Create page table mapping for the physical memory for specific physical
  * addresses. Note that it can only be used to populate non-present entries.
  * The virtual and physical addresses have to be aligned on PMD level
- * down. It returns the last physical address mapped.
+ * down.
  */
-unsigned long __meminit
+void __meminit
 kernel_physical_mapping_init(unsigned long paddr_start,
 			     unsigned long paddr_end,
 			     unsigned long page_size_mask, pgprot_t prot)
 {
-	return __kernel_physical_mapping_init(paddr_start, paddr_end,
-					      page_size_mask, prot, true);
+	__kernel_physical_mapping_init(paddr_start, paddr_end,
+				       page_size_mask, prot, true);
 }
 
 /*
@@ -809,14 +780,13 @@ kernel_physical_mapping_init(unsigned long paddr_start,
  * when updating the mapping. The caller is responsible to flush the TLBs after
  * the function returns.
  */
-unsigned long __meminit
+void __meminit
 kernel_physical_mapping_change(unsigned long paddr_start,
 			       unsigned long paddr_end,
 			       unsigned long page_size_mask)
 {
-	return __kernel_physical_mapping_init(paddr_start, paddr_end,
-					      page_size_mask, PAGE_KERNEL,
-					      false);
+	__kernel_physical_mapping_init(paddr_start, paddr_end,
+				       page_size_mask, PAGE_KERNEL, false);
 }
 
 #ifndef CONFIG_NUMA
diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h
index 7c4a41235323b..dad8abe65ed03 100644
--- a/arch/x86/mm/mm_internal.h
+++ b/arch/x86/mm/mm_internal.h
@@ -10,13 +10,10 @@ static inline void *alloc_low_page(void)
 
 void early_ioremap_page_table_range_init(void);
 
-unsigned long kernel_physical_mapping_init(unsigned long start,
-					     unsigned long end,
-					     unsigned long page_size_mask,
-					     pgprot_t prot);
-unsigned long kernel_physical_mapping_change(unsigned long start,
-					     unsigned long end,
-					     unsigned long page_size_mask);
+void kernel_physical_mapping_init(unsigned long start, unsigned long end,
+				  unsigned long page_size_mask, pgprot_t prot);
+void kernel_physical_mapping_change(unsigned long start, unsigned long end,
+				    unsigned long page_size_mask);
 
 extern int after_bootmem;
 

-- 
2.51.2

