public inbox for stable@vger.kernel.org
* [PATCH 6.1 0/3]  arm64: Speed up boot with faster linear map creation
@ 2026-02-17 13:35 Ryan Roberts
  2026-02-17 13:35 ` [PATCH 6.1 1/3] arm64: mm: Don't remap pgtables per-cont(pte|pmd) block Ryan Roberts
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Ryan Roberts @ 2026-02-17 13:35 UTC (permalink / raw)
  To: stable
  Cc: Ryan Roberts, catalin.marinas, will, linux-arm-kernel,
	linux-kernel, Jack Aboutboul, Sharath George John, Noah Meyerhans,
	Jim Perrin

Hi All,

This series is a backport, applying to stable kernel 6.1 (base v6.1.163), of
some speed-ups that enable significantly faster booting on systems with a lot
of memory. The patches were originally posted at:

  https://lore.kernel.org/linux-arm-kernel/20240412131908.433043-1-ryan.roberts@arm.com/

... and were originally merged upstream in v6.10-rc1.

I'm requesting this be merged to stable on behalf of a partner who wants to get
the benefit of this series in Debian 12.

Thanks,
Ryan

Ryan Roberts (3):
  arm64: mm: Don't remap pgtables per-cont(pte|pmd) block
  arm64: mm: Batch dsb and isb when populating pgtables
  arm64: mm: Don't remap pgtables for allocate vs populate

 arch/arm64/include/asm/pgtable.h |  7 ++-
 arch/arm64/mm/mmu.c              | 92 ++++++++++++++++++--------------
 2 files changed, 57 insertions(+), 42 deletions(-)

--
2.43.0


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [PATCH 6.1 1/3] arm64: mm: Don't remap pgtables per-cont(pte|pmd) block
  2026-02-17 13:35 [PATCH 6.1 0/3] arm64: Speed up boot with faster linear map creation Ryan Roberts
@ 2026-02-17 13:35 ` Ryan Roberts
  2026-02-17 13:35 ` [PATCH 6.1 2/3] arm64: mm: Batch dsb and isb when populating pgtables Ryan Roberts
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Ryan Roberts @ 2026-02-17 13:35 UTC (permalink / raw)
  To: stable
  Cc: Ryan Roberts, catalin.marinas, will, linux-arm-kernel,
	linux-kernel, Jack Aboutboul, Sharath George John, Noah Meyerhans,
	Jim Perrin, Itaru Kitayama, Eric Chanudet, Mark Rutland,
	Ard Biesheuvel

[ Upstream commit 5c63db59c5f89925add57642be4f789d0d671ccd ]

A large part of the kernel boot time is creating the kernel linear map
page tables. When rodata=full, all memory is mapped by pte. And when
there is lots of physical ram, there are lots of pte tables to populate.
The primary cost associated with this is mapping and unmapping the pte
table memory in the fixmap; at unmap time, the TLB entry must be
invalidated and this is expensive.

Previously, each pmd and pte table was fixmapped/fixunmapped for each
cont(pte|pmd) block of mappings (16 entries with 4K granule). This means
we ended up issuing 32 TLBIs per (pmd|pte) table during the population
phase.

Let's fix that, and fixmap/fixunmap each page once per population, for a
saving of 31 TLBIs per (pmd|pte) table. This gives a significant boot
speedup.

Execution time of map_mem(), which creates the kernel linear map page
tables, was measured on different machines with different RAM configs:

               | Apple M2 VM | Ampere Altra| Ampere Altra| Ampere Altra
               | VM, 16G     | VM, 64G     | VM, 256G    | Metal, 512G
---------------|-------------|-------------|-------------|-------------
               |   ms    (%) |   ms    (%) |   ms    (%) |    ms    (%)
---------------|-------------|-------------|-------------|-------------
before         |  168   (0%) | 2198   (0%) | 8644   (0%) | 17447   (0%)
after          |   78 (-53%) |  435 (-80%) | 1723 (-80%) |  3779 (-78%)

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Tested-by: Itaru Kitayama <itaru.kitayama@fujitsu.com>
Tested-by: Eric Chanudet <echanude@redhat.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240412131908.433043-2-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
[ Ryan: Trivial backport ]
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/mm/mmu.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index e9288b28cb1e3..b193ea2c0a629 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -169,12 +169,9 @@ static bool pgattr_change_is_safe(u64 old, u64 new)
 	return ((old ^ new) & ~mask) == 0;
 }

-static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
+static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
 		     phys_addr_t phys, pgprot_t prot)
 {
-	pte_t *ptep;
-
-	ptep = pte_set_fixmap_offset(pmdp, addr);
 	do {
 		pte_t old_pte = READ_ONCE(*ptep);

@@ -189,8 +186,6 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,

 		phys += PAGE_SIZE;
 	} while (ptep++, addr += PAGE_SIZE, addr != end);
-
-	pte_clear_fixmap();
 }

 static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
@@ -201,6 +196,7 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 {
 	unsigned long next;
 	pmd_t pmd = READ_ONCE(*pmdp);
+	pte_t *ptep;

 	BUG_ON(pmd_sect(pmd));
 	if (pmd_none(pmd)) {
@@ -216,6 +212,7 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 	}
 	BUG_ON(pmd_bad(pmd));

+	ptep = pte_set_fixmap_offset(pmdp, addr);
 	do {
 		pgprot_t __prot = prot;

@@ -226,20 +223,21 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 		    (flags & NO_CONT_MAPPINGS) == 0)
 			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);

-		init_pte(pmdp, addr, next, phys, __prot);
+		init_pte(ptep, addr, next, phys, __prot);

+		ptep += pte_index(next) - pte_index(addr);
 		phys += next - addr;
 	} while (addr = next, addr != end);
+
+	pte_clear_fixmap();
 }

-static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
+static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
 		     phys_addr_t phys, pgprot_t prot,
 		     phys_addr_t (*pgtable_alloc)(int), int flags)
 {
 	unsigned long next;
-	pmd_t *pmdp;

-	pmdp = pmd_set_fixmap_offset(pudp, addr);
 	do {
 		pmd_t old_pmd = READ_ONCE(*pmdp);

@@ -265,8 +263,6 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
 		}
 		phys += next - addr;
 	} while (pmdp++, addr = next, addr != end);
-
-	pmd_clear_fixmap();
 }

 static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
@@ -276,6 +272,7 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 {
 	unsigned long next;
 	pud_t pud = READ_ONCE(*pudp);
+	pmd_t *pmdp;

 	/*
 	 * Check for initial section mappings in the pgd/pud.
@@ -294,6 +291,7 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 	}
 	BUG_ON(pud_bad(pud));

+	pmdp = pmd_set_fixmap_offset(pudp, addr);
 	do {
 		pgprot_t __prot = prot;

@@ -304,10 +302,13 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 		    (flags & NO_CONT_MAPPINGS) == 0)
 			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);

-		init_pmd(pudp, addr, next, phys, __prot, pgtable_alloc, flags);
+		init_pmd(pmdp, addr, next, phys, __prot, pgtable_alloc, flags);

+		pmdp += pmd_index(next) - pmd_index(addr);
 		phys += next - addr;
 	} while (addr = next, addr != end);
+
+	pmd_clear_fixmap();
 }

 static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
--
2.43.0



* [PATCH 6.1 2/3] arm64: mm: Batch dsb and isb when populating pgtables
  2026-02-17 13:35 [PATCH 6.1 0/3] arm64: Speed up boot with faster linear map creation Ryan Roberts
  2026-02-17 13:35 ` [PATCH 6.1 1/3] arm64: mm: Don't remap pgtables per-cont(pte|pmd) block Ryan Roberts
@ 2026-02-17 13:35 ` Ryan Roberts
  2026-02-17 13:35 ` [PATCH 6.1 3/3] arm64: mm: Don't remap pgtables for allocate vs populate Ryan Roberts
  2026-02-17 13:50 ` [PATCH 6.1 0/3] arm64: Speed up boot with faster linear map creation Greg KH
  3 siblings, 0 replies; 6+ messages in thread
From: Ryan Roberts @ 2026-02-17 13:35 UTC (permalink / raw)
  To: stable
  Cc: Ryan Roberts, catalin.marinas, will, linux-arm-kernel,
	linux-kernel, Jack Aboutboul, Sharath George John, Noah Meyerhans,
	Jim Perrin, Itaru Kitayama, Eric Chanudet, Mark Rutland,
	Ard Biesheuvel

[ Upstream commit 1fcb7cea8a5f7747e02230f816c2c80b060d9517 ]

After removing unnecessary TLBIs, the next bottleneck when creating the
page tables for the linear map is DSB and ISB, which were previously
issued per-pte in __set_pte(). Since we are writing multiple ptes in a
given pte table, we can elide these barriers and insert them once we
have finished writing to the table.

Execution time of map_mem(), which creates the kernel linear map page
tables, was measured on different machines with different RAM configs:

               | Apple M2 VM | Ampere Altra| Ampere Altra| Ampere Altra
               | VM, 16G     | VM, 64G     | VM, 256G    | Metal, 512G
---------------|-------------|-------------|-------------|-------------
               |   ms    (%) |   ms    (%) |   ms    (%) |    ms    (%)
---------------|-------------|-------------|-------------|-------------
before         |   78   (0%) |  435   (0%) | 1723   (0%) |  3779   (0%)
after          |   11 (-86%) |  161 (-63%) |  656 (-62%) |  1654 (-56%)

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Tested-by: Itaru Kitayama <itaru.kitayama@fujitsu.com>
Tested-by: Eric Chanudet <echanude@redhat.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240412131908.433043-3-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
[ Ryan: Trivial backport ]
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/pgtable.h |  7 ++++++-
 arch/arm64/mm/mmu.c              | 11 ++++++++++-
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 62326f249aa71..3ea0c9768c4c9 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -261,9 +261,14 @@ static inline pte_t pte_mkdevmap(pte_t pte)
 	return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
 }

-static inline void set_pte(pte_t *ptep, pte_t pte)
+static inline void set_pte_nosync(pte_t *ptep, pte_t pte)
 {
 	WRITE_ONCE(*ptep, pte);
+}
+
+static inline void set_pte(pte_t *ptep, pte_t pte)
+{
+	set_pte_nosync(ptep, pte);

 	/*
 	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b193ea2c0a629..ca06b5e131e0f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -175,7 +175,11 @@ static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
 	do {
 		pte_t old_pte = READ_ONCE(*ptep);

-		set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
+		/*
+		 * Required barriers to make this visible to the table walker
+		 * are deferred to the end of alloc_init_cont_pte().
+		 */
+		set_pte_nosync(ptep, pfn_pte(__phys_to_pfn(phys), prot));

 		/*
 		 * After the PTE entry has been populated once, we
@@ -229,6 +233,11 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 		phys += next - addr;
 	} while (addr = next, addr != end);

+	/*
+	 * Note: barriers and maintenance necessary to clear the fixmap slot
+	 * ensure that all previous pgtable writes are visible to the table
+	 * walker.
+	 */
 	pte_clear_fixmap();
 }

--
2.43.0



* [PATCH 6.1 3/3] arm64: mm: Don't remap pgtables for allocate vs populate
  2026-02-17 13:35 [PATCH 6.1 0/3] arm64: Speed up boot with faster linear map creation Ryan Roberts
  2026-02-17 13:35 ` [PATCH 6.1 1/3] arm64: mm: Don't remap pgtables per-cont(pte|pmd) block Ryan Roberts
  2026-02-17 13:35 ` [PATCH 6.1 2/3] arm64: mm: Batch dsb and isb when populating pgtables Ryan Roberts
@ 2026-02-17 13:35 ` Ryan Roberts
  2026-02-17 13:50 ` [PATCH 6.1 0/3] arm64: Speed up boot with faster linear map creation Greg KH
  3 siblings, 0 replies; 6+ messages in thread
From: Ryan Roberts @ 2026-02-17 13:35 UTC (permalink / raw)
  To: stable
  Cc: Ryan Roberts, catalin.marinas, will, linux-arm-kernel,
	linux-kernel, Jack Aboutboul, Sharath George John, Noah Meyerhans,
	Jim Perrin, Mark Rutland, Itaru Kitayama, Eric Chanudet,
	Ard Biesheuvel

[ Upstream commit 0e9df1c905d8293d333ace86c13d147382f5caf9 ]

During linear map pgtable creation, each pgtable is fixmapped /
fixunmapped twice; once during allocation to zero the memory, and
again during population to write the entries. This means each table has
2 TLB invalidations issued against it. Let's fix this so that each table
is only fixmapped/fixunmapped once, halving the number of TLBIs, and
improving performance.

Achieve this by separating allocation and initialization (zeroing) of
the page. The allocated page is now fixmapped directly by the walker and
initialized, before being populated and finally fixunmapped.

This approach keeps the change small, but has the side effect that late
allocations (using __get_free_page()) must also go through the generic
memory clearing routine. So let's tell __get_free_page() not to zero the
memory to avoid duplication.

Additionally this approach means that fixmap/fixunmap is still used for
late pgtable modifications. That's not technically needed since the
memory is all mapped in the linear map by that point. That's left as a
possible future optimization if found to be needed.

Execution time of map_mem(), which creates the kernel linear map page
tables, was measured on different machines with different RAM configs:

               | Apple M2 VM | Ampere Altra| Ampere Altra| Ampere Altra
               | VM, 16G     | VM, 64G     | VM, 256G    | Metal, 512G
---------------|-------------|-------------|-------------|-------------
               |   ms    (%) |   ms    (%) |   ms    (%) |    ms    (%)
---------------|-------------|-------------|-------------|-------------
before         |   11   (0%) |  161   (0%) |  656   (0%) |  1654   (0%)
after          |   10 (-11%) |  104 (-35%) |  438 (-33%) |  1223 (-26%)

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Itaru Kitayama <itaru.kitayama@fujitsu.com>
Tested-by: Eric Chanudet <echanude@redhat.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240412131908.433043-4-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
[ Ryan: Trivial backport ]
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/mm/mmu.c | 58 ++++++++++++++++++++++-----------------------
 1 file changed, 29 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ca06b5e131e0f..ca0bf180082d3 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -110,28 +110,12 @@ EXPORT_SYMBOL(phys_mem_access_prot);
 static phys_addr_t __init early_pgtable_alloc(int shift)
 {
 	phys_addr_t phys;
-	void *ptr;

 	phys = memblock_phys_alloc_range(PAGE_SIZE, PAGE_SIZE, 0,
 					 MEMBLOCK_ALLOC_NOLEAKTRACE);
 	if (!phys)
 		panic("Failed to allocate page table page\n");

-	/*
-	 * The FIX_{PGD,PUD,PMD} slots may be in active use, but the FIX_PTE
-	 * slot will be free, so we can (ab)use the FIX_PTE slot to initialise
-	 * any level of table.
-	 */
-	ptr = pte_set_fixmap(phys);
-
-	memset(ptr, 0, PAGE_SIZE);
-
-	/*
-	 * Implicit barriers also ensure the zeroed page is visible to the page
-	 * table walker
-	 */
-	pte_clear_fixmap();
-
 	return phys;
 }

@@ -169,6 +153,14 @@ static bool pgattr_change_is_safe(u64 old, u64 new)
 	return ((old ^ new) & ~mask) == 0;
 }

+static void init_clear_pgtable(void *table)
+{
+	clear_page(table);
+
+	/* Ensure the zeroing is observed by page table walks. */
+	dsb(ishst);
+}
+
 static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
 		     phys_addr_t phys, pgprot_t prot)
 {
@@ -211,12 +203,15 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 			pmdval |= PMD_TABLE_PXN;
 		BUG_ON(!pgtable_alloc);
 		pte_phys = pgtable_alloc(PAGE_SHIFT);
+		ptep = pte_set_fixmap(pte_phys);
+		init_clear_pgtable(ptep);
+		ptep += pte_index(addr);
 		__pmd_populate(pmdp, pte_phys, pmdval);
-		pmd = READ_ONCE(*pmdp);
+	} else {
+		BUG_ON(pmd_bad(pmd));
+		ptep = pte_set_fixmap_offset(pmdp, addr);
 	}
-	BUG_ON(pmd_bad(pmd));

-	ptep = pte_set_fixmap_offset(pmdp, addr);
 	do {
 		pgprot_t __prot = prot;

@@ -295,12 +290,15 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 			pudval |= PUD_TABLE_PXN;
 		BUG_ON(!pgtable_alloc);
 		pmd_phys = pgtable_alloc(PMD_SHIFT);
+		pmdp = pmd_set_fixmap(pmd_phys);
+		init_clear_pgtable(pmdp);
+		pmdp += pmd_index(addr);
 		__pud_populate(pudp, pmd_phys, pudval);
-		pud = READ_ONCE(*pudp);
+	} else {
+		BUG_ON(pud_bad(pud));
+		pmdp = pmd_set_fixmap_offset(pudp, addr);
 	}
-	BUG_ON(pud_bad(pud));

-	pmdp = pmd_set_fixmap_offset(pudp, addr);
 	do {
 		pgprot_t __prot = prot;

@@ -338,12 +336,15 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
 			p4dval |= P4D_TABLE_PXN;
 		BUG_ON(!pgtable_alloc);
 		pud_phys = pgtable_alloc(PUD_SHIFT);
+		pudp = pud_set_fixmap(pud_phys);
+		init_clear_pgtable(pudp);
+		pudp += pud_index(addr);
 		__p4d_populate(p4dp, pud_phys, p4dval);
-		p4d = READ_ONCE(*p4dp);
+	} else {
+		BUG_ON(p4d_bad(p4d));
+		pudp = pud_set_fixmap_offset(p4dp, addr);
 	}
-	BUG_ON(p4d_bad(p4d));

-	pudp = pud_set_fixmap_offset(p4dp, addr);
 	do {
 		pud_t old_pud = READ_ONCE(*pudp);

@@ -425,11 +426,10 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,

 static phys_addr_t __pgd_pgtable_alloc(int shift)
 {
-	void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL);
-	BUG_ON(!ptr);
+	/* Page is zeroed by init_clear_pgtable() so don't duplicate effort. */
+	void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL & ~__GFP_ZERO);

-	/* Ensure the zeroed page is visible to the page table walker */
-	dsb(ishst);
+	BUG_ON(!ptr);
 	return __pa(ptr);
 }

--
2.43.0



* Re: [PATCH 6.1 0/3]  arm64: Speed up boot with faster linear map creation
  2026-02-17 13:35 [PATCH 6.1 0/3] arm64: Speed up boot with faster linear map creation Ryan Roberts
                   ` (2 preceding siblings ...)
  2026-02-17 13:35 ` [PATCH 6.1 3/3] arm64: mm: Don't remap pgtables for allocate vs populate Ryan Roberts
@ 2026-02-17 13:50 ` Greg KH
  2026-03-19 13:54   ` Greg KH
  3 siblings, 1 reply; 6+ messages in thread
From: Greg KH @ 2026-02-17 13:50 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: stable, catalin.marinas, will, linux-arm-kernel, linux-kernel,
	Jack Aboutboul, Sharath George John, Noah Meyerhans, Jim Perrin

On Tue, Feb 17, 2026 at 01:35:21PM +0000, Ryan Roberts wrote:
> Hi All,
> 
> This series is a backport, applying to stable kernel 6.1 (base v6.1.163), of
> some speed-ups that enable significantly faster booting on systems with a lot
> of memory. The patches were originally posted at:
> 
>   https://lore.kernel.org/linux-arm-kernel/20240412131908.433043-1-ryan.roberts@arm.com/
> 
> ... and were originally merged upstream in v6.10-rc1.
> 
> I'm requesting this be merged to stable on behalf of a partner who wants to get
> the benefit of this series in Debian 12.

Same here, why not just use 6.12.y?

thanks,

greg k-h


* Re: [PATCH 6.1 0/3]  arm64: Speed up boot with faster linear map creation
  2026-02-17 13:50 ` [PATCH 6.1 0/3] arm64: Speed up boot with faster linear map creation Greg KH
@ 2026-03-19 13:54   ` Greg KH
  0 siblings, 0 replies; 6+ messages in thread
From: Greg KH @ 2026-03-19 13:54 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: stable, catalin.marinas, will, linux-arm-kernel, linux-kernel,
	Jack Aboutboul, Sharath George John, Noah Meyerhans, Jim Perrin

On Tue, Feb 17, 2026 at 02:50:28PM +0100, Greg KH wrote:
> On Tue, Feb 17, 2026 at 01:35:21PM +0000, Ryan Roberts wrote:
> > Hi All,
> > 
> > This series is a backport, applying to stable kernel 6.1 (base v6.1.163), of
> > some speed-ups that enable significantly faster booting on systems with a lot
> > of memory. The patches were originally posted at:
> > 
> >   https://lore.kernel.org/linux-arm-kernel/20240412131908.433043-1-ryan.roberts@arm.com/
> > 
> > ... and were originally merged upstream in v6.10-rc1.
> > 
> > I'm requesting this be merged to stable on behalf of a partner who wants to get
> > the benefit of this series in Debian 12.
> 
> Same here, why not just use 6.12.y?

Ok, I'll take the 6.6.y patches, but for 6.1.y, people should _REALLY_
move off of it if they are using these types of systems as there are
loads of other things/fixes that they will get if they move.

thanks,

greg k-h

