public inbox for linux-arm-kernel@lists.infradead.org
* [PATCH 0/4] arm64: Unmap linear alias of kernel data/bss
@ 2026-01-19 16:47 Ard Biesheuvel
  2026-01-19 16:47 ` [PATCH 1/4] arm64: Move fixmap page tables to end of kernel image Ard Biesheuvel
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Ard Biesheuvel @ 2026-01-19 16:47 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
	Ard Biesheuvel, Ryan Roberts, Liz Prucka, Seth Jenkins, Kees Cook,
	linux-hardening

From: Ard Biesheuvel <ardb@kernel.org>

One reason the lack of linear map randomization on arm64 is considered
problematic is that bootloaders adhering to the original arm64 boot
protocol may place the kernel at the base of DRAM, and therefore at the
base of the non-randomized linear map. This puts a writable alias of the
kernel's data and bss regions at a predictable location, removing the
need for an attacker to guess where KASLR mapped the kernel.

Let's unmap this linear, writable alias entirely, so that knowing the
location of the linear alias does not give write access to the kernel's
data and bss regions.

Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Liz Prucka <lizprucka@google.com>
Cc: Seth Jenkins <sethjenkins@google.com>
Cc: Kees Cook <kees@kernel.org>
Cc: linux-hardening@vger.kernel.org

Ard Biesheuvel (4):
  arm64: Move fixmap page tables to end of kernel image
  arm64: Map the kernel data/bss read-only in the linear map
  arm64: Move the zero page to rodata
  arm64: Unmap kernel data/bss entirely from the linear map

 arch/arm64/include/asm/mmu.h    |  2 +-
 arch/arm64/kernel/smp.c         |  2 +-
 arch/arm64/kernel/vmlinux.lds.S |  5 +++
 arch/arm64/mm/fixmap.c          |  7 +--
 arch/arm64/mm/mmu.c             | 46 ++++++++++++++++++--
 5 files changed, 54 insertions(+), 8 deletions(-)

-- 
2.52.0.457.g6b5491de43-goog



^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/4] arm64: Move fixmap page tables to end of kernel image
  2026-01-19 16:47 [PATCH 0/4] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
@ 2026-01-19 16:47 ` Ard Biesheuvel
  2026-01-23  6:12   ` Anshuman Khandual
  2026-01-19 16:47 ` [PATCH 2/4] arm64: Map the kernel data/bss read-only in the linear map Ard Biesheuvel
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 12+ messages in thread
From: Ard Biesheuvel @ 2026-01-19 16:47 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
	Ard Biesheuvel, Ryan Roberts, Liz Prucka, Seth Jenkins, Kees Cook,
	linux-hardening

From: Ard Biesheuvel <ardb@kernel.org>

Move the fixmap page tables out of the BSS section, and place them at
the end of the image, right before the init_pg_dir section where some of
the other statically allocated page tables live.

These page tables are currently the only data objects in vmlinux that
are meant to be accessed via the kernel image's linear alias, and so
placing them together allows the remainder of the data/bss section to be
remapped read-only or unmapped entirely.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/vmlinux.lds.S | 5 +++++
 arch/arm64/mm/fixmap.c          | 7 ++++---
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index ad6133b89e7a..df530e6f3e53 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -334,6 +334,11 @@ SECTIONS
 	__pi___bss_start = __bss_start;
 
 	. = ALIGN(PAGE_SIZE);
+	.pgdir : {
+		__pgdir_start = .;
+		*(.fixmap_bss)
+	}
+
 	__pi_init_pg_dir = .;
 	. += INIT_DIR_SIZE;
 	__pi_init_pg_end = .;
diff --git a/arch/arm64/mm/fixmap.c b/arch/arm64/mm/fixmap.c
index c5c5425791da..b649ea1a46e4 100644
--- a/arch/arm64/mm/fixmap.c
+++ b/arch/arm64/mm/fixmap.c
@@ -31,9 +31,10 @@ static_assert(NR_BM_PMD_TABLES == 1);
 
 #define BM_PTE_TABLE_IDX(addr)	__BM_TABLE_IDX(addr, PMD_SHIFT)
 
-static pte_t bm_pte[NR_BM_PTE_TABLES][PTRS_PER_PTE] __page_aligned_bss;
-static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss __maybe_unused;
-static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused;
+#define __fixmap_bss	__section(".fixmap_bss") __aligned(PAGE_SIZE)
+static pte_t bm_pte[NR_BM_PTE_TABLES][PTRS_PER_PTE] __fixmap_bss;
+static pmd_t bm_pmd[PTRS_PER_PMD] __fixmap_bss __maybe_unused;
+static pud_t bm_pud[PTRS_PER_PUD] __fixmap_bss __maybe_unused;
 
 static inline pte_t *fixmap_pte(unsigned long addr)
 {
-- 
2.52.0.457.g6b5491de43-goog




* [PATCH 2/4] arm64: Map the kernel data/bss read-only in the linear map
  2026-01-19 16:47 [PATCH 0/4] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
  2026-01-19 16:47 ` [PATCH 1/4] arm64: Move fixmap page tables to end of kernel image Ard Biesheuvel
@ 2026-01-19 16:47 ` Ard Biesheuvel
  2026-01-23  6:32   ` Anshuman Khandual
  2026-01-19 16:47 ` [PATCH 3/4] arm64: Move the zero page to rodata Ard Biesheuvel
  2026-01-19 16:47 ` [PATCH 4/4] arm64: Unmap kernel data/bss entirely from the linear map Ard Biesheuvel
  3 siblings, 1 reply; 12+ messages in thread
From: Ard Biesheuvel @ 2026-01-19 16:47 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
	Ard Biesheuvel, Ryan Roberts, Liz Prucka, Seth Jenkins, Kees Cook,
	linux-hardening

From: Ard Biesheuvel <ardb@kernel.org>

On systems where the bootloader adheres to the original arm64 boot
protocol, the placement of the kernel in the physical address space is
highly predictable, and this makes the placement of its linear alias in
the kernel virtual address space equally predictable, given the lack of
randomization of the linear map.

The linear aliases of the kernel text and rodata regions are already
mapped read-only, but the kernel data and bss are still mapped
read-write in this region. This is not needed, so map them read-only as
well.

Note that the statically allocated kernel page tables do need to be
modifiable via the linear map, so leave these mapped read-write.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/mmu.h |  2 +-
 arch/arm64/kernel/smp.c      |  2 +-
 arch/arm64/mm/mmu.c          | 14 ++++++++++++--
 3 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 137a173df1ff..8b64d2fcb228 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -77,7 +77,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 			       unsigned long virt, phys_addr_t size,
 			       pgprot_t prot, bool page_mappings_only);
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
-extern void mark_linear_text_alias_ro(void);
+extern void remap_linear_kernel_alias(void);
 extern int split_kernel_leaf_mapping(unsigned long start, unsigned long end);
 extern void linear_map_maybe_split_to_ptes(void);
 
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 1aa324104afb..b5f888ab5d17 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -441,7 +441,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
 	hyp_mode_check();
 	setup_system_features();
 	setup_user_features();
-	mark_linear_text_alias_ro();
+	remap_linear_kernel_alias();
 }
 
 void __init smp_prepare_boot_cpu(void)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 8e1d80a7033e..2a18637ecc15 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1023,14 +1023,24 @@ static void __init __map_memblock(pgd_t *pgdp, phys_addr_t start,
 				 prot, early_pgtable_alloc, flags);
 }
 
-void __init mark_linear_text_alias_ro(void)
+static void remap_linear_data_alias(void)
+{
+	extern const u8 __pgdir_start[];
+
+	update_mapping_prot(__pa_symbol(__init_end), (unsigned long)lm_alias(__init_end),
+			    (unsigned long)__pgdir_start - (unsigned long)__init_end,
+			    PAGE_KERNEL_RO);
+}
+
+void __init remap_linear_kernel_alias(void)
 {
 	/*
-	 * Remove the write permissions from the linear alias of .text/.rodata
+	 * Remove the write permissions from the linear alias of the kernel
 	 */
 	update_mapping_prot(__pa_symbol(_text), (unsigned long)lm_alias(_text),
 			    (unsigned long)__init_begin - (unsigned long)_text,
 			    PAGE_KERNEL_RO);
+	remap_linear_data_alias();
 }
 
 #ifdef CONFIG_KFENCE
-- 
2.52.0.457.g6b5491de43-goog




* [PATCH 3/4] arm64: Move the zero page to rodata
  2026-01-19 16:47 [PATCH 0/4] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
  2026-01-19 16:47 ` [PATCH 1/4] arm64: Move fixmap page tables to end of kernel image Ard Biesheuvel
  2026-01-19 16:47 ` [PATCH 2/4] arm64: Map the kernel data/bss read-only in the linear map Ard Biesheuvel
@ 2026-01-19 16:47 ` Ard Biesheuvel
  2026-01-23  6:43   ` Anshuman Khandual
  2026-01-19 16:47 ` [PATCH 4/4] arm64: Unmap kernel data/bss entirely from the linear map Ard Biesheuvel
  3 siblings, 1 reply; 12+ messages in thread
From: Ard Biesheuvel @ 2026-01-19 16:47 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
	Ard Biesheuvel, Ryan Roberts, Liz Prucka, Seth Jenkins, Kees Cook,
	linux-hardening

From: Ard Biesheuvel <ardb@kernel.org>

The zero page should contain only zero bytes, and so mapping it
read-write is unnecessary. Move it to __ro_after_init instead.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/mm/mmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 2a18637ecc15..d978b07ab7b8 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -68,7 +68,8 @@ long __section(".mmuoff.data.write") __early_cpu_boot_status;
  * Empty_zero_page is a special page that is used for zero-initialized data
  * and COW.
  */
-unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
+unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
+					__ro_after_init __aligned(PAGE_SIZE);
 EXPORT_SYMBOL(empty_zero_page);
 
 static DEFINE_SPINLOCK(swapper_pgdir_lock);
-- 
2.52.0.457.g6b5491de43-goog




* [PATCH 4/4] arm64: Unmap kernel data/bss entirely from the linear map
  2026-01-19 16:47 [PATCH 0/4] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
                   ` (2 preceding siblings ...)
  2026-01-19 16:47 ` [PATCH 3/4] arm64: Move the zero page to rodata Ard Biesheuvel
@ 2026-01-19 16:47 ` Ard Biesheuvel
  2026-01-23  6:52   ` Anshuman Khandual
  3 siblings, 1 reply; 12+ messages in thread
From: Ard Biesheuvel @ 2026-01-19 16:47 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
	Ard Biesheuvel, Ryan Roberts, Liz Prucka, Seth Jenkins, Kees Cook,
	linux-hardening

From: Ard Biesheuvel <ardb@kernel.org>

The linear aliases of the kernel text and rodata are mapped read-only as
well. Given that the contents of these regions are mostly identical to
the version in the loadable image, mapping them read-only is a
reasonable hardening measure.

Data and bss, however, are now also mapped read-only but the contents of
these regions are more likely to contain data that we'd rather not leak.
So let's unmap these entirely in the linear map when the kernel is
running normally.

Only when going into hibernation or waking up from it do these regions
need to be mapped, so take care of this using a PM notifier.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/mm/mmu.c | 35 ++++++++++++++++++--
 1 file changed, 32 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index d978b07ab7b8..7b3ce9cafe64 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -24,6 +24,7 @@
 #include <linux/mm.h>
 #include <linux/vmalloc.h>
 #include <linux/set_memory.h>
+#include <linux/suspend.h>
 #include <linux/kfence.h>
 #include <linux/pkeys.h>
 #include <linux/mm_inline.h>
@@ -1024,13 +1025,13 @@ static void __init __map_memblock(pgd_t *pgdp, phys_addr_t start,
 				 prot, early_pgtable_alloc, flags);
 }
 
-static void remap_linear_data_alias(void)
+static void remap_linear_data_alias(bool unmap)
 {
 	extern const u8 __pgdir_start[];
 
 	update_mapping_prot(__pa_symbol(__init_end), (unsigned long)lm_alias(__init_end),
 			    (unsigned long)__pgdir_start - (unsigned long)__init_end,
-			    PAGE_KERNEL_RO);
+			    unmap ? __pgprot(0) : PAGE_KERNEL_RO);
 }
 
 void __init remap_linear_kernel_alias(void)
@@ -1041,7 +1042,7 @@ void __init remap_linear_kernel_alias(void)
 	update_mapping_prot(__pa_symbol(_text), (unsigned long)lm_alias(_text),
 			    (unsigned long)__init_begin - (unsigned long)_text,
 			    PAGE_KERNEL_RO);
-	remap_linear_data_alias();
+	remap_linear_data_alias(true);
 }
 
 #ifdef CONFIG_KFENCE
@@ -2257,3 +2258,31 @@ int arch_set_user_pkey_access(struct task_struct *tsk, int pkey, unsigned long i
 	return 0;
 }
 #endif
+
+#ifdef CONFIG_HIBERNATION
+static int arm64_hibernate_pm_notify(struct notifier_block *nb,
+				     unsigned long mode, void *unused)
+{
+	switch (mode) {
+	case PM_HIBERNATION_PREPARE:
+	case PM_RESTORE_PREPARE:
+		remap_linear_data_alias(false);
+		break;
+	case PM_POST_HIBERNATION:
+	case PM_POST_RESTORE:
+		remap_linear_data_alias(true);
+		break;
+	}
+	return 0;
+}
+
+static struct notifier_block arm64_hibernate_pm_notifier = {
+	.notifier_call = arm64_hibernate_pm_notify,
+};
+
+static int arm64_hibernate_register_pm_notifier(void)
+{
+	return register_pm_notifier(&arm64_hibernate_pm_notifier);
+}
+late_initcall(arm64_hibernate_register_pm_notifier);
+#endif
-- 
2.52.0.457.g6b5491de43-goog




* Re: [PATCH 1/4] arm64: Move fixmap page tables to end of kernel image
  2026-01-19 16:47 ` [PATCH 1/4] arm64: Move fixmap page tables to end of kernel image Ard Biesheuvel
@ 2026-01-23  6:12   ` Anshuman Khandual
  2026-01-23  6:43     ` Ard Biesheuvel
  0 siblings, 1 reply; 12+ messages in thread
From: Anshuman Khandual @ 2026-01-23  6:12 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-kernel
  Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
	Ard Biesheuvel, Ryan Roberts, Liz Prucka, Seth Jenkins, Kees Cook,
	linux-hardening



On 19/01/26 10:17 PM, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> Move the fixmap page tables out of the BSS section, and place them at
> the end of the image, right before the init_pg_dir section where some of
> the other statically allocated page tables live.
> 
> These page tables are currently the only data objects in vmlinux that
> are meant to be accessed via the kernel image's linear alias, and so
> placing them together allows the remainder of the data/bss section to be
> remapped read-only or unmapped entirely.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/kernel/vmlinux.lds.S | 5 +++++
>  arch/arm64/mm/fixmap.c          | 7 ++++---
>  2 files changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
> index ad6133b89e7a..df530e6f3e53 100644
> --- a/arch/arm64/kernel/vmlinux.lds.S
> +++ b/arch/arm64/kernel/vmlinux.lds.S
> @@ -334,6 +334,11 @@ SECTIONS
>  	__pi___bss_start = __bss_start;
>  
>  	. = ALIGN(PAGE_SIZE);
> +	.pgdir : {
> +		__pgdir_start = .;

Does __fixmap_pgdir_start with an end marker __fixmap_pgdir_end sound better?

> +		*(.fixmap_bss)
> +	}
> +
>  	__pi_init_pg_dir = .;
>  	. += INIT_DIR_SIZE;
>  	__pi_init_pg_end = .;
> diff --git a/arch/arm64/mm/fixmap.c b/arch/arm64/mm/fixmap.c
> index c5c5425791da..b649ea1a46e4 100644
> --- a/arch/arm64/mm/fixmap.c
> +++ b/arch/arm64/mm/fixmap.c
> @@ -31,9 +31,10 @@ static_assert(NR_BM_PMD_TABLES == 1);
>  
>  #define BM_PTE_TABLE_IDX(addr)	__BM_TABLE_IDX(addr, PMD_SHIFT)
>  
> -static pte_t bm_pte[NR_BM_PTE_TABLES][PTRS_PER_PTE] __page_aligned_bss;
> -static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss __maybe_unused;
> -static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused;
> +#define __fixmap_bss	__section(".fixmap_bss") __aligned(PAGE_SIZE)
> +static pte_t bm_pte[NR_BM_PTE_TABLES][PTRS_PER_PTE] __fixmap_bss;
> +static pmd_t bm_pmd[PTRS_PER_PMD] __fixmap_bss __maybe_unused;
> +static pud_t bm_pud[PTRS_PER_PUD] __fixmap_bss __maybe_unused;
>  
>  static inline pte_t *fixmap_pte(unsigned long addr)
>  {




* Re: [PATCH 2/4] arm64: Map the kernel data/bss read-only in the linear map
  2026-01-19 16:47 ` [PATCH 2/4] arm64: Map the kernel data/bss read-only in the linear map Ard Biesheuvel
@ 2026-01-23  6:32   ` Anshuman Khandual
  0 siblings, 0 replies; 12+ messages in thread
From: Anshuman Khandual @ 2026-01-23  6:32 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-kernel
  Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
	Ard Biesheuvel, Ryan Roberts, Liz Prucka, Seth Jenkins, Kees Cook,
	linux-hardening


On 19/01/26 10:17 PM, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> On systems where the bootloader adheres to the original arm64 boot
> protocol, the placement of the kernel in the physical address space is
> highly predictable, and this makes the placement of its linear alias in
> the kernel virtual address space equally predictable, given the lack of
> randomization of the linear map.
> 
> The linear aliases of the kernel text and rodata regions are already
> mapped read-only, but the kernel data and bss are still mapped
> read-write in this region. This is not needed, so map them read-only as
> well.
> 
> Note that the statically allocated kernel page tables do need to be
> modifiable via the linear map, so leave these mapped read-write.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/include/asm/mmu.h |  2 +-
>  arch/arm64/kernel/smp.c      |  2 +-
>  arch/arm64/mm/mmu.c          | 14 ++++++++++++--
>  3 files changed, 14 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 137a173df1ff..8b64d2fcb228 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -77,7 +77,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>  			       unsigned long virt, phys_addr_t size,
>  			       pgprot_t prot, bool page_mappings_only);
>  extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
> -extern void mark_linear_text_alias_ro(void);
> +extern void remap_linear_kernel_alias(void);
>  extern int split_kernel_leaf_mapping(unsigned long start, unsigned long end);
>  extern void linear_map_maybe_split_to_ptes(void);
>  
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index 1aa324104afb..b5f888ab5d17 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -441,7 +441,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
>  	hyp_mode_check();
>  	setup_system_features();
>  	setup_user_features();
> -	mark_linear_text_alias_ro();
> +	remap_linear_kernel_alias();

Both the kernel text/rodata and now the data/bss segments are mapped via
PAGE_KERNEL_RO. Hence the new name should still end in __ro as well.

>  }
>  
>  void __init smp_prepare_boot_cpu(void)
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 8e1d80a7033e..2a18637ecc15 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1023,14 +1023,24 @@ static void __init __map_memblock(pgd_t *pgdp, phys_addr_t start,
>  				 prot, early_pgtable_alloc, flags);
>  }
>  
> -void __init mark_linear_text_alias_ro(void)
> +static void remap_linear_data_alias(void)
> +{
> +	extern const u8 __pgdir_start[];
> +
> +	update_mapping_prot(__pa_symbol(__init_end), (unsigned long)lm_alias(__init_end),
> +			    (unsigned long)__pgdir_start - (unsigned long)__init_end,
> +			    PAGE_KERNEL_RO);
> +}

The data/bss segment mapping update is split into its own helper here
because it gets used later in the series. It would probably be helpful
to mention that in the commit message.

> +
> +void __init remap_linear_kernel_alias(void)
>  {
>  	/*
> -	 * Remove the write permissions from the linear alias of .text/.rodata
> +	 * Remove the write permissions from the linear alias of the kernel
>  	 */
>  	update_mapping_prot(__pa_symbol(_text), (unsigned long)lm_alias(_text),
>  			    (unsigned long)__init_begin - (unsigned long)_text,
>  			    PAGE_KERNEL_RO);
> +	remap_linear_data_alias();
>  }
>  
>  #ifdef CONFIG_KFENCE




* Re: [PATCH 1/4] arm64: Move fixmap page tables to end of kernel image
  2026-01-23  6:12   ` Anshuman Khandual
@ 2026-01-23  6:43     ` Ard Biesheuvel
  0 siblings, 0 replies; 12+ messages in thread
From: Ard Biesheuvel @ 2026-01-23  6:43 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Ard Biesheuvel, linux-kernel, linux-arm-kernel, will,
	catalin.marinas, mark.rutland, Ryan Roberts, Liz Prucka,
	Seth Jenkins, Kees Cook, linux-hardening

Hello Anshuman,

On Fri, 23 Jan 2026 at 07:12, Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
>
>
> On 19/01/26 10:17 PM, Ard Biesheuvel wrote:
> > From: Ard Biesheuvel <ardb@kernel.org>
> >
> > Move the fixmap page tables out of the BSS section, and place them at
> > the end of the image, right before the init_pg_dir section where some of
> > the other statically allocated page tables live.
> >
> > These page tables are currently the only data objects in vmlinux that
> > are meant to be accessed via the kernel image's linear alias, and so
> > placing them together allows the remainder of the data/bss section to be
> > remapped read-only or unmapped entirely.
> >
> > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > ---
> >  arch/arm64/kernel/vmlinux.lds.S | 5 +++++
> >  arch/arm64/mm/fixmap.c          | 7 ++++---
> >  2 files changed, 9 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
> > index ad6133b89e7a..df530e6f3e53 100644
> > --- a/arch/arm64/kernel/vmlinux.lds.S
> > +++ b/arch/arm64/kernel/vmlinux.lds.S
> > @@ -334,6 +334,11 @@ SECTIONS
> >       __pi___bss_start = __bss_start;
> >
> >       . = ALIGN(PAGE_SIZE);
> > +     .pgdir : {
> > +             __pgdir_start = .;
>
> Does __fixmap_pgdir_start with an end marker __fixmap_pgdir_end sound better?
>

No. __pgdir_start covers the init_pg_dir as well. And I don't think we
should be adding end markers that we do not intend to use.

> > +             *(.fixmap_bss)
> > +     }
> > +
> >       __pi_init_pg_dir = .;
> >       . += INIT_DIR_SIZE;
> >       __pi_init_pg_end = .;
> > diff --git a/arch/arm64/mm/fixmap.c b/arch/arm64/mm/fixmap.c
> > index c5c5425791da..b649ea1a46e4 100644
> > --- a/arch/arm64/mm/fixmap.c
> > +++ b/arch/arm64/mm/fixmap.c
> > @@ -31,9 +31,10 @@ static_assert(NR_BM_PMD_TABLES == 1);
> >
> >  #define BM_PTE_TABLE_IDX(addr)       __BM_TABLE_IDX(addr, PMD_SHIFT)
> >
> > -static pte_t bm_pte[NR_BM_PTE_TABLES][PTRS_PER_PTE] __page_aligned_bss;
> > -static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss __maybe_unused;
> > -static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused;
> > +#define __fixmap_bss __section(".fixmap_bss") __aligned(PAGE_SIZE)
> > +static pte_t bm_pte[NR_BM_PTE_TABLES][PTRS_PER_PTE] __fixmap_bss;
> > +static pmd_t bm_pmd[PTRS_PER_PMD] __fixmap_bss __maybe_unused;
> > +static pud_t bm_pud[PTRS_PER_PUD] __fixmap_bss __maybe_unused;
> >
> >  static inline pte_t *fixmap_pte(unsigned long addr)
> >  {
>



* Re: [PATCH 3/4] arm64: Move the zero page to rodata
  2026-01-19 16:47 ` [PATCH 3/4] arm64: Move the zero page to rodata Ard Biesheuvel
@ 2026-01-23  6:43   ` Anshuman Khandual
  2026-01-23  6:44     ` Ard Biesheuvel
  0 siblings, 1 reply; 12+ messages in thread
From: Anshuman Khandual @ 2026-01-23  6:43 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-kernel
  Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
	Ard Biesheuvel, Ryan Roberts, Liz Prucka, Seth Jenkins, Kees Cook,
	linux-hardening



On 19/01/26 10:17 PM, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> The zero page should contain only zero bytes, and so mapping it
> read-write is unnecessary. Move it to __ro_after_init instead.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/mm/mmu.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 2a18637ecc15..d978b07ab7b8 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -68,7 +68,8 @@ long __section(".mmuoff.data.write") __early_cpu_boot_status;
>   * Empty_zero_page is a special page that is used for zero-initialized data
>   * and COW.
>   */
> -unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
> +unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
> +					__ro_after_init __aligned(PAGE_SIZE);
>  EXPORT_SYMBOL(empty_zero_page);
>  
>  static DEFINE_SPINLOCK(swapper_pgdir_lock);

A small nit - could this be the first patch in the series? Because it is
not related to the other three patches.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>



* Re: [PATCH 3/4] arm64: Move the zero page to rodata
  2026-01-23  6:43   ` Anshuman Khandual
@ 2026-01-23  6:44     ` Ard Biesheuvel
  0 siblings, 0 replies; 12+ messages in thread
From: Ard Biesheuvel @ 2026-01-23  6:44 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Ard Biesheuvel, linux-kernel, linux-arm-kernel, will,
	catalin.marinas, mark.rutland, Ryan Roberts, Liz Prucka,
	Seth Jenkins, Kees Cook, linux-hardening

On Fri, 23 Jan 2026 at 07:43, Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
>
>
> On 19/01/26 10:17 PM, Ard Biesheuvel wrote:
> > From: Ard Biesheuvel <ardb@kernel.org>
> >
> > The zero page should contain only zero bytes, and so mapping it
> > read-write is unnecessary. Move it to __ro_after_init instead.
> >
> > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > ---
> >  arch/arm64/mm/mmu.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index 2a18637ecc15..d978b07ab7b8 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -68,7 +68,8 @@ long __section(".mmuoff.data.write") __early_cpu_boot_status;
> >   * Empty_zero_page is a special page that is used for zero-initialized data
> >   * and COW.
> >   */
> > -unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
> > +unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
> > +                                     __ro_after_init __aligned(PAGE_SIZE);
> >  EXPORT_SYMBOL(empty_zero_page);
> >
> >  static DEFINE_SPINLOCK(swapper_pgdir_lock);
>
> A small nit - could this be the first patch in the series? Because it is
> not related to the other three patches.
>

It is related to the other three patches: the 4th patch requires it.

> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>

Thanks!



* Re: [PATCH 4/4] arm64: Unmap kernel data/bss entirely from the linear map
  2026-01-19 16:47 ` [PATCH 4/4] arm64: Unmap kernel data/bss entirely from the linear map Ard Biesheuvel
@ 2026-01-23  6:52   ` Anshuman Khandual
  2026-01-23  7:27     ` Ard Biesheuvel
  0 siblings, 1 reply; 12+ messages in thread
From: Anshuman Khandual @ 2026-01-23  6:52 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-kernel
  Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
	Ard Biesheuvel, Ryan Roberts, Liz Prucka, Seth Jenkins, Kees Cook,
	linux-hardening



On 19/01/26 10:17 PM, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> The linear aliases of the kernel text and rodata are mapped read-only as
> well. Given that the contents of these regions are mostly identical to
> the version in the loadable image, mapping them read-only is a
> reasonable hardening measure.
> 
> Data and bss, however, are now also mapped read-only but the contents of
> these regions are more likely to contain data that we'd rather not leak.
> So let's unmap these entirely in the linear map when the kernel is
> running normally.
> 
> Only when going into hibernation or waking up from it do these regions
> need to be mapped, so take care of this using a PM notifier.

Just curious - why do we need them mapped while going into or coming back
from hibernation?

> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/mm/mmu.c | 35 ++++++++++++++++++--
>  1 file changed, 32 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index d978b07ab7b8..7b3ce9cafe64 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -24,6 +24,7 @@
>  #include <linux/mm.h>
>  #include <linux/vmalloc.h>
>  #include <linux/set_memory.h>
> +#include <linux/suspend.h>
>  #include <linux/kfence.h>
>  #include <linux/pkeys.h>
>  #include <linux/mm_inline.h>
> @@ -1024,13 +1025,13 @@ static void __init __map_memblock(pgd_t *pgdp, phys_addr_t start,
>  				 prot, early_pgtable_alloc, flags);
>  }
>  
> -static void remap_linear_data_alias(void)
> +static void remap_linear_data_alias(bool unmap)
>  {
>  	extern const u8 __pgdir_start[];
>  
>  	update_mapping_prot(__pa_symbol(__init_end), (unsigned long)lm_alias(__init_end),
>  			    (unsigned long)__pgdir_start - (unsigned long)__init_end,
> -			    PAGE_KERNEL_RO);
> +			    unmap ? __pgprot(0) : PAGE_KERNEL_RO);
>  }
>  
>  void __init remap_linear_kernel_alias(void)
> @@ -1041,7 +1042,7 @@ void __init remap_linear_kernel_alias(void)
>  	update_mapping_prot(__pa_symbol(_text), (unsigned long)lm_alias(_text),
>  			    (unsigned long)__init_begin - (unsigned long)_text,
>  			    PAGE_KERNEL_RO);
> -	remap_linear_data_alias();
> +	remap_linear_data_alias(true);
>  }
>  
>  #ifdef CONFIG_KFENCE
> @@ -2257,3 +2258,31 @@ int arch_set_user_pkey_access(struct task_struct *tsk, int pkey, unsigned long i
>  	return 0;
>  }
>  #endif
> +
> +#ifdef CONFIG_HIBERNATION
> +static int arm64_hibernate_pm_notify(struct notifier_block *nb,
> +				     unsigned long mode, void *unused)
> +{
> +	switch (mode) {
> +	case PM_HIBERNATION_PREPARE:
> +	case PM_RESTORE_PREPARE:
> +		remap_linear_data_alias(false);
> +		break;
> +	case PM_POST_HIBERNATION:
> +	case PM_POST_RESTORE:
> +		remap_linear_data_alias(true);
> +		break;
> +	}
> +	return 0;
> +}
> +
> +static struct notifier_block arm64_hibernate_pm_notifier = {
> +	.notifier_call = arm64_hibernate_pm_notify,
> +};
> +
> +static int arm64_hibernate_register_pm_notifier(void)
> +{
> +	return register_pm_notifier(&arm64_hibernate_pm_notifier);
> +}
> +late_initcall(arm64_hibernate_register_pm_notifier);
> +#endif




* Re: [PATCH 4/4] arm64: Unmap kernel data/bss entirely from the linear map
  2026-01-23  6:52   ` Anshuman Khandual
@ 2026-01-23  7:27     ` Ard Biesheuvel
  0 siblings, 0 replies; 12+ messages in thread
From: Ard Biesheuvel @ 2026-01-23  7:27 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Ard Biesheuvel, linux-kernel, linux-arm-kernel, will,
	catalin.marinas, mark.rutland, Ryan Roberts, Liz Prucka,
	Seth Jenkins, Kees Cook, linux-hardening

On Fri, 23 Jan 2026 at 07:52, Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
>
>
> On 19/01/26 10:17 PM, Ard Biesheuvel wrote:
> > From: Ard Biesheuvel <ardb@kernel.org>
> >
> > The linear aliases of the kernel text and rodata are mapped read-only as
> > well. Given that the contents of these regions are mostly identical to
> > the version in the loadable image, mapping them read-only is a
> > reasonable hardening measure.
> >
> > Data and bss, however, are now also mapped read-only but the contents of
> > these regions are more likely to contain data that we'd rather not leak.
> > So let's unmap these entirely in the linear map when the kernel is
> > running normally.
> >
> > Only when going into hibernation or waking up from it do these regions
> > need to be mapped, so take care of this using a PM notifier.
>
> Just curious - why do we need them mapped while going into or coming back
> from the hibernation ?
>

Because the kernel image is preserved/restored via the linear map. For
some reason, it appears to be sufficient for this mapping to be
read-only, but unmapping it entirely results in failures to resume from
hibernation.


