* [PATCH v3 00/13] arm64: Unmap linear alias of kernel data/bss
@ 2026-03-20 14:59 Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 01/13] arm64: Move the zero page to rodata Ard Biesheuvel
` (12 more replies)
0 siblings, 13 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2026-03-20 14:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
Ard Biesheuvel, Ryan Roberts, Anshuman Khandual, Liz Prucka,
Seth Jenkins, Kees Cook, linux-hardening
From: Ard Biesheuvel <ardb@kernel.org>
One of the reasons the lack of randomization of the linear map on arm64
is considered problematic is the fact that bootloaders adhering to the
original arm64 boot protocol may place the kernel at the base of DRAM,
and therefore at the base of the non-randomized linear map. This puts a
writable alias of the kernel's data and bss regions at a predictable
location, removing the need for an attacker to guess where KASLR mapped
the kernel.
Let's unmap this linear, writable alias entirely, so that knowing the
location of the linear alias does not give write access to the kernel's
data and bss regions.
Changes since v2:
- Keep bm_pte[] in the region that is remapped r/o or unmapped, as it is
only manipulated via its kernel alias
- Drop check that prohibits any manipulation of descriptors with the
CONT bit set
- Add Ryan's ack to a couple of patches
- Rebase onto v7.0-rc4
Changes since v1:
- Put zero page patch at the start of the series
- Tweak __map_memblock() API to respect existing table and contiguous
mappings, so that the logic to map the kernel alias can be simplified
- Stop abusing the MEMBLOCK_NOMAP flag to initially omit the kernel
linear alias from the linear map
- Some additional cleanup patches
- Use proper API [set_memory_valid()] to (un)map the linear alias of
data/bss.
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Liz Prucka <lizprucka@google.com>
Cc: Seth Jenkins <sethjenkins@google.com>
Cc: Kees Cook <kees@kernel.org>
Cc: linux-hardening@vger.kernel.org
Ard Biesheuvel (13):
arm64: Move the zero page to rodata
arm64: mm: Preserve existing table mappings when mapping DRAM
arm64: mm: Preserve non-contiguous descriptors when mapping DRAM
arm64: mm: Remove bogus stop condition from map_mem() loop
arm64: mm: Drop redundant pgd_t* argument from map_mem()
arm64: mm: Permit contiguous descriptors to be rewritten
arm64: mm: Use hierarchical XN mapping for the fixmap
arm64: kfence: Avoid NOMAP tricks when mapping the early pool
arm64: mm: Permit contiguous attribute for preliminary mappings
arm64: Move fixmap page tables to end of kernel image
arm64: mm: Don't abuse memblock NOMAP to check for overlaps
arm64: mm: Map the kernel data/bss read-only in the linear map
arm64: mm: Unmap kernel data/bss entirely from the linear map
arch/arm64/include/asm/pgtable.h | 4 +
arch/arm64/include/asm/sections.h | 1 +
arch/arm64/kernel/vmlinux.lds.S | 6 +
arch/arm64/mm/fixmap.c | 8 +-
arch/arm64/mm/mmu.c | 138 +++++++++++---------
5 files changed, 90 insertions(+), 67 deletions(-)
base-commit: f338e77383789c0cae23ca3d48adcc5e9e137e3c
--
2.53.0.959.g497ff81fa9-goog
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH v3 01/13] arm64: Move the zero page to rodata
2026-03-20 14:59 [PATCH v3 00/13] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
@ 2026-03-20 14:59 ` Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 02/13] arm64: mm: Preserve existing table mappings when mapping DRAM Ard Biesheuvel
` (11 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2026-03-20 14:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
Ard Biesheuvel, Ryan Roberts, Anshuman Khandual, Liz Prucka,
Seth Jenkins, Kees Cook, linux-hardening
From: Ard Biesheuvel <ardb@kernel.org>
The zero page should contain only zero bytes, and so mapping it
read-write is unnecessary. Combine it with reserved_pg_dir, which lives
in the read-only region of the kernel, and already serves a similar
purpose.
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/kernel/vmlinux.lds.S | 1 +
arch/arm64/mm/mmu.c | 3 +--
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 2964aad0362e..2d021a576e50 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -229,6 +229,7 @@ SECTIONS
#endif
reserved_pg_dir = .;
+ empty_zero_page = .;
. += PAGE_SIZE;
swapper_pg_dir = .;
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index a6a00accf4f9..795743913ce5 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -66,9 +66,8 @@ long __section(".mmuoff.data.write") __early_cpu_boot_status;
/*
* Empty_zero_page is a special page that is used for zero-initialized data
- * and COW.
+ * and COW. Defined in the linker script.
*/
-unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
EXPORT_SYMBOL(empty_zero_page);
static DEFINE_SPINLOCK(swapper_pgdir_lock);
--
2.53.0.959.g497ff81fa9-goog
* [PATCH v3 02/13] arm64: mm: Preserve existing table mappings when mapping DRAM
2026-03-20 14:59 [PATCH v3 00/13] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 01/13] arm64: Move the zero page to rodata Ard Biesheuvel
@ 2026-03-20 14:59 ` Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 03/13] arm64: mm: Preserve non-contiguous descriptors " Ard Biesheuvel
` (10 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2026-03-20 14:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
Ard Biesheuvel, Ryan Roberts, Anshuman Khandual, Liz Prucka,
Seth Jenkins, Kees Cook, linux-hardening
From: Ard Biesheuvel <ardb@kernel.org>
Instead of blindly overwriting an existing table entry when mapping DRAM
regions, take care not to replace a pre-existing table entry with a
block entry. This permits the logic of mapping the kernel's linear alias
to be simplified in a subsequent patch.
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/mm/mmu.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 795743913ce5..9927b55022d8 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -262,7 +262,8 @@ static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
/* try section mapping first */
if (((addr | next | phys) & ~PMD_MASK) == 0 &&
- (flags & NO_BLOCK_MAPPINGS) == 0) {
+ (flags & NO_BLOCK_MAPPINGS) == 0 &&
+ !pmd_table(old_pmd)) {
pmd_set_huge(pmdp, phys, prot);
/*
@@ -385,7 +386,8 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
*/
if (pud_sect_supported() &&
((addr | next | phys) & ~PUD_MASK) == 0 &&
- (flags & NO_BLOCK_MAPPINGS) == 0) {
+ (flags & NO_BLOCK_MAPPINGS) == 0 &&
+ !pud_table(old_pud)) {
pud_set_huge(pudp, phys, prot);
/*
--
2.53.0.959.g497ff81fa9-goog
* [PATCH v3 03/13] arm64: mm: Preserve non-contiguous descriptors when mapping DRAM
2026-03-20 14:59 [PATCH v3 00/13] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 01/13] arm64: Move the zero page to rodata Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 02/13] arm64: mm: Preserve existing table mappings when mapping DRAM Ard Biesheuvel
@ 2026-03-20 14:59 ` Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 04/13] arm64: mm: Remove bogus stop condition from map_mem() loop Ard Biesheuvel
` (9 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2026-03-20 14:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
Ard Biesheuvel, Ryan Roberts, Anshuman Khandual, Liz Prucka,
Seth Jenkins, Kees Cook, linux-hardening
From: Ard Biesheuvel <ardb@kernel.org>
Instead of blindly overwriting existing live entries with the contiguous
bit cleared when mapping DRAM regions, check whether the contiguous
region in question starts with a descriptor that has the valid bit set
and the contiguous bit cleared, and in that case, leave the contiguous
bit unset on the entire region. This permits the logic of mapping the
kernel's linear alias to be simplified in a subsequent patch.
Note that not setting the contiguous bit on any of the descriptors in
the contiguous region can only result in an invalid configuration if it
was already invalid to begin with.
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/include/asm/pgtable.h | 4 ++++
arch/arm64/mm/mmu.c | 6 ++++--
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b3e58735c49b..dc007043d86b 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -187,6 +187,10 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
* Returns true if the pte is valid and has the contiguous bit set.
*/
#define pte_valid_cont(pte) (pte_valid(pte) && pte_cont(pte))
+/*
+ * Returns true if the pte is valid and has the contiguous bit cleared.
+ */
+#define pte_valid_noncont(pte) (pte_valid(pte) && !pte_cont(pte))
/*
* Could the pte be present in the TLB? We must check mm_tlb_flush_pending
* so that we don't erroneously return false for pages that have been
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 9927b55022d8..7f7d63009440 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -230,7 +230,8 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
/* use a contiguous mapping if the range is suitably aligned */
if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) &&
- (flags & NO_CONT_MAPPINGS) == 0)
+ (flags & NO_CONT_MAPPINGS) == 0 &&
+ !pte_valid_noncont(__ptep_get(ptep)))
__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
init_pte(ptep, addr, next, phys, __prot);
@@ -330,7 +331,8 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
/* use a contiguous mapping if the range is suitably aligned */
if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) &&
- (flags & NO_CONT_MAPPINGS) == 0)
+ (flags & NO_CONT_MAPPINGS) == 0 &&
+ !pte_valid_noncont(pmd_pte(READ_ONCE(*pmdp))))
__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
ret = init_pmd(pmdp, addr, next, phys, __prot, pgtable_alloc, flags);
--
2.53.0.959.g497ff81fa9-goog
* [PATCH v3 04/13] arm64: mm: Remove bogus stop condition from map_mem() loop
2026-03-20 14:59 [PATCH v3 00/13] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
` (2 preceding siblings ...)
2026-03-20 14:59 ` [PATCH v3 03/13] arm64: mm: Preserve non-contiguous descriptors " Ard Biesheuvel
@ 2026-03-20 14:59 ` Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 05/13] arm64: mm: Drop redundant pgd_t* argument from map_mem() Ard Biesheuvel
` (8 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2026-03-20 14:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
Ard Biesheuvel, Ryan Roberts, Anshuman Khandual, Liz Prucka,
Seth Jenkins, Kees Cook, linux-hardening
From: Ard Biesheuvel <ardb@kernel.org>
The memblock API guarantees that start is less than end for every range
it returns, so there is no need to test for this. And even if it were
not guaranteed, it is doubtful that breaking out of the loop would be a
reasonable course of action here (rather than attempting to map the
remaining regions).
So let's drop this check.
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/mm/mmu.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 7f7d63009440..652fe2c52b5a 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1157,8 +1157,6 @@ static void __init map_mem(pgd_t *pgdp)
/* map all the memory banks */
for_each_mem_range(i, &start, &end) {
- if (start >= end)
- break;
/*
* The linear map must allow allocation tags reading/writing
* if MTE is present. Otherwise, it has the same attributes as
--
2.53.0.959.g497ff81fa9-goog
* [PATCH v3 05/13] arm64: mm: Drop redundant pgd_t* argument from map_mem()
2026-03-20 14:59 [PATCH v3 00/13] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
` (3 preceding siblings ...)
2026-03-20 14:59 ` [PATCH v3 04/13] arm64: mm: Remove bogus stop condition from map_mem() loop Ard Biesheuvel
@ 2026-03-20 14:59 ` Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 06/13] arm64: mm: Permit contiguous descriptors to be rewritten Ard Biesheuvel
` (7 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2026-03-20 14:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
Ard Biesheuvel, Ryan Roberts, Anshuman Khandual, Liz Prucka,
Seth Jenkins, Kees Cook, linux-hardening
From: Ard Biesheuvel <ardb@kernel.org>
__map_memblock() and map_mem() always operate on swapper_pg_dir, so
there is no need to pass around a pgd_t pointer between them.
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/mm/mmu.c | 25 ++++++++++----------
1 file changed, 12 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 652fe2c52b5a..744cf76f25aa 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1019,11 +1019,11 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
flush_tlb_kernel_range(virt, virt + size);
}
-static void __init __map_memblock(pgd_t *pgdp, phys_addr_t start,
- phys_addr_t end, pgprot_t prot, int flags)
+static void __init __map_memblock(phys_addr_t start, phys_addr_t end,
+ pgprot_t prot, int flags)
{
- early_create_pgd_mapping(pgdp, start, __phys_to_virt(start), end - start,
- prot, early_pgtable_alloc, flags);
+ early_create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
+ end - start, prot, early_pgtable_alloc, flags);
}
void __init mark_linear_text_alias_ro(void)
@@ -1071,13 +1071,13 @@ static phys_addr_t __init arm64_kfence_alloc_pool(void)
return kfence_pool;
}
-static void __init arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp)
+static void __init arm64_kfence_map_pool(phys_addr_t kfence_pool)
{
if (!kfence_pool)
return;
/* KFENCE pool needs page-level mapping. */
- __map_memblock(pgdp, kfence_pool, kfence_pool + KFENCE_POOL_SIZE,
+ __map_memblock(kfence_pool, kfence_pool + KFENCE_POOL_SIZE,
pgprot_tagged(PAGE_KERNEL),
NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
memblock_clear_nomap(kfence_pool, KFENCE_POOL_SIZE);
@@ -1113,11 +1113,11 @@ bool arch_kfence_init_pool(void)
#else /* CONFIG_KFENCE */
static inline phys_addr_t arm64_kfence_alloc_pool(void) { return 0; }
-static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) { }
+static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool) { }
#endif /* CONFIG_KFENCE */
-static void __init map_mem(pgd_t *pgdp)
+static void __init map_mem(void)
{
static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
phys_addr_t kernel_start = __pa_symbol(_text);
@@ -1162,7 +1162,7 @@ static void __init map_mem(pgd_t *pgdp)
* if MTE is present. Otherwise, it has the same attributes as
* PAGE_KERNEL.
*/
- __map_memblock(pgdp, start, end, pgprot_tagged(PAGE_KERNEL),
+ __map_memblock(start, end, pgprot_tagged(PAGE_KERNEL),
flags);
}
@@ -1176,10 +1176,9 @@ static void __init map_mem(pgd_t *pgdp)
* Note that contiguous mappings cannot be remapped in this way,
* so we should avoid them here.
*/
- __map_memblock(pgdp, kernel_start, kernel_end,
- PAGE_KERNEL, NO_CONT_MAPPINGS);
+ __map_memblock(kernel_start, kernel_end, PAGE_KERNEL, NO_CONT_MAPPINGS);
memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
- arm64_kfence_map_pool(early_kfence_pool, pgdp);
+ arm64_kfence_map_pool(early_kfence_pool);
}
void mark_rodata_ro(void)
@@ -1401,7 +1400,7 @@ static void __init create_idmap(void)
void __init paging_init(void)
{
- map_mem(swapper_pg_dir);
+ map_mem();
memblock_allow_resize();
--
2.53.0.959.g497ff81fa9-goog
* [PATCH v3 06/13] arm64: mm: Permit contiguous descriptors to be rewritten
2026-03-20 14:59 [PATCH v3 00/13] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
` (4 preceding siblings ...)
2026-03-20 14:59 ` [PATCH v3 05/13] arm64: mm: Drop redundant pgd_t* argument from map_mem() Ard Biesheuvel
@ 2026-03-20 14:59 ` Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 07/13] arm64: mm: Use hierarchical XN mapping for the fixmap Ard Biesheuvel
` (6 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2026-03-20 14:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
Ard Biesheuvel, Ryan Roberts, Anshuman Khandual, Liz Prucka,
Seth Jenkins, Kees Cook, linux-hardening
From: Ard Biesheuvel <ardb@kernel.org>
Currently, pgattr_change_is_safe() is overly pedantic when it comes to
descriptors with the contiguous hint attribute set, as it rejects
assignments even if the old and the new value are the same.
So relax the check to allow that.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/mm/mmu.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 744cf76f25aa..6780236b6cf8 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -140,10 +140,6 @@ bool pgattr_change_is_safe(pteval_t old, pteval_t new)
if (pte_pfn(__pte(old)) != pte_pfn(__pte(new)))
return false;
- /* live contiguous mappings may not be manipulated at all */
- if ((old | new) & PTE_CONT)
- return false;
-
/* Transitioning from Non-Global to Global is unsafe */
if (old & ~new & PTE_NG)
return false;
--
2.53.0.959.g497ff81fa9-goog
* [PATCH v3 07/13] arm64: mm: Use hierarchical XN mapping for the fixmap
2026-03-20 14:59 [PATCH v3 00/13] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
` (5 preceding siblings ...)
2026-03-20 14:59 ` [PATCH v3 06/13] arm64: mm: Permit contiguous descriptors to be rewritten Ard Biesheuvel
@ 2026-03-20 14:59 ` Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 08/13] arm64: kfence: Avoid NOMAP tricks when mapping the early pool Ard Biesheuvel
` (5 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2026-03-20 14:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
Ard Biesheuvel, Ryan Roberts, Anshuman Khandual, Liz Prucka,
Seth Jenkins, Kees Cook, linux-hardening
From: Ard Biesheuvel <ardb@kernel.org>
Nothing in the fixmap or in its vicinity requires executable
permissions, and given that it is placed at exactly 1 GiB from the end
of the virtual address space, we can safely set the hierarchical XN
attributes on the level 2 table entries covering the fixmap, without
running the risk of inadvertently taking away the executable permissions
on adjacent mappings.
This is a hardening measure that reduces the risk of the fixmap being
abused to create executable mappings in the kernel address space.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/mm/fixmap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/fixmap.c b/arch/arm64/mm/fixmap.c
index c5c5425791da..c3dd3c868cf5 100644
--- a/arch/arm64/mm/fixmap.c
+++ b/arch/arm64/mm/fixmap.c
@@ -48,7 +48,8 @@ static void __init early_fixmap_init_pte(pmd_t *pmdp, unsigned long addr)
if (pmd_none(pmd)) {
ptep = bm_pte[BM_PTE_TABLE_IDX(addr)];
__pmd_populate(pmdp, __pa_symbol(ptep),
- PMD_TYPE_TABLE | PMD_TABLE_AF);
+ PMD_TYPE_TABLE | PMD_TABLE_AF |
+ PMD_TABLE_PXN | PMD_TABLE_UXN);
}
}
--
2.53.0.959.g497ff81fa9-goog
* [PATCH v3 08/13] arm64: kfence: Avoid NOMAP tricks when mapping the early pool
2026-03-20 14:59 [PATCH v3 00/13] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
` (6 preceding siblings ...)
2026-03-20 14:59 ` [PATCH v3 07/13] arm64: mm: Use hierarchical XN mapping for the fixmap Ard Biesheuvel
@ 2026-03-20 14:59 ` Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 09/13] arm64: mm: Permit contiguous attribute for preliminary mappings Ard Biesheuvel
` (4 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2026-03-20 14:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
Ard Biesheuvel, Ryan Roberts, Anshuman Khandual, Liz Prucka,
Seth Jenkins, Kees Cook, linux-hardening
From: Ard Biesheuvel <ardb@kernel.org>
Now that the map_mem() routines respect existing page mappings and
contiguous granule sized blocks with the contiguous bit cleared, there
is no longer a reason to play tricks with the memblock NOMAP attribute.
Instead, the kfence pool can be allocated and mapped with page
granularity first, and this granularity will be respected when the rest
of DRAM is mapped later, even if block and contiguous mappings are
allowed for the remainder of those mappings.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/mm/mmu.c | 25 ++++----------------
1 file changed, 5 insertions(+), 20 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6780236b6cf8..1c434c242641 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1047,36 +1047,24 @@ static int __init parse_kfence_early_init(char *arg)
}
early_param("kfence.sample_interval", parse_kfence_early_init);
-static phys_addr_t __init arm64_kfence_alloc_pool(void)
+static void __init arm64_kfence_map_pool(void)
{
phys_addr_t kfence_pool;
if (!kfence_early_init)
- return 0;
+ return;
kfence_pool = memblock_phys_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
if (!kfence_pool) {
pr_err("failed to allocate kfence pool\n");
kfence_early_init = false;
- return 0;
- }
-
- /* Temporarily mark as NOMAP. */
- memblock_mark_nomap(kfence_pool, KFENCE_POOL_SIZE);
-
- return kfence_pool;
-}
-
-static void __init arm64_kfence_map_pool(phys_addr_t kfence_pool)
-{
- if (!kfence_pool)
return;
+ }
/* KFENCE pool needs page-level mapping. */
__map_memblock(kfence_pool, kfence_pool + KFENCE_POOL_SIZE,
pgprot_tagged(PAGE_KERNEL),
NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
- memblock_clear_nomap(kfence_pool, KFENCE_POOL_SIZE);
__kfence_pool = phys_to_virt(kfence_pool);
}
@@ -1108,8 +1096,7 @@ bool arch_kfence_init_pool(void)
}
#else /* CONFIG_KFENCE */
-static inline phys_addr_t arm64_kfence_alloc_pool(void) { return 0; }
-static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool) { }
+static inline void arm64_kfence_map_pool(void) { }
#endif /* CONFIG_KFENCE */
@@ -1119,7 +1106,6 @@ static void __init map_mem(void)
phys_addr_t kernel_start = __pa_symbol(_text);
phys_addr_t kernel_end = __pa_symbol(__init_begin);
phys_addr_t start, end;
- phys_addr_t early_kfence_pool;
int flags = NO_EXEC_MAPPINGS;
u64 i;
@@ -1136,7 +1122,7 @@ static void __init map_mem(void)
BUILD_BUG_ON(pgd_index(direct_map_end - 1) == pgd_index(direct_map_end) &&
pgd_index(_PAGE_OFFSET(VA_BITS_MIN)) != PTRS_PER_PGD - 1);
- early_kfence_pool = arm64_kfence_alloc_pool();
+ arm64_kfence_map_pool();
linear_map_requires_bbml2 = !force_pte_mapping() && can_set_direct_map();
@@ -1174,7 +1160,6 @@ static void __init map_mem(void)
*/
__map_memblock(kernel_start, kernel_end, PAGE_KERNEL, NO_CONT_MAPPINGS);
memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
- arm64_kfence_map_pool(early_kfence_pool);
}
void mark_rodata_ro(void)
--
2.53.0.959.g497ff81fa9-goog
* [PATCH v3 09/13] arm64: mm: Permit contiguous attribute for preliminary mappings
2026-03-20 14:59 [PATCH v3 00/13] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
` (7 preceding siblings ...)
2026-03-20 14:59 ` [PATCH v3 08/13] arm64: kfence: Avoid NOMAP tricks when mapping the early pool Ard Biesheuvel
@ 2026-03-20 14:59 ` Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 10/13] arm64: Move fixmap page tables to end of kernel image Ard Biesheuvel
` (3 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2026-03-20 14:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
Ard Biesheuvel, Ryan Roberts, Anshuman Khandual, Liz Prucka,
Seth Jenkins, Kees Cook, linux-hardening
From: Ard Biesheuvel <ardb@kernel.org>
There are a few cases where we omit the contiguous hint for mappings
that start out as read-write and are remapped read-only later, on the
basis that manipulating live descriptors with the PTE_CONT attribute set
is unsafe. When support for the contiguous hint was added to the code,
the ARM ARM was ambiguous about this, and so we erred on the side of
caution.
In the meantime, this has been clarified [0], and regions that will be
remapped in their entirety can use the contiguous hint both in the
initial mapping as well as the one that replaces it. Note that this
requires that the logic that may be called to remap overlapping regions
respects existing valid descriptors that have the contiguous bit
cleared.
So omit the NO_CONT_MAPPINGS flag in places where it is unneeded.
Thanks to Ryan for the reference.
[0] RJQQTC
For a TLB lookup in a contiguous region mapped by translation table entries that
have consistent values for the Contiguous bit, but have the OA, attributes, or
permissions misprogrammed, that TLB lookup is permitted to produce an OA, access
permissions, and memory attributes that are consistent with any one of the
programmed translation table values.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/mm/mmu.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 1c434c242641..b52254790fda 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -980,8 +980,7 @@ void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
&phys, virt);
return;
}
- early_create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
- NO_CONT_MAPPINGS);
+ early_create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL, 0);
}
void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
@@ -1008,8 +1007,7 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
return;
}
- early_create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
- NO_CONT_MAPPINGS);
+ early_create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL, 0);
/* flush the TLBs after updating live kernel mappings */
flush_tlb_kernel_range(virt, virt + size);
@@ -1155,10 +1153,8 @@ static void __init map_mem(void)
* alternative patching has completed). This makes the contents
* of the region accessible to subsystems such as hibernate,
* but protects it from inadvertent modification or execution.
- * Note that contiguous mappings cannot be remapped in this way,
- * so we should avoid them here.
*/
- __map_memblock(kernel_start, kernel_end, PAGE_KERNEL, NO_CONT_MAPPINGS);
+ __map_memblock(kernel_start, kernel_end, PAGE_KERNEL, 0);
memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
}
--
2.53.0.959.g497ff81fa9-goog
* [PATCH v3 10/13] arm64: Move fixmap page tables to end of kernel image
2026-03-20 14:59 [PATCH v3 00/13] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
` (8 preceding siblings ...)
2026-03-20 14:59 ` [PATCH v3 09/13] arm64: mm: Permit contiguous attribute for preliminary mappings Ard Biesheuvel
@ 2026-03-20 14:59 ` Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 11/13] arm64: mm: Don't abuse memblock NOMAP to check for overlaps Ard Biesheuvel
` (2 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2026-03-20 14:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
Ard Biesheuvel, Ryan Roberts, Anshuman Khandual, Liz Prucka,
Seth Jenkins, Kees Cook, linux-hardening
From: Ard Biesheuvel <ardb@kernel.org>
Move the fixmap page tables out of the BSS section, and place them at
the end of the image, right before the init_pg_dir section where some of
the other statically allocated page tables live.
These page tables are currently the only data objects in vmlinux that
are meant to be accessed via the kernel image's linear alias, and so
placing them together allows the remainder of the data/bss section to be
remapped read-only or unmapped entirely.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/kernel/vmlinux.lds.S | 5 +++++
arch/arm64/mm/fixmap.c | 5 +++--
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 2d021a576e50..282516def39c 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -335,6 +335,11 @@ SECTIONS
__pi___bss_start = __bss_start;
. = ALIGN(PAGE_SIZE);
+ .pgdir : {
+ __pgdir_start = .;
+ *(.fixmap_bss)
+ }
+
__pi_init_pg_dir = .;
. += INIT_DIR_SIZE;
__pi_init_pg_end = .;
diff --git a/arch/arm64/mm/fixmap.c b/arch/arm64/mm/fixmap.c
index c3dd3c868cf5..30aba998cf38 100644
--- a/arch/arm64/mm/fixmap.c
+++ b/arch/arm64/mm/fixmap.c
@@ -31,9 +31,10 @@ static_assert(NR_BM_PMD_TABLES == 1);
#define BM_PTE_TABLE_IDX(addr) __BM_TABLE_IDX(addr, PMD_SHIFT)
+#define __fixmap_bss __section(".fixmap_bss") __aligned(PAGE_SIZE)
static pte_t bm_pte[NR_BM_PTE_TABLES][PTRS_PER_PTE] __page_aligned_bss;
-static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss __maybe_unused;
-static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused;
+static pmd_t bm_pmd[PTRS_PER_PMD] __fixmap_bss __maybe_unused;
+static pud_t bm_pud[PTRS_PER_PUD] __fixmap_bss __maybe_unused;
static inline pte_t *fixmap_pte(unsigned long addr)
{
--
2.53.0.959.g497ff81fa9-goog
* [PATCH v3 11/13] arm64: mm: Don't abuse memblock NOMAP to check for overlaps
2026-03-20 14:59 [PATCH v3 00/13] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
` (9 preceding siblings ...)
2026-03-20 14:59 ` [PATCH v3 10/13] arm64: Move fixmap page tables to end of kernel image Ard Biesheuvel
@ 2026-03-20 14:59 ` Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 12/13] arm64: mm: Map the kernel data/bss read-only in the linear map Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 13/13] arm64: mm: Unmap kernel data/bss entirely from " Ard Biesheuvel
12 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2026-03-20 14:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
Ard Biesheuvel, Ryan Roberts, Anshuman Khandual, Liz Prucka,
Seth Jenkins, Kees Cook, linux-hardening
From: Ard Biesheuvel <ardb@kernel.org>
Now that the DRAM mapping routines respect existing table mappings and
contiguous block and page mappings, there is no longer any need to
fiddle
with the memblock tables to set and clear the NOMAP attribute. Instead,
map the kernel text and rodata alias first, so that they will not be
added later when mapping the memblocks.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/mm/mmu.c | 23 ++++++--------------
1 file changed, 7 insertions(+), 16 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b52254790fda..34ad45a2d95f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1128,12 +1128,14 @@ static void __init map_mem(void)
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
/*
- * Take care not to create a writable alias for the
- * read-only text and rodata sections of the kernel image.
- * So temporarily mark them as NOMAP to skip mappings in
- * the following for-loop
+ * Map the linear alias of the [_text, __init_begin) interval
+ * as non-executable now, and remove the write permission in
+ * mark_linear_text_alias_ro() above (which will be called after
+ * alternative patching has completed). This makes the contents
+ * of the region accessible to subsystems such as hibernate,
+ * but protects it from inadvertent modification or execution.
*/
- memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
+ __map_memblock(kernel_start, kernel_end, PAGE_KERNEL, flags);
/* map all the memory banks */
for_each_mem_range(i, &start, &end) {
@@ -1145,17 +1147,6 @@ static void __init map_mem(void)
__map_memblock(start, end, pgprot_tagged(PAGE_KERNEL),
flags);
}
-
- /*
- * Map the linear alias of the [_text, __init_begin) interval
- * as non-executable now, and remove the write permission in
- * mark_linear_text_alias_ro() below (which will be called after
- * alternative patching has completed). This makes the contents
- * of the region accessible to subsystems such as hibernate,
- * but protects it from inadvertent modification or execution.
- */
- __map_memblock(kernel_start, kernel_end, PAGE_KERNEL, 0);
- memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
}
void mark_rodata_ro(void)
--
2.53.0.959.g497ff81fa9-goog
* [PATCH v3 12/13] arm64: mm: Map the kernel data/bss read-only in the linear map
2026-03-20 14:59 [PATCH v3 00/13] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
` (10 preceding siblings ...)
2026-03-20 14:59 ` [PATCH v3 11/13] arm64: mm: Don't abuse memblock NOMAP to check for overlaps Ard Biesheuvel
@ 2026-03-20 14:59 ` Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 13/13] arm64: mm: Unmap kernel data/bss entirely from " Ard Biesheuvel
12 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2026-03-20 14:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
Ard Biesheuvel, Ryan Roberts, Anshuman Khandual, Liz Prucka,
Seth Jenkins, Kees Cook, linux-hardening
From: Ard Biesheuvel <ardb@kernel.org>
On systems where the bootloader adheres to the original arm64 boot
protocol, the placement of the kernel in the physical address space is
highly predictable, and this makes the placement of its linear alias in
the kernel virtual address space equally predictable, given the lack of
randomization of the linear map.
The linear aliases of the kernel text and rodata regions are already
mapped read-only, but the kernel data and bss are mapped read-write in
this region. This is not needed, so map them read-only as well.
Note that the statically allocated kernel page tables do need to be
modifiable via the linear map, so leave these mapped read-write.
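As a rough sketch of the resulting linear-alias layout (plain C; the
page indices and names are hypothetical, chosen only for illustration):
the image is carved into regions where only the trailing page-table
pages remain writable.

```c
#include <assert.h>

/* Toy protections and hypothetical page indices for the kernel image:
 *   [0,4)   text/rodata -> RO (via mark_linear_text_alias_ro())
 *   [4,6)   init        -> left RW here (freed later)
 *   [6,10)  data/bss    -> RO (this patch)
 *   [10,12) pgdir pages -> RW (must stay writable via the linear map)
 */
enum prot { P_NONE, P_RO, P_RW };

#define NPAGES 12
static enum prot linear[NPAGES];

static void set_prot(int start, int end, enum prot p)
{
	for (int i = start; i < end; i++)
		linear[i] = p;
}

/* Mimics the order of operations in the patch: map everything RW,
 * then tighten the text/rodata and data/bss aliases to RO. */
static void map_kernel_alias(void)
{
	set_prot(0, NPAGES, P_RW);	/* initial PAGE_KERNEL mapping */
	set_prot(0, 4, P_RO);		/* text/rodata alias made RO */
	set_prot(6, 10, P_RO);		/* data/bss alias made RO */
}
```

The page-table pages at the end are deliberately never touched by the RO
passes, matching the note above that they must remain modifiable.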
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/include/asm/sections.h | 1 +
arch/arm64/mm/mmu.c | 14 ++++++++++++--
2 files changed, 13 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
index 51b0d594239e..f7fe2bcbfd03 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -23,6 +23,7 @@ extern char __irqentry_text_start[], __irqentry_text_end[];
extern char __mmuoff_data_start[], __mmuoff_data_end[];
extern char __entry_tramp_text_start[], __entry_tramp_text_end[];
extern char __relocate_new_kernel_start[], __relocate_new_kernel_end[];
+extern char __pgdir_start[];
static inline size_t entry_tramp_text_size(void)
{
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 34ad45a2d95f..5332f4ec743e 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1102,7 +1102,9 @@ static void __init map_mem(void)
{
static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
phys_addr_t kernel_start = __pa_symbol(_text);
- phys_addr_t kernel_end = __pa_symbol(__init_begin);
+ phys_addr_t init_begin = __pa_symbol(__init_begin);
+ phys_addr_t init_end = __pa_symbol(__init_end);
+ phys_addr_t kernel_end = __pa_symbol(__pgdir_start);
phys_addr_t start, end;
int flags = NO_EXEC_MAPPINGS;
u64 i;
@@ -1135,7 +1137,10 @@ static void __init map_mem(void)
* of the region accessible to subsystems such as hibernate,
* but protects it from inadvertent modification or execution.
*/
- __map_memblock(kernel_start, kernel_end, PAGE_KERNEL, flags);
+ __map_memblock(kernel_start, init_begin, PAGE_KERNEL, flags);
+
+ /* Map the kernel data/bss so it can be remapped later */
+ __map_memblock(init_end, kernel_end, PAGE_KERNEL, flags);
/* map all the memory banks */
for_each_mem_range(i, &start, &end) {
@@ -1147,6 +1152,11 @@ static void __init map_mem(void)
__map_memblock(start, end, pgprot_tagged(PAGE_KERNEL),
flags);
}
+
+ /* Map the kernel data/bss read-only in the linear map */
+ __map_memblock(init_end, kernel_end, PAGE_KERNEL_RO, flags);
+ flush_tlb_kernel_range((unsigned long)lm_alias(__init_end),
+ (unsigned long)lm_alias(__pgdir_start));
}
void mark_rodata_ro(void)
--
2.53.0.959.g497ff81fa9-goog
* [PATCH v3 13/13] arm64: mm: Unmap kernel data/bss entirely from the linear map
2026-03-20 14:59 [PATCH v3 00/13] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
` (11 preceding siblings ...)
2026-03-20 14:59 ` [PATCH v3 12/13] arm64: mm: Map the kernel data/bss read-only in the linear map Ard Biesheuvel
@ 2026-03-20 14:59 ` Ard Biesheuvel
12 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2026-03-20 14:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-arm-kernel, will, catalin.marinas, mark.rutland,
Ard Biesheuvel, Ryan Roberts, Anshuman Khandual, Liz Prucka,
Seth Jenkins, Kees Cook, linux-hardening
From: Ard Biesheuvel <ardb@kernel.org>
The linear aliases of the kernel text and rodata are mapped read-only in
the linear map as well. Given that the contents of these regions are
mostly identical to the version in the loadable image, mapping them
read-only and leaving their contents visible is a reasonable hardening
measure.
The data and bss regions, however, are now also mapped read-only, but
their contents are more likely to contain data that we'd rather not
leak. So let's unmap them entirely from the linear map when the kernel
is running normally.
When going into hibernation or waking up from it, these regions need to
be mapped, so map the region initially, and toggle the valid bit to
map/unmap the region as needed.
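The valid-bit toggling can be sketched in plain C (toy descriptors,
hypothetical bit layout and names, not the actual arm64 format):
clearing only the valid bit leaves the rest of the descriptor intact,
which is what allows the region to be remapped for hibernation with its
original attributes.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PTE_VALID 0x1u
#define PTE_ATTRS 0xf0u		/* toy attribute bits, kept across unmap */
#define NPAGES 8

static uint32_t pte[NPAGES];	/* toy page descriptors */

/* Rough analogue of set_memory_valid(): flip only the valid bit over a
 * page range, leaving the attribute bits in place so the mapping can be
 * restored later exactly as it was. */
static void set_range_valid(int start, int npages, bool valid)
{
	for (int i = start; i < start + npages; i++) {
		if (valid)
			pte[i] |= PTE_VALID;
		else
			pte[i] &= ~PTE_VALID;
	}
}

static void init_ptes(void)
{
	for (int i = 0; i < NPAGES; i++)
		pte[i] = PTE_ATTRS | PTE_VALID;
}
```

Unmapping at boot and remapping from a PM notifier then reduces to two
calls with opposite polarity, with no need to rebuild the descriptors.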
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/mm/mmu.c | 44 +++++++++++++++++---
1 file changed, 38 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5332f4ec743e..82a495563b60 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -24,6 +24,7 @@
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/set_memory.h>
+#include <linux/suspend.h>
#include <linux/kfence.h>
#include <linux/pkeys.h>
#include <linux/mm_inline.h>
@@ -1020,6 +1021,31 @@ static void __init __map_memblock(phys_addr_t start, phys_addr_t end,
end - start, prot, early_pgtable_alloc, flags);
}
+static void remap_linear_data_alias(bool unmap)
+{
+ set_memory_valid((unsigned long)lm_alias(__init_end),
+ (unsigned long)(__pgdir_start - __init_end) / PAGE_SIZE,
+ !unmap);
+}
+
+static int arm64_hibernate_pm_notify(struct notifier_block *nb,
+ unsigned long mode, void *unused)
+{
+ switch (mode) {
+ default:
+ break;
+ case PM_POST_HIBERNATION:
+ case PM_POST_RESTORE:
+ remap_linear_data_alias(true);
+ break;
+ case PM_HIBERNATION_PREPARE:
+ case PM_RESTORE_PREPARE:
+ remap_linear_data_alias(false);
+ break;
+ }
+ return 0;
+}
+
void __init mark_linear_text_alias_ro(void)
{
/*
@@ -1028,6 +1054,16 @@ void __init mark_linear_text_alias_ro(void)
update_mapping_prot(__pa_symbol(_text), (unsigned long)lm_alias(_text),
(unsigned long)__init_begin - (unsigned long)_text,
PAGE_KERNEL_RO);
+
+ remap_linear_data_alias(true);
+
+ if (IS_ENABLED(CONFIG_HIBERNATION)) {
+ static struct notifier_block nb = {
+ .notifier_call = arm64_hibernate_pm_notify
+ };
+
+ register_pm_notifier(&nb);
+ }
}
#ifdef CONFIG_KFENCE
@@ -1140,7 +1176,8 @@ static void __init map_mem(void)
__map_memblock(kernel_start, init_begin, PAGE_KERNEL, flags);
/* Map the kernel data/bss so it can be remapped later */
- __map_memblock(init_end, kernel_end, PAGE_KERNEL, flags);
+ __map_memblock(init_end, kernel_end, PAGE_KERNEL,
+ flags | NO_BLOCK_MAPPINGS);
/* map all the memory banks */
for_each_mem_range(i, &start, &end) {
@@ -1152,11 +1189,6 @@ static void __init map_mem(void)
__map_memblock(start, end, pgprot_tagged(PAGE_KERNEL),
flags);
}
-
- /* Map the kernel data/bss read-only in the linear map */
- __map_memblock(init_end, kernel_end, PAGE_KERNEL_RO, flags);
- flush_tlb_kernel_range((unsigned long)lm_alias(__init_end),
- (unsigned long)lm_alias(__pgdir_start));
}
void mark_rodata_ro(void)
--
2.53.0.959.g497ff81fa9-goog
Thread overview: 14+ messages
2026-03-20 14:59 [PATCH v3 00/13] arm64: Unmap linear alias of kernel data/bss Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 01/13] arm64: Move the zero page to rodata Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 02/13] arm64: mm: Preserve existing table mappings when mapping DRAM Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 03/13] arm64: mm: Preserve non-contiguous descriptors " Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 04/13] arm64: mm: Remove bogus stop condition from map_mem() loop Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 05/13] arm64: mm: Drop redundant pgd_t* argument from map_mem() Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 06/13] arm64: mm: Permit contiguous descriptors to be rewritten Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 07/13] arm64: mm: Use hierarchical XN mapping for the fixmap Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 08/13] arm64: kfence: Avoid NOMAP tricks when mapping the early pool Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 09/13] arm64: mm: Permit contiguous attribute for preliminary mappings Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 10/13] arm64: Move fixmap page tables to end of kernel image Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 11/13] arm64: mm: Don't abuse memblock NOMAP to check for overlaps Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 12/13] arm64: mm: Map the kernel data/bss read-only in the linear map Ard Biesheuvel
2026-03-20 14:59 ` [PATCH v3 13/13] arm64: mm: Unmap kernel data/bss entirely from " Ard Biesheuvel