linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN
@ 2022-11-11 17:11 Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 01/33] arm64: mm: Avoid SWAPPER_BLOCK_xxx constants in FDT fixmap logic Ard Biesheuvel
                   ` (32 more replies)
  0 siblings, 33 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

The purpose of this series is to make the boot sequence more robust, and
more efficient, by trying to adhere to the following principles:
- do as little as possible before enabling the MMU and caches [1]
- do everything only once
- do everything from C code
- don't rely on mappings that are executable and writable at the same
  time [2]
- avoid handling external input while the kernel text is mapped
  writable.

[1] has been addressed already with the exception of some EFI specific
tweaks that have been sent out separately:
https://lore.kernel.org/all/20221108182204.2447664-1-ardb@kernel.org/

[2] this is a useful principle in general, but also a prerequisite for
allowing WXN to be enabled, which was the inspiration for this work.

The size of the series has ballooned a bit, but I think the first ~8
patches could be picked up piecemeal, while the remainder is being
discussed and maybe respun again (or rejected).

The bulk of the series covers changes that are needed so that the kernel
mapping is created only once, and before it is used to execute the
kernel. All C code that implements this and its dependencies is moved
into the early mini C runtime that was added recently for the early
KASLR init code.

The most intrusive changes of the series are the ones that modify the
early CPU feature override detection in idreg-override.c. This is needed
to get rid of absolute symbol references (which is not trivial, given
that this code relies heavily on statically allocated data structures
containing pointers to functions and other global objects). It is also
needed to remove the dependency on kernel infrastructure for command
line parsing etc.

The resulting code can execute from the initial ID map, and populate the
feature override variables that have been moved into BSS (which is the
only part of the kernel image that is mapped writable in the initial ID
map).
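
As a rough illustration of the place-relative pattern used further down
in the series (the field and function names here are made up for the
example; offset_to_ptr() is the existing helper from
<linux/compiler.h>): instead of storing a pointer, which requires an
R_AARCH64_ABS64 relocation to be applied at runtime, a signed 32-bit
offset is stored and converted back into a pointer relative to its own
address, which the linker can resolve with a PREL32 relocation at build
time:

  #include <linux/compiler.h>
  #include <linux/types.h>
  #include <asm/cpufeature.h>

  /* absolute reference: needs a runtime R_AARCH64_ABS64 relocation */
  static struct arm64_ftr_override *ovr_ptr = &id_aa64mmfr1_override;

  /* place-relative reference: resolved at link time via R_AARCH64_PREL32 */
  static s32 ovr_offset;   /* filled in with a .reloc directive */

  static struct arm64_ftr_override *get_override(void)
  {
          /* offset_to_ptr() adds *off to the address of the offset itself */
          return offset_to_ptr(&ovr_offset);
  }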

Also, a new C implementation is provided to create the permanent kernel
mapping without the need to pivot through a temporary one based on block
mappings. This permanent mapping is created with all the necessary
attributes that support the optional features detected (and potentially
overridden) beforehand.

This means the kernel mapping code in mm/mmu.c can be retired, along
with the handling of the switch between the two and the copying of Kasan
shadow mappings and fixmap intermediate page table levels.

WXN
===
The final two patches cover the remaining changes that are needed to
enable WXN support, which is desirable for robustness, given that
writable, executable mappings of memory are too easy to subvert, and in
the kernel, we rarely rely on such mappings anyway.

Setting SCTLR_ELx.WXN makes all writable mappings implicitly
non-executable, and when set at EL1, it affects EL0 as well as EL1. This
means we need some diligence on the part of user space, but fortunately,
most JITs and other user space components that actually need to
manipulate the contents of their own executable code use split views on
the same memory, or switch between RW- and R-X and back. (One notable
exception is V8, which recently switched back to a JIT based on full
RWX mappings.)

So on the user space side, we need a couple of minor tweaks to validate
the mmap()/mprotect() arguments when WXN is in effect, and to handle any
faults that might occur on such mappings.
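
As a sketch of what that validation amounts to (the hook name and the
WXN predicate below are illustrative, not the ones introduced by this
series): a mapping that is both writable and executable cannot be
honoured once WXN is in effect, so it is better to refuse the request
up front than to hand out a mapping that silently turns out to be
non-executable:

  #include <linux/mman.h>

  static bool wxn_mmap_prot_valid(unsigned long prot)
  {
          if (!arm64_wxn_enabled())       /* hypothetical predicate */
                  return true;

          /* RW- and R-X remain fine; RWX cannot work under WXN */
          return (prot & (PROT_WRITE | PROT_EXEC)) !=
                 (PROT_WRITE | PROT_EXEC);
  }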

Changes since v6:
- radical overhaul of the boot code to move the feature override
  detection before the permanent mapping of the kernel is created
- rebase onto arm64/dynamic-scs and incorporate the initial dynamic SCS
  patching into the early C map_kernel() routine as well
- remove some false dependencies on the SWAPPER_BLOCK_xxx constants, and
  rename them now that they no longer apply to swapper_pg_dir, but
  only to the initial ID map

(v5 was a subset of v4 without the WXN specific pieces)

Changes since v4: [0]
- don't move __ro_after_init section now that we no longer need to,
- don't complicate the asm kernel mapping routines further, but instead,
  merge the two existing passes into one implemented in C,
- deal with rodata=off on WXN enabled builds (i.e., turn off WXN as
  well),
- add some acks from Kees

Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>

Ard Biesheuvel (32):
  arm64: mm: Avoid SWAPPER_BLOCK_xxx constants in FDT fixmap logic
  arm64: mm: Avoid swapper block size when choosing vmemmap granularity
  arm64: kaslr: don't pretend KASLR is enabled if offset <
    MIN_KIMG_ALIGN
  arm64: kaslr: drop special case for ThunderX in kaslr_requires_kpti()
  arm64: kernel: Disable latent_entropy GCC plugin in early C runtime
  arm64: kernel: Add relocation check to code built under pi/
  arm64: kernel: Don't rely on objcopy to make code under pi/ __init
  arm64: head: move relocation handling to C code
  arm64: idreg-override: Omit non-NULL checks for override pointer
  arm64: idreg-override: Use relative references to override variables
  arm64: idreg-override: Use relative references to filter routines
  arm64: idreg-override: Avoid parameq() and parameqn()
  arm64: idreg-override: avoid strlen() to check for empty strings
  arm64: idreg-override: Avoid sprintf() for simple string concatenation
  arm64: idreg_override: Avoid kstrtou64() to parse a single hex digit
  arm64: idreg-override: Move to early mini C runtime
  arm64: kernel: Remove early fdt remap code
  arm64: head: Clear BSS and the kernel page tables in one go
  arm64: Move feature overrides into the BSS section
  arm64: head: Run feature override detection before mapping the kernel
  arm64: head: move dynamic shadow call stack patching into early C
    runtime
  arm64: kaslr: Use feature override instead of parsing the cmdline
    again
  arm64: idreg-override: Create a pseudo feature for rodata=off
  arm64: head: allocate more pages for the kernel mapping
  arm64: head: move memstart_offset_seed handling to C code
  arm64: head: Move early kernel mapping routines into C code
  arm64: mm: avoid fixmap for early swapper_pg_dir updates
  arm64: mm: omit redundant remap of kernel image
  arm64: Revert "mm: provide idmap pointer to cpu_replace_ttbr1()"
  arm64: mmu: Retire SWAPPER_BLOCK_xxx and related constants
  mm: add arch hook to validate mmap() prot flags
  arm64: mm: add support for WXN memory translation attribute

Marc Zyngier (1):
  arm64: Turn kaslr_feature_override into a generic SW feature override

 arch/arm64/Kconfig                      |  11 +
 arch/arm64/include/asm/cpufeature.h     |  15 +
 arch/arm64/include/asm/fixmap.h         |   5 +-
 arch/arm64/include/asm/kasan.h          |   2 -
 arch/arm64/include/asm/kernel-pgtable.h |  69 ++--
 arch/arm64/include/asm/memory.h         |  11 +
 arch/arm64/include/asm/mman.h           |  36 ++
 arch/arm64/include/asm/mmu.h            |   2 +-
 arch/arm64/include/asm/mmu_context.h    |  43 ++-
 arch/arm64/include/asm/scs.h            |  34 +-
 arch/arm64/include/asm/setup.h          |   3 -
 arch/arm64/kernel/Makefile              |   7 +-
 arch/arm64/kernel/cpufeature.c          |  32 +-
 arch/arm64/kernel/head.S                | 208 ++----------
 arch/arm64/kernel/idreg-override.c      | 321 ------------------
 arch/arm64/kernel/image-vars.h          |  28 ++
 arch/arm64/kernel/kaslr.c               |  10 +-
 arch/arm64/kernel/module.c              |   2 +-
 arch/arm64/kernel/pi/Makefile           |  24 +-
 arch/arm64/kernel/pi/idreg-override.c   | 356 ++++++++++++++++++++
 arch/arm64/kernel/pi/kaslr_early.c      |  70 +---
 arch/arm64/kernel/pi/map_kernel.c       | 306 +++++++++++++++++
 arch/arm64/kernel/{ => pi}/patch-scs.c  |  34 +-
 arch/arm64/kernel/pi/pi.h               |  12 +
 arch/arm64/kernel/pi/relacheck.c        | 104 ++++++
 arch/arm64/kernel/pi/relocate.c         |  63 ++++
 arch/arm64/kernel/setup.c               |  22 --
 arch/arm64/kernel/suspend.c             |   2 +-
 arch/arm64/kernel/vmlinux.lds.S         |  16 +-
 arch/arm64/mm/kasan_init.c              |  19 +-
 arch/arm64/mm/mmu.c                     | 160 +++------
 arch/arm64/mm/proc.S                    |   9 +-
 include/linux/mman.h                    |  15 +
 mm/mmap.c                               |   3 +
 34 files changed, 1188 insertions(+), 866 deletions(-)
 delete mode 100644 arch/arm64/kernel/idreg-override.c
 create mode 100644 arch/arm64/kernel/pi/idreg-override.c
 create mode 100644 arch/arm64/kernel/pi/map_kernel.c
 rename arch/arm64/kernel/{ => pi}/patch-scs.c (90%)
 create mode 100644 arch/arm64/kernel/pi/pi.h
 create mode 100644 arch/arm64/kernel/pi/relacheck.c
 create mode 100644 arch/arm64/kernel/pi/relocate.c

-- 
2.35.1



* [PATCH v7 01/33] arm64: mm: Avoid SWAPPER_BLOCK_xxx constants in FDT fixmap logic
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 02/33] arm64: mm: Avoid swapper block size when choosing vmemmap granularity Ard Biesheuvel
                   ` (31 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

The FDT is permitted to be up to 2 MiB in size, and is mapped in the
fixmap region using the same granularity as we use for the initial ID
map and, until recently, as we used for the preliminary mapping of the
kernel in swapper_pg_dir.

However, even though the constants are the same, the motivation is
different: on 4k pagesize configurations, the fixmap region only has 2
MiB's worth of level 3 PTE slots to begin with, and so mapping the FDT
down to pages in the fixmap would be wasteful, which is why we use
block mappings in this case. This is also documented in the boot
protocol, i.e., adjacent regions must not be used by the platform in a
way that could result in the need for memory attributes that conflict
with the cacheable attributes used for the FDT block mapping.

For larger page sizes, using block mappings is unnecessary, and given
the potential issues caused by rounding, undesirable, so we use page
mappings in that case.

So to convey that this granularity is unrelated to the swapper block
size, and to allow us to rename or remove the associated constants in a
subsequent patch, use our own constants to define the granularity.

No functional change intended, although the FDT fixmap virtual address
will no longer be 2 MiB aligned on non-4k page configurations.
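
As a worked example of why the fixmap window needs an extra block's
worth of space on top of MAX_FDT_SIZE (numbers assume 4k pages, i.e. a
2 MiB block size):

- the physical start of the FDT is rounded down to a 2 MiB boundary
  before being mapped;
- a 2 MiB FDT whose start lies, say, 4 KiB below a 2 MiB boundary
  therefore spans two adjacent 2 MiB blocks;
- so up to MAX_FDT_SIZE + FIX_FDT_BSIZE (4 MiB in this case) of virtual
  space must be reserved to cover offset + size in all cases.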

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/fixmap.h |  5 ++--
 arch/arm64/mm/mmu.c             | 27 ++++++++++----------
 2 files changed, 17 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
index 71ed5fdf718bd0fd..d09654af5b1277c6 100644
--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -40,11 +40,12 @@ enum fixed_addresses {
 	 * maximum supported size, and put it at the top of the fixmap region.
 	 * The additional space ensures that any FDT that does not exceed
 	 * MAX_FDT_SIZE can be mapped regardless of whether it crosses any
-	 * 2 MB alignment boundaries.
+	 * 2 MB alignment boundaries on 4k pages configurations.
 	 *
 	 * Keep this at the top so it remains 2 MB aligned.
 	 */
-#define FIX_FDT_SIZE		(MAX_FDT_SIZE + SZ_2M)
+#define FIX_FDT_BSIZE		(MAX_FDT_SIZE >= PMD_SIZE ? PMD_SIZE : PAGE_SIZE)
+#define FIX_FDT_SIZE		(MAX_FDT_SIZE + FIX_FDT_BSIZE)
 	FIX_FDT_END,
 	FIX_FDT = FIX_FDT_END + FIX_FDT_SIZE / PAGE_SIZE - 1,
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 9a7c38965154081e..757c2fe54d2e99f0 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1373,22 +1373,23 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
 	 * allocate additional translation table pages, so that it is safe
 	 * to call create_mapping_noalloc() this early.
 	 *
-	 * On 64k pages, the FDT will be mapped using PTEs, so we need to
-	 * be in the same PMD as the rest of the fixmap.
-	 * On 4k pages, we'll use section mappings for the FDT so we only
-	 * have to be in the same PUD.
+	 * On 4k pages, the entire level 3 fixmap only covers 2 MiB, so we'll
+	 * need to use section mappings for the FDT, and these must be covered
+	 * by the same statically allocated PUD (bm_pud). Otherwise, the FDT
+	 * will be mapped using PTEs, so the entire mappings needs to fit into
+	 * a single PMD (bm_pmd).
 	 */
-	BUILD_BUG_ON(dt_virt_base % SZ_2M);
+	BUILD_BUG_ON(dt_virt_base % FIX_FDT_BSIZE);
 
-	BUILD_BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
-		     __fix_to_virt(FIX_BTMAP_BEGIN) >> SWAPPER_TABLE_SHIFT);
+	BUILD_BUG_ON((__fix_to_virt(FIX_FDT_END) ^ __fix_to_virt(FIX_BTMAP_BEGIN))
+		     & ~((FIX_FDT_BSIZE << (PAGE_SHIFT - 3)) - 1));
 
-	offset = dt_phys % SWAPPER_BLOCK_SIZE;
+	offset = dt_phys % FIX_FDT_BSIZE;
 	dt_virt = (void *)dt_virt_base + offset;
 
 	/* map the first chunk so we can read the size from the header */
-	create_mapping_noalloc(round_down(dt_phys, SWAPPER_BLOCK_SIZE),
-			dt_virt_base, SWAPPER_BLOCK_SIZE, prot);
+	create_mapping_noalloc(round_down(dt_phys, FIX_FDT_BSIZE),
+			       dt_virt_base, FIX_FDT_BSIZE, prot);
 
 	if (fdt_magic(dt_virt) != FDT_MAGIC)
 		return NULL;
@@ -1397,9 +1398,9 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
 	if (*size > MAX_FDT_SIZE)
 		return NULL;
 
-	if (offset + *size > SWAPPER_BLOCK_SIZE)
-		create_mapping_noalloc(round_down(dt_phys, SWAPPER_BLOCK_SIZE), dt_virt_base,
-			       round_up(offset + *size, SWAPPER_BLOCK_SIZE), prot);
+	if (offset + *size > FIX_FDT_BSIZE)
+		create_mapping_noalloc(round_down(dt_phys, FIX_FDT_BSIZE), dt_virt_base,
+				       round_up(offset + *size, FIX_FDT_BSIZE), prot);
 
 	return dt_virt;
 }
-- 
2.35.1



* [PATCH v7 02/33] arm64: mm: Avoid swapper block size when choosing vmemmap granularity
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 01/33] arm64: mm: Avoid SWAPPER_BLOCK_xxx constants in FDT fixmap logic Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-24  5:11   ` Anshuman Khandual
  2022-11-11 17:11 ` [PATCH v7 03/33] arm64: kaslr: don't pretend KASLR is enabled if offset < MIN_KIMG_ALIGN Ard Biesheuvel
                   ` (30 subsequent siblings)
  32 siblings, 1 reply; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

The logic to decide between PTE and PMD mappings in the vmemmap region
is currently based on the granularity of the initial ID map, but those
two things have little to do with each other.

The reason we use PMDs here on 4k pagesize kernels is that a struct
page array describing a single section of memory takes up at least the
size described by a PMD, and so mapping down to pages is pointless.
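
To make that concrete (illustrative numbers, the exact constants depend
on the configuration): with 4k pages, a 64-byte struct page and 128 MiB
sections, the per-section struct page array takes

  128 MiB / 4 KiB * 64 B = 2 MiB

i.e. exactly one PMD's worth, so PMD mappings lose nothing. With 64k
pages, a PMD covers 512 MiB while the per-section array is only a few
hundred KiB, so mapping it down to pages is the right choice.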

So use the correct conditional, and add a comment to clarify it.

This allows us to remove or rename the swapper block size related
constants in the future.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/mm/mmu.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 757c2fe54d2e99f0..0c35e1f195678695 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1196,7 +1196,12 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
 
-	if (!ARM64_KERNEL_USES_PMD_MAPS)
+	/*
+	 * Use page mappings for the vmemmap region if the area taken up by a
+	 * struct page array covering a single section is smaller than the area
+	 * covered by a PMD.
+	 */
+	if (SECTION_SIZE_BITS - VMEMMAP_SHIFT < PMD_SHIFT)
 		return vmemmap_populate_basepages(start, end, node, altmap);
 
 	do {
-- 
2.35.1



* [PATCH v7 03/33] arm64: kaslr: don't pretend KASLR is enabled if offset < MIN_KIMG_ALIGN
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 01/33] arm64: mm: Avoid SWAPPER_BLOCK_xxx constants in FDT fixmap logic Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 02/33] arm64: mm: Avoid swapper block size when choosing vmemmap granularity Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 04/33] arm64: kaslr: drop special case for ThunderX in kaslr_requires_kpti() Ard Biesheuvel
                   ` (29 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

Our virtual KASLR displacement consists of a fully randomized multiple
of 2 MiB, combined with an offset that is equal to the physical
placement modulo 2 MiB. This arrangement ensures that we can always use
2 MiB block mappings (or contiguous PTE mappings for 16k or 64k pages)
to map the kernel.

This means that a KASLR offset of less than 2 MiB is simply the product
of this physical displacement, and no randomization has actually taken
place. So let's avoid misreporting this case as 'KASLR enabled'.
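
As a small worked example (the 0x80000 figure is purely illustrative):

  kaslr_offset() = (random multiple of 2 MiB) + (phys placement % 2 MiB)

  no seed:      0         + 0x80000 = 0x80000  <  MIN_KIMG_ALIGN (2 MiB)
  with seed:    N * 2 MiB + 0x80000            >= MIN_KIMG_ALIGN

which is why a displacement below MIN_KIMG_ALIGN means no randomization
took place.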

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/memory.h | 11 +++++++++++
 arch/arm64/kernel/cpufeature.c  |  2 +-
 arch/arm64/kernel/kaslr.c       |  2 +-
 3 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 9dd08cd339c3f028..78e5163836a0ab95 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -180,6 +180,7 @@
 #include <linux/compiler.h>
 #include <linux/mmdebug.h>
 #include <linux/types.h>
+#include <asm/boot.h>
 #include <asm/bug.h>
 
 #if VA_BITS > 48
@@ -203,6 +204,16 @@ static inline unsigned long kaslr_offset(void)
 	return kimage_vaddr - KIMAGE_VADDR;
 }
 
+static inline bool kaslr_enabled(void)
+{
+	/*
+	 * The KASLR offset modulo MIN_KIMG_ALIGN is taken from the physical
+	 * placement of the image rather than from the seed, so a displacement
+	 * of less than MIN_KIMG_ALIGN means that no seed was provided.
+	 */
+	return kaslr_offset() >= MIN_KIMG_ALIGN;
+}
+
 /*
  * Allow all memory at the discovery stage. We will clip it later.
  */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b3f37e2209ad378f..ded7684b0a304edc 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1620,7 +1620,7 @@ bool kaslr_requires_kpti(void)
 			return false;
 	}
 
-	return kaslr_offset() > 0;
+	return kaslr_enabled();
 }
 
 static bool __meltdown_safe = true;
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 325455d16dbcb31a..e7477f21a4c9d062 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -41,7 +41,7 @@ static int __init kaslr_init(void)
 		return 0;
 	}
 
-	if (!kaslr_offset()) {
+	if (!kaslr_enabled()) {
 		pr_warn("KASLR disabled due to lack of seed\n");
 		return 0;
 	}
-- 
2.35.1



* [PATCH v7 04/33] arm64: kaslr: drop special case for ThunderX in kaslr_requires_kpti()
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (2 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 03/33] arm64: kaslr: don't pretend KASLR is enabled if offset < MIN_KIMG_ALIGN Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 05/33] arm64: kernel: Disable latent_entropy GCC plugin in early C runtime Ard Biesheuvel
                   ` (28 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

ThunderX is an obsolete platform that shipped without support for the
EFI_RNG_PROTOCOL in its firmware. Now that we no longer misidentify
small KASLR offsets as randomization being enabled, we can drop the
explicit check for ThunderX as well, given that KASLR is known to be
unavailable.

Note that we never enable KPTI on these systems, in spite of what this
function returns. However, using non-global mappings for code regions is
what tickles the erratum on these cores, regardless of whether KPTI is
enabled or not, so non-global mappings should simply never be used here.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/cpufeature.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index ded7684b0a304edc..fdbae2320b466d98 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1608,18 +1608,6 @@ bool kaslr_requires_kpti(void)
 			return false;
 	}
 
-	/*
-	 * Systems affected by Cavium erratum 24756 are incompatible
-	 * with KPTI.
-	 */
-	if (IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
-		extern const struct midr_range cavium_erratum_27456_cpus[];
-
-		if (is_midr_in_range_list(read_cpuid_id(),
-					  cavium_erratum_27456_cpus))
-			return false;
-	}
-
 	return kaslr_enabled();
 }
 
-- 
2.35.1



* [PATCH v7 05/33] arm64: kernel: Disable latent_entropy GCC plugin in early C runtime
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (3 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 04/33] arm64: kaslr: drop special case for ThunderX in kaslr_requires_kpti() Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 06/33] arm64: kernel: Add relocation check to code built under pi/ Ard Biesheuvel
                   ` (27 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

Avoid build issues in the early C code related to the latent_entropy GCC
plugin, by incorporating the C flags fragment that disables it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/pi/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile
index 4c0ea3cd4ea406b6..c844a0546d7f0e62 100644
--- a/arch/arm64/kernel/pi/Makefile
+++ b/arch/arm64/kernel/pi/Makefile
@@ -3,6 +3,7 @@
 
 KBUILD_CFLAGS	:= $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) -fpie \
 		   -Os -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN) \
+		   $(DISABLE_LATENT_ENTROPY_PLUGIN) \
 		   $(call cc-option,-mbranch-protection=none) \
 		   -I$(srctree)/scripts/dtc/libfdt -fno-stack-protector \
 		   -include $(srctree)/include/linux/hidden.h \
-- 
2.35.1



* [PATCH v7 06/33] arm64: kernel: Add relocation check to code built under pi/
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (4 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 05/33] arm64: kernel: Disable latent_entropy GCC plugin in early C runtime Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 07/33] arm64: kernel: Don't rely on objcopy to make code under pi/ __init Ard Biesheuvel
                   ` (26 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

The mini C runtime runs before relocations are processed, and so it
cannot rely on statically initialized pointer variables.

Add a check to ensure that such code does not get introduced by
accident, by going over each object's relocations that operate on
data sections that are part of the executable image, and raising an
error if any relocations of type R_AARCH64_ABS64 exist. Note that such
relocations are permitted in other places (e.g., debug sections) and can
never occur in compiler-generated code sections, so only check sections
that have SHF_ALLOC set and SHF_EXECINSTR cleared.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/pi/Makefile    |   9 +-
 arch/arm64/kernel/pi/relacheck.c | 104 ++++++++++++++++++++
 2 files changed, 111 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile
index c844a0546d7f0e62..810fdae897601e88 100644
--- a/arch/arm64/kernel/pi/Makefile
+++ b/arch/arm64/kernel/pi/Makefile
@@ -22,11 +22,16 @@ KCSAN_SANITIZE	:= n
 UBSAN_SANITIZE	:= n
 KCOV_INSTRUMENT	:= n
 
+hostprogs	:= relacheck
+
+quiet_cmd_piobjcopy = $(quiet_cmd_objcopy)
+      cmd_piobjcopy = $(obj)/relacheck $< && $(cmd_objcopy)
+
 $(obj)/%.pi.o: OBJCOPYFLAGS := --prefix-symbols=__pi_ \
 			       --remove-section=.note.gnu.property \
 			       --prefix-alloc-sections=.init
-$(obj)/%.pi.o: $(obj)/%.o FORCE
-	$(call if_changed,objcopy)
+$(obj)/%.pi.o: $(obj)/%.o $(obj)/relacheck FORCE
+	$(call if_changed,piobjcopy)
 
 $(obj)/lib-%.o: $(srctree)/lib/%.c FORCE
 	$(call if_changed_rule,cc_o_c)
diff --git a/arch/arm64/kernel/pi/relacheck.c b/arch/arm64/kernel/pi/relacheck.c
new file mode 100644
index 0000000000000000..1039259360c735d2
--- /dev/null
+++ b/arch/arm64/kernel/pi/relacheck.c
@@ -0,0 +1,104 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022 - Google LLC
+ * Author: Ard Biesheuvel <ardb@google.com>
+ */
+
+#include <elf.h>
+#include <fcntl.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+#include <unistd.h>
+
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+#define HOST_ORDER ELFDATA2LSB
+#elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+#define HOST_ORDER ELFDATA2MSB
+#endif
+
+static bool swap;
+
+static uint64_t swab_elfxword(uint64_t val)
+{
+	return swap ? __builtin_bswap64(val) : val;
+}
+
+static Elf64_Ehdr *ehdr;
+static Elf64_Shdr *shdr;
+
+static uint32_t swab_elfword(uint32_t val)
+{
+	return swap ? __builtin_bswap32(val) : val;
+}
+
+static uint16_t swab_elfhword(uint16_t val)
+{
+	return swap ? __builtin_bswap16(val) : val;
+}
+
+int main(int argc, char *argv[])
+{
+	struct stat stat;
+	int fd, ret;
+
+	if (argc < 2) {
+		fprintf(stderr, "file argument missing\n");
+		exit(EXIT_FAILURE);
+	}
+
+	fd = open(argv[1], O_RDWR);
+	if (fd < 0) {
+		fprintf(stderr, "failed to open %s\n", argv[1]);
+		exit(EXIT_FAILURE);
+	}
+
+	ret = fstat(fd, &stat);
+	if (ret < 0) {
+		fprintf(stderr, "failed to stat() %s\n", argv[1]);
+		exit(EXIT_FAILURE);
+	}
+
+	ehdr = mmap(0, stat.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
+	if (ehdr == MAP_FAILED) {
+		fprintf(stderr, "failed to mmap() %s\n", argv[1]);
+		exit(EXIT_FAILURE);
+	}
+
+	swap = ehdr->e_ident[EI_DATA] != HOST_ORDER;
+	shdr = (void *)ehdr + swab_elfxword(ehdr->e_shoff);
+
+	for (int i = 0; i < swab_elfhword(ehdr->e_shnum); i++) {
+		unsigned long info, flags;
+		const Elf64_Rela *rela;
+		int numrela;
+
+		if (swab_elfword(shdr[i].sh_type) != SHT_RELA)
+			continue;
+
+		/* only consider RELA sections operating on data */
+		info = swab_elfword(shdr[i].sh_info);
+		flags = swab_elfxword(shdr[info].sh_flags);
+		if ((flags & (SHF_ALLOC | SHF_EXECINSTR)) != SHF_ALLOC)
+			continue;
+
+		rela = (void *)ehdr + swab_elfxword(shdr[i].sh_offset);
+		numrela = swab_elfxword(shdr[i].sh_size) / sizeof(*rela);
+
+		for (int j = 0; j < numrela; j++) {
+			uint64_t info = swab_elfxword(rela[j].r_info);
+
+			if (ELF64_R_TYPE(info) == R_AARCH64_ABS64) {
+				fprintf(stderr,
+					"Absolute relocations detected in %s\n",
+					argv[1]);
+				exit(EXIT_FAILURE);
+			}
+		}
+	}
+	return 0;
+}
-- 
2.35.1



* [PATCH v7 07/33] arm64: kernel: Don't rely on objcopy to make code under pi/ __init
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (5 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 06/33] arm64: kernel: Add relocation check to code built under pi/ Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 08/33] arm64: head: move relocation handling to C code Ard Biesheuvel
                   ` (25 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

We will add some code under pi/ that contains global variables that
should not end up in __initdata, as they will not be writable via the
initial ID map. So only rely on objcopy for making the libfdt code
__init, and use explicit annotations for the rest.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/pi/Makefile      |  6 ++++--
 arch/arm64/kernel/pi/kaslr_early.c | 16 +++++++++-------
 2 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile
index 810fdae897601e88..8aaf7cbac359ecdb 100644
--- a/arch/arm64/kernel/pi/Makefile
+++ b/arch/arm64/kernel/pi/Makefile
@@ -28,11 +28,13 @@ quiet_cmd_piobjcopy = $(quiet_cmd_objcopy)
       cmd_piobjcopy = $(obj)/relacheck $< && $(cmd_objcopy)
 
 $(obj)/%.pi.o: OBJCOPYFLAGS := --prefix-symbols=__pi_ \
-			       --remove-section=.note.gnu.property \
-			       --prefix-alloc-sections=.init
+			       --remove-section=.note.gnu.property
 $(obj)/%.pi.o: $(obj)/%.o $(obj)/relacheck FORCE
 	$(call if_changed,piobjcopy)
 
+# ensure that all the lib- code ends up as __init code and data
+$(obj)/lib-%.pi.o: OBJCOPYFLAGS += --prefix-alloc-sections=.init
+
 $(obj)/lib-%.o: $(srctree)/lib/%.c FORCE
 	$(call if_changed_rule,cc_o_c)
 
diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c
index 17bff6e399e46b0b..86ae0273c95016c6 100644
--- a/arch/arm64/kernel/pi/kaslr_early.c
+++ b/arch/arm64/kernel/pi/kaslr_early.c
@@ -16,7 +16,7 @@
 #include <asm/memory.h>
 
 /* taken from lib/string.c */
-static char *__strstr(const char *s1, const char *s2)
+static char *__init __strstr(const char *s1, const char *s2)
 {
 	size_t l1, l2;
 
@@ -32,7 +32,7 @@ static char *__strstr(const char *s1, const char *s2)
 	}
 	return NULL;
 }
-static bool cmdline_contains_nokaslr(const u8 *cmdline)
+static bool __init cmdline_contains_nokaslr(const u8 *cmdline)
 {
 	const u8 *str;
 
@@ -40,7 +40,7 @@ static bool cmdline_contains_nokaslr(const u8 *cmdline)
 	return str == cmdline || (str > cmdline && *(str - 1) == ' ');
 }
 
-static bool is_kaslr_disabled_cmdline(void *fdt)
+static bool __init is_kaslr_disabled_cmdline(void *fdt)
 {
 	if (!IS_ENABLED(CONFIG_CMDLINE_FORCE)) {
 		int node;
@@ -66,17 +66,19 @@ static bool is_kaslr_disabled_cmdline(void *fdt)
 	return cmdline_contains_nokaslr(CONFIG_CMDLINE);
 }
 
-static u64 get_kaslr_seed(void *fdt)
+static u64 __init get_kaslr_seed(void *fdt)
 {
+	static char const chosen_str[] __initconst = "chosen";
+	static char const seed_str[] __initconst = "kaslr-seed";
 	int node, len;
 	fdt64_t *prop;
 	u64 ret;
 
-	node = fdt_path_offset(fdt, "/chosen");
+	node = fdt_path_offset(fdt, chosen_str);
 	if (node < 0)
 		return 0;
 
-	prop = fdt_getprop_w(fdt, node, "kaslr-seed", &len);
+	prop = fdt_getprop_w(fdt, node, seed_str, &len);
 	if (!prop || len != sizeof(u64))
 		return 0;
 
@@ -85,7 +87,7 @@ static u64 get_kaslr_seed(void *fdt)
 	return ret;
 }
 
-asmlinkage u64 kaslr_early_init(void *fdt)
+asmlinkage u64 __init kaslr_early_init(void *fdt)
 {
 	u64 seed;
 
-- 
2.35.1



* [PATCH v7 08/33] arm64: head: move relocation handling to C code
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (6 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 07/33] arm64: kernel: Don't rely on objcopy to make code under pi/ __init Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 09/33] arm64: Turn kaslr_feature_override into a generic SW feature override Ard Biesheuvel
                   ` (24 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

Now that we have a mini C runtime before the kernel mapping is up, we
can move the non-trivial relocation processing code out of head.S and
reimplement it in C.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/Makefile      |   3 +-
 arch/arm64/kernel/head.S        | 104 ++------------------
 arch/arm64/kernel/pi/Makefile   |   5 +-
 arch/arm64/kernel/pi/relocate.c |  61 ++++++++++++
 arch/arm64/kernel/vmlinux.lds.S |  12 ++-
 5 files changed, 81 insertions(+), 104 deletions(-)

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 8dd925f4a4c6d29e..a8717865fee5c296 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -65,7 +65,8 @@ obj-$(CONFIG_ACPI)			+= acpi.o
 obj-$(CONFIG_ACPI_NUMA)			+= acpi_numa.o
 obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL)	+= acpi_parking_protocol.o
 obj-$(CONFIG_PARAVIRT)			+= paravirt.o
-obj-$(CONFIG_RANDOMIZE_BASE)		+= kaslr.o pi/
+obj-$(CONFIG_RELOCATABLE)		+= pi/
+obj-$(CONFIG_RANDOMIZE_BASE)		+= kaslr.o
 obj-$(CONFIG_HIBERNATION)		+= hibernate.o hibernate-asm.o
 obj-$(CONFIG_ELF_CORE)			+= elfcore.o
 obj-$(CONFIG_KEXEC_CORE)		+= machine_kexec.o relocate_kernel.o	\
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 952e17bd1c0b4f91..998a3e066b2fdf0a 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -80,7 +80,7 @@
 	 *  x20        primary_entry() .. __primary_switch()    CPU boot mode
 	 *  x21        primary_entry() .. start_kernel()        FDT pointer passed at boot in x0
 	 *  x22        create_idmap() .. start_kernel()         ID map VA of the DT blob
-	 *  x23        primary_entry() .. start_kernel()        physical misalignment/KASLR offset
+	 *  x23        __primary_switch()                       physical misalignment/KASLR offset
 	 *  x24        __primary_switch()                       linear map KASLR seed
 	 *  x25        primary_entry() .. start_kernel()        supported VA size
 	 *  x28        create_idmap()                           callee preserved temp register
@@ -338,7 +338,7 @@ SYM_FUNC_START_LOCAL(create_idmap)
 	/* Remap the kernel page tables r/w in the ID map */
 	adrp	x1, _text
 	adrp	x2, init_pg_dir
-	adrp	x3, init_pg_end
+	adrp	x3, _end
 	bic	x4, x2, #SWAPPER_BLOCK_SIZE - 1
 	mov	x5, SWAPPER_RW_MMUFLAGS
 	mov	x6, #SWAPPER_BLOCK_SHIFT
@@ -705,97 +705,6 @@ SYM_FUNC_START_LOCAL(__no_granule_support)
 	b	1b
 SYM_FUNC_END(__no_granule_support)
 
-#ifdef CONFIG_RELOCATABLE
-SYM_FUNC_START_LOCAL(__relocate_kernel)
-	/*
-	 * Iterate over each entry in the relocation table, and apply the
-	 * relocations in place.
-	 */
-	adr_l	x9, __rela_start
-	adr_l	x10, __rela_end
-	mov_q	x11, KIMAGE_VADDR		// default virtual offset
-	add	x11, x11, x23			// actual virtual offset
-
-0:	cmp	x9, x10
-	b.hs	1f
-	ldp	x12, x13, [x9], #24
-	ldr	x14, [x9, #-8]
-	cmp	w13, #R_AARCH64_RELATIVE
-	b.ne	0b
-	add	x14, x14, x23			// relocate
-	str	x14, [x12, x23]
-	b	0b
-
-1:
-#ifdef CONFIG_RELR
-	/*
-	 * Apply RELR relocations.
-	 *
-	 * RELR is a compressed format for storing relative relocations. The
-	 * encoded sequence of entries looks like:
-	 * [ AAAAAAAA BBBBBBB1 BBBBBBB1 ... AAAAAAAA BBBBBB1 ... ]
-	 *
-	 * i.e. start with an address, followed by any number of bitmaps. The
-	 * address entry encodes 1 relocation. The subsequent bitmap entries
-	 * encode up to 63 relocations each, at subsequent offsets following
-	 * the last address entry.
-	 *
-	 * The bitmap entries must have 1 in the least significant bit. The
-	 * assumption here is that an address cannot have 1 in lsb. Odd
-	 * addresses are not supported. Any odd addresses are stored in the RELA
-	 * section, which is handled above.
-	 *
-	 * Excluding the least significant bit in the bitmap, each non-zero
-	 * bit in the bitmap represents a relocation to be applied to
-	 * a corresponding machine word that follows the base address
-	 * word. The second least significant bit represents the machine
-	 * word immediately following the initial address, and each bit
-	 * that follows represents the next word, in linear order. As such,
-	 * a single bitmap can encode up to 63 relocations in a 64-bit object.
-	 *
-	 * In this implementation we store the address of the next RELR table
-	 * entry in x9, the address being relocated by the current address or
-	 * bitmap entry in x13 and the address being relocated by the current
-	 * bit in x14.
-	 */
-	adr_l	x9, __relr_start
-	adr_l	x10, __relr_end
-
-2:	cmp	x9, x10
-	b.hs	7f
-	ldr	x11, [x9], #8
-	tbnz	x11, #0, 3f			// branch to handle bitmaps
-	add	x13, x11, x23
-	ldr	x12, [x13]			// relocate address entry
-	add	x12, x12, x23
-	str	x12, [x13], #8			// adjust to start of bitmap
-	b	2b
-
-3:	mov	x14, x13
-4:	lsr	x11, x11, #1
-	cbz	x11, 6f
-	tbz	x11, #0, 5f			// skip bit if not set
-	ldr	x12, [x14]			// relocate bit
-	add	x12, x12, x23
-	str	x12, [x14]
-
-5:	add	x14, x14, #8			// move to next bit's address
-	b	4b
-
-6:	/*
-	 * Move to the next bitmap's address. 8 is the word size, and 63 is the
-	 * number of significant bits in a bitmap entry.
-	 */
-	add	x13, x13, #(8 * 63)
-	b	2b
-
-7:
-#endif
-	ret
-
-SYM_FUNC_END(__relocate_kernel)
-#endif
-
 SYM_FUNC_START_LOCAL(__primary_switch)
 	adrp	x1, reserved_pg_dir
 	adrp	x2, init_idmap_pg_dir
@@ -803,11 +712,11 @@ SYM_FUNC_START_LOCAL(__primary_switch)
 #ifdef CONFIG_RELOCATABLE
 	adrp	x23, KERNEL_START
 	and	x23, x23, MIN_KIMG_ALIGN - 1
-#ifdef CONFIG_RANDOMIZE_BASE
-	mov	x0, x22
-	adrp	x1, init_pg_end
+	adrp	x1, early_init_stack
 	mov	sp, x1
 	mov	x29, xzr
+#ifdef CONFIG_RANDOMIZE_BASE
+	mov	x0, x22
 	bl	__pi_kaslr_early_init
 	and	x24, x0, #SZ_2M - 1		// capture memstart offset seed
 	bic	x0, x0, #SZ_2M - 1
@@ -820,7 +729,8 @@ SYM_FUNC_START_LOCAL(__primary_switch)
 	adrp	x1, init_pg_dir
 	load_ttbr1 x1, x1, x2
 #ifdef CONFIG_RELOCATABLE
-	bl	__relocate_kernel
+	mov	x0, x23
+	bl	__pi_relocate_kernel
 #endif
 	ldr	x8, =__primary_switched
 	adrp	x0, KERNEL_START		// __pa(KERNEL_START)
diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile
index 8aaf7cbac359ecdb..e046c10606cb822e 100644
--- a/arch/arm64/kernel/pi/Makefile
+++ b/arch/arm64/kernel/pi/Makefile
@@ -38,5 +38,6 @@ $(obj)/lib-%.pi.o: OBJCOPYFLAGS += --prefix-alloc-sections=.init
 $(obj)/lib-%.o: $(srctree)/lib/%.c FORCE
 	$(call if_changed_rule,cc_o_c)
 
-obj-y		:= kaslr_early.pi.o lib-fdt.pi.o lib-fdt_ro.pi.o
-extra-y		:= $(patsubst %.pi.o,%.o,$(obj-y))
+obj-y				:= relocate.pi.o
+obj-$(CONFIG_RANDOMIZE_BASE)	+= kaslr_early.pi.o lib-fdt.pi.o lib-fdt_ro.pi.o
+extra-y				:= $(patsubst %.pi.o,%.o,$(obj-y))
diff --git a/arch/arm64/kernel/pi/relocate.c b/arch/arm64/kernel/pi/relocate.c
new file mode 100644
index 0000000000000000..c35cb918fa2a004a
--- /dev/null
+++ b/arch/arm64/kernel/pi/relocate.c
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0-only
+// Copyright 2022 Google LLC
+// Author: Ard Biesheuvel <ardb@google.com>
+
+#include <linux/elf.h>
+#include <linux/init.h>
+#include <linux/types.h>
+
+extern const Elf64_Rela rela_start[], rela_end[];
+extern const u64 relr_start[], relr_end[];
+
+void __init relocate_kernel(u64 offset)
+{
+	u64 *place = NULL;
+
+	for (const Elf64_Rela *rela = rela_start; rela < rela_end; rela++) {
+		if (ELF64_R_TYPE(rela->r_info) != R_AARCH64_RELATIVE)
+			continue;
+		*(u64 *)(rela->r_offset + offset) = rela->r_addend + offset;
+	}
+
+	if (!IS_ENABLED(CONFIG_RELR) || !offset)
+		return;
+
+	/*
+	 * Apply RELR relocations.
+	 *
+	 * RELR is a compressed format for storing relative relocations. The
+	 * encoded sequence of entries looks like:
+	 * [ AAAAAAAA BBBBBBB1 BBBBBBB1 ... AAAAAAAA BBBBBB1 ... ]
+	 *
+	 * i.e. start with an address, followed by any number of bitmaps. The
+	 * address entry encodes 1 relocation. The subsequent bitmap entries
+	 * encode up to 63 relocations each, at subsequent offsets following
+	 * the last address entry.
+	 *
+	 * The bitmap entries must have 1 in the least significant bit. The
+	 * assumption here is that an address cannot have 1 in lsb. Odd
+	 * addresses are not supported. Any odd addresses are stored in the
+	 * RELA section, which is handled above.
+	 *
+	 * Excluding the least significant bit in the bitmap, each non-zero bit
+	 * in the bitmap represents a relocation to be applied to a
+	 * corresponding machine word that follows the base address word. The
+	 * second least significant bit represents the machine word immediately
+	 * following the initial address, and each bit that follows represents
+	 * the next word, in linear order. As such, a single bitmap can encode
+	 * up to 63 relocations in a 64-bit object.
+	 */
+	for (const u64 *relr = relr_start; relr < relr_end; relr++) {
+		if (!(*relr & 1)) {
+			place = (u64 *)(*relr + offset);
+			*place++ += offset;
+		} else {
+			for (u64 *p = place, r = *relr >> 1; r; p++, r >>= 1)
+				if (r & 1)
+					*p += offset;
+			place += 63;
+		}
+	}
+}
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 4c13dafc98b8400f..bebb88daf4c52039 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -270,15 +270,15 @@ SECTIONS
 	HYPERVISOR_RELOC_SECTION
 
 	.rela.dyn : ALIGN(8) {
-		__rela_start = .;
+		__pi_rela_start = .;
 		*(.rela .rela*)
-		__rela_end = .;
+		__pi_rela_end = .;
 	}
 
 	.relr.dyn : ALIGN(8) {
-		__relr_start = .;
+		__pi_relr_start = .;
 		*(.relr.dyn)
-		__relr_end = .;
+		__pi_relr_end = .;
 	}
 
 	. = ALIGN(SEGMENT_ALIGN);
@@ -317,6 +317,10 @@ SECTIONS
 	init_pg_dir = .;
 	. += INIT_DIR_SIZE;
 	init_pg_end = .;
+#ifdef CONFIG_RELOCATABLE
+	. += SZ_4K;		/* stack for the early relocation code */
+	early_init_stack = .;
+#endif
 
 	. = ALIGN(SEGMENT_ALIGN);
 	__pecoff_data_size = ABSOLUTE(. - __initdata_begin);
-- 
2.35.1



* [PATCH v7 09/33] arm64: Turn kaslr_feature_override into a generic SW feature override
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (7 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 08/33] arm64: head: move relocation handling to C code Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 10/33] arm64: idreg-override: Omit non-NULL checks for override pointer Ard Biesheuvel
                   ` (23 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

From: Marc Zyngier <maz@kernel.org>

Disabling KASLR from the command line is implemented as a feature
override. Repaint it slightly so that it can further be used as
more generic infrastructure for SW override purposes.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h |  4 ++++
 arch/arm64/kernel/cpufeature.c      |  2 ++
 arch/arm64/kernel/idreg-override.c  | 16 ++++++----------
 arch/arm64/kernel/kaslr.c           |  6 +++---
 4 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index f73f11b5504254be..f44a7860636fd411 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -15,6 +15,8 @@
 #define MAX_CPU_FEATURES	128
 #define cpu_feature(x)		KERNEL_HWCAP_ ## x
 
+#define ARM64_SW_FEATURE_OVERRIDE_NOKASLR	0
+
 #ifndef __ASSEMBLY__
 
 #include <linux/bug.h>
@@ -914,6 +916,8 @@ extern struct arm64_ftr_override id_aa64smfr0_override;
 extern struct arm64_ftr_override id_aa64isar1_override;
 extern struct arm64_ftr_override id_aa64isar2_override;
 
+extern struct arm64_ftr_override arm64_sw_feature_override;
+
 u32 get_kvm_ipa_limit(void);
 void dump_cpu_features(void);
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index fdbae2320b466d98..ebd8cabffb105e15 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -644,6 +644,8 @@ struct arm64_ftr_override __ro_after_init id_aa64smfr0_override;
 struct arm64_ftr_override __ro_after_init id_aa64isar1_override;
 struct arm64_ftr_override __ro_after_init id_aa64isar2_override;
 
+struct arm64_ftr_override arm64_sw_feature_override;
+
 static const struct __ftr_reg_entry {
 	u32			sys_id;
 	struct arm64_ftr_reg 	*reg;
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index 95133765ed29a0e4..4e8ef5e05db7a424 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -137,15 +137,11 @@ static const struct ftr_set_desc smfr0 __initconst = {
 	},
 };
 
-extern struct arm64_ftr_override kaslr_feature_override;
-
-static const struct ftr_set_desc kaslr __initconst = {
-	.name		= "kaslr",
-#ifdef CONFIG_RANDOMIZE_BASE
-	.override	= &kaslr_feature_override,
-#endif
+static const struct ftr_set_desc sw_features __initconst = {
+	.name		= "arm64_sw",
+	.override	= &arm64_sw_feature_override,
 	.fields		= {
-		FIELD("disabled", 0, NULL),
+		FIELD("nokaslr", ARM64_SW_FEATURE_OVERRIDE_NOKASLR, NULL),
 		{}
 	},
 };
@@ -157,7 +153,7 @@ static const struct ftr_set_desc * const regs[] __initconst = {
 	&isar1,
 	&isar2,
 	&smfr0,
-	&kaslr,
+	&sw_features,
 };
 
 static const struct {
@@ -174,7 +170,7 @@ static const struct {
 	  "id_aa64isar1.api=0 id_aa64isar1.apa=0 "
 	  "id_aa64isar2.gpa3=0 id_aa64isar2.apa3=0"	   },
 	{ "arm64.nomte",		"id_aa64pfr1.mte=0" },
-	{ "nokaslr",			"kaslr.disabled=1" },
+	{ "nokaslr",			"arm64_sw.nokaslr=1" },
 };
 
 static int __init find_field(const char *cmdline,
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index e7477f21a4c9d062..5d4ce7f5f157bb3f 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -23,8 +23,6 @@
 u64 __ro_after_init module_alloc_base;
 u16 __initdata memstart_offset_seed;
 
-struct arm64_ftr_override kaslr_feature_override __initdata;
-
 static int __init kaslr_init(void)
 {
 	u64 module_range;
@@ -36,7 +34,9 @@ static int __init kaslr_init(void)
 	 */
 	module_alloc_base = (u64)_etext - MODULES_VSIZE;
 
-	if (kaslr_feature_override.val & kaslr_feature_override.mask & 0xf) {
+	if (cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val &
+						 arm64_sw_feature_override.mask,
+						 ARM64_SW_FEATURE_OVERRIDE_NOKASLR)) {
 		pr_info("KASLR disabled on command line\n");
 		return 0;
 	}
-- 
2.35.1



* [PATCH v7 10/33] arm64: idreg-override: Omit non-NULL checks for override pointer
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (8 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 09/33] arm64: Turn kaslr_feature_override into a generic SW feature override Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 11/33] arm64: idreg-override: Use relative references to override variables Ard Biesheuvel
                   ` (22 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

Now that override pointers are always set, we can drop the various
non-NULL checks that we have in the code.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/idreg-override.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index 4e8ef5e05db7a424..d7fc813ba5913e27 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -195,9 +195,6 @@ static void __init match_options(const char *cmdline)
 	for (i = 0; i < ARRAY_SIZE(regs); i++) {
 		int f;
 
-		if (!regs[i]->override)
-			continue;
-
 		for (f = 0; strlen(regs[i]->fields[f].name); f++) {
 			u64 shift = regs[i]->fields[f].shift;
 			u64 width = regs[i]->fields[f].width ?: 4;
@@ -298,10 +295,8 @@ asmlinkage void __init init_feature_override(u64 boot_status)
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(regs); i++) {
-		if (regs[i]->override) {
-			regs[i]->override->val  = 0;
-			regs[i]->override->mask = 0;
-		}
+		regs[i]->override->val  = 0;
+		regs[i]->override->mask = 0;
 	}
 
 	__boot_status = boot_status;
@@ -309,9 +304,8 @@ asmlinkage void __init init_feature_override(u64 boot_status)
 	parse_cmdline();
 
 	for (i = 0; i < ARRAY_SIZE(regs); i++) {
-		if (regs[i]->override)
-			dcache_clean_inval_poc((unsigned long)regs[i]->override,
-					    (unsigned long)regs[i]->override +
-					    sizeof(*regs[i]->override));
+		dcache_clean_inval_poc((unsigned long)regs[i]->override,
+				       (unsigned long)regs[i]->override +
+				       sizeof(*regs[i]->override));
 	}
 }
-- 
2.35.1



* [PATCH v7 11/33] arm64: idreg-override: Use relative references to override variables
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (9 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 10/33] arm64: idreg-override: Omit non-NULL checks for override pointer Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 12/33] arm64: idreg-override: Use relative references to filter routines Ard Biesheuvel
                   ` (21 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

To prepare the idreg-override for running in a context where statically
initialized absolute symbol references are not permitted, use place
relative relocations to refer to the 'override' global variables in each
feature override descriptor set, and populate the regs[] array using
relative references as well.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/idreg-override.c | 144 +++++++++-----------
 1 file changed, 63 insertions(+), 81 deletions(-)

diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index d7fc813ba5913e27..f8ae7f6d0d9b4fd0 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -6,6 +6,7 @@
  * Author: Marc Zyngier <maz@kernel.org>
  */
 
+#include <linux/build_bug.h>
 #include <linux/ctype.h>
 #include <linux/kernel.h>
 #include <linux/libfdt.h>
@@ -22,18 +23,29 @@
 static u64 __boot_status __initdata;
 
 struct ftr_set_desc {
-	char 				name[FTR_DESC_NAME_LEN];
-	struct arm64_ftr_override	*override;
+	s32		override_offset; 	// must remain first
+	char 		name[FTR_DESC_NAME_LEN];
 	struct {
-		char			name[FTR_DESC_FIELD_LEN];
-		u8			shift;
-		u8			width;
-		bool			(*filter)(u64 val);
-	} 				fields[];
+		char	name[FTR_DESC_FIELD_LEN];
+		u8	shift;
+		u8	width;
+		bool	(*filter)(u64 val);
+	} 		fields[];
 };
 
+static_assert(offsetof(struct ftr_set_desc, override_offset) == 0);
+
 #define FIELD(n, s, f)	{ .name = n, .shift = s, .width = 4, .filter = f }
 
+#define DEFINE_OVERRIDE(__idx, __id, __name, __ovr, ...)		\
+	asmlinkage const struct ftr_set_desc __initconst __id = {	\
+		.name	= __name,					\
+		.fields = { __VA_ARGS__ },				\
+	};								\
+	asm(".globl " #__ovr "; "					\
+	    ".reloc " #__id ", R_AARCH64_PREL32, " #__ovr "; "		\
+	    ".reloc regs + (4 * " #__idx "), R_AARCH64_PREL32, " #__id)
+
 static bool __init mmfr1_vh_filter(u64 val)
 {
 	/*
@@ -46,14 +58,9 @@ static bool __init mmfr1_vh_filter(u64 val)
 		 val == 0);
 }
 
-static const struct ftr_set_desc mmfr1 __initconst = {
-	.name		= "id_aa64mmfr1",
-	.override	= &id_aa64mmfr1_override,
-	.fields		= {
+DEFINE_OVERRIDE(0, mmfr1, "id_aa64mmfr1", id_aa64mmfr1_override,
 		FIELD("vh", ID_AA64MMFR1_EL1_VH_SHIFT, mmfr1_vh_filter),
-		{}
-	},
-};
+		{});
 
 static bool __init pfr0_sve_filter(u64 val)
 {
@@ -70,14 +77,9 @@ static bool __init pfr0_sve_filter(u64 val)
 	return true;
 }
 
-static const struct ftr_set_desc pfr0 __initconst = {
-	.name		= "id_aa64pfr0",
-	.override	= &id_aa64pfr0_override,
-	.fields		= {
+DEFINE_OVERRIDE(1, pfr0, "id_aa64pfr0", id_aa64pfr0_override,
 	        FIELD("sve", ID_AA64PFR0_EL1_SVE_SHIFT, pfr0_sve_filter),
-		{}
-	},
-};
+		{});
 
 static bool __init pfr1_sme_filter(u64 val)
 {
@@ -94,67 +96,46 @@ static bool __init pfr1_sme_filter(u64 val)
 	return true;
 }
 
-static const struct ftr_set_desc pfr1 __initconst = {
-	.name		= "id_aa64pfr1",
-	.override	= &id_aa64pfr1_override,
-	.fields		= {
+DEFINE_OVERRIDE(2, pfr1, "id_aa64pfr1", id_aa64pfr1_override,
 		FIELD("bt", ID_AA64PFR1_EL1_BT_SHIFT, NULL ),
 		FIELD("mte", ID_AA64PFR1_EL1_MTE_SHIFT, NULL),
 		FIELD("sme", ID_AA64PFR1_EL1_SME_SHIFT, pfr1_sme_filter),
-		{}
-	},
-};
+		{});
 
-static const struct ftr_set_desc isar1 __initconst = {
-	.name		= "id_aa64isar1",
-	.override	= &id_aa64isar1_override,
-	.fields		= {
+DEFINE_OVERRIDE(3, isar1, "id_aa64isar1", id_aa64isar1_override,
 		FIELD("gpi", ID_AA64ISAR1_EL1_GPI_SHIFT, NULL),
 		FIELD("gpa", ID_AA64ISAR1_EL1_GPA_SHIFT, NULL),
 		FIELD("api", ID_AA64ISAR1_EL1_API_SHIFT, NULL),
 		FIELD("apa", ID_AA64ISAR1_EL1_APA_SHIFT, NULL),
-		{}
-	},
-};
+		{});
 
-static const struct ftr_set_desc isar2 __initconst = {
-	.name		= "id_aa64isar2",
-	.override	= &id_aa64isar2_override,
-	.fields		= {
+DEFINE_OVERRIDE(4, isar2, "id_aa64isar2", id_aa64isar2_override,
 		FIELD("gpa3", ID_AA64ISAR2_EL1_GPA3_SHIFT, NULL),
 		FIELD("apa3", ID_AA64ISAR2_EL1_APA3_SHIFT, NULL),
-		{}
-	},
-};
+		{});
 
-static const struct ftr_set_desc smfr0 __initconst = {
-	.name		= "id_aa64smfr0",
-	.override	= &id_aa64smfr0_override,
-	.fields		= {
+DEFINE_OVERRIDE(5, smfr0, "id_aa64smfr0", id_aa64smfr0_override,
 		/* FA64 is a one bit field... :-/ */
 		{ "fa64", ID_AA64SMFR0_EL1_FA64_SHIFT, 1, },
-		{}
-	},
-};
+		{});
 
-static const struct ftr_set_desc sw_features __initconst = {
-	.name		= "arm64_sw",
-	.override	= &arm64_sw_feature_override,
-	.fields		= {
+DEFINE_OVERRIDE(6, sw_features, "arm64_sw", arm64_sw_feature_override,
 		FIELD("nokaslr", ARM64_SW_FEATURE_OVERRIDE_NOKASLR, NULL),
-		{}
-	},
-};
+		{});
 
-static const struct ftr_set_desc * const regs[] __initconst = {
-	&mmfr1,
-	&pfr0,
-	&pfr1,
-	&isar1,
-	&isar2,
-	&smfr0,
-	&sw_features,
-};
+/*
+ * regs[] is populated by R_AARCH64_PREL32 directives invisible to the compiler
+ * so it cannot be static or const, or the compiler might try to use constant
+ * propagation on the values.
+ */
+asmlinkage s32 regs[7] __initdata = { [0 ... ARRAY_SIZE(regs) - 1] = S32_MAX };
+
+static struct arm64_ftr_override * __init reg_override(int i)
+{
+	const struct ftr_set_desc *reg = offset_to_ptr(&regs[i]);
+
+	return offset_to_ptr(&reg->override_offset);
+}
 
 static const struct {
 	char	alias[FTR_ALIAS_NAME_LEN];
@@ -193,15 +174,16 @@ static void __init match_options(const char *cmdline)
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(regs); i++) {
+		const struct ftr_set_desc *reg = offset_to_ptr(&regs[i]);
 		int f;
 
-		for (f = 0; strlen(regs[i]->fields[f].name); f++) {
-			u64 shift = regs[i]->fields[f].shift;
-			u64 width = regs[i]->fields[f].width ?: 4;
+		for (f = 0; strlen(reg->fields[f].name); f++) {
+			u64 shift = reg->fields[f].shift;
+			u64 width = reg->fields[f].width ?: 4;
 			u64 mask = GENMASK_ULL(shift + width - 1, shift);
 			u64 v;
 
-			if (find_field(cmdline, regs[i], f, &v))
+			if (find_field(cmdline, reg, f, &v))
 				continue;
 
 			/*
@@ -209,16 +191,16 @@ static void __init match_options(const char *cmdline)
 			 * it by setting the value to the all-ones while
 			 * clearing the mask... Yes, this is fragile.
 			 */
-			if (regs[i]->fields[f].filter &&
-			    !regs[i]->fields[f].filter(v)) {
-				regs[i]->override->val  |= mask;
-				regs[i]->override->mask &= ~mask;
+			if (reg->fields[f].filter &&
+			    !reg->fields[f].filter(v)) {
+				reg_override(i)->val  |= mask;
+				reg_override(i)->mask &= ~mask;
 				continue;
 			}
 
-			regs[i]->override->val  &= ~mask;
-			regs[i]->override->val  |= (v << shift) & mask;
-			regs[i]->override->mask |= mask;
+			reg_override(i)->val  &= ~mask;
+			reg_override(i)->val  |= (v << shift) & mask;
+			reg_override(i)->mask |= mask;
 
 			return;
 		}
@@ -295,8 +277,8 @@ asmlinkage void __init init_feature_override(u64 boot_status)
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(regs); i++) {
-		regs[i]->override->val  = 0;
-		regs[i]->override->mask = 0;
+		reg_override(i)->val  = 0;
+		reg_override(i)->mask = 0;
 	}
 
 	__boot_status = boot_status;
@@ -304,8 +286,8 @@ asmlinkage void __init init_feature_override(u64 boot_status)
 	parse_cmdline();
 
 	for (i = 0; i < ARRAY_SIZE(regs); i++) {
-		dcache_clean_inval_poc((unsigned long)regs[i]->override,
-				       (unsigned long)regs[i]->override +
-				       sizeof(*regs[i]->override));
+		dcache_clean_inval_poc((unsigned long)reg_override(i),
+				       (unsigned long)reg_override(i) +
+				       sizeof(struct arm64_ftr_override));
 	}
 }
-- 
2.35.1



* [PATCH v7 12/33] arm64: idreg-override: Use relative references to filter routines
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (10 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 11/33] arm64: idreg-override: Use relative references to override variables Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 13/33] arm64: idreg-override: Avoid parameq() and parameqn() Ard Biesheuvel
                   ` (20 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

To avoid statically initialized pointer variables, which need runtime
relocation and therefore prevent this code from being used before
relocations have been processed, tweak the static declarations so that
relative references are used instead. This means we will be doing the
compiler's job and calculating where exactly each relocation needs to
point, so add some static asserts to ensure we notice when we get it
wrong.
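
To make the layout contract more concrete, here is a minimal sketch
(struct reduced to the relevant members, lengths reused from above) of
how the static asserts encode the same arithmetic as the hand-written
.reloc offsets, so that a layout change trips the build instead of
silently misplacing a relocation.

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define NAME_LEN	20	/* must remain a multiple of 4 */
#define FIELD_LEN	10	/* must remain a multiple of 4 +/- 2 */

struct desc {
	int32_t		override_offset;	/* must remain first */
	char		name[NAME_LEN];
	struct {
		int32_t	filter_offset;		/* must remain first */
		char	name[FIELD_LEN];
		uint8_t	shift;
		uint8_t	width;
	} fields[];
};

/* same formula the asm uses: 4 + NAME_LEN + idx * (4 + FIELD_LEN + 2) */
static_assert(offsetof(struct desc, fields[0].filter_offset) ==
	      4 + NAME_LEN, "layout changed");
static_assert(offsetof(struct desc, fields[1].filter_offset) ==
	      4 + NAME_LEN + 4 + FIELD_LEN + 2, "layout changed");

int main(void) { return 0; }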

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/idreg-override.c | 63 +++++++++++++-------
 1 file changed, 41 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index f8ae7f6d0d9b4fd0..01eed0eaba7c1cdd 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -15,8 +15,8 @@
 #include <asm/cpufeature.h>
 #include <asm/setup.h>
 
-#define FTR_DESC_NAME_LEN	20
-#define FTR_DESC_FIELD_LEN	10
+#define FTR_DESC_NAME_LEN	20	// must remain multiple of 4
+#define FTR_DESC_FIELD_LEN	10	// must remain multiple of 4 +/- 2
 #define FTR_ALIAS_NAME_LEN	30
 #define FTR_ALIAS_OPTION_LEN	116
 
@@ -26,16 +26,20 @@ struct ftr_set_desc {
 	s32		override_offset; 	// must remain first
 	char 		name[FTR_DESC_NAME_LEN];
 	struct {
+		s32	filter_offset;		// must remain first
 		char	name[FTR_DESC_FIELD_LEN];
 		u8	shift;
 		u8	width;
-		bool	(*filter)(u64 val);
 	} 		fields[];
 };
 
 static_assert(offsetof(struct ftr_set_desc, override_offset) == 0);
+static_assert(offsetof(struct ftr_set_desc, fields[0].filter_offset) ==
+	      4 + FTR_DESC_NAME_LEN);
+static_assert(offsetof(struct ftr_set_desc, fields[1].filter_offset) ==
+	      4 + FTR_DESC_NAME_LEN + 4 + FTR_DESC_FIELD_LEN + 2);
 
-#define FIELD(n, s, f)	{ .name = n, .shift = s, .width = 4, .filter = f }
+#define FIELD(n, s)	{ .name = n, .shift = s, .width = 4 }
 
 #define DEFINE_OVERRIDE(__idx, __id, __name, __ovr, ...)		\
 	asmlinkage const struct ftr_set_desc __initconst __id = {	\
@@ -46,7 +50,12 @@ static_assert(offsetof(struct ftr_set_desc, override_offset) == 0);
 	    ".reloc " #__id ", R_AARCH64_PREL32, " #__ovr "; "		\
 	    ".reloc regs + (4 * " #__idx "), R_AARCH64_PREL32, " #__id)
 
-static bool __init mmfr1_vh_filter(u64 val)
+#define DEFINE_OVERRIDE_FILTER(__id, __idx, __filter)			      \
+	asm(".reloc " #__id " + 4 + " __stringify(FTR_DESC_NAME_LEN)	      \
+	    "  + " #__idx " * (4 + " __stringify(FTR_DESC_FIELD_LEN) " + 2)," \
+	    "R_AARCH64_PREL32, " #__filter)
+
+asmlinkage bool __init mmfr1_vh_filter(u64 val)
 {
 	/*
 	 * If we ever reach this point while running VHE, we're
@@ -59,10 +68,11 @@ static bool __init mmfr1_vh_filter(u64 val)
 }
 
 DEFINE_OVERRIDE(0, mmfr1, "id_aa64mmfr1", id_aa64mmfr1_override,
-		FIELD("vh", ID_AA64MMFR1_EL1_VH_SHIFT, mmfr1_vh_filter),
+		FIELD("vh", ID_AA64MMFR1_EL1_VH_SHIFT),
 		{});
+DEFINE_OVERRIDE_FILTER(mmfr1, 0, mmfr1_vh_filter);
 
-static bool __init pfr0_sve_filter(u64 val)
+asmlinkage bool __init pfr0_sve_filter(u64 val)
 {
 	/*
 	 * Disabling SVE also means disabling all the features that
@@ -78,10 +88,11 @@ static bool __init pfr0_sve_filter(u64 val)
 }
 
 DEFINE_OVERRIDE(1, pfr0, "id_aa64pfr0", id_aa64pfr0_override,
-	        FIELD("sve", ID_AA64PFR0_EL1_SVE_SHIFT, pfr0_sve_filter),
+	        FIELD("sve", ID_AA64PFR0_EL1_SVE_SHIFT),
 		{});
+DEFINE_OVERRIDE_FILTER(pfr0, 0, pfr0_sve_filter);
 
-static bool __init pfr1_sme_filter(u64 val)
+asmlinkage bool __init pfr1_sme_filter(u64 val)
 {
 	/*
 	 * Similarly to SVE, disabling SME also means disabling all
@@ -97,30 +108,31 @@ static bool __init pfr1_sme_filter(u64 val)
 }
 
 DEFINE_OVERRIDE(2, pfr1, "id_aa64pfr1", id_aa64pfr1_override,
-		FIELD("bt", ID_AA64PFR1_EL1_BT_SHIFT, NULL ),
-		FIELD("mte", ID_AA64PFR1_EL1_MTE_SHIFT, NULL),
-		FIELD("sme", ID_AA64PFR1_EL1_SME_SHIFT, pfr1_sme_filter),
+		FIELD("bt", ID_AA64PFR1_EL1_BT_SHIFT ),
+		FIELD("mte", ID_AA64PFR1_EL1_MTE_SHIFT),
+		FIELD("sme", ID_AA64PFR1_EL1_SME_SHIFT),
 		{});
+DEFINE_OVERRIDE_FILTER(pfr1, 2, pfr1_sme_filter);
 
 DEFINE_OVERRIDE(3, isar1, "id_aa64isar1", id_aa64isar1_override,
-		FIELD("gpi", ID_AA64ISAR1_EL1_GPI_SHIFT, NULL),
-		FIELD("gpa", ID_AA64ISAR1_EL1_GPA_SHIFT, NULL),
-		FIELD("api", ID_AA64ISAR1_EL1_API_SHIFT, NULL),
-		FIELD("apa", ID_AA64ISAR1_EL1_APA_SHIFT, NULL),
+		FIELD("gpi", ID_AA64ISAR1_EL1_GPI_SHIFT),
+		FIELD("gpa", ID_AA64ISAR1_EL1_GPA_SHIFT),
+		FIELD("api", ID_AA64ISAR1_EL1_API_SHIFT),
+		FIELD("apa", ID_AA64ISAR1_EL1_APA_SHIFT),
 		{});
 
 DEFINE_OVERRIDE(4, isar2, "id_aa64isar2", id_aa64isar2_override,
-		FIELD("gpa3", ID_AA64ISAR2_EL1_GPA3_SHIFT, NULL),
-		FIELD("apa3", ID_AA64ISAR2_EL1_APA3_SHIFT, NULL),
+		FIELD("gpa3", ID_AA64ISAR2_EL1_GPA3_SHIFT),
+		FIELD("apa3", ID_AA64ISAR2_EL1_APA3_SHIFT),
 		{});
 
 DEFINE_OVERRIDE(5, smfr0, "id_aa64smfr0", id_aa64smfr0_override,
 		/* FA64 is a one bit field... :-/ */
-		{ "fa64", ID_AA64SMFR0_EL1_FA64_SHIFT, 1, },
+		{ 0, "fa64", ID_AA64SMFR0_EL1_FA64_SHIFT, 1, },
 		{});
 
 DEFINE_OVERRIDE(6, sw_features, "arm64_sw", arm64_sw_feature_override,
-		FIELD("nokaslr", ARM64_SW_FEATURE_OVERRIDE_NOKASLR, NULL),
+		FIELD("nokaslr", ARM64_SW_FEATURE_OVERRIDE_NOKASLR),
 		{});
 
 /*
@@ -169,6 +181,13 @@ static int __init find_field(const char *cmdline,
 	return kstrtou64(cmdline + len, 0, v);
 }
 
+static const void * __init get_filter(const struct ftr_set_desc *reg, int idx)
+{
+	const s32 *offset = &reg->fields[idx].filter_offset;
+
+	return *offset ? offset_to_ptr(offset) : NULL;
+}
+
 static void __init match_options(const char *cmdline)
 {
 	int i;
@@ -181,6 +200,7 @@ static void __init match_options(const char *cmdline)
 			u64 shift = reg->fields[f].shift;
 			u64 width = reg->fields[f].width ?: 4;
 			u64 mask = GENMASK_ULL(shift + width - 1, shift);
+			bool (*filter)(u64) = get_filter(reg, f);
 			u64 v;
 
 			if (find_field(cmdline, reg, f, &v))
@@ -191,8 +211,7 @@ static void __init match_options(const char *cmdline)
 			 * it by setting the value to the all-ones while
 			 * clearing the mask... Yes, this is fragile.
 			 */
-			if (reg->fields[f].filter &&
-			    !reg->fields[f].filter(v)) {
+			if (filter && !filter(v)) {
 				reg_override(i)->val  |= mask;
 				reg_override(i)->mask &= ~mask;
 				continue;
-- 
2.35.1



* [PATCH v7 13/33] arm64: idreg-override: Avoid parameq() and parameqn()
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (11 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 12/33] arm64: idreg-override: Use relative references to filter routines Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 14/33] arm64: idreg-override: avoid strlen() to check for empty strings Ard Biesheuvel
                   ` (19 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

The only way parameq() and parameqn() deviate from the ordinary string
and memory routines is that they ignore the difference between dashes
and underscores.

Since we already copy each command line argument into a buffer before
passing it to parameq() and parameqn() numerous times, let's convert all
dashes to underscores once, while copying, and update the alias array
accordingly.

This also helps reduce the dependency on kernel APIs that are no longer
available once we move this code into the early mini C runtime.
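
A stand-alone sketch of the normalisation idea (buffer size and example
strings are arbitrary): copy the argument into the scratch buffer while
turning '-' into '_', after which plain memcmp()/strcmp() suffice.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

static void copy_normalized(char *dst, const char *src, size_t dstsize)
{
	size_t len;

	for (len = 0; src[len] && !isspace((unsigned char)src[len]); len++) {
		if (len >= dstsize - 1)
			break;
		dst[len] = (src[len] == '-') ? '_' : src[len];
	}
	dst[len] = '\0';
}

int main(void)
{
	char buf[64];

	copy_normalized(buf, "kvm-arm.mode=nvhe", sizeof(buf));
	/* a plain string compare against the underscore form now works */
	printf("%d\n", strcmp(buf, "kvm_arm.mode=nvhe") == 0);	/* 1 */
	return 0;
}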

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/idreg-override.c | 26 ++++++++++++--------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index 01eed0eaba7c1cdd..b3288e827a6bbec3 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -153,8 +153,8 @@ static const struct {
 	char	alias[FTR_ALIAS_NAME_LEN];
 	char	feature[FTR_ALIAS_OPTION_LEN];
 } aliases[] __initconst = {
-	{ "kvm-arm.mode=nvhe",		"id_aa64mmfr1.vh=0" },
-	{ "kvm-arm.mode=protected",	"id_aa64mmfr1.vh=0" },
+	{ "kvm_arm.mode=nvhe",		"id_aa64mmfr1.vh=0" },
+	{ "kvm_arm.mode=protected",	"id_aa64mmfr1.vh=0" },
 	{ "arm64.nosve",		"id_aa64pfr0.sve=0 id_aa64pfr1.sme=0" },
 	{ "arm64.nosme",		"id_aa64pfr1.sme=0" },
 	{ "arm64.nobti",		"id_aa64pfr1.bt=0" },
@@ -175,7 +175,7 @@ static int __init find_field(const char *cmdline,
 	len = snprintf(opt, ARRAY_SIZE(opt), "%s.%s=",
 		       reg->name, reg->fields[f].name);
 
-	if (!parameqn(cmdline, opt, len))
+	if (memcmp(cmdline, opt, len))
 		return -1;
 
 	return kstrtou64(cmdline + len, 0, v);
@@ -235,23 +235,29 @@ static __init void __parse_cmdline(const char *cmdline, bool parse_aliases)
 
 		cmdline = skip_spaces(cmdline);
 
-		for (len = 0; cmdline[len] && !isspace(cmdline[len]); len++);
+		/* terminate on "--" appearing on the command line by itself */
+		if (cmdline[0] == '-' && cmdline[1] == '-' && isspace(cmdline[2]))
+			return;
+
+		for (len = 0; cmdline[len] && !isspace(cmdline[len]); len++) {
+			if (len >= sizeof(buf) - 1)
+				break;
+			if (cmdline[len] == '-')
+				buf[len] = '_';
+			else
+				buf[len] = cmdline[len];
+		}
 		if (!len)
 			return;
 
-		len = min(len, ARRAY_SIZE(buf) - 1);
-		strncpy(buf, cmdline, len);
 		buf[len] = 0;
 
-		if (strcmp(buf, "--") == 0)
-			return;
-
 		cmdline += len;
 
 		match_options(buf);
 
 		for (i = 0; parse_aliases && i < ARRAY_SIZE(aliases); i++)
-			if (parameq(buf, aliases[i].alias))
+			if (!memcmp(buf, aliases[i].alias, len + 1))
 				__parse_cmdline(aliases[i].feature, false);
 	} while (1);
 }
-- 
2.35.1



* [PATCH v7 14/33] arm64: idreg-override: avoid strlen() to check for empty strings
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (12 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 13/33] arm64: idreg-override: Avoid parameq() and parameqn() Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 15/33] arm64: idreg-override: Avoid sprintf() for simple string concatenation Ard Biesheuvel
                   ` (18 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

strlen() is a costly way to decide whether a string is empty: if it is,
the first character will be NUL, so we can check for that directly.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/idreg-override.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index b3288e827a6bbec3..97ec832d87d44f64 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -196,7 +196,7 @@ static void __init match_options(const char *cmdline)
 		const struct ftr_set_desc *reg = offset_to_ptr(&regs[i]);
 		int f;
 
-		for (f = 0; strlen(reg->fields[f].name); f++) {
+		for (f = 0; reg->fields[f].name[0] != '\0'; f++) {
 			u64 shift = reg->fields[f].shift;
 			u64 width = reg->fields[f].width ?: 4;
 			u64 mask = GENMASK_ULL(shift + width - 1, shift);
-- 
2.35.1



* [PATCH v7 15/33] arm64: idreg-override: Avoid sprintf() for simple string concatenation
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (13 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 14/33] arm64: idreg-override: avoid strlen() to check for empty strings Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 16/33] arm64: idreg_override: Avoid kstrtou64() to parse a single hex digit Ard Biesheuvel
                   ` (17 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

Instead of using sprintf() with the "%s.%s=" format, where the first
string argument is always the same in the inner loop of match_options(),
use simple memcpy() for string concatenation, and move the first copy to
the outer loop. This removes the dependency on sprintf(), which will be
difficult to satisfy once we move this code into the early mini C
runtime.
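
A stand-alone sketch of the resulting concatenation (register and field
names picked arbitrarily for illustration); note that the kernel code
compares against a known length with memcmp() and so never needs the
trailing NUL that is added here only for printing.

#include <stdio.h>
#include <string.h>

int main(void)
{
	char opt[32];
	const char *reg_name = "id_aa64mmfr1";
	const char *field_name = "vh";
	int len, flen;

	/* outer loop: opt[] = "<regname>." */
	len = strlen(reg_name);
	memcpy(opt, reg_name, len);
	opt[len++] = '.';

	/* inner loop: append "<fieldname>=" */
	flen = strlen(field_name);
	memcpy(opt + len, field_name, flen);
	len += flen;
	opt[len++] = '=';
	opt[len] = '\0';	/* only needed for printing */

	printf("%s\n", opt);	/* id_aa64mmfr1.vh= */
	return 0;
}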

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/idreg-override.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index 97ec832d87d44f64..a5299aa1d1711adc 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -166,14 +166,15 @@ static const struct {
 	{ "nokaslr",			"arm64_sw.nokaslr=1" },
 };
 
-static int __init find_field(const char *cmdline,
+static int __init find_field(const char *cmdline, char *opt, int len,
 			     const struct ftr_set_desc *reg, int f, u64 *v)
 {
-	char opt[FTR_DESC_NAME_LEN + FTR_DESC_FIELD_LEN + 2];
-	int len;
+	int flen = strlen(reg->fields[f].name);
 
-	len = snprintf(opt, ARRAY_SIZE(opt), "%s.%s=",
-		       reg->name, reg->fields[f].name);
+	// append '<fieldname>=' to obtain '<name>.<fieldname>='
+	memcpy(opt + len, reg->fields[f].name, flen);
+	len += flen;
+	opt[len++] = '=';
 
 	if (memcmp(cmdline, opt, len))
 		return -1;
@@ -190,12 +191,18 @@ static const void * __init get_filter(const struct ftr_set_desc *reg, int idx)
 
 static void __init match_options(const char *cmdline)
 {
+	char opt[FTR_DESC_NAME_LEN + FTR_DESC_FIELD_LEN + 2];
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(regs); i++) {
 		const struct ftr_set_desc *reg = offset_to_ptr(&regs[i]);
+		int len = strlen(reg->name);
 		int f;
 
+		// set opt[] to '<name>.'
+		memcpy(opt, reg->name, len);
+		opt[len++] = '.';
+
 		for (f = 0; reg->fields[f].name[0] != '\0'; f++) {
 			u64 shift = reg->fields[f].shift;
 			u64 width = reg->fields[f].width ?: 4;
@@ -203,7 +210,7 @@ static void __init match_options(const char *cmdline)
 			bool (*filter)(u64) = get_filter(reg, f);
 			u64 v;
 
-			if (find_field(cmdline, reg, f, &v))
+			if (find_field(cmdline, opt, len, reg, f, &v))
 				continue;
 
 			/*
-- 
2.35.1



* [PATCH v7 16/33] arm64: idreg_override: Avoid kstrtou64() to parse a single hex digit
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (14 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 15/33] arm64: idreg-override: Avoid sprintf() for simple string concatenation Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 17/33] arm64: idreg-override: Move to early mini C runtime Ard Biesheuvel
                   ` (16 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

All ID register value overrides are =0, with the exception of the nokaslr
pseudo feature, which uses =1. In order to remove the dependency on
kstrtou64(), which is part of the core kernel and no longer usable once
we move idreg-override into the early mini C runtime, let's just parse a
single hex digit (with optional leading 0x) and set the output value
accordingly.
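
The following stand-alone sketch (parse_one_hex() is a made-up name)
mirrors that logic and shows which right-hand sides are accepted:

#include <ctype.h>
#include <stdint.h>
#include <stdio.h>

static int parse_one_hex(const char *p, uint64_t *v)
{
	/* skip "0x" if it comes next */
	if (p[0] == '0' && (p[1] == 'x' || p[1] == 'X'))
		p += 2;

	/* the value must be a single non-whitespace character */
	if (*p == '\0' || (p[1] && !isspace((unsigned char)p[1])))
		return -1;

	if (*p >= '0' && *p <= '9')
		*v = *p - '0';
	else if (*p >= 'a' && *p <= 'f')
		*v = *p - 'a' + 10;
	else if (*p >= 'A' && *p <= 'F')
		*v = *p - 'A' + 10;
	else
		return -1;
	return 0;
}

int main(void)
{
	uint64_t v;

	printf("%d\n", parse_one_hex("1", &v));		/* 0, v == 1  */
	printf("%d\n", parse_one_hex("0xf", &v));	/* 0, v == 15 */
	printf("%d\n", parse_one_hex("16", &v));	/* -1: more than one digit */
	return 0;
}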

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/idreg-override.c | 27 +++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index a5299aa1d1711adc..7e3eb48f5c83d539 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -170,6 +170,7 @@ static int __init find_field(const char *cmdline, char *opt, int len,
 			     const struct ftr_set_desc *reg, int f, u64 *v)
 {
 	int flen = strlen(reg->fields[f].name);
+	const char *p;
 
 	// append '<fieldname>=' to obtain '<name>.<fieldname>='
 	memcpy(opt + len, reg->fields[f].name, flen);
@@ -179,7 +180,31 @@ static int __init find_field(const char *cmdline, char *opt, int len,
 	if (memcmp(cmdline, opt, len))
 		return -1;
 
-	return kstrtou64(cmdline + len, 0, v);
+	p = cmdline + len;
+
+	// skip "0x" if it comes next
+	if (p[0] == '0' && (p[1] == 'x' || p[1] == 'X'))
+		p += 2;
+
+	// check whether the RHS is a single non-whitespace character
+	if (*p == '\0' || (p[1] && !isspace(p[1])))
+		return -1;
+
+	// only accept a single hex character as the value
+	switch (*p) {
+	case '0' ... '9':
+		*v = *p - '0';
+		break;
+	case 'a' ... 'f':
+		*v = *p - 'a' + 10;
+		break;
+	case 'A' ... 'F':
+		*v = *p - 'A' + 10;
+		break;
+	default:
+		return -1;
+	}
+	return 0;
 }
 
 static const void * __init get_filter(const struct ftr_set_desc *reg, int idx)
-- 
2.35.1



* [PATCH v7 17/33] arm64: idreg-override: Move to early mini C runtime
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (15 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 16/33] arm64: idreg_override: Avoid kstrtou64() to parse a single hex digit Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 18/33] arm64: kernel: Remove early fdt remap code Ard Biesheuvel
                   ` (15 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

We will want to parse the ID register overrides even earlier, so that we
can take them into account before creating the kernel mapping. So
migrate the code and make it work in the context of the early C runtime.
We will move the invocation to an earlier stage in a subsequent patch.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/Makefile                  |  4 +---
 arch/arm64/kernel/head.S                    |  5 ++--
 arch/arm64/kernel/image-vars.h              | 10 ++++++++
 arch/arm64/kernel/pi/Makefile               |  5 ++--
 arch/arm64/kernel/{ => pi}/idreg-override.c | 24 +++++++++++---------
 5 files changed, 29 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index a8717865fee5c296..2b003834c320c20e 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -33,8 +33,7 @@ obj-y			:= debug-monitors.o entry.o irq.o fpsimd.o		\
 			   return_address.o cpuinfo.o cpu_errata.o		\
 			   cpufeature.o alternative.o cacheinfo.o		\
 			   smp.o smp_spin_table.o topology.o smccc-call.o	\
-			   syscall.o proton-pack.o idreg-override.o idle.o	\
-			   patching.o
+			   syscall.o proton-pack.o idle.o patching.o pi/
 
 targets			+= efi-entry.o
 
@@ -65,7 +64,6 @@ obj-$(CONFIG_ACPI)			+= acpi.o
 obj-$(CONFIG_ACPI_NUMA)			+= acpi_numa.o
 obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL)	+= acpi_parking_protocol.o
 obj-$(CONFIG_PARAVIRT)			+= paravirt.o
-obj-$(CONFIG_RELOCATABLE)		+= pi/
 obj-$(CONFIG_RANDOMIZE_BASE)		+= kaslr.o
 obj-$(CONFIG_HIBERNATION)		+= hibernate.o hibernate-asm.o
 obj-$(CONFIG_ELF_CORE)			+= elfcore.o
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 998a3e066b2fdf0a..786b7bd79a4026e9 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -458,10 +458,9 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 	bl	kasan_early_init
 #endif
-	mov	x0, x21				// pass FDT address in x0
-	bl	early_fdt_map			// Try mapping the FDT early
 	mov	x0, x20				// pass the full boot status
-	bl	init_feature_override		// Parse cpu feature overrides
+	mov	x1, x22				// pass the low FDT mapping
+	bl	__pi_init_feature_override	// Parse cpu feature overrides
 #ifdef CONFIG_UNWIND_PATCH_PAC_INTO_SCS
 	bl	scs_patch_vmlinux
 #endif
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 8151412653de209c..6ff6efbc1ce98ba6 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -41,6 +41,16 @@ PROVIDE(__pi___memcpy			= __pi_memcpy);
 PROVIDE(__pi___memmove			= __pi_memmove);
 PROVIDE(__pi___memset			= __pi_memset);
 
+PROVIDE(__pi_id_aa64isar1_override	= id_aa64isar1_override);
+PROVIDE(__pi_id_aa64isar2_override	= id_aa64isar2_override);
+PROVIDE(__pi_id_aa64mmfr1_override	= id_aa64mmfr1_override);
+PROVIDE(__pi_id_aa64pfr0_override	= id_aa64pfr0_override);
+PROVIDE(__pi_id_aa64pfr1_override	= id_aa64pfr1_override);
+PROVIDE(__pi_id_aa64smfr0_override	= id_aa64smfr0_override);
+PROVIDE(__pi_id_aa64zfr0_override	= id_aa64zfr0_override);
+PROVIDE(__pi_arm64_sw_feature_override	= arm64_sw_feature_override);
+PROVIDE(__pi__ctype			= _ctype);
+
 #ifdef CONFIG_KVM
 
 /*
diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile
index e046c10606cb822e..47d3ffcff3ac9a03 100644
--- a/arch/arm64/kernel/pi/Makefile
+++ b/arch/arm64/kernel/pi/Makefile
@@ -38,6 +38,7 @@ $(obj)/lib-%.pi.o: OBJCOPYFLAGS += --prefix-alloc-sections=.init
 $(obj)/lib-%.o: $(srctree)/lib/%.c FORCE
 	$(call if_changed_rule,cc_o_c)
 
-obj-y				:= relocate.pi.o
-obj-$(CONFIG_RANDOMIZE_BASE)	+= kaslr_early.pi.o lib-fdt.pi.o lib-fdt_ro.pi.o
+obj-y				:= idreg-override.pi.o lib-fdt.pi.o lib-fdt_ro.pi.o
+obj-$(CONFIG_RELOCATABLE)	+= relocate.pi.o
+obj-$(CONFIG_RANDOMIZE_BASE)	+= kaslr_early.pi.o
 extra-y				:= $(patsubst %.pi.o,%.o,$(obj-y))
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c
similarity index 95%
rename from arch/arm64/kernel/idreg-override.c
rename to arch/arm64/kernel/pi/idreg-override.c
index 7e3eb48f5c83d539..86d994424779bc0d 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/pi/idreg-override.c
@@ -294,16 +294,11 @@ static __init void __parse_cmdline(const char *cmdline, bool parse_aliases)
 	} while (1);
 }
 
-static __init const u8 *get_bootargs_cmdline(void)
+static __init const u8 *get_bootargs_cmdline(const void *fdt)
 {
 	const u8 *prop;
-	void *fdt;
 	int node;
 
-	fdt = get_early_fdt_ptr();
-	if (!fdt)
-		return NULL;
-
 	node = fdt_path_offset(fdt, "/chosen");
 	if (node < 0)
 		return NULL;
@@ -315,9 +310,9 @@ static __init const u8 *get_bootargs_cmdline(void)
 	return strlen(prop) ? prop : NULL;
 }
 
-static __init void parse_cmdline(void)
+static __init void parse_cmdline(const void *fdt)
 {
-	const u8 *prop = get_bootargs_cmdline();
+	const u8 *prop = get_bootargs_cmdline(fdt);
 
 	if (IS_ENABLED(CONFIG_CMDLINE_FORCE) || !prop)
 		__parse_cmdline(CONFIG_CMDLINE, true);
@@ -327,9 +322,9 @@ static __init void parse_cmdline(void)
 }
 
 /* Keep checkers quiet */
-void init_feature_override(u64 boot_status);
+void init_feature_override(u64 boot_status, const void *fdt);
 
-asmlinkage void __init init_feature_override(u64 boot_status)
+asmlinkage void __init init_feature_override(u64 boot_status, const void *fdt)
 {
 	int i;
 
@@ -340,7 +335,7 @@ asmlinkage void __init init_feature_override(u64 boot_status)
 
 	__boot_status = boot_status;
 
-	parse_cmdline();
+	parse_cmdline(fdt);
 
 	for (i = 0; i < ARRAY_SIZE(regs); i++) {
 		dcache_clean_inval_poc((unsigned long)reg_override(i),
@@ -348,3 +343,10 @@ asmlinkage void __init init_feature_override(u64 boot_status)
 				       sizeof(struct arm64_ftr_override));
 	}
 }
+
+char * __init skip_spaces(const char *str)
+{
+	while (isspace(*str))
+		++str;
+	return (char *)str;
+}
-- 
2.35.1



* [PATCH v7 18/33] arm64: kernel: Remove early fdt remap code
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (16 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 17/33] arm64: idreg-override: Move to early mini C runtime Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 19/33] arm64: head: Clear BSS and the kernel page tables in one go Ard Biesheuvel
                   ` (14 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

The early FDT remap code is no longer used, so let's drop it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/setup.h |  3 ---
 arch/arm64/kernel/setup.c      | 15 ---------------
 2 files changed, 18 deletions(-)

diff --git a/arch/arm64/include/asm/setup.h b/arch/arm64/include/asm/setup.h
index f4af547ef54caa70..acc5e00bf3b0fafb 100644
--- a/arch/arm64/include/asm/setup.h
+++ b/arch/arm64/include/asm/setup.h
@@ -7,9 +7,6 @@
 
 #include <uapi/asm/setup.h>
 
-void *get_early_fdt_ptr(void);
-void early_fdt_map(u64 dt_phys);
-
 /*
  * These two variables are used in the head.S file.
  */
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 12cfe9d0d3fac10d..37e0ba95afc3e045 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -165,21 +165,6 @@ static void __init smp_build_mpidr_hash(void)
 		pr_warn("Large number of MPIDR hash buckets detected\n");
 }
 
-static void *early_fdt_ptr __initdata;
-
-void __init *get_early_fdt_ptr(void)
-{
-	return early_fdt_ptr;
-}
-
-asmlinkage void __init early_fdt_map(u64 dt_phys)
-{
-	int fdt_size;
-
-	early_fixmap_init();
-	early_fdt_ptr = fixmap_remap_fdt(dt_phys, &fdt_size, PAGE_KERNEL);
-}
-
 static void __init setup_machine_fdt(phys_addr_t dt_phys)
 {
 	int size;
-- 
2.35.1



* [PATCH v7 19/33] arm64: head: Clear BSS and the kernel page tables in one go
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (17 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 18/33] arm64: kernel: Remove early fdt remap code Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 20/33] arm64: Move feature overrides into the BSS section Ard Biesheuvel
                   ` (13 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

We will move the CPU feature overrides into BSS in a subsequent patch,
and this requires that BSS is zeroed before the feature override
detection code runs. So let's map BSS read-write in the ID map, and zero
it via this mapping.

Since the kernel page tables are right next to it, and also zeroed via
the ID map, let's drop the separate clear_page_tables() function, and
just zero everything in one go.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/head.S | 33 +++++++-------------
 1 file changed, 11 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 786b7bd79a4026e9..0e7aaa65ea174efc 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -126,17 +126,6 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 	b	dcache_inval_poc		// tail call
 SYM_CODE_END(preserve_boot_args)
 
-SYM_FUNC_START_LOCAL(clear_page_tables)
-	/*
-	 * Clear the init page tables.
-	 */
-	adrp	x0, init_pg_dir
-	adrp	x1, init_pg_end
-	sub	x2, x1, x0
-	mov	x1, xzr
-	b	__pi_memset			// tail call
-SYM_FUNC_END(clear_page_tables)
-
 /*
  * Macro to populate page table entries, these entries can be pointers to the next level
  * or last level entries pointing to physical memory.
@@ -335,9 +324,9 @@ SYM_FUNC_START_LOCAL(create_idmap)
 
 	map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT
 
-	/* Remap the kernel page tables r/w in the ID map */
+	/* Remap BSS and the kernel page tables r/w in the ID map */
 	adrp	x1, _text
-	adrp	x2, init_pg_dir
+	adrp	x2, __bss_start
 	adrp	x3, _end
 	bic	x4, x2, #SWAPPER_BLOCK_SIZE - 1
 	mov	x5, SWAPPER_RW_MMUFLAGS
@@ -437,14 +426,6 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 	mov	x0, x20
 	bl	set_cpu_boot_mode_flag
 
-	// Clear BSS
-	adr_l	x0, __bss_start
-	mov	x1, xzr
-	adr_l	x2, __bss_stop
-	sub	x2, x2, x0
-	bl	__pi_memset
-	dsb	ishst				// Make zero page visible to PTW
-
 #if VA_BITS > 48
 	adr_l	x8, vabits_actual		// Set this early so KASAN early init
 	str	x25, [x8]			// ... observes the correct value
@@ -708,6 +689,15 @@ SYM_FUNC_START_LOCAL(__primary_switch)
 	adrp	x1, reserved_pg_dir
 	adrp	x2, init_idmap_pg_dir
 	bl	__enable_mmu
+
+	// Clear BSS
+	adrp	x0, __bss_start
+	mov	x1, xzr
+	adrp	x2, init_pg_end
+	sub	x2, x2, x0
+	bl	__pi_memset
+	dsb	ishst				// Make zero page visible to PTW
+
 #ifdef CONFIG_RELOCATABLE
 	adrp	x23, KERNEL_START
 	and	x23, x23, MIN_KIMG_ALIGN - 1
@@ -722,7 +712,6 @@ SYM_FUNC_START_LOCAL(__primary_switch)
 	orr	x23, x23, x0			// record kernel offset
 #endif
 #endif
-	bl	clear_page_tables
 	bl	create_kernel_mapping
 
 	adrp	x1, init_pg_dir
-- 
2.35.1



* [PATCH v7 20/33] arm64: Move feature overrides into the BSS section
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (18 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 19/33] arm64: head: Clear BSS and the kernel page tables in one go Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 21/33] arm64: head: Run feature override detection before mapping the kernel Ard Biesheuvel
                   ` (12 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

In order to allow the CPU feature override detection code to run even
earlier, move the feature override global variables into BSS, which is
the only part of the static kernel image that is mapped read-write in
the initial ID map.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/cpufeature.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index ebd8cabffb105e15..08ab04dc9393652a 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -636,13 +636,13 @@ static const struct arm64_ftr_bits ftr_raz[] = {
 #define ARM64_FTR_REG(id, table)		\
 	__ARM64_FTR_REG_OVERRIDE(#id, id, table, &no_override)
 
-struct arm64_ftr_override __ro_after_init id_aa64mmfr1_override;
-struct arm64_ftr_override __ro_after_init id_aa64pfr0_override;
-struct arm64_ftr_override __ro_after_init id_aa64pfr1_override;
-struct arm64_ftr_override __ro_after_init id_aa64zfr0_override;
-struct arm64_ftr_override __ro_after_init id_aa64smfr0_override;
-struct arm64_ftr_override __ro_after_init id_aa64isar1_override;
-struct arm64_ftr_override __ro_after_init id_aa64isar2_override;
+struct arm64_ftr_override id_aa64mmfr1_override;
+struct arm64_ftr_override id_aa64pfr0_override;
+struct arm64_ftr_override id_aa64pfr1_override;
+struct arm64_ftr_override id_aa64zfr0_override;
+struct arm64_ftr_override id_aa64smfr0_override;
+struct arm64_ftr_override id_aa64isar1_override;
+struct arm64_ftr_override id_aa64isar2_override;
 
 struct arm64_ftr_override arm64_sw_feature_override;
 
-- 
2.35.1



* [PATCH v7 21/33] arm64: head: Run feature override detection before mapping the kernel
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (19 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 20/33] arm64: Move feature overrides into the BSS section Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 22/33] arm64: head: move dynamic shadow call stack patching into early C runtime Ard Biesheuvel
                   ` (11 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

To permit the feature overrides to be taken into account before the
KASLR init code runs and the kernel mapping is created, move the
detection code to an earlier stage in the boot.

In a subsequent patch, this will be taken advantage of by merging the
preliminary and permanent mappings of the kernel text and data into a
single one that gets created and relocated before start_kernel() is
called.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/head.S              | 13 +++++++------
 arch/arm64/kernel/pi/idreg-override.c |  2 +-
 arch/arm64/kernel/vmlinux.lds.S       |  4 +---
 3 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 0e7aaa65ea174efc..9ea7f4e355ef5849 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -439,9 +439,6 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 	bl	kasan_early_init
 #endif
-	mov	x0, x20				// pass the full boot status
-	mov	x1, x22				// pass the low FDT mapping
-	bl	__pi_init_feature_override	// Parse cpu feature overrides
 #ifdef CONFIG_UNWIND_PATCH_PAC_INTO_SCS
 	bl	scs_patch_vmlinux
 #endif
@@ -698,12 +695,16 @@ SYM_FUNC_START_LOCAL(__primary_switch)
 	bl	__pi_memset
 	dsb	ishst				// Make zero page visible to PTW
 
-#ifdef CONFIG_RELOCATABLE
-	adrp	x23, KERNEL_START
-	and	x23, x23, MIN_KIMG_ALIGN - 1
 	adrp	x1, early_init_stack
 	mov	sp, x1
 	mov	x29, xzr
+	mov	x0, x20				// pass the full boot status
+	mov	x1, x22				// pass the low FDT mapping
+	bl	__pi_init_feature_override	// Parse cpu feature overrides
+
+#ifdef CONFIG_RELOCATABLE
+	adrp	x23, KERNEL_START
+	and	x23, x23, MIN_KIMG_ALIGN - 1
 #ifdef CONFIG_RANDOMIZE_BASE
 	mov	x0, x22
 	bl	__pi_kaslr_early_init
diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c
index 86d994424779bc0d..c21d1e9f43a11ba7 100644
--- a/arch/arm64/kernel/pi/idreg-override.c
+++ b/arch/arm64/kernel/pi/idreg-override.c
@@ -20,7 +20,7 @@
 #define FTR_ALIAS_NAME_LEN	30
 #define FTR_ALIAS_OPTION_LEN	116
 
-static u64 __boot_status __initdata;
+static u64 __boot_status;
 
 struct ftr_set_desc {
 	s32		override_offset; 	// must remain first
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index bebb88daf4c52039..3f86a0db2952600c 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -317,10 +317,8 @@ SECTIONS
 	init_pg_dir = .;
 	. += INIT_DIR_SIZE;
 	init_pg_end = .;
-#ifdef CONFIG_RELOCATABLE
-	. += SZ_4K;		/* stack for the early relocation code */
+	. += SZ_4K;		/* stack for the early C runtime */
 	early_init_stack = .;
-#endif
 
 	. = ALIGN(SEGMENT_ALIGN);
 	__pecoff_data_size = ABSOLUTE(. - __initdata_begin);
-- 
2.35.1



* [PATCH v7 22/33] arm64: head: move dynamic shadow call stack patching into early C runtime
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (20 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 21/33] arm64: head: Run feature override detection before mapping the kernel Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 23/33] arm64: kaslr: Use feature override instead of parsing the cmdline again Ard Biesheuvel
                   ` (10 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

Once we update the early kernel mapping code to only map the kernel once
with the right permissions, we can no longer perform code patching via
this mapping.

So move this code to an earlier stage of the boot, right after applying
the relocations.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/scs.h           |  2 +-
 arch/arm64/kernel/Makefile             |  2 --
 arch/arm64/kernel/head.S               |  8 ++++---
 arch/arm64/kernel/module.c             |  2 +-
 arch/arm64/kernel/pi/Makefile          | 10 ++++----
 arch/arm64/kernel/{ => pi}/patch-scs.c | 24 ++++++++++----------
 6 files changed, 25 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h
index ff7da1268a52ab79..d20dcb2a1a5284ce 100644
--- a/arch/arm64/include/asm/scs.h
+++ b/arch/arm64/include/asm/scs.h
@@ -71,7 +71,7 @@ static inline void dynamic_scs_init(void)
 static inline void dynamic_scs_init(void) {}
 #endif
 
-int scs_patch(const u8 eh_frame[], int size);
+int __pi_scs_patch(const u8 eh_frame[], int size);
 
 #endif /* __ASSEMBLY __ */
 
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 2b003834c320c20e..35354365f5f39f3f 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -79,8 +79,6 @@ obj-$(CONFIG_ARM64_PTR_AUTH)		+= pointer_auth.o
 obj-$(CONFIG_ARM64_MTE)			+= mte.o
 obj-y					+= vdso-wrap.o
 obj-$(CONFIG_COMPAT_VDSO)		+= vdso32-wrap.o
-obj-$(CONFIG_UNWIND_PATCH_PAC_INTO_SCS)	+= patch-scs.o
-CFLAGS_patch-scs.o			+= -mbranch-protection=none
 
 # Force dependency (vdso*-wrap.S includes vdso.so through incbin)
 $(obj)/vdso-wrap.o: $(obj)/vdso/vdso.so
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 9ea7f4e355ef5849..5f1476c0f3a33d75 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -438,9 +438,6 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 #endif
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 	bl	kasan_early_init
-#endif
-#ifdef CONFIG_UNWIND_PATCH_PAC_INTO_SCS
-	bl	scs_patch_vmlinux
 #endif
 	mov	x0, x20
 	bl	finalise_el2			// Prefer VHE if possible
@@ -720,6 +717,11 @@ SYM_FUNC_START_LOCAL(__primary_switch)
 #ifdef CONFIG_RELOCATABLE
 	mov	x0, x23
 	bl	__pi_relocate_kernel
+#endif
+#ifdef CONFIG_UNWIND_PATCH_PAC_INTO_SCS
+	ldr	x0, =__eh_frame_start
+	ldr	x1, =__eh_frame_end
+	bl	__pi_scs_patch_vmlinux
 #endif
 	ldr	x8, =__primary_switched
 	adrp	x0, KERNEL_START		// __pa(KERNEL_START)
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index fa7b3228944b325e..ea505022737d531d 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -519,7 +519,7 @@ int module_finalize(const Elf_Ehdr *hdr,
 	if (scs_is_dynamic()) {
 		s = find_section(hdr, sechdrs, ".init.eh_frame");
 		if (s)
-			scs_patch((void *)s->sh_addr, s->sh_size);
+			__pi_scs_patch((void *)s->sh_addr, s->sh_size);
 	}
 
 	return module_init_ftrace_plt(hdr, sechdrs, me);
diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile
index 47d3ffcff3ac9a03..293a04a5dc3ef516 100644
--- a/arch/arm64/kernel/pi/Makefile
+++ b/arch/arm64/kernel/pi/Makefile
@@ -38,7 +38,9 @@ $(obj)/lib-%.pi.o: OBJCOPYFLAGS += --prefix-alloc-sections=.init
 $(obj)/lib-%.o: $(srctree)/lib/%.c FORCE
 	$(call if_changed_rule,cc_o_c)
 
-obj-y				:= idreg-override.pi.o lib-fdt.pi.o lib-fdt_ro.pi.o
-obj-$(CONFIG_RELOCATABLE)	+= relocate.pi.o
-obj-$(CONFIG_RANDOMIZE_BASE)	+= kaslr_early.pi.o
-extra-y				:= $(patsubst %.pi.o,%.o,$(obj-y))
+obj-y					:= idreg-override.pi.o \
+					   lib-fdt.pi.o lib-fdt_ro.pi.o
+obj-$(CONFIG_RELOCATABLE)		+= relocate.pi.o
+obj-$(CONFIG_RANDOMIZE_BASE)		+= kaslr_early.pi.o
+obj-$(CONFIG_UNWIND_PATCH_PAC_INTO_SCS)	+= patch-scs.pi.o
+extra-y					:= $(patsubst %.pi.o,%.o,$(obj-y))
diff --git a/arch/arm64/kernel/patch-scs.c b/arch/arm64/kernel/pi/patch-scs.c
similarity index 91%
rename from arch/arm64/kernel/patch-scs.c
rename to arch/arm64/kernel/pi/patch-scs.c
index 1b3da02d5b741bc3..d15833df10d3d4c6 100644
--- a/arch/arm64/kernel/patch-scs.c
+++ b/arch/arm64/kernel/pi/patch-scs.c
@@ -4,14 +4,11 @@
  * Author: Ard Biesheuvel <ardb@google.com>
  */
 
-#include <linux/bug.h>
 #include <linux/errno.h>
 #include <linux/init.h>
 #include <linux/linkage.h>
-#include <linux/printk.h>
 #include <linux/types.h>
 
-#include <asm/cacheflush.h>
 #include <asm/scs.h>
 
 //
@@ -81,7 +78,11 @@ static void __always_inline scs_patch_loc(u64 loc)
 		 */
 		return;
 	}
-	dcache_clean_pou(loc, loc + sizeof(u32));
+	if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_CLEAN_CACHE))
+		asm("dc civac, %0" :: "r"(loc));
+	else
+		asm(ALTERNATIVE("dc cvau, %0", "nop", ARM64_HAS_CACHE_IDC)
+		    :: "r"(loc));
 }
 
 /*
@@ -128,9 +129,9 @@ struct eh_frame {
 	};
 };
 
-static int noinstr scs_handle_fde_frame(const struct eh_frame *frame,
-					bool fde_has_augmentation_data,
-					int code_alignment_factor)
+static int scs_handle_fde_frame(const struct eh_frame *frame,
+				bool fde_has_augmentation_data,
+				int code_alignment_factor)
 {
 	int size = frame->size - offsetof(struct eh_frame, opcodes) + 4;
 	u64 loc = (u64)offset_to_ptr(&frame->initial_loc);
@@ -196,14 +197,13 @@ static int noinstr scs_handle_fde_frame(const struct eh_frame *frame,
 			break;
 
 		default:
-			pr_err("unhandled opcode: %02x in FDE frame %lx\n", opcode[-1], (uintptr_t)frame);
 			return -ENOEXEC;
 		}
 	}
 	return 0;
 }
 
-int noinstr scs_patch(const u8 eh_frame[], int size)
+int scs_patch(const u8 eh_frame[], int size)
 {
 	const u8 *p = eh_frame;
 
@@ -246,12 +246,12 @@ int noinstr scs_patch(const u8 eh_frame[], int size)
 	return 0;
 }
 
-asmlinkage void __init scs_patch_vmlinux(void)
+asmlinkage void __init scs_patch_vmlinux(const u8 start[], const u8 end[])
 {
 	if (!should_patch_pac_into_scs())
 		return;
 
-	WARN_ON(scs_patch(__eh_frame_start, __eh_frame_end - __eh_frame_start));
-	icache_inval_all_pou();
+	scs_patch(start, end - start);
+	asm("ic ialluis");
 	isb();
 }
-- 
2.35.1



* [PATCH v7 23/33] arm64: kaslr: Use feature override instead of parsing the cmdline again
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (21 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 22/33] arm64: head: move dynamic shadow call stack patching into early C runtime Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 24/33] arm64: idreg-override: Create a pseudo feature for rodata=off Ard Biesheuvel
                   ` (9 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

The early kaslr code open-codes the detection of 'nokaslr' on the kernel
command line, which is no longer necessary now that the feature override
detection code, which looks for the same string, executes before this
code.
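
As an illustration of what the new check amounts to, the open-coded
extraction below stands in for cpuid_feature_extract_unsigned_field(),
which extracts a 4-bit field by default; the value shown is what the
nokaslr override would leave behind.

#include <stdint.h>
#include <stdio.h>

/* 4-bit unsigned field extraction, standing in for the kernel helper */
static unsigned int extract_field(uint64_t val, int shift)
{
	return (val >> shift) & 0xf;
}

int main(void)
{
	/* value arm64_sw_feature_override.val would hold after "nokaslr" */
	uint64_t sw_override = 0x1;

	if (extract_field(sw_override, 0 /* ARM64_SW_FEATURE_OVERRIDE_NOKASLR */))
		printf("KASLR disabled by override\n");
	return 0;
}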

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/pi/kaslr_early.c | 54 +-------------------
 1 file changed, 2 insertions(+), 52 deletions(-)

diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c
index 86ae0273c95016c6..934e95fbd4278d0b 100644
--- a/arch/arm64/kernel/pi/kaslr_early.c
+++ b/arch/arm64/kernel/pi/kaslr_early.c
@@ -15,57 +15,6 @@
 #include <asm/archrandom.h>
 #include <asm/memory.h>
 
-/* taken from lib/string.c */
-static char *__init __strstr(const char *s1, const char *s2)
-{
-	size_t l1, l2;
-
-	l2 = strlen(s2);
-	if (!l2)
-		return (char *)s1;
-	l1 = strlen(s1);
-	while (l1 >= l2) {
-		l1--;
-		if (!memcmp(s1, s2, l2))
-			return (char *)s1;
-		s1++;
-	}
-	return NULL;
-}
-static bool __init cmdline_contains_nokaslr(const u8 *cmdline)
-{
-	const u8 *str;
-
-	str = __strstr(cmdline, "nokaslr");
-	return str == cmdline || (str > cmdline && *(str - 1) == ' ');
-}
-
-static bool __init is_kaslr_disabled_cmdline(void *fdt)
-{
-	if (!IS_ENABLED(CONFIG_CMDLINE_FORCE)) {
-		int node;
-		const u8 *prop;
-
-		node = fdt_path_offset(fdt, "/chosen");
-		if (node < 0)
-			goto out;
-
-		prop = fdt_getprop(fdt, node, "bootargs", NULL);
-		if (!prop)
-			goto out;
-
-		if (cmdline_contains_nokaslr(prop))
-			return true;
-
-		if (IS_ENABLED(CONFIG_CMDLINE_EXTEND))
-			goto out;
-
-		return false;
-	}
-out:
-	return cmdline_contains_nokaslr(CONFIG_CMDLINE);
-}
-
 static u64 __init get_kaslr_seed(void *fdt)
 {
 	static char const chosen_str[] __initconst = "chosen";
@@ -91,7 +40,8 @@ asmlinkage u64 __init kaslr_early_init(void *fdt)
 {
 	u64 seed;
 
-	if (is_kaslr_disabled_cmdline(fdt))
+	if (cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val,
+						 ARM64_SW_FEATURE_OVERRIDE_NOKASLR))
 		return 0;
 
 	seed = get_kaslr_seed(fdt);
-- 
2.35.1



* [PATCH v7 24/33] arm64: idreg-override: Create a pseudo feature for rodata=off
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (22 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 23/33] arm64: kaslr: Use feature override instead of parsing the cmdline again Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 25/33] arm64: head: allocate more pages for the kernel mapping Ard Biesheuvel
                   ` (8 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

Add rodata=off to the set of kernel command line options that is parsed
early using the CPU feature override detection code, so we can easily
refer to it when creating the kernel mapping.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h   | 1 +
 arch/arm64/kernel/pi/idreg-override.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index f44a7860636fd411..b8c7a2d13bbe44e2 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -16,6 +16,7 @@
 #define cpu_feature(x)		KERNEL_HWCAP_ ## x
 
 #define ARM64_SW_FEATURE_OVERRIDE_NOKASLR	0
+#define ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF	4
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c
index c21d1e9f43a11ba7..e524d36d8fc1526f 100644
--- a/arch/arm64/kernel/pi/idreg-override.c
+++ b/arch/arm64/kernel/pi/idreg-override.c
@@ -133,6 +133,7 @@ DEFINE_OVERRIDE(5, smfr0, "id_aa64smfr0", id_aa64smfr0_override,
 
 DEFINE_OVERRIDE(6, sw_features, "arm64_sw", arm64_sw_feature_override,
 		FIELD("nokaslr", ARM64_SW_FEATURE_OVERRIDE_NOKASLR),
+		FIELD("rodataoff", ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF),
 		{});
 
 /*
@@ -164,6 +165,7 @@ static const struct {
 	  "id_aa64isar2.gpa3=0 id_aa64isar2.apa3=0"	   },
 	{ "arm64.nomte",		"id_aa64pfr1.mte=0" },
 	{ "nokaslr",			"arm64_sw.nokaslr=1" },
+	{ "rodata=off",			"arm64_sw.rodataoff=1" },
 };
 
 static int __init find_field(const char *cmdline, char *opt, int len,
-- 
2.35.1



* [PATCH v7 25/33] arm64: head: allocate more pages for the kernel mapping
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (23 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 24/33] arm64: idreg-override: Create a pseudo feature for rodata=off Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 26/33] arm64: head: move memstart_offset_seed handling to C code Ard Biesheuvel
                   ` (7 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

In preparation for switching to an early kernel mapping routine that
maps each segment according to its precise boundaries, and with the
correct attributes, let's allocate some extra pages for page tables for
the 4k page size configuration. This is necessary because the start and
end of each segment may not be aligned to the block size, and so we'll
need an extra page table at each segment boundary.
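
As a rough worked example for the 4k case (where SWAPPER_BLOCK_SIZE is
2 MiB and therefore larger than the segment alignment):

	5 segments (text, rodata, inittext, initdata, data+bss)
	=> 6 segment edges that may fall inside a 2 MiB block
	=> up to KERNEL_SEGMENT_COUNT + 1 = 6 extra PTE-level tables

This is where the EARLY_SEGMENT_EXTRA_PAGES definition below comes from;
on 16k/64k pages the block size equals the page size, so no extra pages
are reserved there.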

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/kernel-pgtable.h | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 32d14f481f0c3f37..ed0db7fc0022d34e 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -85,7 +85,7 @@
 			+ EARLY_PGDS((vstart), (vend), add) 	/* each PGDIR needs a next level page table */	\
 			+ EARLY_PUDS((vstart), (vend), add)	/* each PUD needs a next level page table */	\
 			+ EARLY_PMDS((vstart), (vend), add))	/* each PMD needs a next level page table */
-#define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR))
+#define INIT_DIR_SIZE (PAGE_SIZE * (EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR) + EARLY_SEGMENT_EXTRA_PAGES))
 
 /* the initial ID map may need two extra pages if it needs to be extended */
 #if VA_BITS < 48
@@ -106,6 +106,15 @@
 #define SWAPPER_TABLE_SHIFT	PMD_SHIFT
 #endif
 
+/* The number of segments in the kernel image (text, rodata, inittext, initdata, data+bss) */
+#define KERNEL_SEGMENT_COUNT	5
+
+#if SWAPPER_BLOCK_SIZE > SEGMENT_ALIGN
+#define EARLY_SEGMENT_EXTRA_PAGES (KERNEL_SEGMENT_COUNT + 1)
+#else
+#define EARLY_SEGMENT_EXTRA_PAGES 0
+#endif
+
 /*
  * Initial memory map attributes.
  */
-- 
2.35.1



* [PATCH v7 26/33] arm64: head: move memstart_offset_seed handling to C code
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (24 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 25/33] arm64: head: allocate more pages for the kernel mapping Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 27/33] arm64: head: Move early kernel mapping routines into " Ard Biesheuvel
                   ` (6 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

Now that we can set BSS variables from the early code running from the
ID map, we can set memstart_offset_seed directly from the C code that
derives the value instead of passing it back and forth between C and asm
code.
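
Condensed, the result looks like this (a sketch of the hunks below):

	seed = get_kaslr_seed(fdt);			/* or a HW RNG value  */
	memstart_offset_seed = seed & U16_MAX;		/* linear map seed    */
	return BIT(VA_BITS_MIN - 3) +
	       (seed & GENMASK(VA_BITS_MIN - 3, 16));	/* image offset bits  */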

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/head.S           | 7 -------
 arch/arm64/kernel/image-vars.h     | 2 ++
 arch/arm64/kernel/kaslr.c          | 2 +-
 arch/arm64/kernel/pi/kaslr_early.c | 6 +++++-
 4 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 5f1476c0f3a33d75..4b88ca8766133fd3 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -81,7 +81,6 @@
 	 *  x21        primary_entry() .. start_kernel()        FDT pointer passed at boot in x0
 	 *  x22        create_idmap() .. start_kernel()         ID map VA of the DT blob
 	 *  x23        __primary_switch()                       physical misalignment/KASLR offset
-	 *  x24        __primary_switch()                       linear map KASLR seed
 	 *  x25        primary_entry() .. start_kernel()        supported VA size
 	 *  x28        create_idmap()                           callee preserved temp register
 	 */
@@ -431,11 +430,6 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 	str	x25, [x8]			// ... observes the correct value
 	dc	civac, x8			// Make visible to booting secondaries
 #endif
-
-#ifdef CONFIG_RANDOMIZE_BASE
-	adrp	x5, memstart_offset_seed	// Save KASLR linear map seed
-	strh	w24, [x5, :lo12:memstart_offset_seed]
-#endif
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 	bl	kasan_early_init
 #endif
@@ -705,7 +699,6 @@ SYM_FUNC_START_LOCAL(__primary_switch)
 #ifdef CONFIG_RANDOMIZE_BASE
 	mov	x0, x22
 	bl	__pi_kaslr_early_init
-	and	x24, x0, #SZ_2M - 1		// capture memstart offset seed
 	bic	x0, x0, #SZ_2M - 1
 	orr	x23, x23, x0			// record kernel offset
 #endif
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 6ff6efbc1ce98ba6..6c6dd100a9cbadf8 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -41,6 +41,8 @@ PROVIDE(__pi___memcpy			= __pi_memcpy);
 PROVIDE(__pi___memmove			= __pi_memmove);
 PROVIDE(__pi___memset			= __pi_memset);
 
+PROVIDE(__pi_memstart_offset_seed	= memstart_offset_seed);
+
 PROVIDE(__pi_id_aa64isar1_override	= id_aa64isar1_override);
 PROVIDE(__pi_id_aa64isar2_override	= id_aa64isar2_override);
 PROVIDE(__pi_id_aa64mmfr1_override	= id_aa64mmfr1_override);
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 5d4ce7f5f157bb3f..37a9deed2aec9297 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -21,7 +21,7 @@
 #include <asm/setup.h>
 
 u64 __ro_after_init module_alloc_base;
-u16 __initdata memstart_offset_seed;
+u16 memstart_offset_seed;
 
 static int __init kaslr_init(void)
 {
diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c
index 934e95fbd4278d0b..c46bccd593f2ff6b 100644
--- a/arch/arm64/kernel/pi/kaslr_early.c
+++ b/arch/arm64/kernel/pi/kaslr_early.c
@@ -15,6 +15,8 @@
 #include <asm/archrandom.h>
 #include <asm/memory.h>
 
+extern u16 memstart_offset_seed;
+
 static u64 __init get_kaslr_seed(void *fdt)
 {
 	static char const chosen_str[] __initconst = "chosen";
@@ -51,6 +53,8 @@ asmlinkage u64 __init kaslr_early_init(void *fdt)
 			return 0;
 	}
 
+	memstart_offset_seed = seed & U16_MAX;
+
 	/*
 	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
 	 * kernel image offset from the seed. Let's place the kernel in the
@@ -58,5 +62,5 @@ asmlinkage u64 __init kaslr_early_init(void *fdt)
 	 * the lower and upper quarters to avoid colliding with other
 	 * allocations.
 	 */
-	return BIT(VA_BITS_MIN - 3) + (seed & GENMASK(VA_BITS_MIN - 3, 0));
+	return BIT(VA_BITS_MIN - 3) + (seed & GENMASK(VA_BITS_MIN - 3, 16));
 }
-- 
2.35.1



* [PATCH v7 27/33] arm64: head: Move early kernel mapping routines into C code
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (25 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 26/33] arm64: head: move memstart_offset_seed handling to C code Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 28/33] arm64: mm: avoid fixmap for early swapper_pg_dir updates Ard Biesheuvel
                   ` (5 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

The asm version of the kernel mapping code works fine for creating a
coarse-grained identity map, but it is not suitable for mapping the
kernel down to its exact boundaries with the right attributes. This is
why we create a preliminary RWX kernel mapping first, and then rebuild
it from scratch later on.

So let's reimplement this in C, in a way that will make it unnecessary
to create the kernel page tables yet another time in paging_init().
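
Condensed, the new C entry point added below boils down to the following
flow (local variable setup and the KASLR/E0PD handling trimmed for
brevity):

	asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt)
	{
		/* clear BSS and the initial page tables */
		memset(__bss_start, 0, (u64)init_pg_end - (u64)__bss_start);

		/* parse cpu feature overrides from the command line */
		init_feature_override(boot_status, fdt, chosen);

		/* pick the KASLR offset, then map and relocate the kernel */
		kaslr_early_init(fdt, chosen);
		map_kernel(kaslr_offset, va_base - pa_base);
	}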

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/scs.h          |  32 +--
 arch/arm64/kernel/head.S              |  52 +---
 arch/arm64/kernel/image-vars.h        |  16 ++
 arch/arm64/kernel/pi/Makefile         |   2 +-
 arch/arm64/kernel/pi/idreg-override.c |  24 +-
 arch/arm64/kernel/pi/kaslr_early.c    |  12 +-
 arch/arm64/kernel/pi/map_kernel.c     | 277 ++++++++++++++++++++
 arch/arm64/kernel/pi/patch-scs.c      |  16 +-
 arch/arm64/kernel/pi/pi.h             |  12 +
 arch/arm64/kernel/pi/relocate.c       |   2 +
 arch/arm64/kernel/setup.c             |   7 -
 arch/arm64/kernel/vmlinux.lds.S       |   6 +-
 arch/arm64/mm/proc.S                  |   1 +
 13 files changed, 338 insertions(+), 121 deletions(-)

diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h
index d20dcb2a1a5284ce..74744f4d6820f5e1 100644
--- a/arch/arm64/include/asm/scs.h
+++ b/arch/arm64/include/asm/scs.h
@@ -32,37 +32,11 @@
 #include <asm/cpufeature.h>
 
 #ifdef CONFIG_UNWIND_PATCH_PAC_INTO_SCS
-static inline bool should_patch_pac_into_scs(void)
-{
-	u64 reg;
-
-	/*
-	 * We only enable the shadow call stack dynamically if we are running
-	 * on a system that does not implement PAC or BTI. PAC and SCS provide
-	 * roughly the same level of protection, and BTI relies on the PACIASP
-	 * instructions serving as landing pads, preventing us from patching
-	 * those instructions into something else.
-	 */
-	reg = read_sysreg_s(SYS_ID_AA64ISAR1_EL1);
-	if (SYS_FIELD_GET(ID_AA64ISAR1_EL1, APA, reg) |
-	    SYS_FIELD_GET(ID_AA64ISAR1_EL1, API, reg))
-		return false;
-
-	reg = read_sysreg_s(SYS_ID_AA64ISAR2_EL1);
-	if (SYS_FIELD_GET(ID_AA64ISAR2_EL1, APA3, reg))
-		return false;
-
-	if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL)) {
-		reg = read_sysreg_s(SYS_ID_AA64PFR1_EL1);
-		if (reg & (0xf << ID_AA64PFR1_EL1_BT_SHIFT))
-			return false;
-	}
-	return true;
-}
-
 static inline void dynamic_scs_init(void)
 {
-	if (should_patch_pac_into_scs()) {
+	extern bool __pi_dynamic_scs_is_enabled;
+
+	if (__pi_dynamic_scs_is_enabled) {
 		pr_info("Enabling dynamic shadow call stack\n");
 		static_branch_enable(&dynamic_scs_enabled);
 	}
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 4b88ca8766133fd3..6e730a0be1e8196d 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -80,7 +80,6 @@
 	 *  x20        primary_entry() .. __primary_switch()    CPU boot mode
 	 *  x21        primary_entry() .. start_kernel()        FDT pointer passed at boot in x0
 	 *  x22        create_idmap() .. start_kernel()         ID map VA of the DT blob
-	 *  x23        __primary_switch()                       physical misalignment/KASLR offset
 	 *  x25        primary_entry() .. start_kernel()        supported VA size
 	 *  x28        create_idmap()                           callee preserved temp register
 	 */
@@ -356,24 +355,6 @@ SYM_FUNC_START_LOCAL(create_idmap)
 	ret	x28
 SYM_FUNC_END(create_idmap)
 
-SYM_FUNC_START_LOCAL(create_kernel_mapping)
-	adrp	x0, init_pg_dir
-	mov_q	x5, KIMAGE_VADDR		// compile time __va(_text)
-#ifdef CONFIG_RELOCATABLE
-	add	x5, x5, x23			// add KASLR displacement
-#endif
-	adrp	x6, _end			// runtime __pa(_end)
-	adrp	x3, _text			// runtime __pa(_text)
-	sub	x6, x6, x3			// _end - _text
-	add	x6, x6, x5			// runtime __va(_end)
-	mov	x7, SWAPPER_RW_MMUFLAGS
-
-	map_memory x0, x1, x5, x6, x7, x3, (VA_BITS - PGDIR_SHIFT), x10, x11, x12, x13, x14
-
-	dsb	ishst				// sync with page table walker
-	ret
-SYM_FUNC_END(create_kernel_mapping)
-
 	/*
 	 * Initialize CPU registers with task-specific and cpu-specific context.
 	 *
@@ -678,44 +659,13 @@ SYM_FUNC_START_LOCAL(__primary_switch)
 	adrp	x2, init_idmap_pg_dir
 	bl	__enable_mmu
 
-	// Clear BSS
-	adrp	x0, __bss_start
-	mov	x1, xzr
-	adrp	x2, init_pg_end
-	sub	x2, x2, x0
-	bl	__pi_memset
-	dsb	ishst				// Make zero page visible to PTW
-
 	adrp	x1, early_init_stack
 	mov	sp, x1
 	mov	x29, xzr
 	mov	x0, x20				// pass the full boot status
 	mov	x1, x22				// pass the low FDT mapping
-	bl	__pi_init_feature_override	// Parse cpu feature overrides
-
-#ifdef CONFIG_RELOCATABLE
-	adrp	x23, KERNEL_START
-	and	x23, x23, MIN_KIMG_ALIGN - 1
-#ifdef CONFIG_RANDOMIZE_BASE
-	mov	x0, x22
-	bl	__pi_kaslr_early_init
-	bic	x0, x0, #SZ_2M - 1
-	orr	x23, x23, x0			// record kernel offset
-#endif
-#endif
-	bl	create_kernel_mapping
+	bl	__pi_early_map_kernel		// Map and relocate the kernel
 
-	adrp	x1, init_pg_dir
-	load_ttbr1 x1, x1, x2
-#ifdef CONFIG_RELOCATABLE
-	mov	x0, x23
-	bl	__pi_relocate_kernel
-#endif
-#ifdef CONFIG_UNWIND_PATCH_PAC_INTO_SCS
-	ldr	x0, =__eh_frame_start
-	ldr	x1, =__eh_frame_end
-	bl	__pi_scs_patch_vmlinux
-#endif
 	ldr	x8, =__primary_switched
 	adrp	x0, KERNEL_START		// __pa(KERNEL_START)
 	br	x8
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 6c6dd100a9cbadf8..88f864f28f03630c 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -51,8 +51,24 @@ PROVIDE(__pi_id_aa64pfr1_override	= id_aa64pfr1_override);
 PROVIDE(__pi_id_aa64smfr0_override	= id_aa64smfr0_override);
 PROVIDE(__pi_id_aa64zfr0_override	= id_aa64zfr0_override);
 PROVIDE(__pi_arm64_sw_feature_override	= arm64_sw_feature_override);
+PROVIDE(__pi_arm64_use_ng_mappings	= arm64_use_ng_mappings);
 PROVIDE(__pi__ctype			= _ctype);
 
+PROVIDE(__pi_init_pg_dir		= init_pg_dir);
+PROVIDE(__pi_init_pg_end		= init_pg_end);
+PROVIDE(__pi__end			= _end);
+
+PROVIDE(__pi__text			= _text);
+PROVIDE(__pi__stext               	= _stext);
+PROVIDE(__pi__etext               	= _etext);
+PROVIDE(__pi___start_rodata       	= __start_rodata);
+PROVIDE(__pi___inittext_begin     	= __inittext_begin);
+PROVIDE(__pi___inittext_end       	= __inittext_end);
+PROVIDE(__pi___initdata_begin     	= __initdata_begin);
+PROVIDE(__pi___initdata_end       	= __initdata_end);
+PROVIDE(__pi__data                	= _data);
+PROVIDE(__pi___bss_start		= __bss_start);
+
 #ifdef CONFIG_KVM
 
 /*
diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile
index 293a04a5dc3ef516..ac27f1cac6b89684 100644
--- a/arch/arm64/kernel/pi/Makefile
+++ b/arch/arm64/kernel/pi/Makefile
@@ -38,7 +38,7 @@ $(obj)/lib-%.pi.o: OBJCOPYFLAGS += --prefix-alloc-sections=.init
 $(obj)/lib-%.o: $(srctree)/lib/%.c FORCE
 	$(call if_changed_rule,cc_o_c)
 
-obj-y					:= idreg-override.pi.o \
+obj-y					:= idreg-override.pi.o map_kernel.pi.o \
 					   lib-fdt.pi.o lib-fdt_ro.pi.o
 obj-$(CONFIG_RELOCATABLE)		+= relocate.pi.o
 obj-$(CONFIG_RANDOMIZE_BASE)		+= kaslr_early.pi.o
diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c
index e524d36d8fc1526f..d0ce3dc4e07aaf4d 100644
--- a/arch/arm64/kernel/pi/idreg-override.c
+++ b/arch/arm64/kernel/pi/idreg-override.c
@@ -15,6 +15,8 @@
 #include <asm/cpufeature.h>
 #include <asm/setup.h>
 
+#include "pi.h"
+
 #define FTR_DESC_NAME_LEN	20	// must remain multiple of 4
 #define FTR_DESC_FIELD_LEN	10	// must remain multiple of 4 +/- 2
 #define FTR_ALIAS_NAME_LEN	30
@@ -296,37 +298,35 @@ static __init void __parse_cmdline(const char *cmdline, bool parse_aliases)
 	} while (1);
 }
 
-static __init const u8 *get_bootargs_cmdline(const void *fdt)
+static __init const u8 *get_bootargs_cmdline(const void *fdt, int node)
 {
+	static char const bootargs[] __initconst = "bootargs";
 	const u8 *prop;
-	int node;
 
-	node = fdt_path_offset(fdt, "/chosen");
 	if (node < 0)
 		return NULL;
 
-	prop = fdt_getprop(fdt, node, "bootargs", NULL);
+	prop = fdt_getprop(fdt, node, bootargs, NULL);
 	if (!prop)
 		return NULL;
 
 	return strlen(prop) ? prop : NULL;
 }
 
-static __init void parse_cmdline(const void *fdt)
+static __init void parse_cmdline(const void *fdt, int chosen)
 {
-	const u8 *prop = get_bootargs_cmdline(fdt);
+	static char const cmdline[] __initconst = CONFIG_CMDLINE;
+	const u8 *prop = get_bootargs_cmdline(fdt, chosen);
 
 	if (IS_ENABLED(CONFIG_CMDLINE_FORCE) || !prop)
-		__parse_cmdline(CONFIG_CMDLINE, true);
+		__parse_cmdline(cmdline, true);
 
 	if (!IS_ENABLED(CONFIG_CMDLINE_FORCE) && prop)
 		__parse_cmdline(prop, true);
 }
 
-/* Keep checkers quiet */
-void init_feature_override(u64 boot_status, const void *fdt);
-
-asmlinkage void __init init_feature_override(u64 boot_status, const void *fdt)
+void __init init_feature_override(u64 boot_status, const void *fdt,
+				  int chosen)
 {
 	int i;
 
@@ -337,7 +337,7 @@ asmlinkage void __init init_feature_override(u64 boot_status, const void *fdt)
 
 	__boot_status = boot_status;
 
-	parse_cmdline(fdt);
+	parse_cmdline(fdt, chosen);
 
 	for (i = 0; i < ARRAY_SIZE(regs); i++) {
 		dcache_clean_inval_poc((unsigned long)reg_override(i),
diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c
index c46bccd593f2ff6b..965515f7f1809808 100644
--- a/arch/arm64/kernel/pi/kaslr_early.c
+++ b/arch/arm64/kernel/pi/kaslr_early.c
@@ -15,17 +15,17 @@
 #include <asm/archrandom.h>
 #include <asm/memory.h>
 
+#include "pi.h"
+
 extern u16 memstart_offset_seed;
 
-static u64 __init get_kaslr_seed(void *fdt)
+static u64 __init get_kaslr_seed(void *fdt, int node)
 {
-	static char const chosen_str[] __initconst = "chosen";
 	static char const seed_str[] __initconst = "kaslr-seed";
-	int node, len;
 	fdt64_t *prop;
 	u64 ret;
+	int len;
 
-	node = fdt_path_offset(fdt, chosen_str);
 	if (node < 0)
 		return 0;
 
@@ -38,7 +38,7 @@ static u64 __init get_kaslr_seed(void *fdt)
 	return ret;
 }
 
-asmlinkage u64 __init kaslr_early_init(void *fdt)
+u64 __init kaslr_early_init(void *fdt, int chosen)
 {
 	u64 seed;
 
@@ -46,7 +46,7 @@ asmlinkage u64 __init kaslr_early_init(void *fdt)
 						 ARM64_SW_FEATURE_OVERRIDE_NOKASLR))
 		return 0;
 
-	seed = get_kaslr_seed(fdt);
+	seed = get_kaslr_seed(fdt, chosen);
 	if (!seed) {
 		if (!__early_cpu_has_rndr() ||
 		    !__arm64_rndr((unsigned long *)&seed))
diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c
new file mode 100644
index 0000000000000000..c5c6eebef684f81d
--- /dev/null
+++ b/arch/arm64/kernel/pi/map_kernel.c
@@ -0,0 +1,277 @@
+// SPDX-License-Identifier: GPL-2.0-only
+// Copyright 2022 Google LLC
+// Author: Ard Biesheuvel <ardb@google.com>
+
+#include <linux/init.h>
+#include <linux/libfdt.h>
+#include <linux/linkage.h>
+#include <linux/types.h>
+#include <linux/sizes.h>
+#include <linux/string.h>
+
+#include <asm/memory.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/tlbflush.h>
+
+#include "pi.h"
+
+extern const u8 __eh_frame_start[], __eh_frame_end[];
+
+extern void idmap_cpu_replace_ttbr1(void *pgdir);
+
+static void __init map_range(pgd_t **pgd, u64 start, u64 end, u64 pa,
+			     pgprot_t prot, int level, pte_t *tbl,
+			     bool may_use_cont)
+{
+	u64 cmask = (level == 3) ? CONT_PTE_SIZE - 1 : U64_MAX;
+	u64 protval = pgprot_val(prot) & ~PTE_TYPE_MASK;
+	int lshift = (3 - level) * (PAGE_SHIFT - 3);
+	u64 lmask = (PAGE_SIZE << lshift) - 1;
+
+	/* Advance tbl to the entry that covers start */
+	tbl += (start >> (lshift + PAGE_SHIFT)) % BIT(PAGE_SHIFT - 3);
+
+	/*
+	 * Set the right block/page bits for this level unless we are
+	 * clearing the mapping
+	 */
+	if (protval)
+		protval |= (level < 3) ? PMD_TYPE_SECT : PTE_TYPE_PAGE;
+
+	while (start < end) {
+		u64 next = min((start | lmask) + 1, end);
+
+		if (level < 3 && (start | next | pa) & lmask) {
+			/*
+			 * This chunk needs a finer grained mapping. Put down
+			 * a table mapping if necessary and recurse.
+			 */
+			if (pte_none(*tbl)) {
+				*tbl = __pte(__phys_to_pte_val((u64)*pgd) |
+					     PMD_TYPE_TABLE | PMD_TABLE_UXN);
+				*pgd += PTRS_PER_PTE;
+			}
+			map_range(pgd, start, next, pa, prot, level + 1,
+				  (pte_t *)__pte_to_phys(*tbl), may_use_cont);
+		} else {
+			/*
+			 * Start a contiguous range if start and pa are
+			 * suitably aligned
+			 */
+			if (((start | pa) & cmask) == 0 && may_use_cont)
+				protval |= PTE_CONT;
+
+			/*
+			 * Clear the contiguous attribute if the remaining
+			 * range does not cover a contiguous block
+			 */
+			if ((end & ~cmask) <= start)
+				protval &= ~PTE_CONT;
+
+			/* Put down a block or page mapping */
+			*tbl = __pte(__phys_to_pte_val(pa) | protval);
+		}
+		pa += next - start;
+		start = next;
+		tbl++;
+	}
+}
+
+static void __init map_segment(pgd_t **pgd, u64 va_offset, void *start,
+			       void *end, pgprot_t prot, bool may_use_cont)
+{
+	map_range(pgd, ((u64)start + va_offset) & ~PAGE_OFFSET,
+		  ((u64)end + va_offset) & ~PAGE_OFFSET, (u64)start,
+		  prot, 4 - CONFIG_PGTABLE_LEVELS, (pte_t *)init_pg_dir,
+		  may_use_cont);
+}
+
+static void __init unmap_segment(u64 va_offset, void *start, void *end)
+{
+	map_range(NULL, ((u64)start + va_offset) & ~PAGE_OFFSET,
+		  ((u64)end + va_offset) & ~PAGE_OFFSET, (u64)start,
+		  __pgprot(0), 4 - CONFIG_PGTABLE_LEVELS, (pte_t *)init_pg_dir,
+		  false);
+}
+
+static bool __init arm64_early_this_cpu_has_bti(void)
+{
+	u64 pfr1;
+
+	if (!IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
+		return false;
+
+	pfr1 = read_sysreg(ID_AA64PFR1_EL1);
+	pfr1 &= ~id_aa64pfr1_override.mask;
+	pfr1 |= id_aa64pfr1_override.val;
+
+	return cpuid_feature_extract_unsigned_field(pfr1,
+						    ID_AA64PFR1_EL1_BT_SHIFT);
+}
+
+static bool __init arm64_early_this_cpu_has_e0pd(void)
+{
+	u64 mmfr2;
+
+	if (!IS_ENABLED(CONFIG_ARM64_E0PD))
+		return false;
+
+	mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
+	return cpuid_feature_extract_unsigned_field(mmfr2,
+						    ID_AA64MMFR2_EL1_E0PD_SHIFT);
+}
+
+static bool __init arm64_early_this_cpu_has_pac(void)
+{
+	u64 isar1, isar2;
+	u8 feat;
+
+	if (!IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL))
+		return false;
+
+	isar1 = read_sysreg(ID_AA64ISAR1_EL1);
+	isar1 &= ~id_aa64isar1_override.mask;
+	isar1 |= id_aa64isar1_override.val;
+	feat = cpuid_feature_extract_unsigned_field(isar1,
+						    ID_AA64ISAR1_EL1_APA_SHIFT);
+	if (feat)
+		return true;
+
+	feat = cpuid_feature_extract_unsigned_field(isar1,
+						    ID_AA64ISAR1_EL1_API_SHIFT);
+	if (feat)
+		return true;
+
+	isar2 = read_sysreg_s(SYS_ID_AA64ISAR2_EL1);
+	isar2 &= ~id_aa64isar2_override.mask;
+	isar2 |= id_aa64isar2_override.val;
+	feat = cpuid_feature_extract_unsigned_field(isar2,
+						    ID_AA64ISAR2_EL1_APA3_SHIFT);
+	return feat;
+}
+
+static void __init map_kernel(u64 kaslr_offset, u64 va_offset)
+{
+	bool enable_scs = IS_ENABLED(CONFIG_UNWIND_PATCH_PAC_INTO_SCS);
+	bool twopass = IS_ENABLED(CONFIG_RELOCATABLE);
+	pgd_t *pgdp = (void *)init_pg_dir + PAGE_SIZE;
+	pgprot_t text_prot = PAGE_KERNEL_ROX;
+	pgprot_t data_prot = PAGE_KERNEL;
+	pgprot_t prot;
+
+	/*
+	 * External debuggers may need to write directly to the text mapping to
+	 * install SW breakpoints. Allow this (only) when explicitly requested
+	 * with rodata=off.
+	 */
+	if (cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val,
+						 ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF))
+		text_prot = PAGE_KERNEL_EXEC;
+
+	/*
+	 * We only enable the shadow call stack dynamically if we are running
+	 * on a system that does not implement PAC or BTI. PAC and SCS provide
+	 * roughly the same level of protection, and BTI relies on the PACIASP
+	 * instructions serving as landing pads, preventing us from patching
+	 * those instructions into something else.
+	 */
+	if (arm64_early_this_cpu_has_pac())
+		enable_scs = false;
+
+	if (arm64_early_this_cpu_has_bti()) {
+		enable_scs = false;
+
+		/*
+		 * If we have a CPU that supports BTI and a kernel built for
+		 * BTI then mark the kernel executable text as guarded pages
+		 * now so we don't have to rewrite the page tables later.
+		 */
+		text_prot = __pgprot_modify(text_prot, PTE_GP, PTE_GP);
+	}
+
+	/* Map all code read-write on the first pass if needed */
+	twopass |= enable_scs;
+	prot = twopass ? data_prot : text_prot;
+
+	map_segment(&pgdp, va_offset, _stext, _etext, prot, !twopass);
+	map_segment(&pgdp, va_offset, __start_rodata, __inittext_begin, data_prot, false);
+	map_segment(&pgdp, va_offset, __inittext_begin, __inittext_end, prot, false);
+	map_segment(&pgdp, va_offset, __initdata_begin, __initdata_end, data_prot, false);
+	map_segment(&pgdp, va_offset, _data, _end, data_prot, true);
+	dsb(ishst);
+
+	idmap_cpu_replace_ttbr1(init_pg_dir);
+
+	if (twopass) {
+		if (IS_ENABLED(CONFIG_RELOCATABLE))
+			relocate_kernel(kaslr_offset);
+
+		if (enable_scs) {
+			scs_patch(__eh_frame_start + va_offset,
+				  __eh_frame_end - __eh_frame_start);
+			asm("ic ialluis");
+
+			dynamic_scs_is_enabled = true;
+		}
+
+		/*
+		 * Unmap the text region before remapping it, to avoid
+		 * potential TLB conflicts when creating the contiguous
+		 * descriptors.
+		 */
+		unmap_segment(va_offset, _stext, _etext);
+		dsb(ishst);
+		isb();
+		__tlbi(vmalle1);
+		isb();
+
+		/*
+		 * Remap these segments with different permissions
+		 * No new page table allocations should be needed
+		 */
+		map_segment(NULL, va_offset, _stext, _etext, text_prot, true);
+		map_segment(NULL, va_offset, __inittext_begin, __inittext_end,
+			    text_prot, false);
+		dsb(ishst);
+	}
+}
+
+asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt)
+{
+	static char const chosen_str[] __initconst = "/chosen";
+	int chosen = fdt_path_offset(fdt, chosen_str);
+	u64 va_base, pa_base = (u64)&_text;
+	u64 kaslr_offset = pa_base % MIN_KIMG_ALIGN;
+
+	/* Clear BSS and the initial page tables */
+	memset(__bss_start, 0, (u64)init_pg_end - (u64)__bss_start);
+
+	/* Parse the command line for CPU feature overrides */
+	init_feature_override(boot_status, fdt, chosen);
+
+	/*
+	 * The virtual KASLR displacement modulo 2MiB is decided by the
+	 * physical placement of the image, as otherwise, we might not be able
+	 * to create the early kernel mapping using 2 MiB block descriptors. So
+	 * take the low bits of the KASLR offset from the physical address, and
+	 * fill in the high bits from the seed.
+	 */
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+		u64 kaslr_seed = kaslr_early_init(fdt, chosen);
+
+		kaslr_offset |= kaslr_seed & ~(MIN_KIMG_ALIGN - 1);
+
+		/*
+		 * Assume that any CPU that does not implement E0PD needs KPTI
+		 * to ensure that KASLR randomized addresses will not leak.
+		 * This means we need to use non-global mappings for the kernel
+		 * text and data.
+		 */
+		if (kaslr_seed && !arm64_early_this_cpu_has_e0pd())
+			arm64_use_ng_mappings = true;
+	}
+
+	va_base = KIMAGE_VADDR + kaslr_offset;
+	map_kernel(kaslr_offset, va_base - pa_base);
+}
diff --git a/arch/arm64/kernel/pi/patch-scs.c b/arch/arm64/kernel/pi/patch-scs.c
index d15833df10d3d4c6..bfeeaffc1e233083 100644
--- a/arch/arm64/kernel/pi/patch-scs.c
+++ b/arch/arm64/kernel/pi/patch-scs.c
@@ -11,6 +11,10 @@
 
 #include <asm/scs.h>
 
+#include "pi.h"
+
+bool dynamic_scs_is_enabled;
+
 //
 // This minimal DWARF CFI parser is partially based on the code in
 // arch/arc/kernel/unwind.c, and on the document below:
@@ -46,8 +50,6 @@
 #define DW_CFA_GNU_negative_offset_extended 0x2f
 #define DW_CFA_hi_user                      0x3f
 
-extern const u8 __eh_frame_start[], __eh_frame_end[];
-
 enum {
 	PACIASP		= 0xd503233f,
 	AUTIASP		= 0xd50323bf,
@@ -245,13 +247,3 @@ int scs_patch(const u8 eh_frame[], int size)
 	}
 	return 0;
 }
-
-asmlinkage void __init scs_patch_vmlinux(const u8 start[], const u8 end[])
-{
-	if (!should_patch_pac_into_scs())
-		return;
-
-	scs_patch(start, end - start);
-	asm("ic ialluis");
-	isb();
-}
diff --git a/arch/arm64/kernel/pi/pi.h b/arch/arm64/kernel/pi/pi.h
new file mode 100644
index 0000000000000000..c0b00199e3dc80e0
--- /dev/null
+++ b/arch/arm64/kernel/pi/pi.h
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0-only
+// Copyright 2022 Google LLC
+// Author: Ard Biesheuvel <ardb@google.com>
+
+#include <linux/types.h>
+
+extern bool dynamic_scs_is_enabled;
+
+void init_feature_override(u64 boot_status, const void *fdt, int chosen);
+u64 kaslr_early_init(void *fdt, int chosen);
+void relocate_kernel(u64 offset);
+int scs_patch(const u8 eh_frame[], int size);
diff --git a/arch/arm64/kernel/pi/relocate.c b/arch/arm64/kernel/pi/relocate.c
index c35cb918fa2a004a..06e3cc6cdd6a68ca 100644
--- a/arch/arm64/kernel/pi/relocate.c
+++ b/arch/arm64/kernel/pi/relocate.c
@@ -6,6 +6,8 @@
 #include <linux/init.h>
 #include <linux/types.h>
 
+#include "pi.h"
+
 extern const Elf64_Rela rela_start[], rela_end[];
 extern const u64 relr_start[], relr_end[];
 
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 37e0ba95afc3e045..149e7c3ee1321363 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -280,13 +280,6 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
 
 	*cmdline_p = boot_command_line;
 
-	/*
-	 * If know now we are going to need KPTI then use non-global
-	 * mappings from the start, avoiding the cost of rewriting
-	 * everything later.
-	 */
-	arm64_use_ng_mappings = kaslr_requires_kpti();
-
 	early_fixmap_init();
 	early_ioremap_init();
 
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 3f86a0db2952600c..33c3e59233f6fa6e 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -124,9 +124,9 @@ jiffies = jiffies_64;
 #ifdef CONFIG_UNWIND_TABLES
 #define UNWIND_DATA_SECTIONS				\
 	.eh_frame : {					\
-		__eh_frame_start = .;			\
+		__pi___eh_frame_start = .;		\
 		*(.eh_frame)				\
-		__eh_frame_end = .;			\
+		__pi___eh_frame_end = .;		\
 	}
 #else
 #define UNWIND_DATA_SECTIONS
@@ -313,7 +313,7 @@ SECTIONS
 
 	BSS_SECTION(SBSS_ALIGN, 0, 0)
 
-	. = ALIGN(PAGE_SIZE);
+	. = ALIGN(SEGMENT_ALIGN);
 	init_pg_dir = .;
 	. += INIT_DIR_SIZE;
 	init_pg_end = .;
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index b9ecbbae1e1abca1..f0db2c05e797aeed 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -201,6 +201,7 @@ SYM_TYPED_FUNC_START(idmap_cpu_replace_ttbr1)
 
 	ret
 SYM_FUNC_END(idmap_cpu_replace_ttbr1)
+SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
 	.popsection
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-- 
2.35.1



* [PATCH v7 28/33] arm64: mm: avoid fixmap for early swapper_pg_dir updates
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (26 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 27/33] arm64: head: Move early kernel mapping routines into " Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 29/33] arm64: mm: omit redundant remap of kernel image Ard Biesheuvel
                   ` (4 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

Early in the boot, when .rodata is still writable, we can poke
swapper_pg_dir entries directly, and there is no need to go through the
fixmap. After a future patch, we will enter the kernel with
swapper_pg_dir already active, and early swapper_pg_dir updates for
creating the fixmap page table hierarchy itself cannot go through the
fixmap for obvious reasons. So let's keep track of whether rodata is
writable, and update the descriptor directly in that case.

As the same reasoning applies to early KASAN init, make the function
noinstr as well.
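
The gist of the change is the early-out added below: while rodata (and
therefore swapper_pg_dir itself) is still mapped writable, the PGD entry
can simply be updated in place:

	if (rodata_is_rw) {
		WRITE_ONCE(*pgdp, pgd);
		dsb(ishst);
		isb();
		return;
	}

mark_rodata_ro() clears rodata_is_rw right before the rodata region is
remapped read-only, so later updates fall back to the fixmap path.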

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/mm/mmu.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 0c35e1f195678695..68e66b979fc3ac5d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -58,6 +58,8 @@ EXPORT_SYMBOL(kimage_voffset);
 
 u32 __boot_cpu_mode[] = { BOOT_CPU_MODE_EL2, BOOT_CPU_MODE_EL1 };
 
+static bool rodata_is_rw __ro_after_init = true;
+
 /*
  * The booting CPU updates the failed status @__early_cpu_boot_status,
  * with MMU turned off.
@@ -78,10 +80,21 @@ static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused;
 static DEFINE_SPINLOCK(swapper_pgdir_lock);
 static DEFINE_MUTEX(fixmap_lock);
 
-void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
+void noinstr set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
 {
 	pgd_t *fixmap_pgdp;
 
+	/*
+	 * Don't bother with the fixmap if swapper_pg_dir is still mapped
+	 * writable in the kernel mapping.
+	 */
+	if (rodata_is_rw) {
+		WRITE_ONCE(*pgdp, pgd);
+		dsb(ishst);
+		isb();
+		return;
+	}
+
 	spin_lock(&swapper_pgdir_lock);
 	fixmap_pgdp = pgd_set_fixmap(__pa_symbol(pgdp));
 	WRITE_ONCE(*fixmap_pgdp, pgd);
@@ -615,6 +628,7 @@ void mark_rodata_ro(void)
 	 * to cover NOTES and EXCEPTION_TABLE.
 	 */
 	section_size = (unsigned long)__init_begin - (unsigned long)__start_rodata;
+	WRITE_ONCE(rodata_is_rw, false);
 	update_mapping_prot(__pa_symbol(__start_rodata), (unsigned long)__start_rodata,
 			    section_size, PAGE_KERNEL_RO);
 
-- 
2.35.1



* [PATCH v7 29/33] arm64: mm: omit redundant remap of kernel image
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (27 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 28/33] arm64: mm: avoid fixmap for early swapper_pg_dir updates Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 30/33] arm64: Revert "mm: provide idmap pointer to cpu_replace_ttbr1()" Ard Biesheuvel
                   ` (3 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

Now that the early kernel mapping is created with all the right
attributes and segment boundaries, there is no longer a need to recreate
it and switch to it. This also means we no longer have to copy the kasan
shadow or some parts of the fixmap from one set of page tables to the
other.
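
The piece that makes this possible is the handover at the end of
map_kernel() (see the hunk below): the populated root table is copied
into swapper_pg_dir once and TTBR1 is switched over, so paging_init()
never has to rebuild the kernel mapping:

	/* Copy the root page table to its final location */
	memcpy((void *)swapper_pg_dir + va_offset, init_pg_dir, PGD_SIZE);
	dsb(ishst);
	idmap_cpu_replace_ttbr1(swapper_pg_dir);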

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/kasan.h    |   2 -
 arch/arm64/include/asm/mmu.h      |   2 +-
 arch/arm64/kernel/image-vars.h    |   2 +-
 arch/arm64/kernel/pi/map_kernel.c |   9 +-
 arch/arm64/mm/kasan_init.c        |  15 ---
 arch/arm64/mm/mmu.c               | 110 +++-----------------
 6 files changed, 22 insertions(+), 118 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 12d5f47f7dbec628..ab52688ac4bd43b6 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -36,12 +36,10 @@ void kasan_init(void);
 #define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
 #define KASAN_SHADOW_START      _KASAN_SHADOW_START(vabits_actual)
 
-void kasan_copy_shadow(pgd_t *pgdir);
 asmlinkage void kasan_early_init(void);
 
 #else
 static inline void kasan_init(void) { }
-static inline void kasan_copy_shadow(pgd_t *pgdir) { }
 #endif
 
 #endif
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 48f8466a4be92ac3..a93d495d6e8c94a2 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -73,7 +73,7 @@ extern void mark_linear_text_alias_ro(void);
 extern bool kaslr_requires_kpti(void);
 
 #define INIT_MM_CONTEXT(name)	\
-	.pgd = init_pg_dir,
+	.pgd = swapper_pg_dir,
 
 #endif	/* !__ASSEMBLY__ */
 #endif
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 88f864f28f03630c..5bd878f414d85366 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -56,7 +56,7 @@ PROVIDE(__pi__ctype			= _ctype);
 
 PROVIDE(__pi_init_pg_dir		= init_pg_dir);
 PROVIDE(__pi_init_pg_end		= init_pg_end);
-PROVIDE(__pi__end			= _end);
+PROVIDE(__pi_swapper_pg_dir		= swapper_pg_dir);
 
 PROVIDE(__pi__text			= _text);
 PROVIDE(__pi__stext               	= _stext);
diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c
index c5c6eebef684f81d..4b604b104460c3ef 100644
--- a/arch/arm64/kernel/pi/map_kernel.c
+++ b/arch/arm64/kernel/pi/map_kernel.c
@@ -198,7 +198,8 @@ static void __init map_kernel(u64 kaslr_offset, u64 va_offset)
 	map_segment(&pgdp, va_offset, __start_rodata, __inittext_begin, data_prot, false);
 	map_segment(&pgdp, va_offset, __inittext_begin, __inittext_end, prot, false);
 	map_segment(&pgdp, va_offset, __initdata_begin, __initdata_end, data_prot, false);
-	map_segment(&pgdp, va_offset, _data, _end, data_prot, true);
+	map_segment(&pgdp, va_offset, _data, init_pg_dir, data_prot, true);
+	/* omit [init_pg_dir, _end] - it doesn't need a kernel mapping */
 	dsb(ishst);
 
 	idmap_cpu_replace_ttbr1(init_pg_dir);
@@ -233,8 +234,12 @@ static void __init map_kernel(u64 kaslr_offset, u64 va_offset)
 		map_segment(NULL, va_offset, _stext, _etext, text_prot, true);
 		map_segment(NULL, va_offset, __inittext_begin, __inittext_end,
 			    text_prot, false);
-		dsb(ishst);
 	}
+
+	/* Copy the root page table to its final location */
+	memcpy((void *)swapper_pg_dir + va_offset, init_pg_dir, PGD_SIZE);
+	dsb(ishst);
+	idmap_cpu_replace_ttbr1(swapper_pg_dir);
 }
 
 asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt)
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index e969e68de005fd2a..df98f496539f0e39 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -184,21 +184,6 @@ static void __init kasan_map_populate(unsigned long start, unsigned long end,
 	kasan_pgd_populate(start & PAGE_MASK, PAGE_ALIGN(end), node, false);
 }
 
-/*
- * Copy the current shadow region into a new pgdir.
- */
-void __init kasan_copy_shadow(pgd_t *pgdir)
-{
-	pgd_t *pgdp, *pgdp_new, *pgdp_end;
-
-	pgdp = pgd_offset_k(KASAN_SHADOW_START);
-	pgdp_end = pgd_offset_k(KASAN_SHADOW_END);
-	pgdp_new = pgd_offset_pgd(pgdir, KASAN_SHADOW_START);
-	do {
-		set_pgd(pgdp_new, READ_ONCE(*pgdp));
-	} while (pgdp++, pgdp_new++, pgdp != pgdp_end);
-}
-
 static void __init clear_pgds(unsigned long start,
 			unsigned long end)
 {
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 68e66b979fc3ac5d..6942255056aed5ae 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -635,9 +635,9 @@ void mark_rodata_ro(void)
 	debug_checkwx();
 }
 
-static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end,
-				      pgprot_t prot, struct vm_struct *vma,
-				      int flags, unsigned long vm_flags)
+static void __init declare_vma(struct vm_struct *vma,
+			       void *va_start, void *va_end,
+			       unsigned long vm_flags)
 {
 	phys_addr_t pa_start = __pa_symbol(va_start);
 	unsigned long size = va_end - va_start;
@@ -645,9 +645,6 @@ static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end,
 	BUG_ON(!PAGE_ALIGNED(pa_start));
 	BUG_ON(!PAGE_ALIGNED(size));
 
-	__create_pgd_mapping(pgdp, pa_start, (unsigned long)va_start, size, prot,
-			     early_pgtable_alloc, flags);
-
 	if (!(vm_flags & VM_NO_GUARD))
 		size += PAGE_SIZE;
 
@@ -692,87 +689,17 @@ core_initcall(map_entry_trampoline);
 #endif
 
 /*
- * Open coded check for BTI, only for use to determine configuration
- * for early mappings for before the cpufeature code has run.
+ * Declare the VMA areas for the kernel
  */
-static bool arm64_early_this_cpu_has_bti(void)
+static void __init declare_kernel_vmas(void)
 {
-	u64 pfr1;
-
-	if (!IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
-		return false;
-
-	pfr1 = __read_sysreg_by_encoding(SYS_ID_AA64PFR1_EL1);
-	return cpuid_feature_extract_unsigned_field(pfr1,
-						    ID_AA64PFR1_EL1_BT_SHIFT);
-}
-
-/*
- * Create fine-grained mappings for the kernel.
- */
-static void __init map_kernel(pgd_t *pgdp)
-{
-	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_inittext,
-				vmlinux_initdata, vmlinux_data;
-
-	/*
-	 * External debuggers may need to write directly to the text
-	 * mapping to install SW breakpoints. Allow this (only) when
-	 * explicitly requested with rodata=off.
-	 */
-	pgprot_t text_prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;
-
-	/*
-	 * If we have a CPU that supports BTI and a kernel built for
-	 * BTI then mark the kernel executable text as guarded pages
-	 * now so we don't have to rewrite the page tables later.
-	 */
-	if (arm64_early_this_cpu_has_bti())
-		text_prot = __pgprot_modify(text_prot, PTE_GP, PTE_GP);
+	static struct vm_struct vmlinux_seg[KERNEL_SEGMENT_COUNT];
 
-	/*
-	 * Only rodata will be remapped with different permissions later on,
-	 * all other segments are allowed to use contiguous mappings.
-	 */
-	map_kernel_segment(pgdp, _stext, _etext, text_prot, &vmlinux_text, 0,
-			   VM_NO_GUARD);
-	map_kernel_segment(pgdp, __start_rodata, __inittext_begin, PAGE_KERNEL,
-			   &vmlinux_rodata, NO_CONT_MAPPINGS, VM_NO_GUARD);
-	map_kernel_segment(pgdp, __inittext_begin, __inittext_end, text_prot,
-			   &vmlinux_inittext, 0, VM_NO_GUARD);
-	map_kernel_segment(pgdp, __initdata_begin, __initdata_end, PAGE_KERNEL,
-			   &vmlinux_initdata, 0, VM_NO_GUARD);
-	map_kernel_segment(pgdp, _data, _end, PAGE_KERNEL, &vmlinux_data, 0, 0);
-
-	if (!READ_ONCE(pgd_val(*pgd_offset_pgd(pgdp, FIXADDR_START)))) {
-		/*
-		 * The fixmap falls in a separate pgd to the kernel, and doesn't
-		 * live in the carveout for the swapper_pg_dir. We can simply
-		 * re-use the existing dir for the fixmap.
-		 */
-		set_pgd(pgd_offset_pgd(pgdp, FIXADDR_START),
-			READ_ONCE(*pgd_offset_k(FIXADDR_START)));
-	} else if (CONFIG_PGTABLE_LEVELS > 3) {
-		pgd_t *bm_pgdp;
-		p4d_t *bm_p4dp;
-		pud_t *bm_pudp;
-		/*
-		 * The fixmap shares its top level pgd entry with the kernel
-		 * mapping. This can really only occur when we are running
-		 * with 16k/4 levels, so we can simply reuse the pud level
-		 * entry instead.
-		 */
-		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
-		bm_pgdp = pgd_offset_pgd(pgdp, FIXADDR_START);
-		bm_p4dp = p4d_offset(bm_pgdp, FIXADDR_START);
-		bm_pudp = pud_set_fixmap_offset(bm_p4dp, FIXADDR_START);
-		pud_populate(&init_mm, bm_pudp, lm_alias(bm_pmd));
-		pud_clear_fixmap();
-	} else {
-		BUG();
-	}
-
-	kasan_copy_shadow(pgdp);
+	declare_vma(&vmlinux_seg[0], _stext, _etext, VM_NO_GUARD);
+	declare_vma(&vmlinux_seg[1], __start_rodata, __inittext_begin, VM_NO_GUARD);
+	declare_vma(&vmlinux_seg[2], __inittext_begin, __inittext_end, VM_NO_GUARD);
+	declare_vma(&vmlinux_seg[3], __initdata_begin, __initdata_end, VM_NO_GUARD);
+	declare_vma(&vmlinux_seg[4], _data, _end, 0);
 }
 
 static void __init create_idmap(void)
@@ -807,25 +734,14 @@ static void __init create_idmap(void)
 
 void __init paging_init(void)
 {
-	pgd_t *pgdp = pgd_set_fixmap(__pa_symbol(swapper_pg_dir));
-	extern pgd_t init_idmap_pg_dir[];
-
 	idmap_t0sz = 63UL - __fls(__pa_symbol(_end) | GENMASK(VA_BITS_MIN - 1, 0));
 
-	map_kernel(pgdp);
-	map_mem(pgdp);
-
-	pgd_clear_fixmap();
-
-	cpu_replace_ttbr1(lm_alias(swapper_pg_dir), init_idmap_pg_dir);
-	init_mm.pgd = swapper_pg_dir;
-
-	memblock_phys_free(__pa_symbol(init_pg_dir),
-			   __pa_symbol(init_pg_end) - __pa_symbol(init_pg_dir));
+	map_mem(swapper_pg_dir);
 
 	memblock_allow_resize();
 
 	create_idmap();
+	declare_kernel_vmas();
 }
 
 /*
-- 
2.35.1



* [PATCH v7 30/33] arm64: Revert "mm: provide idmap pointer to cpu_replace_ttbr1()"
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (28 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 29/33] arm64: mm: omit redundant remap of kernel image Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:11 ` [PATCH v7 31/33] arm64: mmu: Retire SWAPPER_BLOCK_xxx and related constants Ard Biesheuvel
                   ` (2 subsequent siblings)
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

This reverts commit 1682c45b920643c, which is no longer needed now that
we create the permanent kernel mapping directly during early boot.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/mmu_context.h | 13 ++++---------
 arch/arm64/kernel/cpufeature.c       |  2 +-
 arch/arm64/kernel/suspend.c          |  2 +-
 arch/arm64/mm/kasan_init.c           |  4 ++--
 4 files changed, 8 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index d3f8b5df0c1fe315..3c80c34f14e152d9 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -105,18 +105,13 @@ static inline void cpu_uninstall_idmap(void)
 		cpu_switch_mm(mm->pgd, mm);
 }
 
-static inline void __cpu_install_idmap(pgd_t *idmap)
+static inline void cpu_install_idmap(void)
 {
 	cpu_set_reserved_ttbr0();
 	local_flush_tlb_all();
 	cpu_set_idmap_tcr_t0sz();
 
-	cpu_switch_mm(lm_alias(idmap), &init_mm);
-}
-
-static inline void cpu_install_idmap(void)
-{
-	__cpu_install_idmap(idmap_pg_dir);
+	cpu_switch_mm(lm_alias(idmap_pg_dir), &init_mm);
 }
 
 /*
@@ -147,7 +142,7 @@ static inline void cpu_install_ttbr0(phys_addr_t ttbr0, unsigned long t0sz)
  * Atomically replaces the active TTBR1_EL1 PGD with a new VA-compatible PGD,
  * avoiding the possibility of conflicting TLB entries being allocated.
  */
-static inline void cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap)
+static inline void cpu_replace_ttbr1(pgd_t *pgdp)
 {
 	typedef void (ttbr_replace_func)(phys_addr_t);
 	extern ttbr_replace_func idmap_cpu_replace_ttbr1;
@@ -170,7 +165,7 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap)
 
 	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
 
-	__cpu_install_idmap(idmap);
+	cpu_install_idmap();
 	replace_phys(ttbr1);
 	cpu_uninstall_idmap();
 }
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 08ab04dc9393652a..eca9df123a8b354b 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3348,7 +3348,7 @@ subsys_initcall_sync(init_32bit_el0_mask);
 
 static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
 {
-	cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir);
+	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 }
 
 /*
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index 8b02d310838f9240..033cd080af680c2e 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -54,7 +54,7 @@ void notrace __cpu_suspend_exit(void)
 
 	/* Restore CnP bit in TTBR1_EL1 */
 	if (system_supports_cnp())
-		cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir);
+		cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
 	/*
 	 * PSTATE was not saved over suspend/resume, re-enable any detected
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index df98f496539f0e39..7e32f21fb8e1e227 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -221,7 +221,7 @@ static void __init kasan_init_shadow(void)
 	 */
 	memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(tmp_pg_dir));
 	dsb(ishst);
-	cpu_replace_ttbr1(lm_alias(tmp_pg_dir), idmap_pg_dir);
+	cpu_replace_ttbr1(lm_alias(tmp_pg_dir));
 
 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
 
@@ -265,7 +265,7 @@ static void __init kasan_init_shadow(void)
 				PAGE_KERNEL_RO));
 
 	memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
-	cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir);
+	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 }
 
 static void __init kasan_init_depth(void)
-- 
2.35.1



* [PATCH v7 31/33] arm64: mmu: Retire SWAPPER_BLOCK_xxx and related constants
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (29 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 30/33] arm64: Revert "mm: provide idmap pointer to cpu_replace_ttbr1()" Ard Biesheuvel
@ 2022-11-11 17:11 ` Ard Biesheuvel
  2022-11-11 17:12 ` [PATCH v7 32/33] mm: add arch hook to validate mmap() prot flags Ard Biesheuvel
  2022-11-11 17:12 ` [PATCH v7 33/33] arm64: mm: add support for WXN memory translation attribute Ard Biesheuvel
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:11 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

Initially, the kernel mapping as well as the ID map used block mappings
on 4k pagesize configurations, but this hasn't been the case for a long
time.

Currently, only the initial ID map uses the larger granularity, to
simplify the early mapping code, which is implemented in assembler. The
permanent ID map as well as the kernel mapping (which is now created
only once) always map the kernel down to pages.

This means the SWAPPER_BLOCK_xxx and related constants are no longer
named appropriately, so let's rename them to INIT_IDMAP_BLOCK_xxx
instead.

Get rid of a stale comment while at it.
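
At a glance, the renames are:

	ARM64_KERNEL_USES_PMD_MAPS  ->  INIT_IDMAP_USES_PMD_MAPS
	SWAPPER_PGTABLE_LEVELS      ->  INIT_IDMAP_TABLE_LEVELS
	SWAPPER_BLOCK_SHIFT/_SIZE   ->  INIT_IDMAP_BLOCK_SHIFT/_SIZE
	SWAPPER_TABLE_SHIFT         ->  INIT_IDMAP_TABLE_SHIFT
	SWAPPER_RW/RX_MMUFLAGS      ->  INIT_IDMAP_RW/RX_MMUFLAGS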

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/kernel-pgtable.h | 60 +++++++-------------
 arch/arm64/kernel/head.S                | 36 ++++++------
 arch/arm64/mm/proc.S                    |  2 +-
 3 files changed, 40 insertions(+), 58 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index ed0db7fc0022d34e..4278cd088347fefd 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -19,28 +19,13 @@
  * 64K (section size = 512M).
  */
 #ifdef CONFIG_ARM64_4K_PAGES
-#define ARM64_KERNEL_USES_PMD_MAPS 1
+#define INIT_IDMAP_USES_PMD_MAPS	1
+#define INIT_IDMAP_TABLE_LEVELS		(CONFIG_PGTABLE_LEVELS - 1)
 #else
-#define ARM64_KERNEL_USES_PMD_MAPS 0
+#define INIT_IDMAP_USES_PMD_MAPS	0
+#define INIT_IDMAP_TABLE_LEVELS		(CONFIG_PGTABLE_LEVELS)
 #endif
 
-/*
- * The idmap and swapper page tables need some space reserved in the kernel
- * image. Both require pgd, pud (4 levels only) and pmd tables to (section)
- * map the kernel. With the 64K page configuration, swapper and idmap need to
- * map to pte level. The swapper also maps the FDT (see __create_page_tables
- * for more information). Note that the number of ID map translation levels
- * could be increased on the fly if system RAM is out of reach for the default
- * VA range, so pages required to map highest possible PA are reserved in all
- * cases.
- */
-#if ARM64_KERNEL_USES_PMD_MAPS
-#define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS - 1)
-#else
-#define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS)
-#endif
-
-
 /*
  * If KASLR is enabled, then an offset K is added to the kernel address
  * space. The bottom 21 bits of this offset are zero to guarantee 2MB
@@ -69,14 +54,14 @@
 
 #define EARLY_PGDS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, PGDIR_SHIFT, add))
 
-#if SWAPPER_PGTABLE_LEVELS > 3
+#if INIT_IDMAP_TABLE_LEVELS > 3
 #define EARLY_PUDS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, PUD_SHIFT, add))
 #else
 #define EARLY_PUDS(vstart, vend, add) (0)
 #endif
 
-#if SWAPPER_PGTABLE_LEVELS > 2
-#define EARLY_PMDS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, SWAPPER_TABLE_SHIFT, add))
+#if INIT_IDMAP_TABLE_LEVELS > 2
+#define EARLY_PMDS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, INIT_IDMAP_TABLE_SHIFT, add))
 #else
 #define EARLY_PMDS(vstart, vend, add) (0)
 #endif
@@ -93,23 +78,23 @@
 #else
 #define INIT_IDMAP_DIR_SIZE	(INIT_IDMAP_DIR_PAGES * PAGE_SIZE)
 #endif
-#define INIT_IDMAP_DIR_PAGES	EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, 1)
+#define INIT_IDMAP_DIR_PAGES	EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + INIT_IDMAP_BLOCK_SIZE, 1)
 
 /* Initial memory map size */
-#if ARM64_KERNEL_USES_PMD_MAPS
-#define SWAPPER_BLOCK_SHIFT	PMD_SHIFT
-#define SWAPPER_BLOCK_SIZE	PMD_SIZE
-#define SWAPPER_TABLE_SHIFT	PUD_SHIFT
+#if INIT_IDMAP_USES_PMD_MAPS
+#define INIT_IDMAP_BLOCK_SHIFT	PMD_SHIFT
+#define INIT_IDMAP_BLOCK_SIZE	PMD_SIZE
+#define INIT_IDMAP_TABLE_SHIFT	PUD_SHIFT
 #else
-#define SWAPPER_BLOCK_SHIFT	PAGE_SHIFT
-#define SWAPPER_BLOCK_SIZE	PAGE_SIZE
-#define SWAPPER_TABLE_SHIFT	PMD_SHIFT
+#define INIT_IDMAP_BLOCK_SHIFT	PAGE_SHIFT
+#define INIT_IDMAP_BLOCK_SIZE	PAGE_SIZE
+#define INIT_IDMAP_TABLE_SHIFT	PMD_SHIFT
 #endif
 
 /* The number of segments in the kernel image (text, rodata, inittext, initdata, data+bss) */
 #define KERNEL_SEGMENT_COUNT	5
 
-#if SWAPPER_BLOCK_SIZE > SEGMENT_ALIGN
+#if INIT_IDMAP_BLOCK_SIZE > SEGMENT_ALIGN
 #define EARLY_SEGMENT_EXTRA_PAGES (KERNEL_SEGMENT_COUNT + 1)
 #else
 #define EARLY_SEGMENT_EXTRA_PAGES 0
@@ -118,15 +103,12 @@
 /*
  * Initial memory map attributes.
  */
-#define SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
-#define SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
-
-#if ARM64_KERNEL_USES_PMD_MAPS
-#define SWAPPER_RW_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
-#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PMD_SECT_RDONLY)
+#if INIT_IDMAP_USES_PMD_MAPS
+#define INIT_IDMAP_RW_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+#define INIT_IDMAP_RX_MMUFLAGS	(INIT_IDMAP_RW_MMUFLAGS | PMD_SECT_RDONLY)
 #else
-#define SWAPPER_RW_MMUFLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
-#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PTE_RDONLY)
+#define INIT_IDMAP_RW_MMUFLAGS	(PTE_ATTRINDX(MT_NORMAL) | PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+#define INIT_IDMAP_RX_MMUFLAGS	(INIT_IDMAP_RW_MMUFLAGS | PTE_RDONLY)
 #endif
 
 /*
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 6e730a0be1e8196d..3bc96ef82f0f74e4 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -214,23 +214,23 @@ SYM_CODE_END(preserve_boot_args)
 	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
 	mov \tbl, \sv
 
-#if SWAPPER_PGTABLE_LEVELS > 3
+#if INIT_IDMAP_TABLE_LEVELS > 3
 	compute_indices \vstart, \vend, #PUD_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count
 	mov \sv, \rtbl
 	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
 	mov \tbl, \sv
 #endif
 
-#if SWAPPER_PGTABLE_LEVELS > 2
-	compute_indices \vstart, \vend, #SWAPPER_TABLE_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count
+#if INIT_IDMAP_TABLE_LEVELS > 2
+	compute_indices \vstart, \vend, #INIT_IDMAP_TABLE_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count
 	mov \sv, \rtbl
 	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
 	mov \tbl, \sv
 #endif
 
-	compute_indices \vstart, \vend, #SWAPPER_BLOCK_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count
-	bic \rtbl, \phys, #SWAPPER_BLOCK_SIZE - 1
-	populate_entries \tbl, \rtbl, \istart, \iend, \flags, #SWAPPER_BLOCK_SIZE, \tmp
+	compute_indices \vstart, \vend, #INIT_IDMAP_BLOCK_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count
+	bic \rtbl, \phys, #INIT_IDMAP_BLOCK_SIZE - 1
+	populate_entries \tbl, \rtbl, \istart, \iend, \flags, #INIT_IDMAP_BLOCK_SIZE, \tmp
 	.endm
 
 /*
@@ -317,8 +317,8 @@ SYM_FUNC_START_LOCAL(create_idmap)
 #endif
 	adrp	x0, init_idmap_pg_dir
 	adrp	x3, _text
-	adrp	x6, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE
-	mov	x7, SWAPPER_RX_MMUFLAGS
+	adrp	x6, _end + MAX_FDT_SIZE + INIT_IDMAP_BLOCK_SIZE
+	mov	x7, INIT_IDMAP_RX_MMUFLAGS
 
 	map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT
 
@@ -326,20 +326,20 @@ SYM_FUNC_START_LOCAL(create_idmap)
 	adrp	x1, _text
 	adrp	x2, __bss_start
 	adrp	x3, _end
-	bic	x4, x2, #SWAPPER_BLOCK_SIZE - 1
-	mov	x5, SWAPPER_RW_MMUFLAGS
-	mov	x6, #SWAPPER_BLOCK_SHIFT
+	bic	x4, x2, #INIT_IDMAP_BLOCK_SIZE - 1
+	mov	x5, INIT_IDMAP_RW_MMUFLAGS
+	mov	x6, #INIT_IDMAP_BLOCK_SHIFT
 	bl	remap_region
 
 	/* Remap the FDT after the kernel image */
 	adrp	x1, _text
-	adrp	x22, _end + SWAPPER_BLOCK_SIZE
-	bic	x2, x22, #SWAPPER_BLOCK_SIZE - 1
-	bfi	x22, x21, #0, #SWAPPER_BLOCK_SHIFT		// remapped FDT address
-	add	x3, x2, #MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE
-	bic	x4, x21, #SWAPPER_BLOCK_SIZE - 1
-	mov	x5, SWAPPER_RW_MMUFLAGS
-	mov	x6, #SWAPPER_BLOCK_SHIFT
+	adrp	x22, _end + INIT_IDMAP_BLOCK_SIZE
+	bic	x2, x22, #INIT_IDMAP_BLOCK_SIZE - 1
+	bfi	x22, x21, #0, #INIT_IDMAP_BLOCK_SHIFT		// remapped FDT address
+	add	x3, x2, #MAX_FDT_SIZE + INIT_IDMAP_BLOCK_SIZE
+	bic	x4, x21, #INIT_IDMAP_BLOCK_SIZE - 1
+	mov	x5, INIT_IDMAP_RW_MMUFLAGS
+	mov	x6, #INIT_IDMAP_BLOCK_SHIFT
 	bl	remap_region
 
 	/*
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index f0db2c05e797aeed..b596a39394ba5363 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -206,7 +206,7 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 
-#define KPTI_NG_PTE_FLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
+#define KPTI_NG_PTE_FLAGS  (PTE_ATTRINDX(MT_NORMAL) | PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
 
 	.pushsection ".idmap.text", "awx"
 
-- 
2.35.1



* [PATCH v7 32/33] mm: add arch hook to validate mmap() prot flags
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (30 preceding siblings ...)
  2022-11-11 17:11 ` [PATCH v7 31/33] arm64: mmu: Retire SWAPPER_BLOCK_xxx and related constants Ard Biesheuvel
@ 2022-11-11 17:12 ` Ard Biesheuvel
  2022-11-11 17:12 ` [PATCH v7 33/33] arm64: mm: add support for WXN memory translation attribute Ard Biesheuvel
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:12 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

Add a hook to permit architectures to perform validation on the prot
flags passed to mmap(), like arch_validate_prot() does for mprotect().
This will be used by arm64 to reject PROT_WRITE+PROT_EXEC mappings on
configurations that run with WXN enabled.
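
As a rough illustration of the intended usage, an architecture that wants to
refuse such mappings could provide something along these lines in its
asm/mman.h (hypothetical sketch only, not part of this series; the actual
arm64 hook follows in the next patch):

static inline bool arch_validate_mmap_prot(unsigned long prot,
					   unsigned long addr)
{
	/* Refuse mappings that ask for write and exec at the same time. */
	if ((prot & (PROT_WRITE | PROT_EXEC)) == (PROT_WRITE | PROT_EXEC))
		return false;
	return true;
}
#define arch_validate_mmap_prot arch_validate_mmap_prot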

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 include/linux/mman.h | 15 +++++++++++++++
 mm/mmap.c            |  3 +++
 2 files changed, 18 insertions(+)

diff --git a/include/linux/mman.h b/include/linux/mman.h
index 58b3abd457a38df4..53ac72310ce0935d 100644
--- a/include/linux/mman.h
+++ b/include/linux/mman.h
@@ -120,6 +120,21 @@ static inline bool arch_validate_flags(unsigned long flags)
 #define arch_validate_flags arch_validate_flags
 #endif
 
+#ifndef arch_validate_mmap_prot
+/*
+ * This is called from mmap(), which ignores unknown prot bits so the default
+ * is to accept anything.
+ *
+ * Returns true if the prot flags are valid
+ */
+static inline bool arch_validate_mmap_prot(unsigned long prot,
+					   unsigned long addr)
+{
+	return true;
+}
+#define arch_validate_mmap_prot arch_validate_mmap_prot
+#endif
+
 /*
  * Optimisation macro.  It is equivalent to:
  *      (x & bit1) ? bit2 : 0
diff --git a/mm/mmap.c b/mm/mmap.c
index 2def55555e05f103..cb82740b7527680b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1262,6 +1262,9 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 		if (!(file && path_noexec(&file->f_path)))
 			prot |= PROT_EXEC;
 
+	if (!arch_validate_mmap_prot(prot, addr))
+		return -EACCES;
+
 	/* force arch specific MAP_FIXED handling in get_unmapped_area */
 	if (flags & MAP_FIXED_NOREPLACE)
 		flags |= MAP_FIXED;
-- 
2.35.1



* [PATCH v7 33/33] arm64: mm: add support for WXN memory translation attribute
  2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
                   ` (31 preceding siblings ...)
  2022-11-11 17:12 ` [PATCH v7 32/33] mm: add arch hook to validate mmap() prot flags Ard Biesheuvel
@ 2022-11-11 17:12 ` Ard Biesheuvel
  32 siblings, 0 replies; 35+ messages in thread
From: Ard Biesheuvel @ 2022-11-11 17:12 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
	Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual

The AArch64 virtual memory system supports a global WXN control, which
can be enabled to make all writable mappings implicitly no-exec. This is
a useful hardening feature, as it prevents mistakes in managing page
table permissions from being exploited to attack the system.

When enabled at EL1, the restrictions apply to both EL1 and EL0. EL1 is
completely under our control, and has been cleaned up to allow WXN to be
enabled from boot onwards. EL0 is not under our control, but given that
widely deployed security features such as selinux or PaX already limit
the ability of user space to create mappings that are writable and
executable at the same time, the impact of enabling this for EL0 is
expected to be limited. (For this reason, common user space libraries
that have a legitimate need for manipulating executable code already
carry fallbacks such as [0].)

If enabled at compile time, the feature can still be disabled at boot if
needed, by passing arm64.nowxn on the kernel command line.

[0] https://github.com/libffi/libffi/blob/master/src/closures.c#L440
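
As a sketch of how this surfaces to user space once WXN is active
(illustrative only, error handling trimmed), a request for a mapping that is
both writable and executable is refused, and callers are expected to fall
back to split views of the same pages:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		perror("mmap");		/* EACCES on a WXN-enabled kernel */

	/* Fallback: map the pages twice, once RW for writing, once RX. */
	return 0;
}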

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
 arch/arm64/Kconfig                    | 11 ++++++
 arch/arm64/include/asm/cpufeature.h   | 10 ++++++
 arch/arm64/include/asm/mman.h         | 36 ++++++++++++++++++++
 arch/arm64/include/asm/mmu_context.h  | 30 +++++++++++++++-
 arch/arm64/kernel/pi/idreg-override.c |  4 ++-
 arch/arm64/kernel/pi/map_kernel.c     | 24 +++++++++++++
 arch/arm64/mm/proc.S                  |  6 ++++
 7 files changed, 119 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 170832f31eff4567..79ec4bc05694acec 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1509,6 +1509,17 @@ config RODATA_FULL_DEFAULT_ENABLED
 	  This requires the linear region to be mapped down to pages,
 	  which may adversely affect performance in some cases.
 
+config ARM64_WXN
+	bool "Enable WXN attribute so all writable mappings are non-exec"
+	help
+	  Set the WXN bit in the SCTLR system register so that all writable
+	  mappings are treated as if the PXN/UXN bit is set as well.
+	  If this is set to Y, it can still be disabled at runtime by
+	  passing 'arm64.nowxn' on the kernel command line.
+
+	  This should only be set if no software needs to be supported that
+	  relies on being able to execute from writable mappings.
+
 config ARM64_SW_TTBR0_PAN
 	bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
 	help
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index b8c7a2d13bbe44e2..4b5c639a5a0a7fab 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -17,6 +17,7 @@
 
 #define ARM64_SW_FEATURE_OVERRIDE_NOKASLR	0
 #define ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF	4
+#define ARM64_SW_FEATURE_OVERRIDE_NOWXN		8
 
 #ifndef __ASSEMBLY__
 
@@ -919,6 +920,15 @@ extern struct arm64_ftr_override id_aa64isar2_override;
 
 extern struct arm64_ftr_override arm64_sw_feature_override;
 
+static inline bool arm64_wxn_enabled(void)
+{
+	if (!IS_ENABLED(CONFIG_ARM64_WXN) ||
+	    cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val,
+						 ARM64_SW_FEATURE_OVERRIDE_NOWXN))
+		return false;
+	return true;
+}
+
 u32 get_kvm_ipa_limit(void);
 void dump_cpu_features(void);
 
diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
index 5966ee4a61542edf..6d4940342ba73060 100644
--- a/arch/arm64/include/asm/mman.h
+++ b/arch/arm64/include/asm/mman.h
@@ -35,11 +35,40 @@ static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
 }
 #define arch_calc_vm_flag_bits(flags) arch_calc_vm_flag_bits(flags)
 
+static inline bool arm64_check_wx_prot(unsigned long prot,
+				       struct task_struct *tsk)
+{
+	/*
+	 * When we are running with SCTLR_ELx.WXN==1, writable mappings are
+	 * implicitly non-executable. This means we should reject such mappings
+	 * when user space attempts to create them using mmap() or mprotect().
+	 */
+	if (arm64_wxn_enabled() &&
+	    ((prot & (PROT_WRITE | PROT_EXEC)) == (PROT_WRITE | PROT_EXEC))) {
+		/*
+		 * User space libraries such as libffi carry elaborate
+		 * heuristics to decide whether it is worth it to even attempt
+		 * to create writable executable mappings, as PaX or selinux
+		 * enabled systems will outright reject it. They will usually
+		 * fall back to something else (e.g., two separate shared
+		 * mmap()s of a temporary file) on failure.
+		 */
+		pr_info_ratelimited(
+			"process %s (%d) attempted to create PROT_WRITE+PROT_EXEC mapping\n",
+			tsk->comm, tsk->pid);
+		return false;
+	}
+	return true;
+}
+
 static inline bool arch_validate_prot(unsigned long prot,
 	unsigned long addr __always_unused)
 {
 	unsigned long supported = PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM;
 
+	if (!arm64_check_wx_prot(prot, current))
+		return false;
+
 	if (system_supports_bti())
 		supported |= PROT_BTI;
 
@@ -50,6 +79,13 @@ static inline bool arch_validate_prot(unsigned long prot,
 }
 #define arch_validate_prot(prot, addr) arch_validate_prot(prot, addr)
 
+static inline bool arch_validate_mmap_prot(unsigned long prot,
+					   unsigned long addr)
+{
+	return arm64_check_wx_prot(prot, current);
+}
+#define arch_validate_mmap_prot arch_validate_mmap_prot
+
 static inline bool arch_validate_flags(unsigned long vm_flags)
 {
 	if (!system_supports_mte())
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 3c80c34f14e152d9..4c20f7fc8abdbef9 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -19,13 +19,41 @@
 #include <asm/cacheflush.h>
 #include <asm/cpufeature.h>
 #include <asm/proc-fns.h>
-#include <asm-generic/mm_hooks.h>
 #include <asm/cputype.h>
 #include <asm/sysreg.h>
 #include <asm/tlbflush.h>
 
 extern bool rodata_full;
 
+static inline int arch_dup_mmap(struct mm_struct *oldmm,
+				struct mm_struct *mm)
+{
+	return 0;
+}
+
+static inline void arch_exit_mmap(struct mm_struct *mm)
+{
+}
+
+static inline void arch_unmap(struct mm_struct *mm,
+			unsigned long start, unsigned long end)
+{
+}
+
+static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
+		bool write, bool execute, bool foreign)
+{
+	if (IS_ENABLED(CONFIG_ARM64_WXN) && execute &&
+	    (vma->vm_flags & (VM_WRITE | VM_EXEC)) == (VM_WRITE | VM_EXEC)) {
+		pr_warn_ratelimited(
+			"process %s (%d) attempted to execute from writable memory\n",
+			current->comm, current->pid);
+		/* disallow unless the nowxn override is set */
+		return !arm64_wxn_enabled();
+	}
+	return true;
+}
+
 static inline void contextidr_thread_switch(struct task_struct *next)
 {
 	if (!IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR))
diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c
index d0ce3dc4e07aaf4d..662c3d21e150e7f9 100644
--- a/arch/arm64/kernel/pi/idreg-override.c
+++ b/arch/arm64/kernel/pi/idreg-override.c
@@ -136,6 +136,7 @@ DEFINE_OVERRIDE(5, smfr0, "id_aa64smfr0", id_aa64smfr0_override,
 DEFINE_OVERRIDE(6, sw_features, "arm64_sw", arm64_sw_feature_override,
 		FIELD("nokaslr", ARM64_SW_FEATURE_OVERRIDE_NOKASLR),
 		FIELD("rodataoff", ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF),
+		FIELD("nowxn", ARM64_SW_FEATURE_OVERRIDE_NOWXN),
 		{});
 
 /*
@@ -167,7 +168,8 @@ static const struct {
 	  "id_aa64isar2.gpa3=0 id_aa64isar2.apa3=0"	   },
 	{ "arm64.nomte",		"id_aa64pfr1.mte=0" },
 	{ "nokaslr",			"arm64_sw.nokaslr=1" },
-	{ "rodata=off",			"arm64_sw.rodataoff=1" },
+	{ "rodata=off",			"arm64_sw.rodataoff=1 arm64_sw.nowxn=1" },
+	{ "arm64.nowxn",		"arm64_sw.nowxn=1" },
 };
 
 static int __init find_field(const char *cmdline, char *opt, int len,
diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c
index 4b604b104460c3ef..2bbf017147830bbe 100644
--- a/arch/arm64/kernel/pi/map_kernel.c
+++ b/arch/arm64/kernel/pi/map_kernel.c
@@ -242,6 +242,25 @@ static void __init map_kernel(u64 kaslr_offset, u64 va_offset)
 	idmap_cpu_replace_ttbr1(swapper_pg_dir);
 }
 
+static void noinline __section(".idmap.text") disable_wxn(void)
+{
+	u64 sctlr = read_sysreg(sctlr_el1) & ~SCTLR_ELx_WXN;
+
+	/*
+	 * We cannot safely clear the WXN bit while the MMU and caches are on,
+	 * so turn the MMU off, flush the TLBs and turn it on again but with
+	 * the WXN bit cleared this time.
+	 */
+	asm("	msr	sctlr_el1, %0		;"
+	    "	isb				;"
+	    "	tlbi    vmalle1			;"
+	    "	dsb     nsh			;"
+	    "	isb				;"
+	    "	msr     sctlr_el1, %1		;"
+	    "	isb				;"
+	    ::	"r"(sctlr & ~SCTLR_ELx_M), "r"(sctlr));
+}
+
 asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt)
 {
 	static char const chosen_str[] __initconst = "/chosen";
@@ -255,6 +274,11 @@ asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt)
 	/* Parse the command line for CPU feature overrides */
 	init_feature_override(boot_status, fdt, chosen);
 
+	if (IS_ENABLED(CONFIG_ARM64_WXN) &&
+	    cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val,
+						 ARM64_SW_FEATURE_OVERRIDE_NOWXN))
+		disable_wxn();
+
 	/*
 	 * The virtual KASLR displacement modulo 2MiB is decided by the
 	 * physical placement of the image, as otherwise, we might not be able
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index b596a39394ba5363..9d8d9d637105c200 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -465,6 +465,12 @@ SYM_FUNC_START(__cpu_setup)
 	 * Prepare SCTLR
 	 */
 	mov_q	x0, INIT_SCTLR_EL1_MMU_ON
+#ifdef CONFIG_ARM64_WXN
+	ldr_l	x1, arm64_sw_feature_override + FTR_OVR_VAL_OFFSET
+	tst	x1, #0xf << ARM64_SW_FEATURE_OVERRIDE_NOWXN
+	orr	x1, x0, #SCTLR_ELx_WXN
+	csel	x0, x0, x1, ne
+#endif
 	ret					// return to head.S
 
 	.unreq	mair
-- 
2.35.1



* Re: [PATCH v7 02/33] arm64: mm: Avoid swapper block size when choosing vmemmap granularity
  2022-11-11 17:11 ` [PATCH v7 02/33] arm64: mm: Avoid swapper block size when choosing vmemmap granularity Ard Biesheuvel
@ 2022-11-24  5:11   ` Anshuman Khandual
  0 siblings, 0 replies; 35+ messages in thread
From: Anshuman Khandual @ 2022-11-24  5:11 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-arm-kernel
  Cc: Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook,
	Catalin Marinas, Mark Brown



On 11/11/22 22:41, Ard Biesheuvel wrote:
> The logic to decide between PTE and PMD mappings in the vmemmap region
> is currently based on the granularity of the initial ID map but those
> things have little to do with each other.
> 
> The reason we use PMDs here on 4k pagesize kernels is because a struct
> page array describing a single section of memory takes up at least the
> size described by a PMD, and so mapping down to pages is pointless.
> 
> So use the correct conditional, and add a comment to clarify it.
> 
> This allows us to remove or rename the swapper block size related
> constants in the future.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---

The patch LGTM in itself.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>

>  arch/arm64/mm/mmu.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 757c2fe54d2e99f0..0c35e1f195678695 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1196,7 +1196,12 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>  
>  	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>  
> -	if (!ARM64_KERNEL_USES_PMD_MAPS)
> +	/*
> +	 * Use page mappings for the vmemmap region if the area taken up by a
> +	 * struct page array covering a single section is smaller than the area
> +	 * covered by a PMD.
> +	 */
> +	if (SECTION_SIZE_BITS - VMEMMAP_SHIFT < PMD_SHIFT)
>  		return vmemmap_populate_basepages(start, end, node, altmap);
>  
>  	do {
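
For example, with 4k pages and 128 MiB sections, and assuming a 64-byte
struct page, a section holds 2^15 pages, so its struct page array occupies
2^15 * 64 bytes = 2 MiB, i.e. exactly one PMD, and PMD mappings are used;
with 16k or 64k pages the per-section array stays well below PMD_SIZE, so
base pages are used instead.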


end of thread, other threads:[~2022-11-24  5:12 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-11 17:11 [PATCH v7 00/33] arm64: robustify boot sequence and add support for WXN Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 01/33] arm64: mm: Avoid SWAPPER_BLOCK_xxx constants in FDT fixmap logic Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 02/33] arm64: mm: Avoid swapper block size when choosing vmemmap granularity Ard Biesheuvel
2022-11-24  5:11   ` Anshuman Khandual
2022-11-11 17:11 ` [PATCH v7 03/33] arm64: kaslr: don't pretend KASLR is enabled if offset < MIN_KIMG_ALIGN Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 04/33] arm64: kaslr: drop special case for ThunderX in kaslr_requires_kpti() Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 05/33] arm64: kernel: Disable latent_entropy GCC plugin in early C runtime Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 06/33] arm64: kernel: Add relocation check to code built under pi/ Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 07/33] arm64: kernel: Don't rely on objcopy to make code under pi/ __init Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 08/33] arm64: head: move relocation handling to C code Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 09/33] arm64: Turn kaslr_feature_override into a generic SW feature override Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 10/33] arm64: idreg-override: Omit non-NULL checks for override pointer Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 11/33] arm64: idreg-override: Use relative references to override variables Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 12/33] arm64: idreg-override: Use relative references to filter routines Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 13/33] arm64: idreg-override: Avoid parameq() and parameqn() Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 14/33] arm64: idreg-override: avoid strlen() to check for empty strings Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 15/33] arm64: idreg-override: Avoid sprintf() for simple string concatenation Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 16/33] arm64: idreg_override: Avoid kstrtou64() to parse a single hex digit Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 17/33] arm64: idreg-override: Move to early mini C runtime Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 18/33] arm64: kernel: Remove early fdt remap code Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 19/33] arm64: head: Clear BSS and the kernel page tables in one go Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 20/33] arm64: Move feature overrides into the BSS section Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 21/33] arm64: head: Run feature override detection before mapping the kernel Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 22/33] arm64: head: move dynamic shadow call stack patching into early C runtime Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 23/33] arm64: kaslr: Use feature override instead of parsing the cmdline again Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 24/33] arm64: idreg-override: Create a pseudo feature for rodata=off Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 25/33] arm64: head: allocate more pages for the kernel mapping Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 26/33] arm64: head: move memstart_offset_seed handling to C code Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 27/33] arm64: head: Move early kernel mapping routines into " Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 28/33] arm64: mm: avoid fixmap for early swapper_pg_dir updates Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 29/33] arm64: mm: omit redundant remap of kernel image Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 30/33] arm64: Revert "mm: provide idmap pointer to cpu_replace_ttbr1()" Ard Biesheuvel
2022-11-11 17:11 ` [PATCH v7 31/33] arm64: mmu: Retire SWAPPER_BLOCK_xxx and related constants Ard Biesheuvel
2022-11-11 17:12 ` [PATCH v7 32/33] mm: add arch hook to validate mmap() prot flags Ard Biesheuvel
2022-11-11 17:12 ` [PATCH v7 33/33] arm64: mm: add support for WXN memory translation attribute Ard Biesheuvel

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).