linux-arm-kernel.lists.infradead.org archive mirror
* [PATCHv3 0/4] arm64: simplify restrictions on bootloaders
@ 2014-06-19 10:49 Mark Rutland
  2014-06-19 10:49 ` [PATCHv3 1/4] arm64: head.S: remove unnecessary function alignment Mark Rutland
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Mark Rutland @ 2014-06-19 10:49 UTC (permalink / raw)
  To: linux-arm-kernel

Hi all,

This is version 3 of the boot simplification series I posted previously [1,4].
Many thanks to those who reviewed / tested the prior postings.

Changes since v2 [4]:
* Rebased to v3.16-rc1 now that the EFI changes have been merged. Fixed
  conflicts in arch/arm64/mm/init.c.
* Added flags field to export kernel endianness.
* Moved the linker script macros to image.h, to hide the magic behind a new
  HEAD_SYMBOLS macro.
* Fixed up the documentation, including some cleanups for the EFI changes. It
  now refers to v3.17, but I can update if this is delayed further.
* Renamed patch 3 as it has absorbed several Image header changes.
Changes since v1 [1]:
* Rebased to Matt Fleming's arm64-efi branch [2] to resolve conflict with the
  EFI stub series in head.S.
* Fixed random TEXT_OFFSET generation to be mawk compatible.
* Removed option to set TEXT_OFFSET explicitly per Catalin's request.
* Removed (invalid) recommendation of 1MB for dynamically initialised data.

For those who have not seen v1 or v2, a rationale and description of the series
follows:

Currently bootloaders have an extremely difficult time protecting memory from
the kernel, as the kernel may clobber memory below TEXT_OFFSET with pagetables,
and above the end of the kernel binary with the BSS.

This series attempts to ameliorate matters by adding a mechanism for bootloaders
to discover the minimum runtime footprint of the kernel image, including the BSS
and any other dynamically initialised data, and moving the initial page tables
into this region.

The currently ill-described image load offset variable is coerced to always be
little-endian. This means that bootloaders can actually make use of the field
for any kernel (either LE or BE), and as the field does not yet seem to be used
anywhere with endianness taken into account, I hope this is not problematic.
Documentation is updated with recommendations on handling the field. To aid in
encouraging bootloader authors to respect the field, an option is added to
randomize the text_offset field at link time, which may be used in test and/or
distribution kernels. So as to not break existing (but arguably broken) loaders
immediately, this option is hidden under kernel hacking and disabled by default.

The documentation is updated to cover how to use the new image_size field and
what to do if it is zero, and how to use the image_size field to determine
whether the text_offset field is guaranteed to be little-endian. In the absence
of an image_size field, it's not possible to provide a sensible default
value due to configuration-dependent variation -- a recent defconfig kernel had
a ~190KB BSS, while an allyesconfig build (with some features disabled due to
build breakages) had a ~13MB BSS.

A BE conditional 64-bit endianness swapping routine (DATA_LE64) is added to
vmlinux.lds.S, as the linker is the only place we can endianness swap a value
calculated from two symbols known only at link time. There are several existing
headers that do almost the same thing but due to use of C prototypes and/or
casts are not suitable for use in a linker script. A separate series may be able
to unify that.

I've given some light testing to text_offset fuzzing with an updated bootwrapper
[3] which reads the text_offset field at build time to ensure the kernel gets
loaded at the right address. Nothing else is yet moved however, so this may
explode if this location happens to overlap the bootwrapper code, DTB, or
spin-table mbox. I'll try to teach the bootwrapper how to deal with that
shortly.

Cheers,
Mark.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-May/257141.html
[2] git://git.kernel.org/pub/scm/linux/kernel/git/mfleming/efi.git arm64-efi
[3] http://linux-arm.org/git?p=boot-wrapper-aarch64.git;a=shortlog;h=refs/heads/unstable/text-offset
[4] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-May/260076.html

Mark Rutland (4):
  arm64: head.S: remove unnecessary function alignment
  arm64: place initial page tables above the kernel
  arm64: Update the Image header
  arm64: Enable TEXT_OFFSET fuzzing

 Documentation/arm64/booting.txt | 43 ++++++++++++++++++++++------
 arch/arm64/Kconfig.debug        | 16 +++++++++++
 arch/arm64/Makefile             |  4 +++
 arch/arm64/include/asm/image.h  | 62 +++++++++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/page.h   |  9 ++++++
 arch/arm64/kernel/head.S        | 49 +++++++++++++++-----------------
 arch/arm64/kernel/vmlinux.lds.S | 15 ++++++++++
 arch/arm64/mm/init.c            | 12 +++-----
 8 files changed, 168 insertions(+), 42 deletions(-)
 create mode 100644 arch/arm64/include/asm/image.h

-- 
1.9.1

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCHv3 1/4] arm64: head.S: remove unnecessary function alignment
  2014-06-19 10:49 [PATCHv3 0/4] arm64: simplify restrictions on bootloaders Mark Rutland
@ 2014-06-19 10:49 ` Mark Rutland
  2014-06-19 10:49 ` [PATCHv3 2/4] arm64: place initial page tables above the kernel Mark Rutland
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 12+ messages in thread
From: Mark Rutland @ 2014-06-19 10:49 UTC (permalink / raw)
  To: linux-arm-kernel

Currently __turn_mmu_on is aligned to 64 bytes to ensure that it doesn't
span any page boundary, which simplifies the idmap and spares us
requiring an additional page table to map half of the function. In
keeping with other important requirements in architecture code, this
fact is undocumented.

Additionally, as the function consists of three instructions totalling
12 bytes with no literal pool data, a smaller alignment of 16 bytes
would be sufficient.

This patch reduces the alignment to 16 bytes and documents the
underlying reason for the alignment. This reduces the required alignment
of the entire .head.text section from 64 bytes to 16 bytes, though it
may still be aligned to a larger value depending on TEXT_OFFSET.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
---
 arch/arm64/kernel/head.S | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index a96d3a6..7ec7817 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -456,8 +456,13 @@ ENDPROC(__enable_mmu)
  *  x27 = *virtual* address to jump to upon completion
  *
  * other registers depend on the function called upon completion
+ *
+ * We align the entire function to the smallest power of two larger than it to
+ * ensure it fits within a single block map entry. Otherwise, were PHYS_OFFSET
+ * close to the end of a 512MB or 1GB block, we might require an additional
+ * table to map the entire function.
  */
-	.align	6
+	.align	4
 __turn_mmu_on:
 	msr	sctlr_el1, x0
 	isb
-- 
1.9.1


* [PATCHv3 2/4] arm64: place initial page tables above the kernel
  2014-06-19 10:49 [PATCHv3 0/4] arm64: simplify restrictions on bootloaders Mark Rutland
  2014-06-19 10:49 ` [PATCHv3 1/4] arm64: head.S: remove unnecessary function alignment Mark Rutland
@ 2014-06-19 10:49 ` Mark Rutland
  2014-06-19 10:49 ` [PATCHv3 3/4] arm64: Update the Image header Mark Rutland
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 12+ messages in thread
From: Mark Rutland @ 2014-06-19 10:49 UTC (permalink / raw)
  To: linux-arm-kernel

Currently we place swapper_pg_dir and idmap_pg_dir below the kernel
image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However,
bootloaders may use portions of this memory below the kernel and we do
not parse the memory reservation list until after the MMU has been
enabled. As such we may clobber some memory a bootloader wishes to have
preserved.

To enable the use of all of this memory by bootloaders (when the
required memory reservations are communicated to the kernel) it is
necessary to move our initial page tables elsewhere. As we currently
have an effectively unbound requirement for memory at the end of the
kernel image for .bss, we can place the page tables here.

This patch moves the initial page table to the end of the kernel image,
after the BSS. As they do not consist of any initialised data they will
be stripped from the kernel Image as with the BSS. The BSS clearing
routine is updated to stop at __bss_stop rather than _end so as to not
clobber the page tables, and memory reservations made redundant by the
new organisation are removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
---
 arch/arm64/include/asm/page.h   |  9 +++++++++
 arch/arm64/kernel/head.S        | 28 ++++++++--------------------
 arch/arm64/kernel/vmlinux.lds.S |  7 +++++++
 arch/arm64/mm/init.c            | 12 ++++--------
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 46bf666..a6331e6 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -31,6 +31,15 @@
 /* We do define AT_SYSINFO_EHDR but don't use the gate mechanism */
 #define __HAVE_ARCH_GATE_AREA		1
 
+/*
+ * The idmap and swapper page tables need some space reserved in the kernel
+ * image. The idmap only requires a pgd and a next level table to (section) map
+ * the kernel, while the swapper also maps the FDT and requires an additional
+ * table to map an early UART. See __create_page_tables for more information.
+ */
+#define SWAPPER_DIR_SIZE	(3 * PAGE_SIZE)
+#define IDMAP_DIR_SIZE		(2 * PAGE_SIZE)
+
 #ifndef __ASSEMBLY__
 
 #ifdef CONFIG_ARM64_64K_PAGES
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 7ec7817..e048f2b 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -35,29 +35,17 @@
 #include <asm/page.h>
 #include <asm/virt.h>
 
-/*
- * swapper_pg_dir is the virtual address of the initial page table. We place
- * the page tables 3 * PAGE_SIZE below KERNEL_RAM_VADDR. The idmap_pg_dir has
- * 2 pages and is placed below swapper_pg_dir.
- */
 #define KERNEL_RAM_VADDR	(PAGE_OFFSET + TEXT_OFFSET)
 
 #if (KERNEL_RAM_VADDR & 0xfffff) != 0x80000
 #error KERNEL_RAM_VADDR must start at 0xXXX80000
 #endif
 
-#define SWAPPER_DIR_SIZE	(3 * PAGE_SIZE)
-#define IDMAP_DIR_SIZE		(2 * PAGE_SIZE)
-
-	.globl	swapper_pg_dir
-	.equ	swapper_pg_dir, KERNEL_RAM_VADDR - SWAPPER_DIR_SIZE
-
-	.globl	idmap_pg_dir
-	.equ	idmap_pg_dir, swapper_pg_dir - IDMAP_DIR_SIZE
-
-	.macro	pgtbl, ttb0, ttb1, phys
-	add	\ttb1, \phys, #TEXT_OFFSET - SWAPPER_DIR_SIZE
-	sub	\ttb0, \ttb1, #IDMAP_DIR_SIZE
+	.macro	pgtbl, ttb0, ttb1, virt_to_phys
+	ldr	\ttb1, =swapper_pg_dir
+	ldr	\ttb0, =idmap_pg_dir
+	add	\ttb1, \ttb1, \virt_to_phys
+	add	\ttb0, \ttb0, \virt_to_phys
 	.endm
 
 #ifdef CONFIG_ARM64_64K_PAGES
@@ -414,7 +402,7 @@ ENTRY(secondary_startup)
 	mov	x23, x0				// x23=current cpu_table
 	cbz	x23, __error_p			// invalid processor (x23=0)?
 
-	pgtbl	x25, x26, x24			// x25=TTBR0, x26=TTBR1
+	pgtbl	x25, x26, x28			// x25=TTBR0, x26=TTBR1
 	ldr	x12, [x23, #CPU_INFO_SETUP]
 	add	x12, x12, x28			// __virt_to_phys
 	blr	x12				// initialise processor
@@ -528,7 +516,7 @@ ENDPROC(__calc_phys_offset)
  *   - pgd entry for fixed mappings (TTBR1)
  */
 __create_page_tables:
-	pgtbl	x25, x26, x24			// idmap_pg_dir and swapper_pg_dir addresses
+	pgtbl	x25, x26, x28			// idmap_pg_dir and swapper_pg_dir addresses
 	mov	x27, lr
 
 	/*
@@ -617,7 +605,7 @@ ENDPROC(__create_page_tables)
 __switch_data:
 	.quad	__mmap_switched
 	.quad	__bss_start			// x6
-	.quad	_end				// x7
+	.quad	__bss_stop			// x7
 	.quad	processor_id			// x4
 	.quad	__fdt_pointer			// x5
 	.quad	memstart_addr			// x6
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index f1e6d5c..c6648d3 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -104,6 +104,13 @@ SECTIONS
 	_edata = .;
 
 	BSS_SECTION(0, 0, 0)
+
+	. = ALIGN(PAGE_SIZE);
+	idmap_pg_dir = .;
+	. += IDMAP_DIR_SIZE;
+	swapper_pg_dir = .;
+	. += SWAPPER_DIR_SIZE;
+
 	_end = .;
 
 	STABS_DEBUG
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 091d428..35bca76 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -126,20 +126,16 @@ static void arm64_memory_present(void)
 
 void __init arm64_memblock_init(void)
 {
-	/* Register the kernel text, kernel data and initrd with memblock */
+	/*
+	 * Register the kernel text, kernel data, initrd, and initial
+	 * pagetables with memblock.
+	 */
 	memblock_reserve(__pa(_text), _end - _text);
 #ifdef CONFIG_BLK_DEV_INITRD
 	if (initrd_start)
 		memblock_reserve(__virt_to_phys(initrd_start), initrd_end - initrd_start);
 #endif
 
-	/*
-	 * Reserve the page tables.  These are already in use,
-	 * and can only be in node 0.
-	 */
-	memblock_reserve(__pa(swapper_pg_dir), SWAPPER_DIR_SIZE);
-	memblock_reserve(__pa(idmap_pg_dir), IDMAP_DIR_SIZE);
-
 	early_init_fdt_scan_reserved_mem();
 	dma_contiguous_reserve(0);
 
-- 
1.9.1


* [PATCHv3 3/4] arm64: Update the Image header
  2014-06-19 10:49 [PATCHv3 0/4] arm64: simplify restrictions on bootloaders Mark Rutland
  2014-06-19 10:49 ` [PATCHv3 1/4] arm64: head.S: remove unnecessary function alignment Mark Rutland
  2014-06-19 10:49 ` [PATCHv3 2/4] arm64: place initial page tables above the kernel Mark Rutland
@ 2014-06-19 10:49 ` Mark Rutland
  2014-06-20  8:55   ` Will Deacon
  2014-06-20 17:03   ` Geoff Levand
  2014-06-19 10:49 ` [PATCHv3 4/4] arm64: Enable TEXT_OFFSET fuzzing Mark Rutland
  2014-06-20  8:56 ` [PATCHv3 0/4] arm64: simplify restrictions on bootloaders Will Deacon
  4 siblings, 2 replies; 12+ messages in thread
From: Mark Rutland @ 2014-06-19 10:49 UTC (permalink / raw)
  To: linux-arm-kernel

Currently the kernel Image is stripped of everything past the initial
stack, and at runtime the memory is initialised and used by the kernel.
This makes the effective minimum memory footprint of the kernel larger
than the size of the loaded binary, though bootloaders have no mechanism
to identify how large this minimum memory footprint is. This makes it
difficult to choose safe locations to place both the kernel and other
binaries required at boot (DTB, initrd, etc), such that the kernel won't
clobber said binaries or other reserved memory during initialisation.

Additionally when big endian support was added the image load offset was
overlooked, and is currently of an arbitrary endianness, which makes it
difficult for bootloaders to make use of it. It seems that bootloaders
aren't respecting the image load offset at present anyway, and are
assuming that offset 0x80000 will always be correct.

This patch adds an effective image size to the kernel header which
describes the amount of memory from the start of the kernel Image binary
which the kernel expects to use before detecting memory and handling any
memory reservations. This can be used by bootloaders to choose suitable
locations to load the kernel and/or other binaries such that the kernel
will not clobber any memory unexpectedly. As before, memory reservations
are required to prevent the kernel from clobbering these locations
later.

Both the image load offset and the effective image size are forced to be
little-endian regardless of the native endianness of the kernel to
enable bootloaders to load a kernel of arbitrary endianness. Bootloaders
which wish to make use of the load offset can inspect the effective
image size field for a non-zero value to determine if the offset is of a
known endianness. To enable software to determine the endianness of the
kernel as may be required for certain use-cases, a new flags field (also
little-endian) is added to the kernel header to export this information.

The documentation is updated to clarify these details. To discourage
future assumptions regarding the value of text_offset, the value at this
point in time is removed from the main flow of the documentation (though
kept as a compatibility note). Some minor formatting issues in the
documentation are also corrected.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Kevin Hilman <kevin.hilman@linaro.org>
---
 Documentation/arm64/booting.txt | 43 ++++++++++++++++++++++------
 arch/arm64/include/asm/image.h  | 62 +++++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/head.S        |  6 ++--
 arch/arm64/kernel/vmlinux.lds.S |  3 ++
 4 files changed, 103 insertions(+), 11 deletions(-)
 create mode 100644 arch/arm64/include/asm/image.h

diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 37fc4f6..85af34d 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -72,27 +72,54 @@ The decompressed kernel image contains a 64-byte header as follows:
 
   u32 code0;			/* Executable code */
   u32 code1;			/* Executable code */
-  u64 text_offset;		/* Image load offset */
-  u64 res0	= 0;		/* reserved */
-  u64 res1	= 0;		/* reserved */
+  u64 text_offset;		/* Image load offset, little endian */
+  u64 image_size;		/* Effective Image size, little endian */
+  u64 flags;			/* kernel flags, little endian */
   u64 res2	= 0;		/* reserved */
   u64 res3	= 0;		/* reserved */
   u64 res4	= 0;		/* reserved */
   u32 magic	= 0x644d5241;	/* Magic number, little endian, "ARM\x64" */
-  u32 res5 = 0;      		/* reserved */
+  u32 res5;      		/* reserved (used for PE COFF offset) */
 
 
 Header notes:
 
+- As of v3.17, all fields are little endian unless stated otherwise.
+
 - code0/code1 are responsible for branching to stext.
+
 - when booting through EFI, code0/code1 are initially skipped.
   res5 is an offset to the PE header and the PE header has the EFI
-  entry point (efi_stub_entry). When the stub has done its work, it
+  entry point (efi_stub_entry).  When the stub has done its work, it
   jumps to code0 to resume the normal boot process.
 
-The image must be placed at the specified offset (currently 0x80000)
-from the start of the system RAM and called there. The start of the
-system RAM must be aligned to 2MB.
+- Prior to v3.17, the endianness of text_offset was not specified.  In
+  these cases image_size is zero and text_offset is 0x80000 in the
+  endianness of the kernel.  Where image_size is non-zero image_size is
+  little-endian and must be respected.  Where image_size is zero,
+  text_offset can be assumed to be 0x80000.
+
+- The flags field (introduced in v3.17) is a little-endian 64-bit field
+  composed as follows:
+  Bit 0: 	Kernel endianness.  1 if BE, 0 if LE.
+  Bits 1-63:	Reserved.
+
+- When image_size is zero, a bootloader should attempt to keep as much
+  memory as possible free for use by the kernel immediately after the
+  end of the kernel image. The amount of space required will vary
+  depending on selected features, and is effectively unbound.
+
+The Image must be placed text_offset bytes from a 2MB aligned base
+address near the start of usable system RAM and called there. Memory
+below that base address is currently unusable by Linux, and therefore it
+is strongly recommended that this location is the start of system RAM.
+At least image_size bytes from the start of the image must be free for
+use by the kernel.
+
+Any memory described to the kernel (even that below the 2MB aligned base
+address) which is not marked as reserved from the kernel (e.g. with a
+memreserve region in the device tree) will be considered as available to
+the kernel.
 
 Before jumping into the kernel, the following conditions must be met:
 
diff --git a/arch/arm64/include/asm/image.h b/arch/arm64/include/asm/image.h
new file mode 100644
index 0000000..8fae075
--- /dev/null
+++ b/arch/arm64/include/asm/image.h
@@ -0,0 +1,62 @@
+/*
+ * Linker script macros to generate Image header fields.
+ *
+ * Copyright (C) 2014 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_IMAGE_H
+#define __ASM_IMAGE_H
+
+#ifndef LINKER_SCRIPT
+#error This file should only be included in vmlinux.lds.S
+#endif
+
+/*
+ * There aren't any ELF relocations we can use to endian-swap values known only
+ * at link time (e.g. the subtraction of two symbol addresses), so we must get
+ * the linker to endian-swap certain values before emitting them.
+ */
+#ifdef CONFIG_CPU_BIG_ENDIAN
+#define DATA_LE64(data)					\
+	((((data) & 0x00000000000000ff) << 56) |	\
+	 (((data) & 0x000000000000ff00) << 40) |	\
+	 (((data) & 0x0000000000ff0000) << 24) |	\
+	 (((data) & 0x00000000ff000000) << 8)  |	\
+	 (((data) & 0x000000ff00000000) >> 8)  |	\
+	 (((data) & 0x0000ff0000000000) >> 24) |	\
+	 (((data) & 0x00ff000000000000) >> 40) |	\
+	 (((data) & 0xff00000000000000) >> 56))
+#else
+#define DATA_LE64(data) ((data) & 0xffffffffffffffff)
+#endif
+
+#ifdef CONFIG_CPU_BIG_ENDIAN
+#define __HEAD_FLAG_BE	1
+#else
+#define __HEAD_FLAG_BE	0
+#endif
+
+#define __HEAD_FLAGS	(__HEAD_FLAG_BE << 0)
+
+/*
+ * These will output as part of the Image header, which should be little-endian
+ * regardless of the endianness of the kernel. While constant values could be
+ * endian swapped in head.S, all are done here for consistency.
+ */
+#define HEAD_SYMBOLS						\
+	_kernel_size_le		= DATA_LE64(_end - _text);	\
+	_kernel_offset_le	= DATA_LE64(TEXT_OFFSET);	\
+	_kernel_flags_le	= DATA_LE64(__HEAD_FLAGS);
+
+#endif /* __ASM_IMAGE_H */
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index e048f2b..7b59c3d 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -108,9 +108,9 @@ efi_head:
 	b	stext				// branch to kernel start, magic
 	.long	0				// reserved
 #endif
-	.quad	TEXT_OFFSET			// Image load offset from start of RAM
-	.quad	0				// reserved
-	.quad	0				// reserved
+	.quad	_kernel_offset_le		// Image load offset from start of RAM, little-endian
+	.quad	_kernel_size_le			// Effective size of kernel image, little-endian
+	.quad	_kernel_flags_le		// Informative flags, little-endian
 	.quad	0				// reserved
 	.quad	0				// reserved
 	.quad	0				// reserved
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index c6648d3..9cbf77a 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -5,6 +5,7 @@
  */
 
 #include <asm-generic/vmlinux.lds.h>
+#include <asm/image.h>
 #include <asm/thread_info.h>
 #include <asm/memory.h>
 #include <asm/page.h>
@@ -114,6 +115,8 @@ SECTIONS
 	_end = .;
 
 	STABS_DEBUG
+
+	HEAD_SYMBOLS
 }
 
 /*
-- 
1.9.1


* [PATCHv3 4/4] arm64: Enable TEXT_OFFSET fuzzing
  2014-06-19 10:49 [PATCHv3 0/4] arm64: simplify restrictions on bootloaders Mark Rutland
                   ` (2 preceding siblings ...)
  2014-06-19 10:49 ` [PATCHv3 3/4] arm64: Update the Image header Mark Rutland
@ 2014-06-19 10:49 ` Mark Rutland
  2014-06-20  8:50   ` Will Deacon
  2014-06-20  8:56 ` [PATCHv3 0/4] arm64: simplify restrictions on bootloaders Will Deacon
  4 siblings, 1 reply; 12+ messages in thread
From: Mark Rutland @ 2014-06-19 10:49 UTC (permalink / raw)
  To: linux-arm-kernel

The arm64 Image header contains a text_offset field which bootloaders
are supposed to read to determine the offset (from a 2MB aligned "start
of memory" per booting.txt) at which to load the kernel. The offset is
not well respected by bootloaders at present, and due to the lack of
variation there is little incentive to support it. This is unfortunate
for the sake of future kernels where we may wish to vary the text offset
(even zeroing it).

This patch adds options to arm64 to enable fuzz-testing of text_offset.
CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random
16-byte aligned value in the range [0..2MB) upon a build of the
kernel. It is recommended that distribution kernels enable randomization
to test bootloaders such that any compliance issues can be fixed early.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
---
 arch/arm64/Kconfig.debug        | 16 ++++++++++++++++
 arch/arm64/Makefile             |  4 ++++
 arch/arm64/kernel/head.S        |  8 ++++++--
 arch/arm64/kernel/vmlinux.lds.S |  5 +++++
 4 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
index 1c1b756..566bf80 100644
--- a/arch/arm64/Kconfig.debug
+++ b/arch/arm64/Kconfig.debug
@@ -28,4 +28,20 @@ config PID_IN_CONTEXTIDR
 	  instructions during context switch. Say Y here only if you are
 	  planning to use hardware trace tools with this kernel.
 
+config ARM64_RANDOMIZE_TEXT_OFFSET
+	bool "Randomize TEXT_OFFSET at build time (EXPERIMENTAL)"
+	default N
+	help
+	  Say Y here if you want the image load offset (AKA TEXT_OFFSET)
+	  of the kernel to be randomized at build-time. When selected,
+	  this option will cause TEXT_OFFSET to be randomized upon any
+	  build of the kernel, and the offset will be reflected in the
+	  text_offset field of the resulting Image. This can be used to
+	  fuzz-test bootloaders which respect text_offset.
+
+	  This option is intended for bootloader and/or kernel testing
+	  only. Bootloaders must make no assumptions regarding the value
+	  of TEXT_OFFSET and platforms must not require a specific
+	  value.
+
 endmenu
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 8185a91..e8d025c 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -38,7 +38,11 @@ CHECKFLAGS	+= -D__aarch64__
 head-y		:= arch/arm64/kernel/head.o
 
 # The byte offset of the kernel image in RAM from the start of RAM.
+ifeq ($(CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET), y)
+TEXT_OFFSET := $(shell awk 'BEGIN {srand(); printf "0x%04x0\n", int(65535 * rand())}')
+else
 TEXT_OFFSET := 0x00080000
+endif
 
 export	TEXT_OFFSET GZFLAGS
 
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 7b59c3d..8483504 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -37,8 +37,12 @@
 
 #define KERNEL_RAM_VADDR	(PAGE_OFFSET + TEXT_OFFSET)
 
-#if (KERNEL_RAM_VADDR & 0xfffff) != 0x80000
-#error KERNEL_RAM_VADDR must start at 0xXXX80000
+#if (TEXT_OFFSET & 0xf) != 0
+#error TEXT_OFFSET must be at least 16B aligned
+#elif (PAGE_OFFSET & 0xfffff) != 0
+#error PAGE_OFFSET must be at least 2MB aligned
+#elif TEXT_OFFSET > 0xfffff
+#error TEXT_OFFSET must be less than 2MB
 #endif
 
 	.macro	pgtbl, ttb0, ttb1, virt_to_phys
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 9cbf77a..d1270c3 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -124,3 +124,8 @@ SECTIONS
  */
 ASSERT(((__hyp_idmap_text_start + PAGE_SIZE) > __hyp_idmap_text_end),
        "HYP init code too big")
+
+/*
+ * If padding is applied before .head.text, virt<->phys conversions will fail.
+ */
+ASSERT(_text == (PAGE_OFFSET + TEXT_OFFSET), "HEAD is misaligned")
-- 
1.9.1


* [PATCHv3 4/4] arm64: Enable TEXT_OFFSET fuzzing
  2014-06-19 10:49 ` [PATCHv3 4/4] arm64: Enable TEXT_OFFSET fuzzing Mark Rutland
@ 2014-06-20  8:50   ` Will Deacon
  2014-06-20 10:35     ` Mark Rutland
  0 siblings, 1 reply; 12+ messages in thread
From: Will Deacon @ 2014-06-20  8:50 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jun 19, 2014 at 11:49:23AM +0100, Mark Rutland wrote:
> The arm64 Image header contains a text_offset field which bootloaders
> are supposed to read to determine the offset (from a 2MB aligned "start
> of memory" per booting.txt) at which to load the kernel. The offset is
> not well respected by bootloaders at present, and due to the lack of
> variation there is little incentive to support it. This is unfortunate
> for the sake of future kernels where we may wish to vary the text offset
> (even zeroing it).
> 
> This patch adds options to arm64 to enable fuzz-testing of text_offset.
> CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random
> 16-byte aligned value value in the range [0..2MB) upon a build of the
> kernel. It is recommended that distribution kernels enable randomization
> to test bootloaders such that any compliance issues can be fixed early.

[...]

> diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
> index 1c1b756..566bf80 100644
> --- a/arch/arm64/Kconfig.debug
> +++ b/arch/arm64/Kconfig.debug
> @@ -28,4 +28,20 @@ config PID_IN_CONTEXTIDR
>  	  instructions during context switch. Say Y here only if you are
>  	  planning to use hardware trace tools with this kernel.
>  
> +config ARM64_RANDOMIZE_TEXT_OFFSET
> +	bool "Randomize TEXT_OFFSET at build time (EXPERIMENTAL)"

Lose the (EXPERIMENTAL) suffix -- this already lives under Kconfig.debug.

> +	default N

I think this is redundant.

> +	help
> +	  Say Y here if you want the image load offset (AKA TEXT_OFFSET)
> +	  of the kernel to be randomized at build-time. When selected,
> +	  this option will cause TEXT_OFFSET to be randomized upon any
> +	  build of the kernel, and the offset will be reflected in the
> +	  text_offset field of the resulting Image. This can be used to
> +	  fuzz-test bootloaders which respect text_offset.
> +
> +	  This option is intended for bootloader and/or kernel testing
> +	  only. Bootloaders must make no assumptions regarding the value
> +	  of TEXT_OFFSET and platforms must not require a specific
> +	  value.

Will


* [PATCHv3 3/4] arm64: Update the Image header
  2014-06-19 10:49 ` [PATCHv3 3/4] arm64: Update the Image header Mark Rutland
@ 2014-06-20  8:55   ` Will Deacon
  2014-06-20 10:32     ` Mark Rutland
  2014-06-20 17:03   ` Geoff Levand
  1 sibling, 1 reply; 12+ messages in thread
From: Will Deacon @ 2014-06-20  8:55 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jun 19, 2014 at 11:49:22AM +0100, Mark Rutland wrote:
> Currently the kernel Image is stripped of everything past the initial
> stack, and at runtime the memory is initialised and used by the kernel.
> This makes the effective minimum memory footprint of the kernel larger
> than the size of the loaded binary, though bootloaders have no mechanism
> to identify how large this minimum memory footprint is. This makes it
> difficult to choose safe locations to place both the kernel and other
> binaries required at boot (DTB, initrd, etc), such that the kernel won't
> clobber said binaries or other reserved memory during initialisation.

[...]

> +/*
> + * There aren't any ELF relocations we can use to endian-swap values known only
> + * at link time (e.g. the subtraction of two symbol addresses), so we must get
> + * the linker to endian-swap certain values before emitting them.
> + */
> +#ifdef CONFIG_CPU_BIG_ENDIAN
> +#define DATA_LE64(data)					\
> +	((((data) & 0x00000000000000ff) << 56) |	\
> +	 (((data) & 0x000000000000ff00) << 40) |	\

Are you sure that these shifts are valid without a UL prefix on the first
constant operand? I don't think the leading zeroes promote the mask to a
64-bit type, so you end up shifting greater than the width of the type.

I guess the compiler doesn't allocate (data & 0xff) into a w register?

Will


* [PATCHv3 0/4] arm64: simplify restrictions on bootloaders
  2014-06-19 10:49 [PATCHv3 0/4] arm64: simplify restrictions on bootloaders Mark Rutland
                   ` (3 preceding siblings ...)
  2014-06-19 10:49 ` [PATCHv3 4/4] arm64: Enable TEXT_OFFSET fuzzing Mark Rutland
@ 2014-06-20  8:56 ` Will Deacon
  4 siblings, 0 replies; 12+ messages in thread
From: Will Deacon @ 2014-06-20  8:56 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jun 19, 2014 at 11:49:19AM +0100, Mark Rutland wrote:
> Hi all,
> 
> This is version 3 of the boot simplification series I posted previously [1,4].
> Many thanks to those who reviewed / tested the prior postings.

Other than my minor comments:

  Acked-by: Will Deacon <will.deacon@arm.com>

for the series.

Will


* [PATCHv3 3/4] arm64: Update the Image header
  2014-06-20  8:55   ` Will Deacon
@ 2014-06-20 10:32     ` Mark Rutland
  0 siblings, 0 replies; 12+ messages in thread
From: Mark Rutland @ 2014-06-20 10:32 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Jun 20, 2014 at 09:55:43AM +0100, Will Deacon wrote:
> On Thu, Jun 19, 2014 at 11:49:22AM +0100, Mark Rutland wrote:
> > Currently the kernel Image is stripped of everything past the initial
> > stack, and at runtime the memory is initialised and used by the kernel.
> > This makes the effective minimum memory footprint of the kernel larger
> > than the size of the loaded binary, though bootloaders have no mechanism
> > to identify how large this minimum memory footprint is. This makes it
> > difficult to choose safe locations to place both the kernel and other
> > binaries required at boot (DTB, initrd, etc), such that the kernel won't
> > clobber said binaries or other reserved memory during initialisation.
> 
> [...]
> 
> > +/*
> > + * There aren't any ELF relocations we can use to endian-swap values known only
> > + * at link time (e.g. the subtraction of two symbol addresses), so we must get
> > + * the linker to endian-swap certain values before emitting them.
> > + */
> > +#ifdef CONFIG_CPU_BIG_ENDIAN
> > +#define DATA_LE64(data)					\
> > +	((((data) & 0x00000000000000ff) << 56) |	\
> > +	 (((data) & 0x000000000000ff00) << 40) |	\
> 
> Are you sure that these shifts are valid without a UL prefix on the first
> constant operand? I don't think the leading zeroes promote the mask to a
> 64-bit type, so you end up shifting greater than the width of the type.

A key point to bear in mind is that this is in ld, not gas or gcc, which
is why I needed a new macro in the first place. See the error on #ifndef
LINKER_SCRIPT at the top of the file.

As far as I can tell these are valid within ld. The GNU ld documentation
says expressions should be evaluated as 64-bit values (given that
aarch64 is 64-bit) [1].

The kernel header looks correct when inspected with a hex editor, for
both LE and BE.

> I guess the compiler doesn't allocate (data & 0xff) into a w register?

I would guess it wouldn't allocate anything given it is not involved :)

Mark.

[1] https://sourceware.org/binutils/docs/ld/Expressions.html#Expressions


* [PATCHv3 4/4] arm64: Enable TEXT_OFFSET fuzzing
  2014-06-20  8:50   ` Will Deacon
@ 2014-06-20 10:35     ` Mark Rutland
  0 siblings, 0 replies; 12+ messages in thread
From: Mark Rutland @ 2014-06-20 10:35 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Jun 20, 2014 at 09:50:08AM +0100, Will Deacon wrote:
> On Thu, Jun 19, 2014 at 11:49:23AM +0100, Mark Rutland wrote:
> > The arm64 Image header contains a text_offset field which bootloaders
> > are supposed to read to determine the offset (from a 2MB aligned "start
> > of memory" per booting.txt) at which to load the kernel. The offset is
> > not well respected by bootloaders at present, and due to the lack of
> > variation there is little incentive to support it. This is unfortunate
> > for the sake of future kernels where we may wish to vary the text offset
> > (even zeroing it).
> > 
> > This patch adds options to arm64 to enable fuzz-testing of text_offset.
> > CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random
> > 16-byte aligned value in the range [0..2MB) upon a build of the
> > kernel. It is recommended that distribution kernels enable randomization
> > to test bootloaders such that any compliance issues can be fixed early.
> 
> [...]
> 
> > diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
> > index 1c1b756..566bf80 100644
> > --- a/arch/arm64/Kconfig.debug
> > +++ b/arch/arm64/Kconfig.debug
> > @@ -28,4 +28,20 @@ config PID_IN_CONTEXTIDR
> >  	  instructions during context switch. Say Y here only if you are
> >  	  planning to use hardware trace tools with this kernel.
> >  
> > +config ARM64_RANDOMIZE_TEXT_OFFSET
> > +	bool "Randomize TEXT_OFFSET at build time (EXPERIMENTAL)"
> 
> Lose the (EXPERIMENTAL) suffix -- this already lives under Kconfig.debug.

Sure.

> 
> > +	default N
> 
> I think this is redundant.

I think so too. Gone.

Cheers,
Mark.
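[Editorial note: the cover letter mentions that the random TEXT_OFFSET generation was fixed to be mawk-compatible. A hypothetical one-liner sketch of such a rule — the actual arm64 Makefile recipe may differ — producing a 16-byte aligned offset in [0, 2MB):]

```shell
# 2MB / 16 = 131072 possible 16-byte-aligned offsets; pick one at random.
# Uses only POSIX awk features, so it works with mawk as well as gawk.
TEXT_OFFSET=$(awk 'BEGIN { srand(); printf "0x%06x", int(131072 * rand()) * 16 }')
echo "TEXT_OFFSET := $TEXT_OFFSET"
```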


* [PATCHv3 3/4] arm64: Update the Image header
  2014-06-19 10:49 ` [PATCHv3 3/4] arm64: Update the Image header Mark Rutland
  2014-06-20  8:55   ` Will Deacon
@ 2014-06-20 17:03   ` Geoff Levand
  2014-06-24 13:49     ` Mark Rutland
  1 sibling, 1 reply; 12+ messages in thread
From: Geoff Levand @ 2014-06-20 17:03 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Mark,

On Thu, 2014-06-19 at 11:49 +0100, Mark Rutland wrote: 
> diff --git a/arch/arm64/include/asm/image.h b/arch/arm64/include/asm/image.h
> new file mode 100644
> index 0000000..8fae075
> --- /dev/null
> +++ b/arch/arm64/include/asm/image.h
> @@ -0,0 +1,62 @@
> +/*
> + * Linker script macros to generate Image header fields.

...

> + */
> +#ifndef __ASM_IMAGE_H
> +#define __ASM_IMAGE_H
> +
> +#ifndef LINKER_SCRIPT
> +#error This file should only be included in vmlinux.lds.S
> +#endif

Since this is just used by the linker script, shouldn't it go in
arch/arm64/kernel/image.h?

-Geoff


* [PATCHv3 3/4] arm64: Update the Image header
  2014-06-20 17:03   ` Geoff Levand
@ 2014-06-24 13:49     ` Mark Rutland
  0 siblings, 0 replies; 12+ messages in thread
From: Mark Rutland @ 2014-06-24 13:49 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Jun 20, 2014 at 06:03:30PM +0100, Geoff Levand wrote:
> Hi Mark,
> 
> On Thu, 2014-06-19 at 11:49 +0100, Mark Rutland wrote: 
> > diff --git a/arch/arm64/include/asm/image.h b/arch/arm64/include/asm/image.h
> > new file mode 100644
> > index 0000000..8fae075
> > --- /dev/null
> > +++ b/arch/arm64/include/asm/image.h
> > @@ -0,0 +1,62 @@
> > +/*
> > + * Linker script macros to generate Image header fields.
> 
> ...
> 
> > + */
> > +#ifndef __ASM_IMAGE_H
> > +#define __ASM_IMAGE_H
> > +
> > +#ifndef LINKER_SCRIPT
> > +#error This file should only be included in vmlinux.lds.S
> > +#endif
> 
> Since this is just used by the linker script, shouldn't it go in
> arch/arm64/kernel/image.h?

Sure, that makes sense. I'll post a (hopefully final) v4 shortly.

Thanks,
Mark.


end of thread, other threads:[~2014-06-24 13:49 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-06-19 10:49 [PATCHv3 0/4] arm64: simplify restrictions on bootloaders Mark Rutland
2014-06-19 10:49 ` [PATCHv3 1/4] arm64: head.S: remove unnecessary function alignment Mark Rutland
2014-06-19 10:49 ` [PATCHv3 2/4] arm64: place initial page tables above the kernel Mark Rutland
2014-06-19 10:49 ` [PATCHv3 3/4] arm64: Update the Image header Mark Rutland
2014-06-20  8:55   ` Will Deacon
2014-06-20 10:32     ` Mark Rutland
2014-06-20 17:03   ` Geoff Levand
2014-06-24 13:49     ` Mark Rutland
2014-06-19 10:49 ` [PATCHv3 4/4] arm64: Enable TEXT_OFFSET fuzzing Mark Rutland
2014-06-20  8:50   ` Will Deacon
2014-06-20 10:35     ` Mark Rutland
2014-06-20  8:56 ` [PATCHv3 0/4] arm64: simplify restrictions on bootloaders Will Deacon
