* [PATCHv2 0/4] arm64: simplify restrictions on bootloaders
@ 2014-05-27 13:18 Mark Rutland
2014-05-27 13:18 ` [PATCHv2 1/4] arm64: head.S: remove unnecessary function alignment Mark Rutland
` (3 more replies)
0 siblings, 4 replies; 6+ messages in thread
From: Mark Rutland @ 2014-05-27 13:18 UTC (permalink / raw)
To: linux-arm-kernel
Hi all,
This is version 2 of the boot simplification series I posted previously [1].
Many thanks to those who reviewed / tested the first posting.
Changes since v1 [1]:
* Rebased to Matt Fleming's arm64-efi branch [2] to resolve conflict with the
EFI stub series in head.S.
* Fixed random TEXT_OFFSET generation to be mawk compatible.
* Removed option to set TEXT_OFFSET explicitly per Catalin's request.
* Removed (invalid) recommendation of 1MB for dynamically initialised data.
Laura, due to having to alter head.S to fix a conflict with the EFI stub series,
I've dropped your tested-by from patch 3. While I don't expect a functional
regression it seemed significant enough to drop the tag.
For those who have not seen v1, a rationale and description of the series
follows:
Currently bootloaders have an extremely difficult time protecting memory from
the kernel, as the kernel may clobber memory below TEXT_OFFSET with pagetables,
and above the end of the kernel binary with the BSS.
This series attempts to ameliorate matters by adding a mechanism for bootloaders
to discover the minimum runtime footprint of the kernel image, including the BSS
and any other dynamically initialised data, and moving the initial page tables
into this region.
The currently ill-described image load offset field is coerced to always be
little-endian. This means that bootloaders can actually make use of the field
for any kernel (whether LE or BE), and as the field does not yet seem to be
used anywhere with endianness taken into account, I hope this is not
problematic.
Documentation is updated with recommendations on handling the field. To aid in
encouraging bootloader authors to respect the field, an option is added to
randomize the text_offset field at link time, which may be used in test and/or
distribution kernels. So as to not break existing (but arguably broken) loaders
immediately, this option is hidden under kernel hacking and disabled by default.
The documentation is updated to cover how to use the new image_size field and
what to do if it is zero, and how to use the image_size field to determine
whether the text_offset field is guaranteed to be little-endian. In the absence
of an image_size field, it's not possible to provide a sensible default
value due to configuration-dependent variation -- a recent defconfig kernel
had a ~190KB BSS, while an allyesconfig build (with some features disabled
due to build breakages) had a ~13MB BSS.
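As a rough illustration of how a loader might consume these header fields
(layout per booting.txt; the helper names below are made up for this example
and are not kernel or bootloader API):

```c
#include <assert.h>
#include <stdint.h>

/* Read a little-endian u64 from a byte buffer, independent of host
 * endianness. */
static uint64_t read_le64(const uint8_t *p)
{
	uint64_t v = 0;
	for (int i = 7; i >= 0; i--)
		v = (v << 8) | p[i];
	return v;
}

/* Decide which load offset a loader should honour. In the 64-byte
 * Image header, text_offset sits at byte offset 8 and image_size at
 * byte offset 16. A zero image_size means an older kernel whose
 * text_offset endianness is unknown, so fall back to the documented
 * legacy value. */
static uint64_t effective_text_offset(const uint8_t *hdr)
{
	uint64_t image_size = read_le64(hdr + 16);

	if (image_size == 0)
		return 0x80000;		/* legacy fixed offset */
	return read_le64(hdr + 8);	/* guaranteed little-endian */
}
```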
A BE conditional 64-bit endianness swapping routine (DATA_LE64) is added to
vmlinux.lds.S, as the linker is the only place we can endianness swap a value
calculated from two symbols known only at link time. There are several existing
headers that do almost the same thing but due to use of C prototypes and/or
casts are not suitable for use in a linker script. A separate series may be able
to unify that.
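For reference, the shifts-and-masks form used by the linker-script macro can
be mirrored in C (the helper name here is ours, not from the series):

```c
#include <assert.h>
#include <stdint.h>

/* C mirror of the DATA_LE64() linker-script macro: byte-swap a 64-bit
 * value using only shifts and masks, since no ELF relocation can swap
 * a value computed at link time. */
static uint64_t data_le64_swap(uint64_t data)
{
	return ((data & 0x00000000000000ffULL) << 56) |
	       ((data & 0x000000000000ff00ULL) << 40) |
	       ((data & 0x0000000000ff0000ULL) << 24) |
	       ((data & 0x00000000ff000000ULL) <<  8) |
	       ((data & 0x000000ff00000000ULL) >>  8) |
	       ((data & 0x0000ff0000000000ULL) >> 24) |
	       ((data & 0x00ff000000000000ULL) >> 40) |
	       ((data & 0xff00000000000000ULL) >> 56);
}
```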
I've given some light testing to text_offset fuzzing with an updated bootwrapper
[3] which reads the text_offset field at build time to ensure the kernel gets
loaded at the right address. Nothing else is moved yet, however, so this may
explode if that location happens to overlap the bootwrapper code, DTB, or
spin-table mbox. I'll try to teach the bootwrapper how to deal with that
shortly.
Cheers,
Mark.
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-May/257141.html
[2] git://git.kernel.org/pub/scm/linux/kernel/git/mfleming/efi.git arm64-efi
[3] http://linux-arm.org/git?p=boot-wrapper-aarch64.git;a=shortlog;h=refs/heads/unstable/text-offset
Mark Rutland (4):
arm64: head.S: remove unnecessary function alignment
arm64: place initial page tables above the kernel
arm64: export effective Image size to bootloaders
arm64: Enable TEXT_OFFSET fuzzing
Documentation/arm64/booting.txt | 29 ++++++++++++++++++++-----
arch/arm64/Kconfig.debug | 16 ++++++++++++++
arch/arm64/Makefile | 4 ++++
arch/arm64/include/asm/page.h | 9 ++++++++
arch/arm64/kernel/head.S | 47 +++++++++++++++++++----------------------
arch/arm64/kernel/vmlinux.lds.S | 40 +++++++++++++++++++++++++++++++++++
arch/arm64/mm/init.c | 12 ++++-------
7 files changed, 119 insertions(+), 38 deletions(-)
--
1.9.1
^ permalink raw reply [flat|nested] 6+ messages in thread
* [PATCHv2 1/4] arm64: head.S: remove unnecessary function alignment
2014-05-27 13:18 [PATCHv2 0/4] arm64: simplify restrictions on bootloaders Mark Rutland
@ 2014-05-27 13:18 ` Mark Rutland
2014-05-27 13:18 ` [PATCHv2 2/4] arm64: place initial page tables above the kernel Mark Rutland
` (2 subsequent siblings)
3 siblings, 0 replies; 6+ messages in thread
From: Mark Rutland @ 2014-05-27 13:18 UTC (permalink / raw)
To: linux-arm-kernel
Currently __turn_mmu_on is aligned to 64 bytes to ensure that it doesn't
span any page boundary, which simplifies the idmap and spares us
requiring an additional page table to map half of the function. In
keeping with other important requirements in architecture code, this
fact is undocumented.
Additionally, as the function consists of three instructions totalling
12 bytes with no literal pool data, a smaller alignment of 16 bytes
would be sufficient.
This patch reduces the alignment to 16 bytes and documents the
underlying reason for the alignment. This reduces the required alignment
of the entire .head.text section from 64 bytes to 16 bytes, though it
may still be aligned to a larger value depending on TEXT_OFFSET.
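As a quick sanity check of the arithmetic: three A64 instructions are 12
bytes, and the smallest power of two covering that is 16, i.e. `.align 4`
(2^4 bytes). A throwaway helper (name is ours) expressing that:

```c
#include <assert.h>

/* Smallest power of two greater than or equal to size, i.e. the
 * alignment needed to guarantee a block of that size never spans an
 * equally-sized alignment boundary. */
static unsigned int align_pow2_bytes(unsigned int size)
{
	unsigned int p = 1;

	while (p < size)
		p <<= 1;
	return p;
}
```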
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
---
arch/arm64/kernel/head.S | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 738291b..dc3ee59 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -458,8 +458,13 @@ ENDPROC(__enable_mmu)
* x27 = *virtual* address to jump to upon completion
*
* other registers depend on the function called upon completion
+ *
+ * We align the entire function to the smallest power of two larger than it to
+ * ensure it fits within a single block map entry. Otherwise were PHYS_OFFSET
+ * close to the end of a 512MB or 1GB block we might require an additional
+ * table to map the entire function.
*/
- .align 6
+ .align 4
__turn_mmu_on:
msr sctlr_el1, x0
isb
--
1.9.1
* [PATCHv2 2/4] arm64: place initial page tables above the kernel
2014-05-27 13:18 [PATCHv2 0/4] arm64: simplify restrictions on bootloaders Mark Rutland
2014-05-27 13:18 ` [PATCHv2 1/4] arm64: head.S: remove unnecessary function alignment Mark Rutland
@ 2014-05-27 13:18 ` Mark Rutland
2014-05-27 13:18 ` [PATCHv2 3/4] arm64: export effective Image size to bootloaders Mark Rutland
2014-05-27 13:18 ` [PATCHv2 4/4] arm64: Enable TEXT_OFFSET fuzzing Mark Rutland
3 siblings, 0 replies; 6+ messages in thread
From: Mark Rutland @ 2014-05-27 13:18 UTC (permalink / raw)
To: linux-arm-kernel
Currently we place swapper_pg_dir and idmap_pg_dir below the kernel
image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However,
bootloaders may use portions of this memory below the kernel and we do
not parse the memory reservation list until after the MMU has been
enabled. As such we may clobber some memory a bootloader wishes to have
preserved.
To enable the use of all of this memory by bootloaders (when the
required memory reservations are communicated to the kernel) it is
necessary to move our initial page tables elsewhere. As we currently
have an effectively unbound requirement for memory at the end of the
kernel image for .bss, we can place the page tables here.
This patch moves the initial page table to the end of the kernel image,
after the BSS. As they do not consist of any initialised data they will
be stripped from the kernel Image as with the BSS. The BSS clearing
routine is updated to stop at __bss_stop rather than _end so as to not
clobber the page tables, and memory reservations made redundant by the
new organisation are removed.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
---
arch/arm64/include/asm/page.h | 9 +++++++++
arch/arm64/kernel/head.S | 28 ++++++++--------------------
arch/arm64/kernel/vmlinux.lds.S | 7 +++++++
arch/arm64/mm/init.c | 12 ++++--------
4 files changed, 28 insertions(+), 28 deletions(-)
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 46bf666..a6331e6 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -31,6 +31,15 @@
/* We do define AT_SYSINFO_EHDR but don't use the gate mechanism */
#define __HAVE_ARCH_GATE_AREA 1
+/*
+ * The idmap and swapper page tables need some space reserved in the kernel
+ * image. The idmap only requires a pgd and a next level table to (section) map
+ * the kernel, while the swapper also maps the FDT and requires an additional
+ * table to map an early UART. See __create_page_tables for more information.
+ */
+#define SWAPPER_DIR_SIZE (3 * PAGE_SIZE)
+#define IDMAP_DIR_SIZE (2 * PAGE_SIZE)
+
#ifndef __ASSEMBLY__
#ifdef CONFIG_ARM64_64K_PAGES
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index dc3ee59..36ecee2 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -35,29 +35,17 @@
#include <asm/page.h>
#include <asm/virt.h>
-/*
- * swapper_pg_dir is the virtual address of the initial page table. We place
- * the page tables 3 * PAGE_SIZE below KERNEL_RAM_VADDR. The idmap_pg_dir has
- * 2 pages and is placed below swapper_pg_dir.
- */
#define KERNEL_RAM_VADDR (PAGE_OFFSET + TEXT_OFFSET)
#if (KERNEL_RAM_VADDR & 0xfffff) != 0x80000
#error KERNEL_RAM_VADDR must start at 0xXXX80000
#endif
-#define SWAPPER_DIR_SIZE (3 * PAGE_SIZE)
-#define IDMAP_DIR_SIZE (2 * PAGE_SIZE)
-
- .globl swapper_pg_dir
- .equ swapper_pg_dir, KERNEL_RAM_VADDR - SWAPPER_DIR_SIZE
-
- .globl idmap_pg_dir
- .equ idmap_pg_dir, swapper_pg_dir - IDMAP_DIR_SIZE
-
- .macro pgtbl, ttb0, ttb1, phys
- add \ttb1, \phys, #TEXT_OFFSET - SWAPPER_DIR_SIZE
- sub \ttb0, \ttb1, #IDMAP_DIR_SIZE
+ .macro pgtbl, ttb0, ttb1, virt_to_phys
+ ldr \ttb1, =swapper_pg_dir
+ ldr \ttb0, =idmap_pg_dir
+ add \ttb1, \ttb1, \virt_to_phys
+ add \ttb0, \ttb0, \virt_to_phys
.endm
#ifdef CONFIG_ARM64_64K_PAGES
@@ -416,7 +404,7 @@ ENTRY(secondary_startup)
mov x23, x0 // x23=current cpu_table
cbz x23, __error_p // invalid processor (x23=0)?
- pgtbl x25, x26, x24 // x25=TTBR0, x26=TTBR1
+ pgtbl x25, x26, x28 // x25=TTBR0, x26=TTBR1
ldr x12, [x23, #CPU_INFO_SETUP]
add x12, x12, x28 // __virt_to_phys
blr x12 // initialise processor
@@ -530,7 +518,7 @@ ENDPROC(__calc_phys_offset)
* - pgd entry for fixed mappings (TTBR1)
*/
__create_page_tables:
- pgtbl x25, x26, x24 // idmap_pg_dir and swapper_pg_dir addresses
+ pgtbl x25, x26, x28 // idmap_pg_dir and swapper_pg_dir addresses
mov x27, lr
/*
@@ -619,7 +607,7 @@ ENDPROC(__create_page_tables)
__switch_data:
.quad __mmap_switched
.quad __bss_start // x6
- .quad _end // x7
+ .quad __bss_stop // x7
.quad processor_id // x4
.quad __fdt_pointer // x5
.quad memstart_addr // x6
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 4ba7a55..51258bc 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -104,6 +104,13 @@ SECTIONS
_edata = .;
BSS_SECTION(0, 0, 0)
+
+ . = ALIGN(PAGE_SIZE);
+ idmap_pg_dir = .;
+ . += IDMAP_DIR_SIZE;
+ swapper_pg_dir = .;
+ . += SWAPPER_DIR_SIZE;
+
_end = .;
STABS_DEBUG
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 51d5352..cc3339d 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -128,20 +128,16 @@ void __init arm64_memblock_init(void)
{
u64 *reserve_map, base, size;
- /* Register the kernel text, kernel data and initrd with memblock */
+ /*
+ * Register the kernel text, kernel data, initrd, and initial
+ * pagetables with memblock.
+ */
memblock_reserve(__pa(_text), _end - _text);
#ifdef CONFIG_BLK_DEV_INITRD
if (initrd_start)
memblock_reserve(__virt_to_phys(initrd_start), initrd_end - initrd_start);
#endif
- /*
- * Reserve the page tables. These are already in use,
- * and can only be in node 0.
- */
- memblock_reserve(__pa(swapper_pg_dir), SWAPPER_DIR_SIZE);
- memblock_reserve(__pa(idmap_pg_dir), IDMAP_DIR_SIZE);
-
/* Reserve the dtb region */
memblock_reserve(virt_to_phys(initial_boot_params),
be32_to_cpu(initial_boot_params->totalsize));
--
1.9.1
* [PATCHv2 3/4] arm64: export effective Image size to bootloaders
2014-05-27 13:18 [PATCHv2 0/4] arm64: simplify restrictions on bootloaders Mark Rutland
2014-05-27 13:18 ` [PATCHv2 1/4] arm64: head.S: remove unnecessary function alignment Mark Rutland
2014-05-27 13:18 ` [PATCHv2 2/4] arm64: place initial page tables above the kernel Mark Rutland
@ 2014-05-27 13:18 ` Mark Rutland
2014-05-27 13:18 ` [PATCHv2 4/4] arm64: Enable TEXT_OFFSET fuzzing Mark Rutland
3 siblings, 0 replies; 6+ messages in thread
From: Mark Rutland @ 2014-05-27 13:18 UTC (permalink / raw)
To: linux-arm-kernel
Currently the kernel Image is stripped of everything past the initial
stack, and at runtime the memory is initialised and used by the kernel.
This makes the effective minimum memory footprint of the kernel larger
than the size of the loaded binary, though bootloaders have no mechanism
to identify how large this minimum memory footprint is. This makes it
difficult to choose safe locations to place both the kernel and other
binaries required at boot (DTB, initrd, etc), such that the kernel won't
clobber said binaries or other reserved memory during initialisation.
Additionally when big endian support was added the image load offset was
overlooked, and is currently of an arbitrary endianness, which makes it
difficult for bootloaders to make use of it. It seems that bootloaders
aren't respecting the image load offset at present anyway, and are
assuming that offset 0x80000 will always be correct.
This patch adds an effective image size to the kernel header which
describes the amount of memory from the start of the kernel Image binary
which the kernel expects to use before detecting memory and handling any
memory reservations. This can be used by bootloaders to choose suitable
locations to load the kernel and/or other binaries such that the kernel
will not clobber any memory unexpectedly. As before, memory reservations
are required to prevent the kernel from clobbering these locations
later.
Both the image load offset and the effective image size are forced to be
little-endian regardless of the native endianness of the kernel to
enable bootloaders to load a kernel of arbitrary endianness. Bootloaders
which wish to make use of the load offset can inspect the effective
image size field for a non-zero value to determine if the offset is of a
known endianness.
The documentation is updated to clarify these details. To discourage
future assumptions regarding the value of text_offset, the value at this
point in time is removed from the main flow of the documentation (though
kept as a compatibility note).
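A sketch of the loader-side placement rule this enables (a 2MB-aligned base
plus text_offset, with image_size bytes free from the load address); the
function names are illustrative only, not any real bootloader API:

```c
#include <assert.h>
#include <stdint.h>

#define SZ_2M 0x200000ULL

/* Round ram_start up to a 2MB boundary and place the Image
 * text_offset bytes above it, per booting.txt. */
static uint64_t kernel_load_addr(uint64_t ram_start, uint64_t text_offset)
{
	uint64_t base = (ram_start + SZ_2M - 1) & ~(SZ_2M - 1);

	return base + text_offset;
}

/* Check that image_size bytes starting at the load address fall
 * within a region the loader knows to be free. */
static int load_region_fits(uint64_t load, uint64_t image_size,
			    uint64_t free_start, uint64_t free_end)
{
	return load >= free_start && load + image_size <= free_end;
}
```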
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
---
Documentation/arm64/booting.txt | 29 ++++++++++++++++++++++++-----
arch/arm64/kernel/head.S | 4 ++--
arch/arm64/kernel/vmlinux.lds.S | 28 ++++++++++++++++++++++++++++
3 files changed, 54 insertions(+), 7 deletions(-)
diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 37fc4f6..a446843 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -72,8 +72,8 @@ The decompressed kernel image contains a 64-byte header as follows:
u32 code0; /* Executable code */
u32 code1; /* Executable code */
- u64 text_offset; /* Image load offset */
- u64 res0 = 0; /* reserved */
+ u64 text_offset; /* Image load offset, little endian */
+ u64 image_size; /* Effective Image size, little endian */
u64 res1 = 0; /* reserved */
u64 res2 = 0; /* reserved */
u64 res3 = 0; /* reserved */
@@ -90,9 +90,28 @@ Header notes:
entry point (efi_stub_entry). When the stub has done its work, it
jumps to code0 to resume the normal boot process.
-The image must be placed at the specified offset (currently 0x80000)
-from the start of the system RAM and called there. The start of the
-system RAM must be aligned to 2MB.
+- Older kernel versions did not define the endianness of text_offset.
+ In these cases image_size is zero and text_offset is 0x80000 in the
+ endianness of the kernel. Where image_size is non-zero image_size is
+ little-endian and must be respected. Where image_size is zero,
+ text_offset can be assumed to be 0x80000.
+
+- When image_size is zero, a bootloader should attempt to keep as much
+ memory as possible free for use by the kernel immediately after the
+ end of the kernel image. The amount of space required will vary
+ depending on selected features, and is effectively unbound.
+
+The Image must be placed text_offset bytes from a 2MB aligned base
+address near the start of usable system RAM and called there. Memory
+below that base address is currently unusable by Linux, and therefore it
+is strongly recommended that this location is the start of system RAM.
+At least image_size bytes from the start of the image must be free for
+use by the kernel.
+
+Any memory described to the kernel (even that below the 2MB aligned base
+address) which is not marked as reserved from the kernel (e.g. with a
+memreserve region in the device tree) will be considered as available to
+the kernel.
Before jumping into the kernel, the following conditions must be met:
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 36ecee2..2c13d3b 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -108,8 +108,8 @@ efi_head:
b stext // branch to kernel start, magic
.long 0 // reserved
#endif
- .quad TEXT_OFFSET // Image load offset from start of RAM
- .quad 0 // reserved
+ .quad _kernel_offset_le // Image load offset from start of RAM, little-endian
+ .quad _kernel_size_le // Effective size of kernel image, little-endian
.quad 0 // reserved
.quad 0 // reserved
.quad 0 // reserved
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 51258bc..21a8ad1 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -30,6 +30,25 @@ jiffies = jiffies_64;
*(.hyp.text) \
VMLINUX_SYMBOL(__hyp_text_end) = .;
+/*
+ * There aren't any ELF relocations we can use to endian-swap values known only
+ * at link time (e.g. the subtraction of two symbol addresses), so we must get
+ * the linker to endian-swap certain values before emitting them.
+ */
+#ifdef CONFIG_CPU_BIG_ENDIAN
+#define DATA_LE64(data) \
+ ((((data) & 0x00000000000000ff) << 56) | \
+ (((data) & 0x000000000000ff00) << 40) | \
+ (((data) & 0x0000000000ff0000) << 24) | \
+ (((data) & 0x00000000ff000000) << 8) | \
+ (((data) & 0x000000ff00000000) >> 8) | \
+ (((data) & 0x0000ff0000000000) >> 24) | \
+ (((data) & 0x00ff000000000000) >> 40) | \
+ (((data) & 0xff00000000000000) >> 56))
+#else
+#define DATA_LE64(data) ((data) & 0xffffffffffffffff)
+#endif
+
SECTIONS
{
/*
@@ -114,6 +133,15 @@ SECTIONS
_end = .;
STABS_DEBUG
+
+ /*
+ * These will output as part of the Image header, which should be
+ * little-endian regardless of the endianness of the kernel. While
+ * constant values could be endian swapped in head.S, all are done here
+ * for consistency.
+ */
+ _kernel_size_le = DATA_LE64(_end - _text);
+ _kernel_offset_le = DATA_LE64(TEXT_OFFSET);
}
/*
--
1.9.1
* [PATCHv2 4/4] arm64: Enable TEXT_OFFSET fuzzing
2014-05-27 13:18 [PATCHv2 0/4] arm64: simplify restrictions on bootloaders Mark Rutland
` (2 preceding siblings ...)
2014-05-27 13:18 ` [PATCHv2 3/4] arm64: export effective Image size to bootloaders Mark Rutland
@ 2014-05-27 13:18 ` Mark Rutland
2014-05-30 13:22 ` Tom Rini
3 siblings, 1 reply; 6+ messages in thread
From: Mark Rutland @ 2014-05-27 13:18 UTC (permalink / raw)
To: linux-arm-kernel
The arm64 Image header contains a text_offset field which bootloaders
are supposed to read to determine the offset (from a 2MB aligned "start
of memory" per booting.txt) at which to load the kernel. The offset is
not well respected by bootloaders at present, and due to the lack of
variation there is little incentive to support it. This is unfortunate
for the sake of future kernels where we may wish to vary the text offset
(even zeroing it).
This patch adds options to arm64 to enable fuzz-testing of text_offset.
CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random
16-byte aligned value in the range [0..2MB) upon a build of the
kernel. It is recommended that distribution kernels enable randomization
to test bootloaders such that any compliance issues can be fixed early.
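The awk expression in the Makefile effectively emits a random 16-byte
aligned offset, formatted as 0x%04x0. A rough C equivalent (helper name is
ours, approximating awk's int(65535 * rand())):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Pick a random 16-byte aligned TEXT_OFFSET below 2MB: a 16-bit
 * random value shifted left by 4, matching the "0x%04x0" format the
 * Makefile uses. */
static uint32_t random_text_offset(void)
{
	return ((uint32_t)(rand() % 65536)) << 4;
}
```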
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
---
arch/arm64/Kconfig.debug | 16 ++++++++++++++++
arch/arm64/Makefile | 4 ++++
arch/arm64/kernel/head.S | 8 ++++++--
arch/arm64/kernel/vmlinux.lds.S | 5 +++++
4 files changed, 31 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
index d10ec33..d757875 100644
--- a/arch/arm64/Kconfig.debug
+++ b/arch/arm64/Kconfig.debug
@@ -37,4 +37,20 @@ config PID_IN_CONTEXTIDR
instructions during context switch. Say Y here only if you are
planning to use hardware trace tools with this kernel.
+config ARM64_RANDOMIZE_TEXT_OFFSET
+ bool "Randomize TEXT_OFFSET at build time (EXPERIMENTAL)"
+ default N
+ help
+ Say Y here if you want the image load offset (AKA TEXT_OFFSET)
+ of the kernel to be randomized at build-time. When selected,
+ this option will cause TEXT_OFFSET to be randomized upon any
+ build of the kernel, and the offset will be reflected in the
+ text_offset field of the resulting Image. This can be used to
+ fuzz-test bootloaders which respect text_offset.
+
+ This option is intended for bootloader and/or kernel testing
+ only. Bootloaders must make no assumptions regarding the value
+ of TEXT_OFFSET and platforms must not require a specific
+ value.
+
endmenu
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 2fceb71..4bde6f1 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -38,7 +38,11 @@ CHECKFLAGS += -D__aarch64__
head-y := arch/arm64/kernel/head.o
# The byte offset of the kernel image in RAM from the start of RAM.
+ifeq ($(CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET), y)
+TEXT_OFFSET := $(shell awk 'BEGIN {srand(); printf "0x%04x0\n", int(65535 * rand())}')
+else
TEXT_OFFSET := 0x00080000
+endif
export TEXT_OFFSET GZFLAGS
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 2c13d3b..3bbad33 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -37,8 +37,12 @@
#define KERNEL_RAM_VADDR (PAGE_OFFSET + TEXT_OFFSET)
-#if (KERNEL_RAM_VADDR & 0xfffff) != 0x80000
-#error KERNEL_RAM_VADDR must start at 0xXXX80000
+#if (TEXT_OFFSET & 0xf) != 0
+#error TEXT_OFFSET must be at least 16B aligned
+#elif (PAGE_OFFSET & 0xfffff) != 0
+#error PAGE_OFFSET must be at least 2MB aligned
+#elif TEXT_OFFSET > 0xfffff
+#error TEXT_OFFSET must be less than 2MB
#endif
.macro pgtbl, ttb0, ttb1, virt_to_phys
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 21a8ad1..05fc047 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -149,3 +149,8 @@ SECTIONS
*/
ASSERT(((__hyp_idmap_text_start + PAGE_SIZE) > __hyp_idmap_text_end),
"HYP init code too big")
+
+/*
+ * If padding is applied before .head.text, virt<->phys conversions will fail.
+ */
+ASSERT(_text == (PAGE_OFFSET + TEXT_OFFSET), "HEAD is misaligned")
--
1.9.1
* [PATCHv2 4/4] arm64: Enable TEXT_OFFSET fuzzing
2014-05-27 13:18 ` [PATCHv2 4/4] arm64: Enable TEXT_OFFSET fuzzing Mark Rutland
@ 2014-05-30 13:22 ` Tom Rini
0 siblings, 0 replies; 6+ messages in thread
From: Tom Rini @ 2014-05-30 13:22 UTC (permalink / raw)
To: linux-arm-kernel
On Tue, May 27, 2014 at 02:18:30PM +0100, Mark Rutland wrote:
> The arm64 Image header contains a text_offset field which bootloaders
> are supposed to read to determine the offset (from a 2MB aligned "start
> of memory" per booting.txt) at which to load the kernel. The offset is
> not well respected by bootloaders at present, and due to the lack of
> variation there is little incentive to support it. This is unfortunate
> for the sake of future kernels where we may wish to vary the text offset
> (even zeroing it).
>
> This patch adds options to arm64 to enable fuzz-testing of text_offset.
> CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random
> 16-byte aligned value in the range [0..2MB) upon a build of the
> kernel. It is recommended that distribution kernels enable randomization
> to test bootloaders such that any compliance issues can be fixed early.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
--
Tom