linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 0/2] arm64: restrict initrd placement to guarantee linear region coverage
@ 2016-03-30 13:18 Ard Biesheuvel
  2016-03-30 13:18 ` [PATCH 1/2] arm64: add the initrd region to the linear mapping explicitly Ard Biesheuvel
  2016-03-30 13:18 ` [PATCH 2/2] arm64: remove the now unneeded relocate_initrd() Ard Biesheuvel
  0 siblings, 2 replies; 3+ messages in thread
From: Ard Biesheuvel @ 2016-03-30 13:18 UTC (permalink / raw)
  To: linux-arm-kernel

Since commit a7f8de168ace ("arm64: allow kernel Image to be loaded anywhere
in physical memory"), we need to take some extra care to ensure that the linear
region covers the kernel image if the [disjoint] placement of system RAM in
the physical address space spans a larger distance than we can cover with the
linear mapping.

A related issue, which is not currently handled, may occur if the kernel Image
is loaded high up in physical memory while the initrd is placed close to the
beginning of memory, in a way that does not allow the linear mapping to cover
both entirely. This will currently go undetected by relocate_initrd(), and will
crash the kernel as soon as it tries to access the initrd contents.

Rather than updating relocate_initrd() to deal with this case as well, this
series replaces it with
a) a new arm64 boot protocol requirement to place the kernel Image and the
   initrd within a reasonable distance of each other, so that the linear region
   issue described above can no longer occur,
b) code to add back the memory covered by the initrd if it was removed from the
   linear region due to a mem= kernel command line parameter, which is the use
   case relocate_initrd() was designed to address in the first place.

This way, we can remove relocate_initrd() entirely, and simply rely on the
placement of initrd by the bootloader. The only side effect is that the mem=
limit could be relaxed somewhat (i.e., by the size of the initrd) if the initrd
is placed outside of the memory that is covered by the mem= parameter. This is
entirely under the control of the bootloader, and if this is a concern, the
bootloader should pass mem= and initrd= arguments which are mutually consistent.

Ard Biesheuvel (2):
  arm64: add the initrd region to the linear mapping explicitly
  arm64: remove the now unneeded relocate_initrd()

 Documentation/arm64/booting.txt |  4 ++
 arch/arm64/kernel/setup.c       | 64 --------------------
 arch/arm64/mm/init.c            | 29 +++++++++
 3 files changed, 33 insertions(+), 64 deletions(-)

-- 
2.5.0

* [PATCH 1/2] arm64: add the initrd region to the linear mapping explicitly
  2016-03-30 13:18 [PATCH 0/2] arm64: restrict initrd placement to guarantee linear region coverage Ard Biesheuvel
@ 2016-03-30 13:18 ` Ard Biesheuvel
  2016-03-30 13:18 ` [PATCH 2/2] arm64: remove the now unneeded relocate_initrd() Ard Biesheuvel
  1 sibling, 0 replies; 3+ messages in thread
From: Ard Biesheuvel @ 2016-03-30 13:18 UTC (permalink / raw)
  To: linux-arm-kernel

Instead of going out of our way to relocate the initrd if it turns out
to occupy memory that is not covered by the linear mapping, just add the
initrd to the linear mapping. This puts the burden on the bootloader to
pass initrd= and mem= options that are mutually consistent.

Note that, since the placement of the linear region in the PA space is
also dependent on the placement of the kernel Image, which may reside
anywhere in memory, we may still end up with a situation where the initrd
and the kernel Image are simply too far apart to be covered by the linear
region.

Since we now leave it up to the bootloader to pass the initrd in memory
that is guaranteed to be accessible by the kernel, add a mention of this to
the arm64 boot protocol specification as well.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 Documentation/arm64/booting.txt |  4 +++
 arch/arm64/mm/init.c            | 29 ++++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 56d6d8b796db..8d0df62c3fe0 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -132,6 +132,10 @@ NOTE: versions prior to v4.6 cannot make use of memory below the
 physical offset of the Image so it is recommended that the Image be
 placed as close as possible to the start of system RAM.
 
+If an initrd/initramfs is passed to the kernel at boot, it must reside
+entirely within a 1 GB aligned physical memory window of up to 32 GB in
+size that fully covers the kernel Image as well.
+
 Any memory described to the kernel (even that below the start of the
 image) which is not marked as reserved from the kernel (e.g., with a
 memreserve region in the device tree) will be considered as available to
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 83ae8b5e5666..82ced5fa1e66 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -207,6 +207,35 @@ void __init arm64_memblock_init(void)
 		memblock_add(__pa(_text), (u64)(_end - _text));
 	}
 
+	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_start) {
+		/*
+		 * Add back the memory we just removed if the mem= limit
+		 * removed it and thereby left the initrd inaccessible via
+		 * the linear mapping; otherwise, this is a no-op.
+		 */
+		u64 base = initrd_start & PAGE_MASK;
+		u64 size = PAGE_ALIGN(initrd_end) - base;
+
+		/*
+		 * We can only add back the initrd memory if we don't end up
+		 * with more memory than we can address via the linear mapping.
+		 * It is up to the bootloader to position the kernel and the
+		 * initrd reasonably close to each other (i.e., within 32 GB of
+		 * each other) so that all granule/#levels combinations can
+		 * always access both.
+		 */
+		if (WARN(base < memblock_start_of_DRAM() ||
+			 base + size > memblock_start_of_DRAM() +
+				       linear_region_size,
+			"initrd not fully accessible via the linear mapping -- please check your bootloader ...\n")) {
+			initrd_start = 0;
+		} else {
+			memblock_remove(base, size); /* clear MEMBLOCK_ flags */
+			memblock_add(base, size);
+			memblock_reserve(base, size);
+		}
+	}
+
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
 		extern u16 memstart_offset_seed;
 		u64 range = linear_region_size -
-- 
2.5.0

* [PATCH 2/2] arm64: remove the now unneeded relocate_initrd()
  2016-03-30 13:18 [PATCH 0/2] arm64: restrict initrd placement to guarantee linear region coverage Ard Biesheuvel
  2016-03-30 13:18 ` [PATCH 1/2] arm64: add the initrd region to the linear mapping explicitly Ard Biesheuvel
@ 2016-03-30 13:18 ` Ard Biesheuvel
  1 sibling, 0 replies; 3+ messages in thread
From: Ard Biesheuvel @ 2016-03-30 13:18 UTC (permalink / raw)
  To: linux-arm-kernel

This removes the relocate_initrd() implementation and its invocation, which
are no longer needed now that the initrd is guaranteed to be covered by the
linear mapping.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/setup.c | 64 --------------------
 1 file changed, 64 deletions(-)

diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 9dc67769b6a4..7b85b1d6a6fb 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -224,69 +224,6 @@ static void __init request_standard_resources(void)
 	}
 }
 
-#ifdef CONFIG_BLK_DEV_INITRD
-/*
- * Relocate initrd if it is not completely within the linear mapping.
- * This would be the case if mem= cuts out all or part of it.
- */
-static void __init relocate_initrd(void)
-{
-	phys_addr_t orig_start = __virt_to_phys(initrd_start);
-	phys_addr_t orig_end = __virt_to_phys(initrd_end);
-	phys_addr_t ram_end = memblock_end_of_DRAM();
-	phys_addr_t new_start;
-	unsigned long size, to_free = 0;
-	void *dest;
-
-	if (orig_end <= ram_end)
-		return;
-
-	/*
-	 * Any of the original initrd which overlaps the linear map should
-	 * be freed after relocating.
-	 */
-	if (orig_start < ram_end)
-		to_free = ram_end - orig_start;
-
-	size = orig_end - orig_start;
-	if (!size)
-		return;
-
-	/* initrd needs to be relocated completely inside linear mapping */
-	new_start = memblock_find_in_range(0, PFN_PHYS(max_pfn),
-					   size, PAGE_SIZE);
-	if (!new_start)
-		panic("Cannot relocate initrd of size %ld\n", size);
-	memblock_reserve(new_start, size);
-
-	initrd_start = __phys_to_virt(new_start);
-	initrd_end   = initrd_start + size;
-
-	pr_info("Moving initrd from [%llx-%llx] to [%llx-%llx]\n",
-		orig_start, orig_start + size - 1,
-		new_start, new_start + size - 1);
-
-	dest = (void *)initrd_start;
-
-	if (to_free) {
-		memcpy(dest, (void *)__phys_to_virt(orig_start), to_free);
-		dest += to_free;
-	}
-
-	copy_from_early_mem(dest, orig_start + to_free, size - to_free);
-
-	if (to_free) {
-		pr_info("Freeing original RAMDISK from [%llx-%llx]\n",
-			orig_start, orig_start + to_free - 1);
-		memblock_free(orig_start, to_free);
-	}
-}
-#else
-static inline void __init relocate_initrd(void)
-{
-}
-#endif
-
 u64 __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = INVALID_HWID };
 
 void __init setup_arch(char **cmdline_p)
@@ -327,7 +264,6 @@ void __init setup_arch(char **cmdline_p)
 	acpi_boot_table_init();
 
 	paging_init();
-	relocate_initrd();
 
 	kasan_init();
 
-- 
2.5.0
