linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v2 0/7] arm64: relax Image placement rules
@ 2015-09-23  0:37 Ard Biesheuvel
  2015-09-23  0:37 ` [PATCH v2 1/7] of/fdt: make memblock minimum physical address arch configurable Ard Biesheuvel
                   ` (8 more replies)
  0 siblings, 9 replies; 22+ messages in thread
From: Ard Biesheuvel @ 2015-09-23  0:37 UTC (permalink / raw)
  To: linux-arm-kernel

This is a followup to the "arm64: update/clarify/relax Image and FDT placement
rules" series I sent a while ago:
(http://article.gmane.org/gmane.linux.ports.arm.kernel/407148)

That work has now been split into two series; this second series deals with
the physical and virtual placement of the kernel Image.

This series updates the mapping of the kernel Image and the linear mapping of
system memory to allow more freedom in the choice of placement, without affecting
either the accessibility of system RAM below the kernel Image or the mapping
efficiency (i.e., memory can always be mapped in 512 MB or 1 GB blocks).

Changes since v1:
- dropped somewhat unrelated patch #1 and patches #2 and #3 that have been
  merged separately
- rebased onto v4.2-rc3
- tweak the generic early_init_dt_add_memory_arch for our purposes rather than
  clone the implementation completely

Changes since original combined series:
- rebased onto v4.1-rc3
- dropped phys_offset_bias in favor of updating memstart_addr with the bias
  temporarily
- bootstrap the linear mapping of the base of RAM in addition to the linear
  mapping of the statically allocated page tables
- fixed a bug in the handling of the mem= kernel command line parameter.

Known issues:
- the mem= command line parameter works correctly now, but removes memory from
  the bottom first before clipping from the top, which may be undesirable since
  it may discard precious memory below the 4 GB boundary.

Patch #1 refactors the generic early_init_dt_add_memory_arch implementation to
allow the minimum memblock address to be overridden by the architecture.

Patch #2 changes the memblock_reserve logic so that unused page table
reservations are left unreserved in memblock.

Patch #3 refactors early_fixmap_init() so that we can reuse its core for
bootstrapping other memory mappings.

Patch #4 bootstraps the linear mapping explicitly. Up until now, this was done
implicitly due to the fact that the linear mapping starts at the base of the
kernel Image.

Patch #5 moves the mapping of the kernel Image outside of the linear mapping.

Patch #6 changes the attributes of the linear mapping to non-executable since
we don't execute code from it anymore.

Patch #7 allows the kernel to be loaded at any 2 MB aligned offset in physical
memory, by assigning PHYS_OFFSET based on the available memory and not based on
the physical address of the base of the kernel Image.

Ard Biesheuvel (7):
  of/fdt: make memblock minimum physical address arch configurable
  arm64: use more granular reservations for static page table
    allocations
  arm64: split off early mapping code from early_fixmap_init()
  arm64: mm: explicitly bootstrap the linear mapping
  arm64: move kernel mapping out of linear region
  arm64: map linear region as non-executable
  arm64: allow kernel Image to be loaded anywhere in physical memory

 Documentation/arm64/booting.txt   |  12 +-
 arch/arm64/include/asm/boot.h     |   7 +
 arch/arm64/include/asm/compiler.h |   2 +
 arch/arm64/include/asm/memory.h   |  27 ++-
 arch/arm64/kernel/head.S          |  18 +-
 arch/arm64/kernel/vmlinux.lds.S   |  40 +++-
 arch/arm64/mm/dump.c              |   3 +-
 arch/arm64/mm/init.c              |  56 ++++-
 arch/arm64/mm/mmu.c               | 230 +++++++++++++-------
 drivers/of/fdt.c                  |   5 +-
 10 files changed, 290 insertions(+), 110 deletions(-)

-- 
1.9.1

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 1/7] of/fdt: make memblock minimum physical address arch configurable
  2015-09-23  0:37 [PATCH v2 0/7] arm64: relax Image placement rules Ard Biesheuvel
@ 2015-09-23  0:37 ` Ard Biesheuvel
  2015-09-23  4:45   ` Mark Rutland
  2015-09-23 22:59   ` Rob Herring
  2015-09-23  0:37 ` [PATCH v2 2/7] arm64: use more granular reservations for static page table allocations Ard Biesheuvel
                   ` (7 subsequent siblings)
  8 siblings, 2 replies; 22+ messages in thread
From: Ard Biesheuvel @ 2015-09-23  0:37 UTC (permalink / raw)
  To: linux-arm-kernel

By default, early_init_dt_add_memory_arch() ignores memory below
the base of the kernel image since it won't be addressable via the
linear mapping. However, this is not appropriate anymore once we
decouple the kernel text mapping from the linear mapping, so archs
may want to drop the low limit entirely. So allow the minimum to be
overridden by setting MIN_MEMBLOCK_ADDR.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 drivers/of/fdt.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index 6e82bc42373b..5e7ef800a816 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -967,13 +967,16 @@ int __init early_init_dt_scan_chosen(unsigned long node, const char *uname,
 }
 
 #ifdef CONFIG_HAVE_MEMBLOCK
+#ifndef MIN_MEMBLOCK_ADDR
+#define MIN_MEMBLOCK_ADDR	__pa(PAGE_OFFSET)
+#endif
 #ifndef MAX_MEMBLOCK_ADDR
 #define MAX_MEMBLOCK_ADDR	((phys_addr_t)~0)
 #endif
 
 void __init __weak early_init_dt_add_memory_arch(u64 base, u64 size)
 {
-	const u64 phys_offset = __pa(PAGE_OFFSET);
+	const u64 phys_offset = MIN_MEMBLOCK_ADDR;
 
 	if (!PAGE_ALIGNED(base)) {
 		if (size < PAGE_SIZE - (base & ~PAGE_MASK)) {
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v2 2/7] arm64: use more granular reservations for static page table allocations
  2015-09-23  0:37 [PATCH v2 0/7] arm64: relax Image placement rules Ard Biesheuvel
  2015-09-23  0:37 ` [PATCH v2 1/7] of/fdt: make memblock minimum physical address arch configurable Ard Biesheuvel
@ 2015-09-23  0:37 ` Ard Biesheuvel
  2015-09-23  0:37 ` [PATCH v2 3/7] arm64: split off early mapping code from early_fixmap_init() Ard Biesheuvel
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Ard Biesheuvel @ 2015-09-23  0:37 UTC (permalink / raw)
  To: linux-arm-kernel

Before introducing new statically allocated page tables and increasing
their alignment in subsequent patches, update the reservation logic
so that only pages that are in actual use end up as reserved with
memblock.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/init.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index f5c0680d17d9..b9390eb1e29f 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -37,6 +37,7 @@
 
 #include <asm/fixmap.h>
 #include <asm/memory.h>
+#include <asm/mmu_context.h>
 #include <asm/sections.h>
 #include <asm/setup.h>
 #include <asm/sizes.h>
@@ -164,11 +165,13 @@ void __init arm64_memblock_init(void)
 	 * Register the kernel text, kernel data, initrd, and initial
 	 * pagetables with memblock.
 	 */
-	memblock_reserve(__pa(_text), _end - _text);
+	memblock_reserve(__pa(_text), __bss_stop - _text);
 #ifdef CONFIG_BLK_DEV_INITRD
 	if (initrd_start)
 		memblock_reserve(__virt_to_phys(initrd_start), initrd_end - initrd_start);
 #endif
+	memblock_reserve(__pa(idmap_pg_dir), IDMAP_DIR_SIZE);
+	memblock_reserve(__pa(swapper_pg_dir), SWAPPER_DIR_SIZE);
 
 	early_init_fdt_scan_reserved_mem();
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v2 3/7] arm64: split off early mapping code from early_fixmap_init()
  2015-09-23  0:37 [PATCH v2 0/7] arm64: relax Image placement rules Ard Biesheuvel
  2015-09-23  0:37 ` [PATCH v2 1/7] of/fdt: make memblock minimum physical address arch configurable Ard Biesheuvel
  2015-09-23  0:37 ` [PATCH v2 2/7] arm64: use more granular reservations for static page table allocations Ard Biesheuvel
@ 2015-09-23  0:37 ` Ard Biesheuvel
  2015-09-23  0:37 ` [PATCH v2 4/7] arm64: mm: explicitly bootstrap the linear mapping Ard Biesheuvel
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Ard Biesheuvel @ 2015-09-23  0:37 UTC (permalink / raw)
  To: linux-arm-kernel

This splits off and generalises the population of the statically
allocated fixmap page tables so that we may reuse it later for
the linear mapping once we move the kernel text mapping out of it.

This also involves taking into account that table entries at any of
the levels we are populating may have been populated already, since
the fixmap mapping might no longer be disjoint from other early
mappings all the way up to the pgd level.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/compiler.h |  2 +
 arch/arm64/kernel/vmlinux.lds.S   | 12 ++--
 arch/arm64/mm/mmu.c               | 60 ++++++++++++++------
 3 files changed, 51 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
index ee35fd0f2236..dd342af63673 100644
--- a/arch/arm64/include/asm/compiler.h
+++ b/arch/arm64/include/asm/compiler.h
@@ -27,4 +27,6 @@
  */
 #define __asmeq(x, y)  ".ifnc " x "," y " ; .err ; .endif\n\t"
 
+#define __pgdir		__attribute__((section(".pgdir"),aligned(PAGE_SIZE)))
+
 #endif	/* __ASM_COMPILER_H */
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 98073332e2d0..ceec4def354b 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -160,11 +160,13 @@ SECTIONS
 
 	BSS_SECTION(0, 0, 0)
 
-	. = ALIGN(PAGE_SIZE);
-	idmap_pg_dir = .;
-	. += IDMAP_DIR_SIZE;
-	swapper_pg_dir = .;
-	. += SWAPPER_DIR_SIZE;
+	.pgdir (NOLOAD) : ALIGN(PAGE_SIZE) {
+		idmap_pg_dir = .;
+		. += IDMAP_DIR_SIZE;
+		swapper_pg_dir = .;
+		. += SWAPPER_DIR_SIZE;
+		*(.pgdir)
+	}
 
 	_end = .;
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 9211b8527f25..5af804334697 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -342,6 +342,44 @@ static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
 }
 #endif
 
+struct bootstrap_pgtables {
+	pte_t	pte[PTRS_PER_PTE];
+	pmd_t	pmd[PTRS_PER_PMD > 1 ? PTRS_PER_PMD : 0];
+	pud_t	pud[PTRS_PER_PUD > 1 ? PTRS_PER_PUD : 0];
+};
+
+static void __init bootstrap_early_mapping(unsigned long addr,
+					   struct bootstrap_pgtables *reg,
+					   bool pte_level)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pgd = pgd_offset_k(addr);
+	if (pgd_none(*pgd)) {
+		clear_page(reg->pud);
+		memblock_reserve(__pa(reg->pud), PAGE_SIZE);
+		pgd_populate(&init_mm, pgd, reg->pud);
+	}
+	pud = pud_offset(pgd, addr);
+	if (pud_none(*pud)) {
+		clear_page(reg->pmd);
+		memblock_reserve(__pa(reg->pmd), PAGE_SIZE);
+		pud_populate(&init_mm, pud, reg->pmd);
+	}
+
+	if (!pte_level)
+		return;
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd)) {
+		clear_page(reg->pte);
+		memblock_reserve(__pa(reg->pte), PAGE_SIZE);
+		pmd_populate_kernel(&init_mm, pmd, reg->pte);
+	}
+}
+
 static void __init map_mem(void)
 {
 	struct memblock_region *reg;
@@ -544,14 +582,6 @@ void vmemmap_free(unsigned long start, unsigned long end)
 }
 #endif	/* CONFIG_SPARSEMEM_VMEMMAP */
 
-static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;
-#if CONFIG_PGTABLE_LEVELS > 2
-static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss;
-#endif
-#if CONFIG_PGTABLE_LEVELS > 3
-static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss;
-#endif
-
 static inline pud_t * fixmap_pud(unsigned long addr)
 {
 	pgd_t *pgd = pgd_offset_k(addr);
@@ -581,21 +611,15 @@ static inline pte_t * fixmap_pte(unsigned long addr)
 
 void __init early_fixmap_init(void)
 {
-	pgd_t *pgd;
-	pud_t *pud;
+	static struct bootstrap_pgtables fixmap_bs_pgtables __pgdir;
 	pmd_t *pmd;
-	unsigned long addr = FIXADDR_START;
 
-	pgd = pgd_offset_k(addr);
-	pgd_populate(&init_mm, pgd, bm_pud);
-	pud = pud_offset(pgd, addr);
-	pud_populate(&init_mm, pud, bm_pmd);
-	pmd = pmd_offset(pud, addr);
-	pmd_populate_kernel(&init_mm, pmd, bm_pte);
+	bootstrap_early_mapping(FIXADDR_START, &fixmap_bs_pgtables, true);
+	pmd = fixmap_pmd(FIXADDR_START);
 
 	/*
 	 * The boot-ioremap range spans multiple pmds, for which
-	 * we are not preparted:
+	 * we are not prepared:
 	 */
 	BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
 		     != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v2 4/7] arm64: mm: explicitly bootstrap the linear mapping
  2015-09-23  0:37 [PATCH v2 0/7] arm64: relax Image placement rules Ard Biesheuvel
                   ` (2 preceding siblings ...)
  2015-09-23  0:37 ` [PATCH v2 3/7] arm64: split off early mapping code from early_fixmap_init() Ard Biesheuvel
@ 2015-09-23  0:37 ` Ard Biesheuvel
  2015-09-23  0:37 ` [PATCH v2 5/7] arm64: move kernel mapping out of linear region Ard Biesheuvel
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Ard Biesheuvel @ 2015-09-23  0:37 UTC (permalink / raw)
  To: linux-arm-kernel

In preparation for moving the kernel text out of the linear
mapping, ensure that the part of the kernel Image that contains
the statically allocated page tables is made accessible via the
linear mapping before performing the actual mapping of all of
memory. This is needed by the normal mapping routines, which rely
on the linear mapping to walk the page tables while manipulating
them.

In addition, explicitly map the start of DRAM and set the memblock
limit so that all early memblock allocations are done from a region
that is guaranteed to be mapped.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/vmlinux.lds.S |  19 +++-
 arch/arm64/mm/mmu.c             | 109 ++++++++++++++------
 2 files changed, 98 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index ceec4def354b..0b82c4c203fb 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -68,6 +68,18 @@ PECOFF_FILE_ALIGNMENT = 0x200;
 #define ALIGN_DEBUG_RO_MIN(min)		. = ALIGN(min);
 #endif
 
+/*
+ * The pgdir region needs to be mappable using a single PMD or PUD sized region,
+ * so it should not cross a 512 MB or 1 GB alignment boundary, respectively
+ * (depending on page size). So align to a power-of-2 upper bound of the size
+ * of the entire __pgdir section.
+ */
+#if CONFIG_ARM64_PGTABLE_LEVELS == 2
+#define PGDIR_ALIGN	(8 * PAGE_SIZE)
+#else
+#define PGDIR_ALIGN	(16 * PAGE_SIZE)
+#endif
+
 SECTIONS
 {
 	/*
@@ -160,7 +172,7 @@ SECTIONS
 
 	BSS_SECTION(0, 0, 0)
 
-	.pgdir (NOLOAD) : ALIGN(PAGE_SIZE) {
+	.pgdir (NOLOAD) : ALIGN(PGDIR_ALIGN) {
 		idmap_pg_dir = .;
 		. += IDMAP_DIR_SIZE;
 		swapper_pg_dir = .;
@@ -185,6 +197,11 @@ ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
 	"ID map text too big or misaligned")
 
 /*
+ * Check that the chosen PGDIR_ALIGN value is sufficient.
+ */
+ASSERT(SIZEOF(.pgdir) <= ALIGNOF(.pgdir), ".pgdir size exceeds its alignment")
+
+/*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
 ASSERT(_text == (PAGE_OFFSET + TEXT_OFFSET), "HEAD is misaligned")
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5af804334697..3f99cf1aaa0d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -380,26 +380,92 @@ static void __init bootstrap_early_mapping(unsigned long addr,
 	}
 }
 
-static void __init map_mem(void)
+/*
+ * Bootstrap a memory mapping in such a way that it does not require allocation
+ * of page tables beyond the ones that were allocated statically by
+ * bootstrap_early_mapping().
+ * This is done by finding the memblock that covers pa_base, and intersecting
+ * it with the naturally aligned 512 MB or 1 GB region (depending on page size)
+ * that also covers pa_base, and (on 4k pages) rounding it to section size.
+ */
+static unsigned long __init bootstrap_region(struct bootstrap_pgtables *reg,
+					     phys_addr_t pa_base,
+					     unsigned long va_offset)
 {
-	struct memblock_region *reg;
-	phys_addr_t limit;
+	unsigned long va_base = __phys_to_virt(pa_base) + va_offset;
+	struct memblock_region *mr;
+
+	bootstrap_early_mapping(va_base, reg,
+				IS_ENABLED(CONFIG_ARM64_64K_PAGES));
+
+	for_each_memblock(memory, mr) {
+		phys_addr_t start = mr->base;
+		phys_addr_t end = start + mr->size;
+		unsigned long vstart, vend;
+
+		if (start > pa_base || end <= pa_base)
+			continue;
+
+#ifdef CONFIG_ARM64_64K_PAGES
+		/* clip the region to PMD size */
+		vstart = max(va_base & PMD_MASK,
+			     round_up(__phys_to_virt(start) + va_offset,
+				      PAGE_SIZE));
+		vend = min(round_up(va_base + 1, PMD_SIZE),
+			   round_down(__phys_to_virt(end) + va_offset,
+				      PAGE_SIZE));
+#else
+		/* clip the region to PUD size */
+		vstart = max(va_base & PUD_MASK,
+			     round_up(__phys_to_virt(start) + va_offset,
+				      PMD_SIZE));
+		vend = min(round_up(va_base + 1, PUD_SIZE),
+			   round_down(__phys_to_virt(end) + va_offset,
+				      PMD_SIZE));
+#endif
+
+		create_mapping(__pa(vstart - va_offset), vstart, vend - vstart,
+			       PAGE_KERNEL_EXEC);
+
+		return vend;
+	}
+	return 0;
+}
+
+/*
+ * Bootstrap the linear ranges that cover the start of DRAM and swapper_pg_dir
+ * so that the statically allocated page tables as well as newly allocated ones
+ * are accessible via the linear mapping.
+ */
+static void __init bootstrap_linear_mapping(unsigned long va_offset)
+{
+	static struct bootstrap_pgtables __pgdir bs_pgdir_low, bs_pgdir_high;
+	unsigned long vend;
+
+	/* Bootstrap the mapping for the beginning of RAM */
+	vend = bootstrap_region(&bs_pgdir_low, memblock_start_of_DRAM(),
+				va_offset);
+	BUG_ON(vend == 0);
 
 	/*
 	 * Temporarily limit the memblock range. We need to do this as
 	 * create_mapping requires puds, pmds and ptes to be allocated from
-	 * memory addressable from the initial direct kernel mapping.
-	 *
-	 * The initial direct kernel mapping, located at swapper_pg_dir, gives
-	 * us PUD_SIZE (4K pages) or PMD_SIZE (64K pages) memory starting from
-	 * PHYS_OFFSET (which must be aligned to 2MB as per
-	 * Documentation/arm64/booting.txt).
+	 * memory addressable from the early linear mapping.
 	 */
-	if (IS_ENABLED(CONFIG_ARM64_64K_PAGES))
-		limit = PHYS_OFFSET + PMD_SIZE;
-	else
-		limit = PHYS_OFFSET + PUD_SIZE;
-	memblock_set_current_limit(limit);
+	memblock_set_current_limit(__pa(vend - va_offset));
+
+	/* Bootstrap the linear mapping of the kernel image */
+	vend = bootstrap_region(&bs_pgdir_high, __pa(swapper_pg_dir),
+				va_offset);
+	if (vend == 0)
+		panic("Kernel image not covered by memblock");
+}
+
+static void __init map_mem(void)
+{
+	struct memblock_region *reg;
+
+	bootstrap_linear_mapping(0);
 
 	/* map all the memory banks */
 	for_each_memblock(memory, reg) {
@@ -409,21 +475,6 @@ static void __init map_mem(void)
 		if (start >= end)
 			break;
 
-#ifndef CONFIG_ARM64_64K_PAGES
-		/*
-		 * For the first memory bank align the start address and
-		 * current memblock limit to prevent create_mapping() from
-		 * allocating pte page tables from unmapped memory.
-		 * When 64K pages are enabled, the pte page table for the
-		 * first PGDIR_SIZE is already present in swapper_pg_dir.
-		 */
-		if (start < limit)
-			start = ALIGN(start, PMD_SIZE);
-		if (end < limit) {
-			limit = end & PMD_MASK;
-			memblock_set_current_limit(limit);
-		}
-#endif
 		__map_memblock(start, end);
 	}
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v2 5/7] arm64: move kernel mapping out of linear region
  2015-09-23  0:37 [PATCH v2 0/7] arm64: relax Image placement rules Ard Biesheuvel
                   ` (3 preceding siblings ...)
  2015-09-23  0:37 ` [PATCH v2 4/7] arm64: mm: explicitly bootstrap the linear mapping Ard Biesheuvel
@ 2015-09-23  0:37 ` Ard Biesheuvel
  2015-09-23  0:37 ` [PATCH v2 6/7] arm64: map linear region as non-executable Ard Biesheuvel
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Ard Biesheuvel @ 2015-09-23  0:37 UTC (permalink / raw)
  To: linux-arm-kernel

This moves the primary mapping of the kernel Image out of
the linear region. This is a preparatory step towards allowing
the kernel Image to reside anywhere in physical memory without
affecting the ability to map all of it efficiently.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/boot.h   |  7 +++++++
 arch/arm64/include/asm/memory.h | 19 ++++++++++++++++---
 arch/arm64/kernel/head.S        | 18 +++++++++++++-----
 arch/arm64/kernel/vmlinux.lds.S | 11 +++++++++--
 arch/arm64/mm/dump.c            |  3 ++-
 arch/arm64/mm/mmu.c             | 10 +++++++++-
 6 files changed, 56 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/boot.h b/arch/arm64/include/asm/boot.h
index 81151b67b26b..092d1096ce9a 100644
--- a/arch/arm64/include/asm/boot.h
+++ b/arch/arm64/include/asm/boot.h
@@ -11,4 +11,11 @@
 #define MIN_FDT_ALIGN		8
 #define MAX_FDT_SIZE		SZ_2M
 
+/*
+ * arm64 requires the kernel image to be 2 MB aligned and
+ * not exceed 64 MB in size.
+ */
+#define MIN_KIMG_ALIGN		SZ_2M
+#define MAX_KIMG_SIZE		SZ_64M
+
 #endif
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 6b4c3ad75a2a..bdea5b4c7be9 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -24,6 +24,7 @@
 #include <linux/compiler.h>
 #include <linux/const.h>
 #include <linux/types.h>
+#include <asm/boot.h>
 #include <asm/sizes.h>
 
 /*
@@ -39,7 +40,12 @@
 #define PCI_IO_SIZE		SZ_16M
 
 /*
- * PAGE_OFFSET - the virtual address of the start of the kernel image (top
+ * Offset below PAGE_OFFSET where to map the kernel Image.
+ */
+#define KIMAGE_OFFSET		MAX_KIMG_SIZE
+
+/*
+ * PAGE_OFFSET - the virtual address of the base of the linear mapping (top
  *		 (VA_BITS - 1))
  * VA_BITS - the maximum number of bits for virtual addresses.
  * TASK_SIZE - the maximum size of a user space task.
@@ -49,7 +55,8 @@
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) << (VA_BITS - 1))
-#define MODULES_END		(PAGE_OFFSET)
+#define KIMAGE_VADDR		(PAGE_OFFSET - KIMAGE_OFFSET)
+#define MODULES_END		KIMAGE_VADDR
 #define MODULES_VADDR		(MODULES_END - SZ_64M)
 #define PCI_IO_END		(MODULES_VADDR - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
@@ -77,7 +84,11 @@
  * private definitions which should NOT be used outside memory.h
  * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
  */
-#define __virt_to_phys(x)	(((phys_addr_t)(x) - PAGE_OFFSET + PHYS_OFFSET))
+#define __virt_to_phys(x) ({						\
+	long __x = (long)(x) - PAGE_OFFSET;				\
+	__x >= 0 ? (phys_addr_t)(__x + PHYS_OFFSET) : 			\
+		   (phys_addr_t)(__x + PHYS_OFFSET + kernel_va_offset); })
+
 #define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET + PAGE_OFFSET))
 
 /*
@@ -107,6 +118,8 @@ extern phys_addr_t		memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
 #define PHYS_OFFSET		({ memstart_addr; })
 
+extern u64 kernel_va_offset;
+
 /*
  * The maximum physical address that the linear direct mapping
  * of system RAM can cover. (PAGE_OFFSET can be interpreted as
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index a055be6125cf..50df839c754a 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -36,8 +36,6 @@
 #include <asm/page.h>
 #include <asm/virt.h>
 
-#define __PHYS_OFFSET	(KERNEL_START - TEXT_OFFSET)
-
 #if (TEXT_OFFSET & 0xfff) != 0
 #error TEXT_OFFSET must be at least 4KB aligned
 #elif (PAGE_OFFSET & 0x1fffff) != 0
@@ -58,6 +56,8 @@
 
 #define KERNEL_START	_text
 #define KERNEL_END	_end
+#define KERNEL_BASE	(KERNEL_START - TEXT_OFFSET)
+
 
 /*
  * Initial memory map attributes.
@@ -230,7 +230,15 @@ section_table:
 ENTRY(stext)
 	bl	preserve_boot_args
 	bl	el2_setup			// Drop to EL1, w20=cpu_boot_mode
-	adrp	x24, __PHYS_OFFSET
+
+	/*
+	 * Before the linear mapping has been set up, __va() translations will
+	 * not produce usable virtual addresses unless we tweak PHYS_OFFSET to
+	 * compensate for the offset between the kernel mapping and the base of
+	 * the linear mapping. We will undo this in map_mem().
+	 */
+	adrp	x24, KERNEL_BASE + KIMAGE_OFFSET
+
 	bl	set_cpu_boot_mode_flag
 	bl	__create_page_tables		// x25=TTBR0, x26=TTBR1
 	/*
@@ -406,10 +414,10 @@ __create_page_tables:
 	 * Map the kernel image (starting with PHYS_OFFSET).
 	 */
 	mov	x0, x26				// swapper_pg_dir
-	mov	x5, #PAGE_OFFSET
+	ldr	x5, =KERNEL_BASE
 	create_pgd_entry x0, x5, x3, x6
 	ldr	x6, =KERNEL_END			// __va(KERNEL_END)
-	mov	x3, x24				// phys offset
+	adrp	x3, KERNEL_BASE			// real PHYS_OFFSET
 	create_block_map x0, x7, x3, x5, x6
 
 	/*
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 0b82c4c203fb..1f6d79eeda06 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -6,6 +6,7 @@
 
 #include <asm-generic/vmlinux.lds.h>
 #include <asm/thread_info.h>
+#include <asm/boot.h>
 #include <asm/memory.h>
 #include <asm/page.h>
 #include <asm/pgtable.h>
@@ -96,7 +97,7 @@ SECTIONS
 		*(.discard.*)
 	}
 
-	. = PAGE_OFFSET + TEXT_OFFSET;
+	. = KIMAGE_VADDR + TEXT_OFFSET;
 
 	.head.text : {
 		_text = .;
@@ -204,4 +205,10 @@ ASSERT(SIZEOF(.pgdir) <= ALIGNOF(.pgdir), ".pgdir size exceeds its alignment")
 /*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
-ASSERT(_text == (PAGE_OFFSET + TEXT_OFFSET), "HEAD is misaligned")
+ASSERT(_text == (KIMAGE_VADDR + TEXT_OFFSET), "HEAD is misaligned")
+
+/*
+ * Make sure the memory footprint of the kernel Image does not exceed the limit.
+ */
+ASSERT(_end - _text + TEXT_OFFSET <= MAX_KIMG_SIZE,
+	"Kernel Image memory footprint exceeds MAX_KIMG_SIZE")
diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index f3d6221cd5bd..774f80dc877f 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -63,7 +63,8 @@ static struct addr_marker address_markers[] = {
 	{ PCI_IO_END,		"PCI I/O end" },
 	{ MODULES_VADDR,	"Modules start" },
 	{ MODULES_END,		"Modules end" },
-	{ PAGE_OFFSET,		"Kernel Mapping" },
+	{ KIMAGE_VADDR,		"Kernel Mapping" },
+	{ PAGE_OFFSET,		"Linear Mapping" },
 	{ -1,			NULL },
 };
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 3f99cf1aaa0d..91a619482cc2 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -50,6 +50,8 @@ u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
 struct page *empty_zero_page;
 EXPORT_SYMBOL(empty_zero_page);
 
+u64 kernel_va_offset __read_mostly;
+
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 			      unsigned long size, pgprot_t vma_prot)
 {
@@ -436,6 +438,9 @@ static unsigned long __init bootstrap_region(struct bootstrap_pgtables *reg,
  * Bootstrap the linear ranges that cover the start of DRAM and swapper_pg_dir
  * so that the statically allocated page tables as well as newly allocated ones
  * are accessible via the linear mapping.
+ * Since at this point, PHYS_OFFSET is still biased to redirect __va()
+ * translations into the kernel text mapping, we need to apply an
+ * explicit va_offset to calculate linear virtual addresses.
  */
 static void __init bootstrap_linear_mapping(unsigned long va_offset)
 {
@@ -465,7 +470,10 @@ static void __init map_mem(void)
 {
 	struct memblock_region *reg;
 
-	bootstrap_linear_mapping(0);
+	bootstrap_linear_mapping(KIMAGE_OFFSET);
+
+	kernel_va_offset = KIMAGE_OFFSET;
+	memstart_addr -= KIMAGE_OFFSET;
 
 	/* map all the memory banks */
 	for_each_memblock(memory, reg) {
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v2 6/7] arm64: map linear region as non-executable
  2015-09-23  0:37 [PATCH v2 0/7] arm64: relax Image placement rules Ard Biesheuvel
                   ` (4 preceding siblings ...)
  2015-09-23  0:37 ` [PATCH v2 5/7] arm64: move kernel mapping out of linear region Ard Biesheuvel
@ 2015-09-23  0:37 ` Ard Biesheuvel
  2015-09-23  0:37 ` [PATCH v2 7/7] arm64: allow kernel Image to be loaded anywhere in physical memory Ard Biesheuvel
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Ard Biesheuvel @ 2015-09-23  0:37 UTC (permalink / raw)
  To: linux-arm-kernel

Now that the kernel text has been moved out of the linear region,
there is no longer a reason to map the linear region as executable.
This also allows us to completely get rid of the __map_memblock()
variant that maps only some of it executable when CONFIG_DEBUG_RODATA
is selected.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/mmu.c | 41 +-------------------
 1 file changed, 2 insertions(+), 39 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 91a619482cc2..4a1c9d0769f2 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -302,47 +302,10 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
 				phys, virt, size, prot, late_alloc);
 }
 
-#ifdef CONFIG_DEBUG_RODATA
 static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
 {
-	/*
-	 * Set up the executable regions using the existing section mappings
-	 * for now. This will get more fine grained later once all memory
-	 * is mapped
-	 */
-	unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
-	unsigned long kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
-
-	if (end < kernel_x_start) {
-		create_mapping(start, __phys_to_virt(start),
-			end - start, PAGE_KERNEL);
-	} else if (start >= kernel_x_end) {
-		create_mapping(start, __phys_to_virt(start),
-			end - start, PAGE_KERNEL);
-	} else {
-		if (start < kernel_x_start)
-			create_mapping(start, __phys_to_virt(start),
-				kernel_x_start - start,
-				PAGE_KERNEL);
-		create_mapping(kernel_x_start,
-				__phys_to_virt(kernel_x_start),
-				kernel_x_end - kernel_x_start,
-				PAGE_KERNEL_EXEC);
-		if (kernel_x_end < end)
-			create_mapping(kernel_x_end,
-				__phys_to_virt(kernel_x_end),
-				end - kernel_x_end,
-				PAGE_KERNEL);
-	}
-
-}
-#else
-static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
-{
-	create_mapping(start, __phys_to_virt(start), end - start,
-			PAGE_KERNEL_EXEC);
+	create_mapping(start, __phys_to_virt(start), end - start, PAGE_KERNEL);
 }
-#endif
 
 struct bootstrap_pgtables {
 	pte_t	pte[PTRS_PER_PTE];
@@ -427,7 +390,7 @@ static unsigned long __init bootstrap_region(struct bootstrap_pgtables *reg,
 #endif
 
 		create_mapping(__pa(vstart - va_offset), vstart, vend - vstart,
-			       PAGE_KERNEL_EXEC);
+			       PAGE_KERNEL);
 
 		return vend;
 	}
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v2 7/7] arm64: allow kernel Image to be loaded anywhere in physical memory
  2015-09-23  0:37 [PATCH v2 0/7] arm64: relax Image placement rules Ard Biesheuvel
                   ` (5 preceding siblings ...)
  2015-09-23  0:37 ` [PATCH v2 6/7] arm64: map linear region as non-executable Ard Biesheuvel
@ 2015-09-23  0:37 ` Ard Biesheuvel
  2015-10-14 11:30   ` James Morse
  2015-09-24 16:37 ` [PATCH v2 0/7] arm64: relax Image placement rules Suzuki K. Poulose
  2015-10-13 17:07 ` Catalin Marinas
  8 siblings, 1 reply; 22+ messages in thread
From: Ard Biesheuvel @ 2015-09-23  0:37 UTC (permalink / raw)
  To: linux-arm-kernel

This relaxes the kernel Image placement requirements, so that it
may be placed at any 2 MB aligned offset in physical memory.

This is accomplished by ignoring PHYS_OFFSET when installing
memblocks, and accounting for the apparent virtual offset of
the kernel Image (in addition to the 64 MB that it is moved
below PAGE_OFFSET). As a result, virtual address references
below PAGE_OFFSET are correctly mapped onto physical references
into the kernel Image regardless of where it sits in memory.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 Documentation/arm64/booting.txt | 12 ++---
 arch/arm64/include/asm/memory.h |  8 ++-
 arch/arm64/mm/init.c            | 51 +++++++++++++++++++-
 arch/arm64/mm/mmu.c             | 30 ++++++++++--
 4 files changed, 86 insertions(+), 15 deletions(-)

diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 7d9d3c2286b2..baf207acd6dd 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -112,14 +112,14 @@ Header notes:
   depending on selected features, and is effectively unbound.
 
 The Image must be placed text_offset bytes from a 2MB aligned base
-address near the start of usable system RAM and called there. Memory
-below that base address is currently unusable by Linux, and therefore it
-is strongly recommended that this location is the start of system RAM.
-The region between the 2 MB aligned base address and the start of the
-image has no special significance to the kernel, and may be used for
-other purposes.
+address anywhere in usable system RAM and called there. The region
+between the 2 MB aligned base address and the start of the image has no
+special significance to the kernel, and may be used for other purposes.
 At least image_size bytes from the start of the image must be free for
 use by the kernel.
+NOTE: versions prior to v4.4 cannot make use of memory below the
+physical offset of the Image so it is recommended that the Image be
+placed as close as possible to the start of system RAM.
 
 Any memory described to the kernel (even that below the start of the
 image) which is not marked as reserved from the kernel (e.g., with a
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index bdea5b4c7be9..598661b268cc 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -121,12 +121,10 @@ extern phys_addr_t		memstart_addr;
 extern u64 kernel_va_offset;
 
 /*
- * The maximum physical address that the linear direct mapping
- * of system RAM can cover. (PAGE_OFFSET can be interpreted as
- * a 2's complement signed quantity and negated to derive the
- * maximum size of the linear mapping.)
+ * Allow all memory at the discovery stage. We will clip it later.
  */
-#define MAX_MEMBLOCK_ADDR	({ memstart_addr - PAGE_OFFSET - 1; })
+#define MIN_MEMBLOCK_ADDR	0
+#define MAX_MEMBLOCK_ADDR	U64_MAX
 
 /*
  * PFNs are used to describe any physical page; this means
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index b9390eb1e29f..d3abc3555623 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -35,6 +35,7 @@
 #include <linux/efi.h>
 #include <linux/swiotlb.h>
 
+#include <asm/boot.h>
 #include <asm/fixmap.h>
 #include <asm/memory.h>
 #include <asm/mmu_context.h>
@@ -157,9 +158,57 @@ static int __init early_mem(char *p)
 }
 early_param("mem", early_mem);
 
+static void enforce_memory_limit(void)
+{
+	const phys_addr_t kbase = round_down(__pa(_text), MIN_KIMG_ALIGN);
+	u64 to_remove = memblock_phys_mem_size() - memory_limit;
+	phys_addr_t max_addr = 0;
+	struct memblock_region *r;
+
+	if (memory_limit == (phys_addr_t)ULLONG_MAX)
+		return;
+
+	/*
+	 * The kernel may be high up in physical memory, so try to apply the
+	 * limit below the kernel first, and only let the generic handling
+	 * take over if it turns out we haven't clipped enough memory yet.
+	 */
+	for_each_memblock(memory, r) {
+		if (r->base + r->size > kbase) {
+			u64 rem = min(to_remove, kbase - r->base);
+
+			max_addr = r->base + rem;
+			to_remove -= rem;
+			break;
+		}
+		if (to_remove <= r->size) {
+			max_addr = r->base + to_remove;
+			to_remove = 0;
+			break;
+		}
+		to_remove -= r->size;
+	}
+
+	/* truncate both memory and reserved regions */
+	memblock_remove_range(&memblock.memory, 0, max_addr);
+	memblock_remove_range(&memblock.reserved, 0, max_addr);
+
+	if (to_remove)
+		memblock_enforce_memory_limit(memory_limit);
+}
+
 void __init arm64_memblock_init(void)
 {
-	memblock_enforce_memory_limit(memory_limit);
+	/*
+	 * Remove the memory that we will not be able to cover
+	 * with the linear mapping.
+	 */
+	const s64 linear_region_size = -(s64)PAGE_OFFSET;
+
+	memblock_remove(round_down(memblock_start_of_DRAM(), SZ_1G) +
+			linear_region_size, ULLONG_MAX);
+
+	enforce_memory_limit();
 
 	/*
 	 * Register the kernel text, kernel data, initrd, and initial
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 4a1c9d0769f2..675757c01eff 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -21,6 +21,7 @@
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/init.h>
+#include <linux/initrd.h>
 #include <linux/libfdt.h>
 #include <linux/mman.h>
 #include <linux/nodemask.h>
@@ -432,11 +433,34 @@ static void __init bootstrap_linear_mapping(unsigned long va_offset)
 static void __init map_mem(void)
 {
 	struct memblock_region *reg;
+	u64 new_memstart_addr;
+	u64 new_va_offset;
 
-	bootstrap_linear_mapping(KIMAGE_OFFSET);
+	/*
+	 * Select a suitable value for the base of physical memory.
+	 * This should be equal to or below the lowest usable physical
+	 * memory address, and aligned to PUD/PMD size so that we can map
+	 * it efficiently.
+	 */
+	new_memstart_addr = round_down(memblock_start_of_DRAM(), SZ_1G);
+
+	/*
+	 * Calculate the offset between the kernel text mapping that exists
+	 * outside of the linear mapping, and its mapping in the linear region.
+	 */
+	new_va_offset = memstart_addr - new_memstart_addr;
+
+	bootstrap_linear_mapping(new_va_offset);
+
+	kernel_va_offset = new_va_offset;
+
+	/* Recalculate virtual addresses of initrd region */
+	if (initrd_start) {
+		initrd_start += new_va_offset;
+		initrd_end += new_va_offset;
+	}
 
-	kernel_va_offset = KIMAGE_OFFSET;
-	memstart_addr -= KIMAGE_OFFSET;
+	memstart_addr = new_memstart_addr;
 
 	/* map all the memory banks */
 	for_each_memblock(memory, reg) {
-- 
1.9.1

* [PATCH v2 1/7] of/fdt: make memblock minimum physical address arch configurable
  2015-09-23  0:37 ` [PATCH v2 1/7] of/fdt: make memblock minimum physical address arch configurable Ard Biesheuvel
@ 2015-09-23  4:45   ` Mark Rutland
  2015-09-23 22:59   ` Rob Herring
  1 sibling, 0 replies; 22+ messages in thread
From: Mark Rutland @ 2015-09-23  4:45 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Sep 23, 2015 at 01:37:37AM +0100, Ard Biesheuvel wrote:
> By default, early_init_dt_add_memory_arch() ignores memory below
> the base of the kernel image since it won't be addressable via the
> linear mapping. However, this is not appropriate anymore once we
> decouple the kernel text mapping from the linear mapping, so archs
> may want to drop the low limit entirely. So allow the minimum to be
> overridden by setting MIN_MEMBLOCK_ADDR.
> 
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

As it's analogous to MAX_MEMBLOCK_ADDR, this makes sense to me.

FWIW:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  drivers/of/fdt.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
> index 6e82bc42373b..5e7ef800a816 100644
> --- a/drivers/of/fdt.c
> +++ b/drivers/of/fdt.c
> @@ -967,13 +967,16 @@ int __init early_init_dt_scan_chosen(unsigned long node, const char *uname,
>  }
>  
>  #ifdef CONFIG_HAVE_MEMBLOCK
> +#ifndef MIN_MEMBLOCK_ADDR
> +#define MIN_MEMBLOCK_ADDR	__pa(PAGE_OFFSET)
> +#endif
>  #ifndef MAX_MEMBLOCK_ADDR
>  #define MAX_MEMBLOCK_ADDR	((phys_addr_t)~0)
>  #endif
>  
>  void __init __weak early_init_dt_add_memory_arch(u64 base, u64 size)
>  {
> -	const u64 phys_offset = __pa(PAGE_OFFSET);
> +	const u64 phys_offset = MIN_MEMBLOCK_ADDR;
>  
>  	if (!PAGE_ALIGNED(base)) {
>  		if (size < PAGE_SIZE - (base & ~PAGE_MASK)) {
> -- 
> 1.9.1
> 

* [PATCH v2 1/7] of/fdt: make memblock minimum physical address arch configurable
  2015-09-23  0:37 ` [PATCH v2 1/7] of/fdt: make memblock minimum physical address arch configurable Ard Biesheuvel
  2015-09-23  4:45   ` Mark Rutland
@ 2015-09-23 22:59   ` Rob Herring
  1 sibling, 0 replies; 22+ messages in thread
From: Rob Herring @ 2015-09-23 22:59 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Sep 22, 2015 at 7:37 PM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> By default, early_init_dt_add_memory_arch() ignores memory below
> the base of the kernel image since it won't be addressable via the
> linear mapping. However, this is not appropriate anymore once we
> decouple the kernel text mapping from the linear mapping, so archs
> may want to drop the low limit entirely. So allow the minimum to be
> overridden by setting MIN_MEMBLOCK_ADDR.
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Acked-by: Rob Herring <robh@kernel.org>

> ---
>  drivers/of/fdt.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
> index 6e82bc42373b..5e7ef800a816 100644
> --- a/drivers/of/fdt.c
> +++ b/drivers/of/fdt.c
> @@ -967,13 +967,16 @@ int __init early_init_dt_scan_chosen(unsigned long node, const char *uname,
>  }
>
>  #ifdef CONFIG_HAVE_MEMBLOCK
> +#ifndef MIN_MEMBLOCK_ADDR
> +#define MIN_MEMBLOCK_ADDR      __pa(PAGE_OFFSET)
> +#endif
>  #ifndef MAX_MEMBLOCK_ADDR
>  #define MAX_MEMBLOCK_ADDR      ((phys_addr_t)~0)
>  #endif
>
>  void __init __weak early_init_dt_add_memory_arch(u64 base, u64 size)
>  {
> -       const u64 phys_offset = __pa(PAGE_OFFSET);
> +       const u64 phys_offset = MIN_MEMBLOCK_ADDR;
>
>         if (!PAGE_ALIGNED(base)) {
>                 if (size < PAGE_SIZE - (base & ~PAGE_MASK)) {
> --
> 1.9.1
>

* [PATCH v2 0/7] arm64: relax Image placement rules
  2015-09-23  0:37 [PATCH v2 0/7] arm64: relax Image placement rules Ard Biesheuvel
                   ` (6 preceding siblings ...)
  2015-09-23  0:37 ` [PATCH v2 7/7] arm64: allow kernel Image to be loaded anywhere in physical memory Ard Biesheuvel
@ 2015-09-24 16:37 ` Suzuki K. Poulose
  2015-09-24 16:38   ` Ard Biesheuvel
  2015-10-13 17:07 ` Catalin Marinas
  8 siblings, 1 reply; 22+ messages in thread
From: Suzuki K. Poulose @ 2015-09-24 16:37 UTC (permalink / raw)
  To: linux-arm-kernel

On 23/09/15 01:37, Ard Biesheuvel wrote:
> This is a followup to the "arm64: update/clarify/relax Image and FDT placement
> rules" series I sent a while ago:
> (http://article.gmane.org/gmane.linux.ports.arm.kernel/407148)
>
> This has now been split in two series: this second series deals with the
> physical and virtual placement of the kernel Image.
>
> This series updates the mapping of the kernel Image and the linear mapping of
> system memory to allow more freedom in the choice of placement without affecting
> the accessibility of system RAM below the kernel Image, and the mapping
> efficiency (i.e., memory can always be mapped in 512 MB or 1 GB blocks).
>

Ard,

I gave your series a quick run, and dumping the kernel page tables (with CONFIG_ARM64_PTDUMP)
I found this problem:

...

---[ Kernel Mapping ]---
0xffffffbffc000000-0xffffffbffc600000           6M     RW x  SHD AF    MEM/NORMAL    *****
0xffffffbffc600000-0xffffffbffc7f5000        2004K     RW x  SHD AF    UXN MEM/NORMAL
0xffffffbffc7f5000-0xffffffbffc875000         512K     RW NX SHD AF    UXN MEM/NORMAL
0xffffffbffc875000-0xffffffbffca00000        1580K     RW x  SHD AF    UXN MEM/NORMAL
---[ Linear Mapping ]---
0xffffffc000000000-0xffffffc040000000           1G     RW NX SHD AF    UXN MEM/NORMAL


Note that the first mapping in the kernel doesn't have UXN set, which is a regression.
I haven't started digging into it yet, but I thought I would point it out here, in case
you have already fixed it.

Note: I see that you have used CONFIG_ARM64_64K_PAGES to handle section/table mapping
(which I have tried to clean up in the 16K page size series, which is not merged yet).
We should be careful when we merge our patches, as we could miss such new cases.


Thanks
Suzuki

* [PATCH v2 0/7] arm64: relax Image placement rules
  2015-09-24 16:37 ` [PATCH v2 0/7] arm64: relax Image placement rules Suzuki K. Poulose
@ 2015-09-24 16:38   ` Ard Biesheuvel
  2015-09-24 23:19     ` Ard Biesheuvel
  0 siblings, 1 reply; 22+ messages in thread
From: Ard Biesheuvel @ 2015-09-24 16:38 UTC (permalink / raw)
  To: linux-arm-kernel

On 24 September 2015 at 09:37, Suzuki K. Poulose <Suzuki.Poulose@arm.com> wrote:
> On 23/09/15 01:37, Ard Biesheuvel wrote:
>>
>> This is a followup to the "arm64: update/clarify/relax Image and FDT
>> placement
>> rules" series I sent a while ago:
>> (http://article.gmane.org/gmane.linux.ports.arm.kernel/407148)
>>
>> This has now been split in two series: this second series deals with the
>> physical and virtual placement of the kernel Image.
>>
>> This series updates the mapping of the kernel Image and the linear mapping
>> of
>> system memory to allow more freedom in the choice of placement without
>> affecting
>> the accessibility of system RAM below the kernel Image, and the mapping
>> efficiency (i.e., memory can always be mapped in 512 MB or 1 GB blocks).
>>
>
> Ard,
>
> I gave your series a quick run and dumping the kernel page tables(with
> CONFIG_ARM64_PTDUMP)
> I find this problem :
>
> ...
>
> ---[ Kernel Mapping ]---
> 0xffffffbffc000000-0xffffffbffc600000           6M     RW x  SHD AF
> MEM/NORMAL    *****
> 0xffffffbffc600000-0xffffffbffc7f5000        2004K     RW x  SHD AF    UXN
> MEM/NORMAL
> 0xffffffbffc7f5000-0xffffffbffc875000         512K     RW NX SHD AF    UXN
> MEM/NORMAL
> 0xffffffbffc875000-0xffffffbffca00000        1580K     RW x  SHD AF    UXN
> MEM/NORMAL
> ---[ Linear Mapping ]---
> 0xffffffc000000000-0xffffffc040000000           1G     RW NX SHD AF    UXN
> MEM/NORMAL
>
>
> Note that the first mapping in the kernel doesn't have UXN set, which is a
> regression.
> I haven't started digging into it yet, but I thought I will point it out
> here, in case you
> already fixed it.
>

Ok, thanks for pointing that out. I will look into it.

> Note: I see that you have used CONFIG_ARM64_64K_PAGES to handle
> section/table mapping
> (which I have tried to cleanup in 16K page size series and which is not
> merged yet).
> We should be careful when we merge our patches, as we could miss such new
> cases.
>

I was aware of this, and I think it makes sense for the 16 KB page support to
be merged first, and then I will rebase these patches on top of it.

-- 
Ard.

* [PATCH v2 0/7] arm64: relax Image placement rules
  2015-09-24 16:38   ` Ard Biesheuvel
@ 2015-09-24 23:19     ` Ard Biesheuvel
  2015-09-25  8:44       ` Suzuki K. Poulose
  0 siblings, 1 reply; 22+ messages in thread
From: Ard Biesheuvel @ 2015-09-24 23:19 UTC (permalink / raw)
  To: linux-arm-kernel

On 24 September 2015 at 09:38, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> On 24 September 2015 at 09:37, Suzuki K. Poulose <Suzuki.Poulose@arm.com> wrote:
>> On 23/09/15 01:37, Ard Biesheuvel wrote:
>>>
>>> This is a followup to the "arm64: update/clarify/relax Image and FDT
>>> placement
>>> rules" series I sent a while ago:
>>> (http://article.gmane.org/gmane.linux.ports.arm.kernel/407148)
>>>
>>> This has now been split in two series: this second series deals with the
>>> physical and virtual placement of the kernel Image.
>>>
>>> This series updates the mapping of the kernel Image and the linear mapping
>>> of
>>> system memory to allow more freedom in the choice of placement without
>>> affecting
>>> the accessibility of system RAM below the kernel Image, and the mapping
>>> efficiency (i.e., memory can always be mapped in 512 MB or 1 GB blocks).
>>>
>>
>> Ard,
>>
>> I gave your series a quick run and dumping the kernel page tables(with
>> CONFIG_ARM64_PTDUMP)
>> I find this problem :
>>
>> ...
>>
>> ---[ Kernel Mapping ]---
>> 0xffffffbffc000000-0xffffffbffc600000           6M     RW x  SHD AF
>> MEM/NORMAL    *****
>> 0xffffffbffc600000-0xffffffbffc7f5000        2004K     RW x  SHD AF    UXN
>> MEM/NORMAL
>> 0xffffffbffc7f5000-0xffffffbffc875000         512K     RW NX SHD AF    UXN
>> MEM/NORMAL
>> 0xffffffbffc875000-0xffffffbffca00000        1580K     RW x  SHD AF    UXN
>> MEM/NORMAL
>> ---[ Linear Mapping ]---
>> 0xffffffc000000000-0xffffffc040000000           1G     RW NX SHD AF    UXN
>> MEM/NORMAL
>>
>>
>> Note that the first mapping in the kernel doesn't have UXN set, which is a
>> regression.
>> I haven't started digging into it yet, but I thought I will point it out
>> here, in case you
>> already fixed it.
>>
>
> Ok, thanks for pointing that out. I will look into it.
>

It turns out that, since the kernel mapping is not overwritten by the
linear mapping, it retains the original permissions assigned in
head.S. So this is enough to fix it:

"""
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 2df4a55f00d4..fcd250cff4bf 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -62,8 +62,8 @@
 /*
  * Initial memory map attributes.
  */
-#define PTE_FLAGS      PTE_TYPE_PAGE | PTE_AF | PTE_SHARED
-#define PMD_FLAGS      PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S
+#define PTE_FLAGS      PTE_TYPE_PAGE | PTE_AF | PTE_SHARED | PTE_UXN
+#define PMD_FLAGS      PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S | PMD_SECT_UXN

 #ifdef CONFIG_ARM64_64K_PAGES
 #define MM_MMUFLAGS    PTE_ATTRINDX(MT_NORMAL) | PTE_FLAGS
"""


>> Note: I see that you have used CONFIG_ARM64_64K_PAGES to handle
>> section/table mapping
>> (which I have tried to cleanup in 16K page size series and which is not
>> merged yet).
>> We should be careful when we merge our patches, as we could miss such new
>> cases.
>>
>
> I was aware of this, and I think it makes sense to the 16 KB pages to
> be merged first, and then I will rebase these patches on top of it.
>

Do you have a git tree with the latest version?

* [PATCH v2 0/7] arm64: relax Image placement rules
  2015-09-24 23:19     ` Ard Biesheuvel
@ 2015-09-25  8:44       ` Suzuki K. Poulose
  2015-09-25 21:53         ` Ard Biesheuvel
  0 siblings, 1 reply; 22+ messages in thread
From: Suzuki K. Poulose @ 2015-09-25  8:44 UTC (permalink / raw)
  To: linux-arm-kernel

On 25/09/15 00:19, Ard Biesheuvel wrote:
> On 24 September 2015 at 09:38, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>> On 24 September 2015 at 09:37, Suzuki K. Poulose <Suzuki.Poulose@arm.com> wrote:
>>> On 23/09/15 01:37, Ard Biesheuvel wrote:


>>>
>>> Ard,
>>>
>>> I gave your series a quick run and dumping the kernel page tables(with
>>> CONFIG_ARM64_PTDUMP)
>>> I find this problem :
>>>
>>> ...
>>>
>>> ---[ Kernel Mapping ]---
>>> 0xffffffbffc000000-0xffffffbffc600000           6M     RW x  SHD AF
>>> MEM/NORMAL    *****
>>> 0xffffffbffc600000-0xffffffbffc7f5000        2004K     RW x  SHD AF    UXN
>>> MEM/NORMAL
>>> 0xffffffbffc7f5000-0xffffffbffc875000         512K     RW NX SHD AF    UXN
>>> MEM/NORMAL
>>> 0xffffffbffc875000-0xffffffbffca00000        1580K     RW x  SHD AF    UXN
>>> MEM/NORMAL
>>> ---[ Linear Mapping ]---
>>> 0xffffffc000000000-0xffffffc040000000           1G     RW NX SHD AF    UXN
>>> MEM/NORMAL
>>>
>>>
>>> Note that the first mapping in the kernel doesn't have UXN set, which is a
>>> regression.
>>> I haven't started digging into it yet, but I thought I will point it out
>>> here, in case you
>>> already fixed it.
>>>
>>
>> Ok, thanks for pointing that out. I will look into it.
>>
>
> Turns out that, since the kernel mapping is not overwritten by the
> linear mapping, it retains the original permissions assigned in
> head.S. So this is enough to fix it
>
> """
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 2df4a55f00d4..fcd250cff4bf 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -62,8 +62,8 @@
>   /*
>    * Initial memory map attributes.
>    */
> -#define PTE_FLAGS      PTE_TYPE_PAGE | PTE_AF | PTE_SHARED
> -#define PMD_FLAGS      PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S
> +#define PTE_FLAGS      PTE_TYPE_PAGE | PTE_AF | PTE_SHARED | PTE_UXN
> +#define PMD_FLAGS      PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S | PMD_SECT_UXN
>
>   #ifdef CONFIG_ARM64_64K_PAGES
>   #define MM_MMUFLAGS    PTE_ATTRINDX(MT_NORMAL) | PTE_FLAGS
> """
>

Yes, that fixes it. With that I get:

---[ Kernel Mapping ]---
0xffffffbffc000000-0xffffffbffc600000           6M     RW x  SHD AF    UXN MEM/NORMAL
0xffffffbffc600000-0xffffffbffc7f5000        2004K     RW x  SHD AF    UXN MEM/NORMAL
0xffffffbffc7f5000-0xffffffbffc875000         512K     RW NX SHD AF    UXN MEM/NORMAL
0xffffffbffc875000-0xffffffbffca00000        1580K     RW x  SHD AF    UXN MEM/NORMAL
---[ Linear Mapping ]---
0xffffffc000000000-0xffffffc080000000           2G     RW NX SHD AF    UXN MEM/NORMAL
0xffffffc800000000-0xffffffc880000000           2G     RW NX SHD AF    UXN MEM/NORMAL



>
>>> Note: I see that you have used CONFIG_ARM64_64K_PAGES to handle
>>> section/table mapping
>>> (which I have tried to cleanup in 16K page size series and which is not
>>> merged yet).
>>> We should be careful when we merge our patches, as we could miss such new
>>> cases.
>>>
>>
>> I was aware of this, and I think it makes sense to the 16 KB pages to
>> be merged first, and then I will rebase these patches on top of it.
>>
>
> Do you have a git tree with the latest version?
>

Yes, it is available here :

git://linux-arm.org/linux-skp.git  16k/v2-4.3-rc1


Thanks
Suzuki

* [PATCH v2 0/7] arm64: relax Image placement rules
  2015-09-25  8:44       ` Suzuki K. Poulose
@ 2015-09-25 21:53         ` Ard Biesheuvel
  0 siblings, 0 replies; 22+ messages in thread
From: Ard Biesheuvel @ 2015-09-25 21:53 UTC (permalink / raw)
  To: linux-arm-kernel

On 25 September 2015 at 01:44, Suzuki K. Poulose <Suzuki.Poulose@arm.com> wrote:
> On 25/09/15 00:19, Ard Biesheuvel wrote:
>>
>> On 24 September 2015 at 09:38, Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> wrote:
>>>
>>> On 24 September 2015 at 09:37, Suzuki K. Poulose <Suzuki.Poulose@arm.com>
>>> wrote:
>>>>
>>>> On 23/09/15 01:37, Ard Biesheuvel wrote:
>
>
>
>>>>
>>>> Ard,
>>>>
>>>> I gave your series a quick run and dumping the kernel page tables(with
>>>> CONFIG_ARM64_PTDUMP)
>>>> I find this problem :
>>>>
>>>> ...
>>>>
>>>> ---[ Kernel Mapping ]---
>>>> 0xffffffbffc000000-0xffffffbffc600000           6M     RW x  SHD AF
>>>> MEM/NORMAL    *****
>>>> 0xffffffbffc600000-0xffffffbffc7f5000        2004K     RW x  SHD AF
>>>> UXN
>>>> MEM/NORMAL
>>>> 0xffffffbffc7f5000-0xffffffbffc875000         512K     RW NX SHD AF
>>>> UXN
>>>> MEM/NORMAL
>>>> 0xffffffbffc875000-0xffffffbffca00000        1580K     RW x  SHD AF
>>>> UXN
>>>> MEM/NORMAL
>>>> ---[ Linear Mapping ]---
>>>> 0xffffffc000000000-0xffffffc040000000           1G     RW NX SHD AF
>>>> UXN
>>>> MEM/NORMAL
>>>>
>>>>
>>>> Note that the first mapping in the kernel doesn't have UXN set, which is
>>>> a
>>>> regression.
>>>> I haven't started digging into it yet, but I thought I will point it out
>>>> here, in case you
>>>> already fixed it.
>>>>
>>>
>>> Ok, thanks for pointing that out. I will look into it.
>>>
>>
>> Turns out that, since the kernel mapping is not overwritten by the
>> linear mapping, it retains the original permissions assigned in
>> head.S. So this is enough to fix it
>>
>> """
>> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
>> index 2df4a55f00d4..fcd250cff4bf 100644
>> --- a/arch/arm64/kernel/head.S
>> +++ b/arch/arm64/kernel/head.S
>> @@ -62,8 +62,8 @@
>>   /*
>>    * Initial memory map attributes.
>>    */
>> -#define PTE_FLAGS      PTE_TYPE_PAGE | PTE_AF | PTE_SHARED
>> -#define PMD_FLAGS      PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S
>> +#define PTE_FLAGS      PTE_TYPE_PAGE | PTE_AF | PTE_SHARED | PTE_UXN
>> +#define PMD_FLAGS      PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S |
>> PMD_SECT_UXN
>>
>>   #ifdef CONFIG_ARM64_64K_PAGES
>>   #define MM_MMUFLAGS    PTE_ATTRINDX(MT_NORMAL) | PTE_FLAGS
>> """
>>
>
> Yes, that fixes it. With that I get :
>
> ---[ Kernel Mapping ]---
> 0xffffffbffc000000-0xffffffbffc600000           6M     RW x  SHD AF    UXN
> MEM/NORMAL
> 0xffffffbffc600000-0xffffffbffc7f5000        2004K     RW x  SHD AF    UXN
> MEM/NORMAL
> 0xffffffbffc7f5000-0xffffffbffc875000         512K     RW NX SHD AF    UXN
> MEM/NORMAL
> 0xffffffbffc875000-0xffffffbffca00000        1580K     RW x  SHD AF    UXN
> MEM/NORMAL
> ---[ Linear Mapping ]---
> 0xffffffc000000000-0xffffffc080000000           2G     RW NX SHD AF    UXN
> MEM/NORMAL
> 0xffffffc800000000-0xffffffc880000000           2G     RW NX SHD AF    UXN
> MEM/NORMAL
>

Thanks.

Can I take that as a Tested-by? :-)

>
>
>>
>>>> Note: I see that you have used CONFIG_ARM64_64K_PAGES to handle
>>>> section/table mapping
>>>> (which I have tried to cleanup in 16K page size series and which is not
>>>> merged yet).
>>>> We should be careful when we merge our patches, as we could miss such
>>>> new
>>>> cases.
>>>>
>>>
>>> I was aware of this, and I think it makes sense to the 16 KB pages to
>>> be merged first, and then I will rebase these patches on top of it.
>>>
>>
>> Do you have a git tree with the latest version?
>>
>
> Yes, it is available here :
>
> git://linux-arm.org/linux-skp.git  16k/v2-4.3-rc1
>

I rebased it, and the required changes are only minor.

I will post the rebased version once your changes have been merged.

-- 
Ard.

* [PATCH v2 0/7] arm64: relax Image placement rules
  2015-09-23  0:37 [PATCH v2 0/7] arm64: relax Image placement rules Ard Biesheuvel
                   ` (7 preceding siblings ...)
  2015-09-24 16:37 ` [PATCH v2 0/7] arm64: relax Image placement rules Suzuki K. Poulose
@ 2015-10-13 17:07 ` Catalin Marinas
  2015-10-13 17:14   ` Ard Biesheuvel
  8 siblings, 1 reply; 22+ messages in thread
From: Catalin Marinas @ 2015-10-13 17:07 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Sep 22, 2015 at 05:37:36PM -0700, Ard Biesheuvel wrote:
> This is a followup to the "arm64: update/clarify/relax Image and FDT placement
> rules" series I sent a while ago:
> (http://article.gmane.org/gmane.linux.ports.arm.kernel/407148)
> 
> This has now been split in two series: this second series deals with the
> physical and virtual placement of the kernel Image.

Quick question, to refresh my memory: if this is the second series,
what is the first series?

Thanks.

-- 
Catalin

* [PATCH v2 0/7] arm64: relax Image placement rules
  2015-10-13 17:07 ` Catalin Marinas
@ 2015-10-13 17:14   ` Ard Biesheuvel
  0 siblings, 0 replies; 22+ messages in thread
From: Ard Biesheuvel @ 2015-10-13 17:14 UTC (permalink / raw)
  To: linux-arm-kernel


> On 13 okt. 2015, at 19:07, Catalin Marinas <catalin.marinas@arm.com> wrote:
> 
>> On Tue, Sep 22, 2015 at 05:37:36PM -0700, Ard Biesheuvel wrote:
>> This is a followup to the "arm64: update/clarify/relax Image and FDT placement
>> rules" series I sent a while ago:
>> (http://article.gmane.org/gmane.linux.ports.arm.kernel/407148)
>> 
>> This has now been split in two series: this second series deals with the
>> physical and virtual placement of the kernel Image.
> 
> Quick question, to refresh my memory: if this is the second series,
> what is the first series?
> 

That was primarily about the FDT, and most of that work has been merged in one way or another.

-- 
Ard.

> Thanks.
> 
> -- 
> Catalin

* [PATCH v2 7/7] arm64: allow kernel Image to be loaded anywhere in physical memory
  2015-09-23  0:37 ` [PATCH v2 7/7] arm64: allow kernel Image to be loaded anywhere in physical memory Ard Biesheuvel
@ 2015-10-14 11:30   ` James Morse
  2015-10-14 13:25     ` Ard Biesheuvel
  0 siblings, 1 reply; 22+ messages in thread
From: James Morse @ 2015-10-14 11:30 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Ard,

On 23/09/15 01:37, Ard Biesheuvel wrote:
> This relaxes the kernel Image placement requirements, so that it
> may be placed at any 2 MB aligned offset in physical memory.
> 
> This is accomplished by ignoring PHYS_OFFSET when installing
> memblocks, and accounting for the apparent virtual offset of
> the kernel Image (in addition to the 64 MB that it is moved
> below PAGE_OFFSET). As a result, virtual address references
> below PAGE_OFFSET are correctly mapped onto physical references
> into the kernel Image regardless of where it sits in memory.
> 
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

[SNIP]

> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 4a1c9d0769f2..675757c01eff 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -21,6 +21,7 @@
>  #include <linux/kernel.h>
>  #include <linux/errno.h>
>  #include <linux/init.h>
> +#include <linux/initrd.h>
>  #include <linux/libfdt.h>
>  #include <linux/mman.h>
>  #include <linux/nodemask.h>
> @@ -432,11 +433,34 @@ static void __init bootstrap_linear_mapping(unsigned long va_offset)
>  static void __init map_mem(void)
>  {
>  	struct memblock_region *reg;
> +	u64 new_memstart_addr;
> +	u64 new_va_offset;
>  
> -	bootstrap_linear_mapping(KIMAGE_OFFSET);
> +	/*
> +	 * Select a suitable value for the base of physical memory.
> +	 * This should be equal to or below the lowest usable physical
> +	 * memory address, and aligned to PUD/PMD size so that we can map
> +	 * it efficiently.
> +	 */
> +	new_memstart_addr = round_down(memblock_start_of_DRAM(), SZ_1G);
> +
> +	/*
> +	 * Calculate the offset between the kernel text mapping that exists
> +	 * outside of the linear mapping, and its mapping in the linear region.
> +	 */
> +	new_va_offset = memstart_addr - new_memstart_addr;
> +
> +	bootstrap_linear_mapping(new_va_offset);
> +
> +	kernel_va_offset = new_va_offset;
> +
> +	/* Recalculate virtual addresses of initrd region */
> +	if (initrd_start) {
> +		initrd_start += new_va_offset;
> +		initrd_end += new_va_offset;
> +	}

This breaks the build for me, with messages like:
> arch/arm64/mm/built-in.o: In function `map_mem':
> ... arch/arm64/mm/mmu.c:458: undefined reference to `initrd_start'

Wrapping the if with:
> if (IS_ENABLED(CONFIG_BLK_DEV_INITRD))

Solves the problem for me.


Thanks,

James

>  
> -	kernel_va_offset = KIMAGE_OFFSET;
> -	memstart_addr -= KIMAGE_OFFSET;
> +	memstart_addr = new_memstart_addr;
>  
>  	/* map all the memory banks */
>  	for_each_memblock(memory, reg) {
> 

* [PATCH v2 7/7] arm64: allow kernel Image to be loaded anywhere in physical memory
  2015-10-14 11:30   ` James Morse
@ 2015-10-14 13:25     ` Ard Biesheuvel
  2015-10-14 16:34       ` Catalin Marinas
  0 siblings, 1 reply; 22+ messages in thread
From: Ard Biesheuvel @ 2015-10-14 13:25 UTC (permalink / raw)
  To: linux-arm-kernel

On 14 October 2015 at 12:30, James Morse <james.morse@arm.com> wrote:
> Hi Ard,
>
> On 23/09/15 01:37, Ard Biesheuvel wrote:
>> This relaxes the kernel Image placement requirements, so that it
>> may be placed at any 2 MB aligned offset in physical memory.
>>
>> This is accomplished by ignoring PHYS_OFFSET when installing
>> memblocks, and accounting for the apparent virtual offset of
>> the kernel Image (in addition to the 64 MB that it is moved
>> below PAGE_OFFSET). As a result, virtual address references
>> below PAGE_OFFSET are correctly mapped onto physical references
>> into the kernel Image regardless of where it sits in memory.
>>
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>
> [SNIP]
>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 4a1c9d0769f2..675757c01eff 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -21,6 +21,7 @@
>>  #include <linux/kernel.h>
>>  #include <linux/errno.h>
>>  #include <linux/init.h>
>> +#include <linux/initrd.h>
>>  #include <linux/libfdt.h>
>>  #include <linux/mman.h>
>>  #include <linux/nodemask.h>
>> @@ -432,11 +433,34 @@ static void __init bootstrap_linear_mapping(unsigned long va_offset)
>>  static void __init map_mem(void)
>>  {
>>       struct memblock_region *reg;
>> +     u64 new_memstart_addr;
>> +     u64 new_va_offset;
>>
>> -     bootstrap_linear_mapping(KIMAGE_OFFSET);
>> +     /*
>> +      * Select a suitable value for the base of physical memory.
>> +      * This should be equal to or below the lowest usable physical
>> +      * memory address, and aligned to PUD/PMD size so that we can map
>> +      * it efficiently.
>> +      */
>> +     new_memstart_addr = round_down(memblock_start_of_DRAM(), SZ_1G);
>> +
>> +     /*
>> +      * Calculate the offset between the kernel text mapping that exists
>> +      * outside of the linear mapping, and its mapping in the linear region.
>> +      */
>> +     new_va_offset = memstart_addr - new_memstart_addr;
>> +
>> +     bootstrap_linear_mapping(new_va_offset);
>> +
>> +     kernel_va_offset = new_va_offset;
>> +
>> +     /* Recalculate virtual addresses of initrd region */
>> +     if (initrd_start) {
>> +             initrd_start += new_va_offset;
>> +             initrd_end += new_va_offset;
>> +     }
>
> This breaks the build for me, with messages like:
>> arch/arm64/mm/built-in.o: In function `map_mem':
>> ... arch/arm64/mm/mmu.c:458: undefined reference to `initrd_start'
>
> Wrapping the if with:
>> if (IS_ENABLED(CONFIG_BLK_DEV_INITRD))
>
> Solves the problem for me.
>
>

Thank you, James.

I will take this into account when I spin the next version (probably
after 16k pages support is merged).

-- 
Ard.


* [PATCH v2 7/7] arm64: allow kernel Image to be loaded anywhere in physical memory
  2015-10-14 13:25     ` Ard Biesheuvel
@ 2015-10-14 16:34       ` Catalin Marinas
  2015-10-14 16:51         ` Ard Biesheuvel
  0 siblings, 1 reply; 22+ messages in thread
From: Catalin Marinas @ 2015-10-14 16:34 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Oct 14, 2015 at 02:25:58PM +0100, Ard Biesheuvel wrote:
> I will take this into account when I spin the next version (probably
> after 16k pages support is merged)

I plan to merge 16K pages support in 4.4, waiting for the review to
settle and I'll queue them.

BTW, have you tested these patches with KVM? We were wondering if the
stage 1 hyp mapping gets confused.

-- 
Catalin


* [PATCH v2 7/7] arm64: allow kernel Image to be loaded anywhere in physical memory
  2015-10-14 16:34       ` Catalin Marinas
@ 2015-10-14 16:51         ` Ard Biesheuvel
  2015-10-15 10:04           ` James Morse
  0 siblings, 1 reply; 22+ messages in thread
From: Ard Biesheuvel @ 2015-10-14 16:51 UTC (permalink / raw)
  To: linux-arm-kernel

On 14 October 2015 at 17:34, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Wed, Oct 14, 2015 at 02:25:58PM +0100, Ard Biesheuvel wrote:
>> I will take this into account when I spin the next version (probably
>> after 16k pages support is merged)
>
> I plan to merge 16K pages support in 4.4, waiting for the review to
> settle and I'll queue them.
>
> BTW, have you tested these patches with KVM? We were wondering if the
> stage 1 hyp mapping gets confused.
>

I honestly don't remember, so consider that a 'no'. I will look into
it before reposting.

-- 
Ard.


* [PATCH v2 7/7] arm64: allow kernel Image to be loaded anywhere in physical memory
  2015-10-14 16:51         ` Ard Biesheuvel
@ 2015-10-15 10:04           ` James Morse
  0 siblings, 0 replies; 22+ messages in thread
From: James Morse @ 2015-10-15 10:04 UTC (permalink / raw)
  To: linux-arm-kernel

On 14/10/15 17:51, Ard Biesheuvel wrote:
> On 14 October 2015 at 17:34, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> On Wed, Oct 14, 2015 at 02:25:58PM +0100, Ard Biesheuvel wrote:
>>> I will take this into account when I spin the next version (probably
>>> after 16k pages support is merged)
>>
>> I plan to merge 16K pages support in 4.4, waiting for the review to
>> settle and I'll queue them.
>>
>> BTW, have you tested these patches with KVM? We were wondering if the
>> stage 1 hyp mapping gets confused.
>>
> 
> I honestly don't remember, so consider that a 'no'. I will look into
> it before reposting.

I still had this set up:
Guests with and without this series both boot fine on a host with this series.


Thanks,

James


end of thread, other threads:[~2015-10-15 10:04 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-09-23  0:37 [PATCH v2 0/7] arm64: relax Image placement rules Ard Biesheuvel
2015-09-23  0:37 ` [PATCH v2 1/7] of/fdt: make memblock minimum physical address arch configurable Ard Biesheuvel
2015-09-23  4:45   ` Mark Rutland
2015-09-23 22:59   ` Rob Herring
2015-09-23  0:37 ` [PATCH v2 2/7] arm64: use more granular reservations for static page table allocations Ard Biesheuvel
2015-09-23  0:37 ` [PATCH v2 3/7] arm64: split off early mapping code from early_fixmap_init() Ard Biesheuvel
2015-09-23  0:37 ` [PATCH v2 4/7] arm64: mm: explicitly bootstrap the linear mapping Ard Biesheuvel
2015-09-23  0:37 ` [PATCH v2 5/7] arm64: move kernel mapping out of linear region Ard Biesheuvel
2015-09-23  0:37 ` [PATCH v2 6/7] arm64: map linear region as non-executable Ard Biesheuvel
2015-09-23  0:37 ` [PATCH v2 7/7] arm64: allow kernel Image to be loaded anywhere in physical memory Ard Biesheuvel
2015-10-14 11:30   ` James Morse
2015-10-14 13:25     ` Ard Biesheuvel
2015-10-14 16:34       ` Catalin Marinas
2015-10-14 16:51         ` Ard Biesheuvel
2015-10-15 10:04           ` James Morse
2015-09-24 16:37 ` [PATCH v2 0/7] arm64: relax Image placement rules Suzuki K. Poulose
2015-09-24 16:38   ` Ard Biesheuvel
2015-09-24 23:19     ` Ard Biesheuvel
2015-09-25  8:44       ` Suzuki K. Poulose
2015-09-25 21:53         ` Ard Biesheuvel
2015-10-13 17:07 ` Catalin Marinas
2015-10-13 17:14   ` Ard Biesheuvel
