* [PATCH v2 0/3] arm64: memstart_addr alignment and vmemmap offset fixes
@ 2016-03-30 12:25 Ard Biesheuvel
From: Ard Biesheuvel @ 2016-03-30 12:25 UTC
  To: linux-arm-kernel

This series wraps up some loose ends regarding the v4.6 changes in arm64 to
allow the kernel to reside at any [appropriately aligned] physical address.

Patch #1 fixes a bug where we fail to update memstart_addr after clipping
memory from the bottom if so required by the size of the linear region and
the placement of the kernel Image.

Patch #2 updates the constant ARM64_MEMSTART_ALIGN, which defines the minimum
alignment of memstart_addr, to account for the section alignment of sparsemem
vmemmap if it is enabled. This is a prerequisite for patch #3.

Patch #3 reverts commit 36e5cd6b897e ("arm64: account for sparsemem section
alignment when choosing vmemmap offset"), which was included as a bugfix late
in the v4.5 cycle but is no longer required now that memstart_addr can be
chosen freely with this minimum alignment taken into account. This makes the
rounding in the definition of vmemmap redundant.
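
For reference, the linear region that patches #1 and #2 reason about covers
half of the kernel VA space in this era's memory layout; a quick sketch of
the sizes involved, assuming the v4.6 definition
linear_region_size = -(s64)PAGE_OFFSET:

  linear_region_size = BIT(VA_BITS - 1)
    VA_BITS = 39  ->  256 GB
    VA_BITS = 48  ->  128 TB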

Ard Biesheuvel (3):
  arm64/mm: ensure memstart_addr remains sufficiently aligned
  arm64: choose memstart_addr based on minimum sparsemem section
    alignment
  Revert "arm64: account for sparsemem section alignment when choosing
    vmemmap offset"

 arch/arm64/include/asm/kernel-pgtable.h | 21 +++++++++++++++++---
 arch/arm64/include/asm/pgtable.h        |  5 ++---
 arch/arm64/mm/init.c                    |  8 ++++++--
 3 files changed, 26 insertions(+), 8 deletions(-)

-- 
2.5.0


* [PATCH v2 1/3] arm64/mm: ensure memstart_addr remains sufficiently aligned
From: Ard Biesheuvel @ 2016-03-30 12:25 UTC
  To: linux-arm-kernel

After choosing memstart_addr to be the highest multiple of
ARM64_MEMSTART_ALIGN less than or equal to the first usable physical memory
address, we clip the memblocks to the maximum size of the linear region.
Since the kernel may be high up in memory, we take care not to clip the
kernel itself; instead, we clip memory from the bottom when needed, to ensure
that the distance between the first and the last usable physical memory
address can be covered by the linear region.

However, we fail to update memstart_addr if this clipping from the bottom
occurs, which means that we may still end up with virtual addresses that
wrap into the userland range. So increment memstart_addr as appropriate to
prevent this from happening.
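
As a worked example with made-up numbers (a sketch, not taken from the
patch): assume VA_BITS = 39, so linear_region_size = 256 GB, with DRAM
covering [0 GB, 288 GB) and the kernel Image loaded near the top:

  memstart_addr                      = 0       (round_down of first usable PA)
  memstart_addr + linear_region_size = 256 GB  < memblock_end_of_DRAM() = 288 GB

  so, with this patch applied:

  memstart_addr = round_up(288 GB - 256 GB, ARM64_MEMSTART_ALIGN) = 32 GB
  memblock_remove(0, 32 GB)

Previously, memstart_addr stayed at 0 while usable memory now started at
32 GB, so for the highest physical addresses the linear mapping offset
(pa - memstart_addr) exceeded the size of the linear region.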

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/init.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index ec4db8dfbe3a..82ced5fa1e66 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -190,8 +190,12 @@ void __init arm64_memblock_init(void)
 	 */
 	memblock_remove(max_t(u64, memstart_addr + linear_region_size, __pa(_end)),
 			ULLONG_MAX);
-	if (memblock_end_of_DRAM() > linear_region_size)
-		memblock_remove(0, memblock_end_of_DRAM() - linear_region_size);
+	if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {
+		/* ensure that memstart_addr remains sufficiently aligned */
+		memstart_addr = round_up(memblock_end_of_DRAM() - linear_region_size,
+					 ARM64_MEMSTART_ALIGN);
+		memblock_remove(0, memstart_addr);
+	}
 
 	/*
 	 * Apply the memory limit if it was set. Since the kernel may be loaded
-- 
2.5.0


* [PATCH v2 2/3] arm64: choose memstart_addr based on minimum sparsemem section alignment
From: Ard Biesheuvel @ 2016-03-30 12:25 UTC
  To: linux-arm-kernel

This redefines ARM64_MEMSTART_ALIGN in terms of the minimum alignment
required by sparsemem vmemmap. This comes down to using 1 GB for all
translation granules if CONFIG_SPARSEMEM_VMEMMAP is enabled.
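
For reference, with the v4.6-era constants (SECTION_SIZE_BITS = 30 on arm64),
the values below are worked out here rather than quoted from the patch:

  granule  ARM64_MEMSTART_SHIFT      shift  alignment  with vmemmap
  4k       PUD_SHIFT                 30     1 GB       1 GB
  16k      PMD_SHIFT + 5 (= 25 + 5)  30     1 GB       1 GB
  64k      PMD_SHIFT                 29     512 MB     1 GB

Only the 64k granule is affected by the new SECTION_SIZE_BITS lower bound.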

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/kernel-pgtable.h | 21 +++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 5c6375d8528b..7e51d1b57c0c 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -19,6 +19,7 @@
 #ifndef __ASM_KERNEL_PGTABLE_H
 #define __ASM_KERNEL_PGTABLE_H
 
+#include <asm/sparsemem.h>
 
 /*
  * The linear mapping and the start of memory are both 2M aligned (per
@@ -86,10 +87,24 @@
  * (64k granule), or a multiple that can be mapped using contiguous bits
  * in the page tables: 32 * PMD_SIZE (16k granule)
  */
-#ifdef CONFIG_ARM64_64K_PAGES
-#define ARM64_MEMSTART_ALIGN	SZ_512M
+#if defined(CONFIG_ARM64_4K_PAGES)
+#define ARM64_MEMSTART_SHIFT		PUD_SHIFT
+#elif defined(CONFIG_ARM64_16K_PAGES)
+#define ARM64_MEMSTART_SHIFT		(PMD_SHIFT + 5)
 #else
-#define ARM64_MEMSTART_ALIGN	SZ_1G
+#define ARM64_MEMSTART_SHIFT		PMD_SHIFT
+#endif
+
+/*
+ * sparsemem vmemmap imposes an additional requirement on the alignment of
+ * memstart_addr, due to the fact that the base of the vmemmap region has
+ * a direct correspondence with the base of the linear region, and thus
+ * needs to appear sufficiently aligned in the virtual address space.
+ */
+#if defined(CONFIG_SPARSEMEM_VMEMMAP) && ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
+#define ARM64_MEMSTART_ALIGN	(1UL << SECTION_SIZE_BITS)
+#else
+#define ARM64_MEMSTART_ALIGN	(1UL << ARM64_MEMSTART_SHIFT)
 #endif
 
 #endif	/* __ASM_KERNEL_PGTABLE_H */
-- 
2.5.0


* [PATCH v2 3/3] Revert "arm64: account for sparsemem section alignment when choosing vmemmap offset"
From: Ard Biesheuvel @ 2016-03-30 12:25 UTC
  To: linux-arm-kernel

This reverts commit 36e5cd6b897e17d03008f81e075625d8e43e52d0, since the
section alignment is now guaranteed by construction when choosing the
value of memstart_addr.
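
Concretely, a sketch using the generic sparsemem definitions: patch #2
guarantees that memstart_addr is a multiple of (1 << SECTION_SIZE_BITS), so
its PFN is already a multiple of PAGES_PER_SECTION, and the rounding removed
below was a no-op:

  SECTION_ALIGN_DOWN(memstart_addr >> PAGE_SHIFT)
      == (memstart_addr >> PAGE_SHIFT) & PAGE_SECTION_MASK
      == memstart_addr >> PAGE_SHIFT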

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/pgtable.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 989fef16d461..aa6106ac050c 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -32,14 +32,13 @@
 * VMALLOC_END: extends to the available space below vmemmap, PCI I/O space,
  *	fixed mappings and modules
  */
-#define VMEMMAP_SIZE		ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * sizeof(struct page), PUD_SIZE)
+#define VMEMMAP_SIZE		ALIGN((1UL << (VA_BITS - PAGE_SHIFT - 1)) * sizeof(struct page), PUD_SIZE)
 
 #define VMALLOC_START		(MODULES_END)
 #define VMALLOC_END		(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
 #define VMEMMAP_START		(VMALLOC_END + SZ_64K)
-#define vmemmap			((struct page *)VMEMMAP_START - \
-				 SECTION_ALIGN_DOWN(memstart_addr >> PAGE_SHIFT))
+#define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
 
 #define FIRST_USER_ADDRESS	0UL
 
-- 
2.5.0
