* [PATCH 1/2] arm64/mm: ensure memstart_addr remains sufficiently aligned
@ 2016-03-21 17:38 Ard Biesheuvel
  2016-03-21 17:38 ` [PATCH 2/2] arm64: choose memstart_addr based on minimum sparsemem section alignment Ard Biesheuvel
  0 siblings, 1 reply; 3+ messages in thread
From: Ard Biesheuvel @ 2016-03-21 17:38 UTC
  To: linux-arm-kernel

After choosing memstart_addr to be the highest multiple of
ARM64_MEMSTART_ALIGN less than or equal to the first usable physical memory
address, we clip the memblocks to the maximum size of the linear region.
Since the kernel may be high up in memory, we take care not to clip the
kernel itself, which means we may have to clip some memory from the bottom
instead, to ensure that the distance between the first and the last usable
physical memory address can be covered by the linear region.

However, we fail to update memstart_addr if this clipping from the bottom
occurs, which means that we may still end up with virtual addresses that
wrap into the userland range. So increment memstart_addr as appropriate to
prevent this from happening.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/init.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 61a38eaf0895..ef1f70f860b0 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -190,8 +190,12 @@ void __init arm64_memblock_init(void)
 	 */
 	memblock_remove(max_t(u64, memstart_addr + linear_region_size, __pa(_end)),
 			ULLONG_MAX);
-	if (memblock_end_of_DRAM() > linear_region_size)
-		memblock_remove(0, memblock_end_of_DRAM() - linear_region_size);
+	if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {
+		/* ensure that memstart_addr remains sufficiently aligned */
+		memstart_addr = round_up(memblock_end_of_DRAM() - linear_region_size,
+					 ARM64_MEMSTART_ALIGN);
+		memblock_remove(0, memstart_addr);
+	}
 
 	/*
 	 * Apply the memory limit if it was set. Since the kernel may be loaded
-- 
1.9.1
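
For illustration, here is a minimal standalone model of the clipping
arithmetic in the hunk above. The DRAM layout values are hypothetical
stand-ins, and round_up() is open-coded for power-of-two alignments:

/* Userspace sketch of the bottom-clipping logic; all addresses are
 * made-up examples, not a real memory layout. */
#include <stdio.h>
#include <stdint.h>

#define ARM64_MEMSTART_ALIGN	(1ULL << 30)	/* 1 GB, 4k pages */

static uint64_t round_up_pow2(uint64_t val, uint64_t align)
{
	return (val + align - 1) & ~(align - 1);
}

int main(void)
{
	uint64_t linear_region_size = 1ULL << 38;	/* e.g. VA_BITS == 39 */
	uint64_t memstart_addr      = 0x8000000000ULL;	/* rounded-down base */
	uint64_t end_of_dram        = 0xC130000000ULL;	/* hypothetical */

	if (memstart_addr + linear_region_size < end_of_dram) {
		/* Clip from the bottom, but round the new base up so that
		 * memstart_addr stays ARM64_MEMSTART_ALIGN aligned. */
		memstart_addr = round_up_pow2(end_of_dram - linear_region_size,
					      ARM64_MEMSTART_ALIGN);
		printf("clipped [0, %#llx) from the bottom\n",
		       (unsigned long long)memstart_addr);
	}
	return 0;
}

With these values, end_of_dram - linear_region_size is 0x8130000000, which
gets rounded up to 0x8140000000, i.e., the new memstart_addr remains a
multiple of 1 GB even after clipping.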


* [PATCH 2/2] arm64: choose memstart_addr based on minimum sparsemem section alignment
  2016-03-21 17:38 [PATCH 1/2] arm64/mm: ensure memstart_addr remains sufficiently aligned Ard Biesheuvel
@ 2016-03-21 17:38 ` Ard Biesheuvel
  2016-03-21 17:42   ` Ard Biesheuvel
  0 siblings, 1 reply; 3+ messages in thread
From: Ard Biesheuvel @ 2016-03-21 17:38 UTC
  To: linux-arm-kernel

This reverts commit 36e5cd6b897e, which was needed in v4.5 and before to
ensure the correct alignment of the base of the vmemmap region. However,
since commit a7f8de168ace ("arm64: allow kernel Image to be loaded anywhere
in physical memory"), the alignment of memstart_addr itself can be freely
chosen, which means we can choose it such that additional rounding in the
definition of vmemmap is no longer necessary.

So redefine ARM64_MEMSTART_ALIGN in terms of the minimal alignment required
by sparsemem, and drop the redundant rounding in the definition of vmemmap.

Note that the net result of this change is that we align memstart_addr to
1 GB in all cases, since sparsemem is mandatory on arm64.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/kernel-pgtable.h | 17 +++++++++++++++--
 arch/arm64/include/asm/pgtable.h        |  5 ++---
 2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 5c6375d8528b..a144ae2953a2 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -19,6 +19,7 @@
 #ifndef __ASM_KERNEL_PGTABLE_H
 #define __ASM_KERNEL_PGTABLE_H
 
+#include <asm/sparsemem.h>
 
 /*
  * The linear mapping and the start of memory are both 2M aligned (per
@@ -87,9 +88,21 @@
  * in the page tables: 32 * PMD_SIZE (16k granule)
  */
 #ifdef CONFIG_ARM64_64K_PAGES
-#define ARM64_MEMSTART_ALIGN	SZ_512M
+#define ARM64_MEMSTART_BITS	29
 #else
-#define ARM64_MEMSTART_ALIGN	SZ_1G
+#define ARM64_MEMSTART_BITS	30
+#endif
+
+/*
+ * sparsemem imposes an additional requirement on the alignment of
+ * memstart_addr, due to the fact that the base of the vmemmap region
+ * has a direct correspondence, and needs to appear sufficiently aligned
+ * in the virtual address space.
+ */
+#if defined(CONFIG_SPARSEMEM_VMEMMAP) && ARM64_MEMSTART_BITS < SECTION_SIZE_BITS
+#define ARM64_MEMSTART_ALIGN	(1UL << SECTION_SIZE_BITS)
+#else
+#define ARM64_MEMSTART_ALIGN	(1UL << ARM64_MEMSTART_BITS)
 #endif
 
 #endif	/* __ASM_KERNEL_PGTABLE_H */
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 989fef16d461..aa6106ac050c 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -32,14 +32,13 @@
  * VMALLOC_END: extends to the available space below vmmemmap, PCI I/O space,
  *	fixed mappings and modules
  */
-#define VMEMMAP_SIZE		ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * sizeof(struct page), PUD_SIZE)
+#define VMEMMAP_SIZE		ALIGN((1UL << (VA_BITS - PAGE_SHIFT - 1)) * sizeof(struct page), PUD_SIZE)
 
 #define VMALLOC_START		(MODULES_END)
 #define VMALLOC_END		(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
 #define VMEMMAP_START		(VMALLOC_END + SZ_64K)
-#define vmemmap			((struct page *)VMEMMAP_START - \
-				 SECTION_ALIGN_DOWN(memstart_addr >> PAGE_SHIFT))
+#define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
 
 #define FIRST_USER_ADDRESS	0UL
 
-- 
1.9.1
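
As an aside, the reason the SECTION_ALIGN_DOWN() dropped above becomes
redundant can be modelled in a few lines: once memstart_addr is aligned to
a sparsemem section, the raw start pfn and the section-aligned one
coincide. A toy sketch with hypothetical values, taking PAGE_SHIFT as 12
and SECTION_SIZE_BITS as 30 (the 4k-pages values at the time):

/* Toy model of the removed rounding; values are illustrative only. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT		12
#define SECTION_SIZE_BITS	30

/* Local re-statement of the kernel's SECTION_ALIGN_DOWN() for pfns. */
#define SECTION_ALIGN_DOWN(pfn) \
	((pfn) & ~((1UL << (SECTION_SIZE_BITS - PAGE_SHIFT)) - 1))

int main(void)
{
	uint64_t memstart_addr = 0x8140000000ULL;	/* 1 GB aligned */
	uint64_t start_pfn = memstart_addr >> PAGE_SHIFT;

	/* When memstart_addr is SECTION_SIZE aligned, both offsets agree,
	 * so the old rounding in the vmemmap definition did nothing. */
	printf("raw offset:     %#llx\n", (unsigned long long)start_pfn);
	printf("aligned offset: %#llx\n",
	       (unsigned long long)SECTION_ALIGN_DOWN(start_pfn));
	return 0;
}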


* [PATCH 2/2] arm64: choose memstart_addr based on minimum sparsemem section alignment
  2016-03-21 17:38 ` [PATCH 2/2] arm64: choose memstart_addr based on minimum sparsemem section alignment Ard Biesheuvel
@ 2016-03-21 17:42   ` Ard Biesheuvel
  0 siblings, 0 replies; 3+ messages in thread
From: Ard Biesheuvel @ 2016-03-21 17:42 UTC
  To: linux-arm-kernel

On 21 March 2016 at 18:38, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> This reverts commit 36e5cd6b897e, which was needed in v4.5 and before to
> ensure the correct alignment of the base of the vmemmap region. However,
> since commit a7f8de168ace ("arm64: allow kernel Image to be loaded anywhere
> in physical memory"), the alignment of memstart_addr itself can be freely
> chosen, which means we can choose it such that additional rounding in the
> definition of vmemmap is no longer necessary.
>
> So redefine ARM64_MEMSTART_ALIGN in terms of the minimal alignment required
> by sparsemem, and drop the redundant rounding in the definition of vmemmap.
>
> Note that the net result of this change is that we align memstart_addr to
> 1 GB in all cases, since sparsemem is mandatory on arm64.

This is not actually true: with 64k pages, the 1 GB alignment is only
used when sparsemem-vmemmap is enabled. Also, this patch is no longer a
straight revert of the late fix we sent for v4.5, so perhaps it would be
more appropriate to split the updated definition of ARM64_MEMSTART_ALIGN
into its own patch, so that the straight revert can go on top of it?
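
To make the 64k-pages caveat concrete, here is a sketch of how the new
definition resolves under the different configs; SECTION_SIZE_BITS is
assumed to be 30, as in arm64's asm/sparsemem.h at the time, and the
CONFIG_* symbols are passed in by hand:

/* Build with e.g.
 *   cc -DCONFIG_ARM64_64K_PAGES -DCONFIG_SPARSEMEM_VMEMMAP align.c
 * to see the effective alignment for each (hypothetical) config. */
#include <stdio.h>

#define SECTION_SIZE_BITS	30	/* assumption: 1 GB sections */

#ifdef CONFIG_ARM64_64K_PAGES
#define ARM64_MEMSTART_BITS	29	/* 512 MB */
#else
#define ARM64_MEMSTART_BITS	30	/* 1 GB */
#endif

#if defined(CONFIG_SPARSEMEM_VMEMMAP) && ARM64_MEMSTART_BITS < SECTION_SIZE_BITS
#define ARM64_MEMSTART_ALIGN	(1UL << SECTION_SIZE_BITS)
#else
#define ARM64_MEMSTART_ALIGN	(1UL << ARM64_MEMSTART_BITS)
#endif

int main(void)
{
	/* 64k pages get 1 GB only with vmemmap, 512 MB otherwise;
	 * 4k pages get 1 GB either way. */
	printf("ARM64_MEMSTART_ALIGN = %#lx\n", ARM64_MEMSTART_ALIGN);
	return 0;
}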


>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/include/asm/kernel-pgtable.h | 17 +++++++++++++++--
>  arch/arm64/include/asm/pgtable.h        |  5 ++---
>  2 files changed, 17 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
> index 5c6375d8528b..a144ae2953a2 100644
> --- a/arch/arm64/include/asm/kernel-pgtable.h
> +++ b/arch/arm64/include/asm/kernel-pgtable.h
> @@ -19,6 +19,7 @@
>  #ifndef __ASM_KERNEL_PGTABLE_H
>  #define __ASM_KERNEL_PGTABLE_H
>
> +#include <asm/sparsemem.h>
>
>  /*
>   * The linear mapping and the start of memory are both 2M aligned (per
> @@ -87,9 +88,21 @@
>   * in the page tables: 32 * PMD_SIZE (16k granule)
>   */
>  #ifdef CONFIG_ARM64_64K_PAGES
> -#define ARM64_MEMSTART_ALIGN   SZ_512M
> +#define ARM64_MEMSTART_BITS    29
>  #else
> -#define ARM64_MEMSTART_ALIGN   SZ_1G
> +#define ARM64_MEMSTART_BITS    30
> +#endif
> +
> +/*
> + * sparsemem imposes an additional requirement on the alignment of
> + * memstart_addr, due to the fact that the base of the vmemmap region
> + * has a direct correspondence, and needs to appear sufficiently aligned
> + * in the virtual address space.
> + */
> +#if defined(CONFIG_SPARSEMEM_VMEMMAP) && ARM64_MEMSTART_BITS < SECTION_SIZE_BITS
> +#define ARM64_MEMSTART_ALIGN   (1UL << SECTION_SIZE_BITS)
> +#else
> +#define ARM64_MEMSTART_ALIGN   (1UL << ARM64_MEMSTART_BITS)
>  #endif
>
>  #endif /* __ASM_KERNEL_PGTABLE_H */
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 989fef16d461..aa6106ac050c 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -32,14 +32,13 @@
>   * VMALLOC_END: extends to the available space below vmmemmap, PCI I/O space,
>   *     fixed mappings and modules
>   */
> -#define VMEMMAP_SIZE           ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * sizeof(struct page), PUD_SIZE)
> +#define VMEMMAP_SIZE           ALIGN((1UL << (VA_BITS - PAGE_SHIFT - 1)) * sizeof(struct page), PUD_SIZE)
>
>  #define VMALLOC_START          (MODULES_END)
>  #define VMALLOC_END            (PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
>
>  #define VMEMMAP_START          (VMALLOC_END + SZ_64K)
> -#define vmemmap                        ((struct page *)VMEMMAP_START - \
> -                                SECTION_ALIGN_DOWN(memstart_addr >> PAGE_SHIFT))
> +#define vmemmap                        ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
>
>  #define FIRST_USER_ADDRESS     0UL
>
> --
> 1.9.1
>

