linux-arm-kernel.lists.infradead.org archive mirror
* [RFC PATCH 00/10] arm64: restrict __pa translation to linear virtual addresses
@ 2016-02-22 20:54 Ard Biesheuvel
  2016-02-22 20:54 ` [RFC PATCH 01/10] arm64: mm: move assignment of 'high_memory' before its first use Ard Biesheuvel
                   ` (10 more replies)
  0 siblings, 11 replies; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-22 20:54 UTC (permalink / raw)
  To: linux-arm-kernel

As pointed out by Will, performing non-trivial arithmetic in the implementation
of __pa() may affect performance. So instead of allowing __pa() to deal with
addresses that are either covered by the kernel mapping in the vmalloc area or
by the linear region, let's restrict ourselves to the latter.

This involves replacing some __pa() translations with a translation specific to
kernel symbols (__pa_symbol), but other than that it is fairly straightforward.
(KASAN is being a pain again, but for the purposes of this discussion that
should not matter too much.)

Patch #1 is a fix for a bug that I spotted while working on this.

Patch #2 introduces the new translations __kimg_to_phys() and __pa_symbol().
They do exactly the same thing, but the former matches the existing
__phys_to_kimg(), and the latter already exists on x86 for translating
'symbols visible to C code'.

Patches #3 - #9 replace various instances of __pa() translations involving
virtual addresses pointing into the kernel image.

Patch #10 finally removes the comparison against PAGE_OFFSET from the
implementation of __virt_to_phys(), so that it can only be used for addresses
that are known to be in the linear region.

git://git.linaro.org/people/ard.biesheuvel/linux-arm.git arm64-pa-linear-mapping

Ard Biesheuvel (10):
  arm64: mm: move assignment of 'high_memory' before its first use
  arm64: introduce __kimg_to_phys() and __pa_symbol()
  arm64: mm: avoid __pa translations in cpu_replace_ttbr1
  arm64: mm: avoid __pa translations on empty_zero_page
  arm64: mm: avoid __pa translations in early_fixmap_init
  arm64: mm: use __pa_symbol() not __pa() for section boundary symbols
  arm64: mm: avoid __pa translations for idmap_pg_dir and swapper_pg_dir
  arm64: vdso: avoid __pa translations
  arm64: insn: avoid __pa translations
  arm64: mm: restrict __pa() translations to linear virtual addresses

 arch/arm64/include/asm/memory.h      | 15 ++++++++----
 arch/arm64/include/asm/mmu_context.h | 10 ++++----
 arch/arm64/include/asm/pgalloc.h     |  2 --
 arch/arm64/include/asm/pgtable.h     |  2 +-
 arch/arm64/kernel/insn.c             |  2 +-
 arch/arm64/kernel/setup.c            |  8 +++----
 arch/arm64/kernel/vdso.c             |  5 ++--
 arch/arm64/mm/init.c                 | 13 +++++++----
 arch/arm64/mm/mmu.c                  | 24 ++++++++++----------
 9 files changed, 43 insertions(+), 38 deletions(-)

-- 
2.5.0

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [RFC PATCH 01/10] arm64: mm: move assignment of 'high_memory' before its first use
  2016-02-22 20:54 [RFC PATCH 00/10] arm64: restrict __pa translation to linear virtual addresses Ard Biesheuvel
@ 2016-02-22 20:54 ` Ard Biesheuvel
  2016-02-23 12:14   ` Catalin Marinas
  2016-02-22 20:54 ` [RFC PATCH 02/10] arm64: introduce __kimg_to_phys() and __pa_symbol() Ard Biesheuvel
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-22 20:54 UTC (permalink / raw)
  To: linux-arm-kernel

The variable 'high_memory' ends up being referenced in the call to
dma_contiguous_reserve(). So move the assignment before that call.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/init.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index e1f425fe5a81..017201982da3 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -235,6 +235,9 @@ void __init arm64_memblock_init(void)
 		arm64_dma_phys_limit = max_zone_dma_phys();
 	else
 		arm64_dma_phys_limit = PHYS_MASK + 1;
+
+	high_memory = __va((memblock_end_of_DRAM() & PAGE_MASK) - 1) + 1;
+
 	dma_contiguous_reserve(arm64_dma_phys_limit);
 
 	memblock_allow_resize();
@@ -259,7 +262,6 @@ void __init bootmem_init(void)
 	sparse_init();
 	zone_sizes_init(min, max);
 
-	high_memory = __va((max << PAGE_SHIFT) - 1) + 1;
 	max_pfn = max_low_pfn = max;
 }
 
-- 
2.5.0


* [RFC PATCH 02/10] arm64: introduce __kimg_to_phys() and __pa_symbol()
  2016-02-22 20:54 [RFC PATCH 00/10] arm64: restrict __pa translation to linear virtual addresses Ard Biesheuvel
  2016-02-22 20:54 ` [RFC PATCH 01/10] arm64: mm: move assignment of 'high_memory' before its first use Ard Biesheuvel
@ 2016-02-22 20:54 ` Ard Biesheuvel
  2016-02-22 20:54 ` [RFC PATCH 03/10] arm64: mm: avoid __pa translations in cpu_replace_ttbr1 Ard Biesheuvel
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-22 20:54 UTC (permalink / raw)
  To: linux-arm-kernel

Before restricting the domain of __pa()'s argument to the linear mapping,
introduce the alternatives __kimg_to_phys() and its alias __pa_symbol() [the
latter exists on x86 as well], which can be used to obtain the physical
address of a static object or function in the kernel text.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/memory.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 2296b32130a1..56d6739430f3 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -92,7 +92,10 @@
 				 (__x - kimage_voffset); })
 
 #define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
+
+#define __kimg_to_phys(x)	((phys_addr_t)(x) - kimage_voffset)
 #define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
+#define __pa_symbol(x)		__kimg_to_phys(x)
 
 /*
  * Convert a page to/from a physical address
-- 
2.5.0


* [RFC PATCH 03/10] arm64: mm: avoid __pa translations in cpu_replace_ttbr1
  2016-02-22 20:54 [RFC PATCH 00/10] arm64: restrict __pa translation to linear virtual addresses Ard Biesheuvel
  2016-02-22 20:54 ` [RFC PATCH 01/10] arm64: mm: move assignment of 'high_memory' before its first use Ard Biesheuvel
  2016-02-22 20:54 ` [RFC PATCH 02/10] arm64: introduce __kimg_to_phys() and __pa_symbol() Ard Biesheuvel
@ 2016-02-22 20:54 ` Ard Biesheuvel
  2016-02-22 20:54 ` [RFC PATCH 04/10] arm64: mm: avoid __pa translations on empty_zero_page Ard Biesheuvel
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-22 20:54 UTC (permalink / raw)
  To: linux-arm-kernel

Avoid the use of generic __pa() translations in cpu_replace_ttbr1(),
by changing its argument to the physical address of the pgd[], and by
using __pa_symbol() on the reference to swapper_pg_dir.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/mmu_context.h | 6 ++----
 arch/arm64/mm/mmu.c                  | 4 ++--
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index a00f7cf35bbd..40fc56f3ed52 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -130,15 +130,13 @@ static inline void cpu_install_idmap(void)
  * Atomically replaces the active TTBR1_EL1 PGD with a new VA-compatible PGD,
  * avoiding the possibility of conflicting TLB entries being allocated.
  */
-static inline void cpu_replace_ttbr1(pgd_t *pgd)
+static inline void cpu_replace_ttbr1(phys_addr_t pgd_phys)
 {
 	typedef void (ttbr_replace_func)(phys_addr_t);
 	extern ttbr_replace_func idmap_cpu_replace_ttbr1;
 	ttbr_replace_func *replace_phys;
 
-	phys_addr_t pgd_phys = virt_to_phys(pgd);
-
-	replace_phys = (void *)virt_to_phys(idmap_cpu_replace_ttbr1);
+	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
 
 	cpu_install_idmap();
 	replace_phys(pgd_phys);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 00d166465ff4..fbba941a6e87 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -546,9 +546,9 @@ void __init paging_init(void)
 	 *
 	 * To do this we need to go via a temporary pgd.
 	 */
-	cpu_replace_ttbr1(__va(pgd_phys));
+	cpu_replace_ttbr1(pgd_phys);
 	memcpy(swapper_pg_dir, pgd, PAGE_SIZE);
-	cpu_replace_ttbr1(swapper_pg_dir);
+	cpu_replace_ttbr1(__pa_symbol(swapper_pg_dir));
 
 	pgd_clear_fixmap();
 	memblock_free(pgd_phys, PAGE_SIZE);
-- 
2.5.0


* [RFC PATCH 04/10] arm64: mm: avoid __pa translations on empty_zero_page
  2016-02-22 20:54 [RFC PATCH 00/10] arm64: restrict __pa translation to linear virtual addresses Ard Biesheuvel
                   ` (2 preceding siblings ...)
  2016-02-22 20:54 ` [RFC PATCH 03/10] arm64: mm: avoid __pa translations in cpu_replace_ttbr1 Ard Biesheuvel
@ 2016-02-22 20:54 ` Ard Biesheuvel
  2016-02-22 20:54 ` [RFC PATCH 05/10] arm64: mm: avoid __pa translations in early_fixmap_init Ard Biesheuvel
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-22 20:54 UTC (permalink / raw)
  To: linux-arm-kernel

The variable empty_zero_page is part of the static kernel image, so we
should use __pa_symbol(), not __pa(), to take its physical address.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/mmu_context.h | 2 +-
 arch/arm64/include/asm/pgtable.h     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 40fc56f3ed52..b03272fc501e 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -49,7 +49,7 @@ static inline void contextidr_thread_switch(struct task_struct *next)
  */
 static inline void cpu_set_reserved_ttbr0(void)
 {
-	unsigned long ttbr = virt_to_phys(empty_zero_page);
+	unsigned long ttbr = __pa_symbol(empty_zero_page);
 
 	asm(
 	"	msr	ttbr0_el1, %0			// set TTBR0\n"
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index a440f5a85d08..528482f53f12 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -117,7 +117,7 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
  * for zero-mapped memory areas etc..
  */
 extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
-#define ZERO_PAGE(vaddr)	virt_to_page(empty_zero_page)
+#define ZERO_PAGE(vaddr)	phys_to_page(__pa_symbol(empty_zero_page))
 
 #define pte_ERROR(pte)		__pte_error(__FILE__, __LINE__, pte_val(pte))
 
-- 
2.5.0


* [RFC PATCH 05/10] arm64: mm: avoid __pa translations in early_fixmap_init
  2016-02-22 20:54 [RFC PATCH 00/10] arm64: restrict __pa translation to linear virtual addresses Ard Biesheuvel
                   ` (3 preceding siblings ...)
  2016-02-22 20:54 ` [RFC PATCH 04/10] arm64: mm: avoid __pa translations on empty_zero_page Ard Biesheuvel
@ 2016-02-22 20:54 ` Ard Biesheuvel
  2016-02-23 17:12   ` Catalin Marinas
  2016-02-22 20:54 ` [RFC PATCH 06/10] arm64: mm: use __pa_symbol() not __pa() for section boundary symbols Ard Biesheuvel
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-22 20:54 UTC (permalink / raw)
  To: linux-arm-kernel

Avoid using __pa() translations while populating the fixmap page tables,
by using __pa_symbol() to take the physical addresses of bm_pud, bm_pmd and
bm_pte, and by moving to __pgd_populate/__pud_populate/__pmd_populate, which
take physical addresses directly. Since the former two are now called
unconditionally, remove the BUILD_BUG()s that prevented their use when
their page table level is folded away.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/pgalloc.h | 2 --
 arch/arm64/mm/mmu.c              | 8 ++++----
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index ff98585d085a..bc01d2b65225 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -54,7 +54,6 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
 #else
 static inline void __pud_populate(pud_t *pud, phys_addr_t pmd, pudval_t prot)
 {
-	BUILD_BUG();
 }
 #endif	/* CONFIG_PGTABLE_LEVELS > 2 */
 
@@ -83,7 +82,6 @@ static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgd, pud_t *pud)
 #else
 static inline void __pgd_populate(pgd_t *pgdp, phys_addr_t pud, pgdval_t prot)
 {
-	BUILD_BUG();
 }
 #endif	/* CONFIG_PGTABLE_LEVELS > 3 */
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index fbba941a6e87..e7340defa085 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -679,7 +679,7 @@ void __init early_fixmap_init(void)
 
 	pgd = pgd_offset_k(addr);
 	if (CONFIG_PGTABLE_LEVELS > 3 &&
-	    !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa(bm_pud))) {
+	    !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa_symbol(bm_pud))) {
 		/*
 		 * We only end up here if the kernel mapping and the fixmap
 		 * share the top level pgd entry, which should only happen on
@@ -688,12 +688,12 @@ void __init early_fixmap_init(void)
 		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
 		pud = pud_offset_kimg(pgd, addr);
 	} else {
-		pgd_populate(&init_mm, pgd, bm_pud);
+		__pgd_populate(pgd, __pa_symbol(bm_pud), PUD_TYPE_TABLE);
 		pud = fixmap_pud(addr);
 	}
-	pud_populate(&init_mm, pud, bm_pmd);
+	__pud_populate(pud, __pa_symbol(bm_pmd), PUD_TYPE_TABLE);
 	pmd = fixmap_pmd(addr);
-	pmd_populate_kernel(&init_mm, pmd, bm_pte);
+	__pmd_populate(pmd, __pa_symbol(bm_pte), PMD_TYPE_TABLE);
 
 	/*
 	 * The boot-ioremap range spans multiple pmds, for which
-- 
2.5.0


* [RFC PATCH 06/10] arm64: mm: use __pa_symbol() not __pa() for section boundary symbols
  2016-02-22 20:54 [RFC PATCH 00/10] arm64: restrict __pa translation to linear virtual addresses Ard Biesheuvel
                   ` (4 preceding siblings ...)
  2016-02-22 20:54 ` [RFC PATCH 05/10] arm64: mm: avoid __pa translations in early_fixmap_init Ard Biesheuvel
@ 2016-02-22 20:54 ` Ard Biesheuvel
  2016-02-22 20:54 ` [RFC PATCH 07/10] arm64: mm: avoid __pa translations for idmap_pg_dir and swapper_pg_dir Ard Biesheuvel
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-22 20:54 UTC (permalink / raw)
  To: linux-arm-kernel

The section boundaries exported by asm/sections.h are part of the kernel
image and not covered by the linear mapping. So use __pa_symbol() instead of
__pa() (or __kimg_to_phys() if the virtual address is taken first).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/setup.c |  8 ++++----
 arch/arm64/mm/init.c      |  9 +++++----
 arch/arm64/mm/mmu.c       | 10 +++++-----
 3 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 42371f69def3..bb41ebabe017 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -201,10 +201,10 @@ static void __init request_standard_resources(void)
 	struct memblock_region *region;
 	struct resource *res;
 
-	kernel_code.start   = virt_to_phys(_text);
-	kernel_code.end     = virt_to_phys(_etext - 1);
-	kernel_data.start   = virt_to_phys(_sdata);
-	kernel_data.end     = virt_to_phys(_end - 1);
+	kernel_code.start   = __pa_symbol(_text);
+	kernel_code.end     = __pa_symbol(_etext - 1);
+	kernel_data.start   = __pa_symbol(_sdata);
+	kernel_data.end     = __pa_symbol(_end - 1);
 
 	for_each_memblock(memory, region) {
 		res = alloc_bootmem_low(sizeof(*res));
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 017201982da3..af98bf85ec8e 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -181,7 +181,7 @@ void __init arm64_memblock_init(void)
 	 * linear mapping. Take care not to clip the kernel which may be
 	 * high in memory.
 	 */
-	memblock_remove(max(memstart_addr + linear_region_size, __pa(_end)),
+	memblock_remove(max(memstart_addr + linear_region_size, __pa_symbol(_end)),
 			ULLONG_MAX);
 	if (memblock_end_of_DRAM() > linear_region_size)
 		memblock_remove(0, memblock_end_of_DRAM() - linear_region_size);
@@ -193,7 +193,7 @@ void __init arm64_memblock_init(void)
 	 */
 	if (memory_limit != (phys_addr_t)ULLONG_MAX) {
 		memblock_enforce_memory_limit(memory_limit);
-		memblock_add(__pa(_text), (u64)(_end - _text));
+		memblock_add(__pa_symbol(_text), (u64)(_end - _text));
 	}
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
@@ -217,7 +217,7 @@ void __init arm64_memblock_init(void)
 	 * Register the kernel text, kernel data, initrd, and initial
 	 * pagetables with memblock.
 	 */
-	memblock_reserve(__pa(_text), _end - _text);
+	memblock_reserve(__pa_symbol(_text), _end - _text);
 #ifdef CONFIG_BLK_DEV_INITRD
 	if (initrd_start) {
 		memblock_reserve(initrd_start, initrd_end - initrd_start);
@@ -418,7 +418,8 @@ void __init mem_init(void)
 
 void free_initmem(void)
 {
-	free_initmem_default(0);
+	free_reserved_area(__va(__pa_symbol(__init_begin)),
+			   __va(__pa_symbol(__init_end)), 0, "unused kernel");
 	fixup_init();
 }
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index e7340defa085..13517699bea6 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -387,8 +387,8 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
 
 static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end)
 {
-	unsigned long kernel_start = __pa(_stext);
-	unsigned long kernel_end = __pa(_etext);
+	unsigned long kernel_start = __pa_symbol(_stext);
+	unsigned long kernel_end = __pa_symbol(_etext);
 
 	/*
 	 * Take care not to create a writable alias for the
@@ -452,7 +452,7 @@ void mark_rodata_ro(void)
 	if (!IS_ENABLED(CONFIG_DEBUG_RODATA))
 		return;
 
-	create_mapping_late(__pa(_stext), (unsigned long)_stext,
+	create_mapping_late(__pa_symbol(_stext), (unsigned long)_stext,
 				(unsigned long)_etext - (unsigned long)_stext,
 				PAGE_KERNEL_ROX);
 }
@@ -470,7 +470,7 @@ void fixup_init(void)
 static void __init map_kernel_chunk(pgd_t *pgd, void *va_start, void *va_end,
 				    pgprot_t prot, struct vm_struct *vma)
 {
-	phys_addr_t pa_start = __pa(va_start);
+	phys_addr_t pa_start = __kimg_to_phys(va_start);
 	unsigned long size = va_end - va_start;
 
 	BUG_ON(!PAGE_ALIGNED(pa_start));
@@ -517,7 +517,7 @@ static void __init map_kernel(pgd_t *pgd)
 		 */
 		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
 		set_pud(pud_set_fixmap_offset(pgd, FIXADDR_START),
-			__pud(__pa(bm_pmd) | PUD_TYPE_TABLE));
+			__pud(__pa_symbol(bm_pmd) | PUD_TYPE_TABLE));
 		pud_clear_fixmap();
 	} else {
 		BUG();
-- 
2.5.0


* [RFC PATCH 07/10] arm64: mm: avoid __pa translations for idmap_pg_dir and swapper_pg_dir
  2016-02-22 20:54 [RFC PATCH 00/10] arm64: restrict __pa translation to linear virtual addresses Ard Biesheuvel
                   ` (5 preceding siblings ...)
  2016-02-22 20:54 ` [RFC PATCH 06/10] arm64: mm: use __pa_symbol() not __pa() for section boundary symbols Ard Biesheuvel
@ 2016-02-22 20:54 ` Ard Biesheuvel
  2016-02-22 20:54 ` [RFC PATCH 08/10] arm64: vdso: avoid __pa translations Ard Biesheuvel
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-22 20:54 UTC (permalink / raw)
  To: linux-arm-kernel

When taking the physical address of idmap_pg_dir or swapper_pg_dir,
which are static kernel globals, use __pa_symbol() not __pa().

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/mmu_context.h | 2 +-
 arch/arm64/mm/mmu.c                  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index b03272fc501e..a8ba62955a62 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -123,7 +123,7 @@ static inline void cpu_install_idmap(void)
 	local_flush_tlb_all();
 	cpu_set_idmap_tcr_t0sz();
 
-	cpu_switch_mm(idmap_pg_dir, &init_mm);
+	cpu_do_switch_mm(__pa_symbol(idmap_pg_dir), &init_mm);
 }
 
 /*
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 13517699bea6..980dd9cbdc19 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -557,7 +557,7 @@ void __init paging_init(void)
 	 * We only reuse the PGD from the swapper_pg_dir, not the pud + pmd
 	 * allocated with it.
 	 */
-	memblock_free(__pa(swapper_pg_dir) + PAGE_SIZE,
+	memblock_free(__pa_symbol(swapper_pg_dir) + PAGE_SIZE,
 		      SWAPPER_DIR_SIZE - PAGE_SIZE);
 
 	bootmem_init();
-- 
2.5.0


* [RFC PATCH 08/10] arm64: vdso: avoid __pa translations
  2016-02-22 20:54 [RFC PATCH 00/10] arm64: restrict __pa translation to linear virtual addresses Ard Biesheuvel
                   ` (6 preceding siblings ...)
  2016-02-22 20:54 ` [RFC PATCH 07/10] arm64: mm: avoid __pa translations for idmap_pg_dir and swapper_pg_dir Ard Biesheuvel
@ 2016-02-22 20:54 ` Ard Biesheuvel
  2016-02-22 20:54 ` [RFC PATCH 09/10] arm64: insn: " Ard Biesheuvel
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-22 20:54 UTC (permalink / raw)
  To: linux-arm-kernel

Use __pa_symbol() not __pa() when taking the physical address of the
VDSO symbols that are part of the kernel text.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/vdso.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 97bc68f4c689..065756b358b4 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -131,11 +131,12 @@ static int __init vdso_init(void)
 		return -ENOMEM;
 
 	/* Grab the vDSO data page. */
-	vdso_pagelist[0] = virt_to_page(vdso_data);
+	vdso_pagelist[0] = phys_to_page(__pa_symbol(vdso_data));
 
 	/* Grab the vDSO code pages. */
 	for (i = 0; i < vdso_pages; i++)
-		vdso_pagelist[i + 1] = virt_to_page(&vdso_start + i * PAGE_SIZE);
+		vdso_pagelist[i + 1] = phys_to_page(__pa_symbol(&vdso_start) +
+						    i * PAGE_SIZE);
 
 	/* Populate the special mapping structures */
 	vdso_spec[0] = (struct vm_special_mapping) {
-- 
2.5.0


* [RFC PATCH 09/10] arm64: insn: avoid __pa translations
  2016-02-22 20:54 [RFC PATCH 00/10] arm64: restrict __pa translation to linear virtual addresses Ard Biesheuvel
                   ` (7 preceding siblings ...)
  2016-02-22 20:54 ` [RFC PATCH 08/10] arm64: vdso: avoid __pa translations Ard Biesheuvel
@ 2016-02-22 20:54 ` Ard Biesheuvel
  2016-02-22 20:54 ` [RFC PATCH 10/10] arm64: mm: restrict __pa() translations to linear virtual addresses Ard Biesheuvel
  2016-02-23 10:03 ` [RFC PATCH 00/10] arm64: restrict __pa translation " Ard Biesheuvel
  10 siblings, 0 replies; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-22 20:54 UTC (permalink / raw)
  To: linux-arm-kernel

When taking the physical address of something that is known to be
covered by the kernel text, use __kimg_to_phys(), not virt_to_phys().

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/insn.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 7371455160e5..bc91e3c2cdb1 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -96,7 +96,7 @@ static void __kprobes *patch_map(void *addr, int fixmap)
 	if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
 		page = vmalloc_to_page(addr);
 	else if (!module && IS_ENABLED(CONFIG_DEBUG_RODATA))
-		page = virt_to_page(addr);
+		page = phys_to_page(__kimg_to_phys(addr));
 	else
 		return addr;
 
-- 
2.5.0


* [RFC PATCH 10/10] arm64: mm: restrict __pa() translations to linear virtual addresses
  2016-02-22 20:54 [RFC PATCH 00/10] arm64: restrict __pa translation to linear virtual addresses Ard Biesheuvel
                   ` (8 preceding siblings ...)
  2016-02-22 20:54 ` [RFC PATCH 09/10] arm64: insn: " Ard Biesheuvel
@ 2016-02-22 20:54 ` Ard Biesheuvel
  2016-02-23 12:26   ` Will Deacon
  2016-02-23 10:03 ` [RFC PATCH 00/10] arm64: restrict __pa translation " Ard Biesheuvel
  10 siblings, 1 reply; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-22 20:54 UTC (permalink / raw)
  To: linux-arm-kernel

Now that we have replaced all occurrences of __pa() translations
involving virtual addresses that are covered by the kernel text,
we can redefine __virt_to_phys(), __pa() etc. to only accept virtual
addresses that are covered by the linear mapping. This means we can
remove the comparison with PAGE_OFFSET from the definition of
__virt_to_phys().

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/memory.h | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 56d6739430f3..3b5dc5b243ac 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -86,11 +86,14 @@
  * private definitions which should NOT be used outside memory.h
  * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
  */
-#define __virt_to_phys(x) ({						\
+#ifndef CONFIG_DEBUG_VM
+#define __virt_to_phys(x)	(((phys_addr_t)(x) & ~PAGE_OFFSET) + PHYS_OFFSET)
+#else
+#define __virt_to_phys(x)	({					\
 	phys_addr_t __x = (phys_addr_t)(x);				\
-	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
-				 (__x - kimage_voffset); })
-
+	BUG_ON(__x < PAGE_OFFSET);					\
+	(((phys_addr_t)__x & ~PAGE_OFFSET) + PHYS_OFFSET); })
+#endif
 #define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
 
 #define __kimg_to_phys(x)	((phys_addr_t)(x) - kimage_voffset)
@@ -134,7 +137,6 @@
 #endif
 
 #ifndef __ASSEMBLY__
-#include <linux/bitops.h>
 
 extern phys_addr_t		memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
-- 
2.5.0


* [RFC PATCH 00/10] arm64: restrict __pa translation to linear virtual addresses
  2016-02-22 20:54 [RFC PATCH 00/10] arm64: restrict __pa translation to linear virtual addresses Ard Biesheuvel
                   ` (9 preceding siblings ...)
  2016-02-22 20:54 ` [RFC PATCH 10/10] arm64: mm: restrict __pa() translations to linear virtual addresses Ard Biesheuvel
@ 2016-02-23 10:03 ` Ard Biesheuvel
  10 siblings, 0 replies; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-23 10:03 UTC (permalink / raw)
  To: linux-arm-kernel

On 22 February 2016 at 21:54, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> As pointed out by Will, performing non-trivial arithmetic in the implementation
> of __pa() may affect performance. So instead of allowing __pa() to deal with
> addresses that are either covered by the kernel mapping in the vmalloc area, or
> by the linear region, let's restrict ourselves to the latter.
>
> This involves replacing some __pa() translations with a specific translation
> for kernel symbols (__pa_symbol), but other than that it is fairly
> straightforward (except KASAN is being a pain again, but for the purpose of this
> discussion that should not matter too much)
>

With some additional s/__pa/__pa_symbol/ replacements for SMP and KVM,
this runs happily on my Seattle A0 (no KASAN yet, though).


> Patch #1 is a fix for a bug that I spotted  while working on this.
>
> Patch #2 introduces the new translations __kimg_to_phys and __pa_symbol. They
> do exactly the same but the former matches the existing __phys_to_kimg, and the
> latter already exists on x86 as well for translating 'symbols visible to C code'
>
> Patch #3 - #9 replace various instances of __pa() translations involving virtual
> addresses pointing into the kernel image.
>
> Patch #10 finally removes the comparison against PAGE_OFFSET from the
> implementation of __virt_to_phys(), so that it can only be used for addresses
> that are known to be in the linear region.
>
> git://git.linaro.org/people/ard.biesheuvel/linux-arm.git arm64-pa-linear-mapping
>
> Ard Biesheuvel (10):
>   arm64: mm: move assignment of 'high_memory' before its first use
>   arm64: introduce __kimg_to_phys() and __pa_symbol()
>   arm64: mm: avoid __pa translations in cpu_replace_ttbr1
>   arm64: mm: avoid __pa translations on empty_zero_page
>   arm64: mm: avoid __pa translations in early_fixmap_init
>   arm64: mm: use __pa_symbol() not __pa() for section boundary symbols
>   arm64: mm: avoid __pa translations for idmap_pg_dir and swapper_pg_dir
>   arm64: vdso: avoid __pa translations
>   arm64: insn: avoid __pa translations
>   arm64: mm: restrict __pa() translations to linear virtual addresses
>
>  arch/arm64/include/asm/memory.h      | 15 ++++++++----
>  arch/arm64/include/asm/mmu_context.h | 10 ++++----
>  arch/arm64/include/asm/pgalloc.h     |  2 --
>  arch/arm64/include/asm/pgtable.h     |  2 +-
>  arch/arm64/kernel/insn.c             |  2 +-
>  arch/arm64/kernel/setup.c            |  8 +++----
>  arch/arm64/kernel/vdso.c             |  5 ++--
>  arch/arm64/mm/init.c                 | 13 +++++++----
>  arch/arm64/mm/mmu.c                  | 24 ++++++++++----------
>  9 files changed, 43 insertions(+), 38 deletions(-)
>
> --
> 2.5.0
>


* [RFC PATCH 01/10] arm64: mm: move assignment of 'high_memory' before its first use
  2016-02-22 20:54 ` [RFC PATCH 01/10] arm64: mm: move assignment of 'high_memory' before its first use Ard Biesheuvel
@ 2016-02-23 12:14   ` Catalin Marinas
  2016-02-23 12:18     ` Ard Biesheuvel
  0 siblings, 1 reply; 25+ messages in thread
From: Catalin Marinas @ 2016-02-23 12:14 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Feb 22, 2016 at 09:54:23PM +0100, Ard Biesheuvel wrote:
> The variable 'high_memory' ends up being referenced in the call to
> dma_contiguous_reserve(). So move the assignment before that call.
> 
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Does this need cc stable? It looks like we've had this issue for a while
and it ends up being used in cma_declare_contiguous(). So we either get
a high enough value to not affect this function or we don't trigger this
code path very often.

-- 
Catalin


* [RFC PATCH 01/10] arm64: mm: move assignment of 'high_memory' before its first use
  2016-02-23 12:14   ` Catalin Marinas
@ 2016-02-23 12:18     ` Ard Biesheuvel
  0 siblings, 0 replies; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-23 12:18 UTC (permalink / raw)
  To: linux-arm-kernel

On 23 February 2016 at 13:14, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Mon, Feb 22, 2016 at 09:54:23PM +0100, Ard Biesheuvel wrote:
>> The variable 'high_memory' ends up being referenced in the call to
>> dma_contiguous_reserve(). So move the assignment before that call.
>>
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>
> Does this need cc stable? It looks like we've had this issue for a while
> and it ends up being used in cma_declare_contiguous(). So we either get
> a high enough value to not affect this function or we don't trigger this
> code path very often.
>

I would guess cc'ing stable would be appropriate, although I think
keeping it at 0x0 has no ill side effects, other than that the CMA
layer cannot figure out whether the allocation is entirely above or
entirely below the highmem limit, and of course that is kind of moot
on arm64 anyway.

I haven't noticed any runtime changes with this fixed.


* [RFC PATCH 10/10] arm64: mm: restrict __pa() translations to linear virtual addresses
  2016-02-22 20:54 ` [RFC PATCH 10/10] arm64: mm: restrict __pa() translations to linear virtual addresses Ard Biesheuvel
@ 2016-02-23 12:26   ` Will Deacon
  2016-02-23 12:29     ` Ard Biesheuvel
  0 siblings, 1 reply; 25+ messages in thread
From: Will Deacon @ 2016-02-23 12:26 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Feb 22, 2016 at 09:54:32PM +0100, Ard Biesheuvel wrote:
> Now that we have replaced all occurrences of __pa() translations
> involving virtual addresses that are covered by the kernel text,
> we can redefine __virt_to_phys and __pa() etc to only take virtual
> address that are covered by the linear mapping. This means we can
> remove the comparison with PAGE_OFFSET in the definition of
> __virt_to_phys().
> 
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/include/asm/memory.h | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 56d6739430f3..3b5dc5b243ac 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -86,11 +86,14 @@
>   * private definitions which should NOT be used outside memory.h
>   * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
>   */
> -#define __virt_to_phys(x) ({						\
> +#ifndef CONFIG_DEBUG_VM
> +#define __virt_to_phys(x)	(((phys_addr_t)(x) & ~PAGE_OFFSET) + PHYS_OFFSET)
> +#else
> +#define __virt_to_phys(x)	({					\
>  	phys_addr_t __x = (phys_addr_t)(x);				\
> -	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
> -				 (__x - kimage_voffset); })
> -
> +	BUG_ON(__x < PAGE_OFFSET);					\
> +	(((phys_addr_t)__x & ~PAGE_OFFSET) + PHYS_OFFSET); })

What's the #include-hell like if you try to use VM_BUG_ON instead?

Will


* [RFC PATCH 10/10] arm64: mm: restrict __pa() translations to linear virtual addresses
  2016-02-23 12:26   ` Will Deacon
@ 2016-02-23 12:29     ` Ard Biesheuvel
  2016-02-23 12:31       ` Will Deacon
  2016-02-23 17:22       ` Catalin Marinas
  0 siblings, 2 replies; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-23 12:29 UTC (permalink / raw)
  To: linux-arm-kernel

On 23 February 2016 at 13:26, Will Deacon <will.deacon@arm.com> wrote:
> On Mon, Feb 22, 2016 at 09:54:32PM +0100, Ard Biesheuvel wrote:
>> Now that we have replaced all occurrences of __pa() translations
>> involving virtual addresses that are covered by the kernel text,
>> we can redefine __virt_to_phys and __pa() etc to only take virtual
>> address that are covered by the linear mapping. This means we can
>> remove the comparison with PAGE_OFFSET in the definition of
>> __virt_to_phys().
>>
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> ---
>>  arch/arm64/include/asm/memory.h | 12 +++++++-----
>>  1 file changed, 7 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> index 56d6739430f3..3b5dc5b243ac 100644
>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -86,11 +86,14 @@
>>   * private definitions which should NOT be used outside memory.h
>>   * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
>>   */
>> -#define __virt_to_phys(x) ({                                         \
>> +#ifndef CONFIG_DEBUG_VM
>> +#define __virt_to_phys(x)    (((phys_addr_t)(x) & ~PAGE_OFFSET) + PHYS_OFFSET)
>> +#else
>> +#define __virt_to_phys(x)    ({                                      \
>>       phys_addr_t __x = (phys_addr_t)(x);                             \
>> -     __x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :   \
>> -                              (__x - kimage_voffset); })
>> -
>> +     BUG_ON(__x < PAGE_OFFSET);                                      \
>> +     (((phys_addr_t)__x & ~PAGE_OFFSET) + PHYS_OFFSET); })
>
> What's the #include-hell like if you try to use VM_BUG_ON instead?
>

The #include hell would not change I think, since

include/linux/mmdebug.h:18:#define VM_BUG_ON(cond) BUG_ON(cond)

but it would certainly make this code look a lot cleaner.


* [RFC PATCH 10/10] arm64: mm: restrict __pa() translations to linear virtual addresses
  2016-02-23 12:29     ` Ard Biesheuvel
@ 2016-02-23 12:31       ` Will Deacon
  2016-02-23 17:22       ` Catalin Marinas
  1 sibling, 0 replies; 25+ messages in thread
From: Will Deacon @ 2016-02-23 12:31 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Feb 23, 2016 at 01:29:21PM +0100, Ard Biesheuvel wrote:
> On 23 February 2016 at 13:26, Will Deacon <will.deacon@arm.com> wrote:
> > On Mon, Feb 22, 2016 at 09:54:32PM +0100, Ard Biesheuvel wrote:
> >> Now that we have replaced all occurrences of __pa() translations
> >> involving virtual addresses that are covered by the kernel text,
> >> we can redefine __virt_to_phys and __pa() etc to only take virtual
> >> address that are covered by the linear mapping. This means we can
> >> remove the comparison with PAGE_OFFSET in the definition of
> >> __virt_to_phys().
> >>
> >> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> >> ---
> >>  arch/arm64/include/asm/memory.h | 12 +++++++-----
> >>  1 file changed, 7 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> >> index 56d6739430f3..3b5dc5b243ac 100644
> >> --- a/arch/arm64/include/asm/memory.h
> >> +++ b/arch/arm64/include/asm/memory.h
> >> @@ -86,11 +86,14 @@
> >>   * private definitions which should NOT be used outside memory.h
> >>   * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
> >>   */
> >> -#define __virt_to_phys(x) ({                                         \
> >> +#ifndef CONFIG_DEBUG_VM
> >> +#define __virt_to_phys(x)    (((phys_addr_t)(x) & ~PAGE_OFFSET) + PHYS_OFFSET)
> >> +#else
> >> +#define __virt_to_phys(x)    ({                                      \
> >>       phys_addr_t __x = (phys_addr_t)(x);                             \
> >> -     __x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :   \
> >> -                              (__x - kimage_voffset); })
> >> -
> >> +     BUG_ON(__x < PAGE_OFFSET);                                      \
> >> +     (((phys_addr_t)__x & ~PAGE_OFFSET) + PHYS_OFFSET); })
> >
> > What's the #include-hell like if you try to use VM_BUG_ON instead?
> >
> 
> The #include hell would not change I think, since
> 
> include/linux/mmdebug.h:18:#define VM_BUG_ON(cond) BUG_ON(cond)
> 
> but it would certainly make this code look a lot cleaner.

Yup, and likewise for PHYS_OFFSET.

Will


* [RFC PATCH 05/10] arm64: mm: avoid __pa translations in early_fixmap_init
  2016-02-22 20:54 ` [RFC PATCH 05/10] arm64: mm: avoid __pa translations in early_fixmap_init Ard Biesheuvel
@ 2016-02-23 17:12   ` Catalin Marinas
  2016-02-23 17:16     ` Ard Biesheuvel
  0 siblings, 1 reply; 25+ messages in thread
From: Catalin Marinas @ 2016-02-23 17:12 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Feb 22, 2016 at 09:54:27PM +0100, Ard Biesheuvel wrote:
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -679,7 +679,7 @@ void __init early_fixmap_init(void)
>  
>  	pgd = pgd_offset_k(addr);
>  	if (CONFIG_PGTABLE_LEVELS > 3 &&
> -	    !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa(bm_pud))) {
> +	    !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa_symbol(bm_pud))) {
>  		/*
>  		 * We only end up here if the kernel mapping and the fixmap
>  		 * share the top level pgd entry, which should only happen on

Do I miss any patches? The for-next/core branch has a pgd_none(*pgd)
check here, so this patch does not apply.

-- 
Catalin


* [RFC PATCH 05/10] arm64: mm: avoid __pa translations in early_fixmap_init
  2016-02-23 17:12   ` Catalin Marinas
@ 2016-02-23 17:16     ` Ard Biesheuvel
  2016-02-23 17:26       ` Catalin Marinas
  0 siblings, 1 reply; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-23 17:16 UTC (permalink / raw)
  To: linux-arm-kernel

On 23 February 2016 at 18:12, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Mon, Feb 22, 2016 at 09:54:27PM +0100, Ard Biesheuvel wrote:
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -679,7 +679,7 @@ void __init early_fixmap_init(void)
>>
>>       pgd = pgd_offset_k(addr);
>>       if (CONFIG_PGTABLE_LEVELS > 3 &&
>> -         !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa(bm_pud))) {
>> +         !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa_symbol(bm_pud))) {
>>               /*
>>                * We only end up here if the kernel mapping and the fixmap
>>                * share the top level pgd entry, which should only happen on
>
> Do I miss any patches? The for-next/core branch has a pgd_none(*pgd)
> check here, so this patch does not apply.
>

This is actually based on the kaslr branch, not for-next/core.

I'm happy to rebase and resend. I have added some patches for kasan,
kvm and smp as well.


* [RFC PATCH 10/10] arm64: mm: restrict __pa() translations to linear virtual addresses
  2016-02-23 12:29     ` Ard Biesheuvel
  2016-02-23 12:31       ` Will Deacon
@ 2016-02-23 17:22       ` Catalin Marinas
  1 sibling, 0 replies; 25+ messages in thread
From: Catalin Marinas @ 2016-02-23 17:22 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Feb 23, 2016 at 01:29:21PM +0100, Ard Biesheuvel wrote:
> On 23 February 2016 at 13:26, Will Deacon <will.deacon@arm.com> wrote:
> > On Mon, Feb 22, 2016 at 09:54:32PM +0100, Ard Biesheuvel wrote:
> >> Now that we have replaced all occurrences of __pa() translations
> >> involving virtual addresses that are covered by the kernel text,
> >> we can redefine __virt_to_phys and __pa() etc to only take virtual
> >> address that are covered by the linear mapping. This means we can
> >> remove the comparison with PAGE_OFFSET in the definition of
> >> __virt_to_phys().
> >>
> >> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> >> ---
> >>  arch/arm64/include/asm/memory.h | 12 +++++++-----
> >>  1 file changed, 7 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> >> index 56d6739430f3..3b5dc5b243ac 100644
> >> --- a/arch/arm64/include/asm/memory.h
> >> +++ b/arch/arm64/include/asm/memory.h
> >> @@ -86,11 +86,14 @@
> >>   * private definitions which should NOT be used outside memory.h
> >>   * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
> >>   */
> >> -#define __virt_to_phys(x) ({                                         \
> >> +#ifndef CONFIG_DEBUG_VM
> >> +#define __virt_to_phys(x)    (((phys_addr_t)(x) & ~PAGE_OFFSET) + PHYS_OFFSET)
> >> +#else
> >> +#define __virt_to_phys(x)    ({                                      \
> >>       phys_addr_t __x = (phys_addr_t)(x);                             \
> >> -     __x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :   \
> >> -                              (__x - kimage_voffset); })
> >> -
> >> +     BUG_ON(__x < PAGE_OFFSET);                                      \
> >> +     (((phys_addr_t)__x & ~PAGE_OFFSET) + PHYS_OFFSET); })
> >
> > What's the #include-hell like if you try to use VM_BUG_ON instead?
> >
> 
> The #include hell would not change I think, since
> 
> include/linux/mmdebug.h:18:#define VM_BUG_ON(cond) BUG_ON(cond)
> 
> but it would certainly make this code look a lot cleaner.

I just tried the fixup below on top of your PHYS_OFFSET patch from
yesterday and it seems to work (I can fold it in once I finish testing
more build configurations):

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index d862c62a33b8..e02bfef18552 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -132,14 +132,11 @@
 
 #ifndef __ASSEMBLY__
 #include <linux/bitops.h>
+#include <linux/mmdebug.h>
 
 extern phys_addr_t		memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
-#ifndef CONFIG_DEBUG_VM
-#define PHYS_OFFSET		({ memstart_addr; })
-#else
-#define PHYS_OFFSET		({ BUG_ON(memstart_addr & 1); memstart_addr; })
-#endif
+#define PHYS_OFFSET		({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })
 
 /* the offset between the kernel virtual and physical mappings */
 extern u64			kimage_voffset;

-- 
Catalin


* [RFC PATCH 05/10] arm64: mm: avoid __pa translations in early_fixmap_init
  2016-02-23 17:16     ` Ard Biesheuvel
@ 2016-02-23 17:26       ` Catalin Marinas
  2016-02-23 17:27         ` Ard Biesheuvel
  0 siblings, 1 reply; 25+ messages in thread
From: Catalin Marinas @ 2016-02-23 17:26 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Feb 23, 2016 at 06:16:50PM +0100, Ard Biesheuvel wrote:
> On 23 February 2016 at 18:12, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Mon, Feb 22, 2016 at 09:54:27PM +0100, Ard Biesheuvel wrote:
> >> --- a/arch/arm64/mm/mmu.c
> >> +++ b/arch/arm64/mm/mmu.c
> >> @@ -679,7 +679,7 @@ void __init early_fixmap_init(void)
> >>
> >>       pgd = pgd_offset_k(addr);
> >>       if (CONFIG_PGTABLE_LEVELS > 3 &&
> >> -         !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa(bm_pud))) {
> >> +         !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa_symbol(bm_pud))) {
> >>               /*
> >>                * We only end up here if the kernel mapping and the fixmap
> >>                * share the top level pgd entry, which should only happen on
> >
> > Do I miss any patches? The for-next/core branch has a pgd_none(*pgd)
> > check here, so this patch does not apply.
> >
> 
> This is actually based on the kaslr branch, not for-next/core
> 
> I'm happy to rebase and resend.

That's fine, no need to resend. I plan to move those as well onto
for-next/core but wanted some more testing first on the initial part.

> I have added some patches for kasan, kvm and smp as well.

Patches related to the __pa clean-up? Or something else?

-- 
Catalin


* [RFC PATCH 05/10] arm64: mm: avoid __pa translations in early_fixmap_init
  2016-02-23 17:26       ` Catalin Marinas
@ 2016-02-23 17:27         ` Ard Biesheuvel
  2016-02-23 17:32           ` Catalin Marinas
  0 siblings, 1 reply; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-23 17:27 UTC (permalink / raw)
  To: linux-arm-kernel

On 23 February 2016 at 18:26, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Tue, Feb 23, 2016 at 06:16:50PM +0100, Ard Biesheuvel wrote:
>> On 23 February 2016 at 18:12, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> > On Mon, Feb 22, 2016 at 09:54:27PM +0100, Ard Biesheuvel wrote:
>> >> --- a/arch/arm64/mm/mmu.c
>> >> +++ b/arch/arm64/mm/mmu.c
>> >> @@ -679,7 +679,7 @@ void __init early_fixmap_init(void)
>> >>
>> >>       pgd = pgd_offset_k(addr);
>> >>       if (CONFIG_PGTABLE_LEVELS > 3 &&
>> >> -         !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa(bm_pud))) {
>> >> +         !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa_symbol(bm_pud))) {
>> >>               /*
>> >>                * We only end up here if the kernel mapping and the fixmap
>> >>                * share the top level pgd entry, which should only happen on
>> >
>> > Do I miss any patches? The for-next/core branch has a pgd_none(*pgd)
>> > check here, so this patch does not apply.
>> >
>>
>> This is actually based on the kaslr branch, not for-next/core
>>
>> I'm happy to rebase and resend.
>
> That's fine, no need to resend. I plan to move those as well onto
> for-next/core but wanted some more testing first on the initial part.
>
>> I have added some patches for kasan, kvm and smp as well.
>
> Patches related to the __pa clean-up? Or something else?
>

No, related to the __pa restriction


* [RFC PATCH 05/10] arm64: mm: avoid __pa translations in early_fixmap_init
  2016-02-23 17:27         ` Ard Biesheuvel
@ 2016-02-23 17:32           ` Catalin Marinas
  2016-02-23 17:43             ` Ard Biesheuvel
  0 siblings, 1 reply; 25+ messages in thread
From: Catalin Marinas @ 2016-02-23 17:32 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Feb 23, 2016 at 06:27:27PM +0100, Ard Biesheuvel wrote:
> On 23 February 2016 at 18:26, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Tue, Feb 23, 2016 at 06:16:50PM +0100, Ard Biesheuvel wrote:
> >> On 23 February 2016 at 18:12, Catalin Marinas <catalin.marinas@arm.com> wrote:
> >> > On Mon, Feb 22, 2016 at 09:54:27PM +0100, Ard Biesheuvel wrote:
> >> >> --- a/arch/arm64/mm/mmu.c
> >> >> +++ b/arch/arm64/mm/mmu.c
> >> >> @@ -679,7 +679,7 @@ void __init early_fixmap_init(void)
> >> >>
> >> >>       pgd = pgd_offset_k(addr);
> >> >>       if (CONFIG_PGTABLE_LEVELS > 3 &&
> >> >> -         !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa(bm_pud))) {
> >> >> +         !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa_symbol(bm_pud))) {
> >> >>               /*
> >> >>                * We only end up here if the kernel mapping and the fixmap
> >> >>                * share the top level pgd entry, which should only happen on
> >> >
> >> > Do I miss any patches? The for-next/core branch has a pgd_none(*pgd)
> >> > check here, so this patch does not apply.
> >> >
> >>
> >> This is actually based on the kaslr branch, not for-next/core
> >>
> >> I'm happy to rebase and resend.
> >
> > That's fine, no need to resend. I plan to move those as well onto
> > for-next/core but wanted some more testing first on the initial part.
> >
> >> I have added some patches for kasan, kvm and smp as well.
> >
> > Patches related to the __pa clean-up? Or something else?
> 
> No, related to the __pa restriction

OK. I'll wait for a while before merging this series to get more reviews
and testing. I picked the first patch though (high_memory fix).

-- 
Catalin


* [RFC PATCH 05/10] arm64: mm: avoid __pa translations in early_fixmap_init
  2016-02-23 17:32           ` Catalin Marinas
@ 2016-02-23 17:43             ` Ard Biesheuvel
  2016-02-23 18:10               ` Catalin Marinas
  0 siblings, 1 reply; 25+ messages in thread
From: Ard Biesheuvel @ 2016-02-23 17:43 UTC (permalink / raw)
  To: linux-arm-kernel

On 23 February 2016 at 18:32, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Tue, Feb 23, 2016 at 06:27:27PM +0100, Ard Biesheuvel wrote:
>> On 23 February 2016 at 18:26, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> > On Tue, Feb 23, 2016 at 06:16:50PM +0100, Ard Biesheuvel wrote:
>> >> On 23 February 2016 at 18:12, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> >> > On Mon, Feb 22, 2016 at 09:54:27PM +0100, Ard Biesheuvel wrote:
>> >> >> --- a/arch/arm64/mm/mmu.c
>> >> >> +++ b/arch/arm64/mm/mmu.c
>> >> >> @@ -679,7 +679,7 @@ void __init early_fixmap_init(void)
>> >> >>
>> >> >>       pgd = pgd_offset_k(addr);
>> >> >>       if (CONFIG_PGTABLE_LEVELS > 3 &&
>> >> >> -         !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa(bm_pud))) {
>> >> >> +         !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa_symbol(bm_pud))) {
>> >> >>               /*
>> >> >>                * We only end up here if the kernel mapping and the fixmap
>> >> >>                * share the top level pgd entry, which should only happen on
>> >> >
>> >> > Do I miss any patches? The for-next/core branch has a pgd_none(*pgd)
>> >> > check here, so this patch does not apply.
>> >> >
>> >>
>> >> This is actually based on the kaslr branch, not for-next/core
>> >>
>> >> I'm happy to rebase and resend.
>> >
>> > That's fine, no need to resend. I plan to move those as well onto
>> > for-next/core but wanted some more testing first on the initial part.
>> >
>> >> I have added some patches for kasan, kvm and smp as well.
>> >
>> > Patches related to the __pa clean-up? Or something else?
>>
>> No, related to the __pa restriction
>
> OK. I'll wait for a while before merging this series to get more reviews
> and testing. I picked the first patch though (high_memory fix).
>

OK, I'll hold off for now. I have enough stuff in flight as it is.
However, I suppose that you will want to address the performance
concern at some point.

If anyone wants to do any benchmarking:
git://git.linaro.org/people/ard.biesheuvel/linux-arm.git arm64-pa-linear-mapping

(updated for kasan, KVM, smp etc)


* [RFC PATCH 05/10] arm64: mm: avoid __pa translations in early_fixmap_init
  2016-02-23 17:43             ` Ard Biesheuvel
@ 2016-02-23 18:10               ` Catalin Marinas
  0 siblings, 0 replies; 25+ messages in thread
From: Catalin Marinas @ 2016-02-23 18:10 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Feb 23, 2016 at 06:43:37PM +0100, Ard Biesheuvel wrote:
> On 23 February 2016 at 18:32, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Tue, Feb 23, 2016 at 06:27:27PM +0100, Ard Biesheuvel wrote:
> >> On 23 February 2016 at 18:26, Catalin Marinas <catalin.marinas@arm.com> wrote:
> >> > On Tue, Feb 23, 2016 at 06:16:50PM +0100, Ard Biesheuvel wrote:
> >> >> On 23 February 2016 at 18:12, Catalin Marinas <catalin.marinas@arm.com> wrote:
> >> >> > On Mon, Feb 22, 2016 at 09:54:27PM +0100, Ard Biesheuvel wrote:
> >> >> >> --- a/arch/arm64/mm/mmu.c
> >> >> >> +++ b/arch/arm64/mm/mmu.c
> >> >> >> @@ -679,7 +679,7 @@ void __init early_fixmap_init(void)
> >> >> >>
> >> >> >>       pgd = pgd_offset_k(addr);
> >> >> >>       if (CONFIG_PGTABLE_LEVELS > 3 &&
> >> >> >> -         !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa(bm_pud))) {
> >> >> >> +         !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa_symbol(bm_pud))) {
> >> >> >>               /*
> >> >> >>                * We only end up here if the kernel mapping and the fixmap
> >> >> >>                * share the top level pgd entry, which should only happen on
> >> >> >
> >> >> > Do I miss any patches? The for-next/core branch has a pgd_none(*pgd)
> >> >> > check here, so this patch does not apply.
> >> >> >
> >> >>
> >> >> This is actually based on the kaslr branch, not for-next/core
> >> >>
> >> >> I'm happy to rebase and resend.
> >> >
> >> > That's fine, no need to resend. I plan to move those as well onto
> >> > for-next/core but wanted some more testing first on the initial part.
> >> >
> >> >> I have added some patches for kasan, kvm and smp as well.
> >> >
> >> > Patches related to the __pa clean-up? Or something else?
> >>
> >> No, related to the __pa restriction
> >
> > OK. I'll wait for a while before merging this series to get more reviews
> > and testing. I picked the first patch though (high_memory fix).
> 
> OK, I'll hold off for now. I have enough stuff in flight as it is.
> However, I suppose that you will want to address the performance
> concern at some point

Yes, though we first need to identify how real this concern is.
PHYS_OFFSET was low-hanging fruit, so it was worth merging without
additional benchmarking.

The __pa clean-up, while nice, is relatively intrusive and would need
acks from the KVM guys as well. I wouldn't rush into merging it unless
it shows some benefits in benchmarks.

One concern I have is that we may still find something in the generic
code doing a virt_to_phys() on a kernel image address, which would
render all this clean-up unnecessary (since most of the changes are not
on the critical path, we do them just to simplify virt_to_phys()).

> If anyone wants to do any benchmarking:
> git://git.linaro.org/people/ard.biesheuvel/linux-arm.git arm64-pa-linear-mapping

Thanks. We'll give this a try in the next few days.

-- 
Catalin


end of thread, other threads:[~2016-02-23 18:10 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-02-22 20:54 [RFC PATCH 00/10] arm64: restrict __pa translation to linear virtual addresses Ard Biesheuvel
2016-02-22 20:54 ` [RFC PATCH 01/10] arm64: mm: move assignment of 'high_memory' before its first use Ard Biesheuvel
2016-02-23 12:14   ` Catalin Marinas
2016-02-23 12:18     ` Ard Biesheuvel
2016-02-22 20:54 ` [RFC PATCH 02/10] arm64: introduce __kimg_to_phys() and __pa_symbol() Ard Biesheuvel
2016-02-22 20:54 ` [RFC PATCH 03/10] arm64: mm: avoid __pa translations in cpu_replace_ttbr1 Ard Biesheuvel
2016-02-22 20:54 ` [RFC PATCH 04/10] arm64: mm: avoid __pa translations on empty_zero_page Ard Biesheuvel
2016-02-22 20:54 ` [RFC PATCH 05/10] arm64: mm: avoid __pa translations in early_fixmap_init Ard Biesheuvel
2016-02-23 17:12   ` Catalin Marinas
2016-02-23 17:16     ` Ard Biesheuvel
2016-02-23 17:26       ` Catalin Marinas
2016-02-23 17:27         ` Ard Biesheuvel
2016-02-23 17:32           ` Catalin Marinas
2016-02-23 17:43             ` Ard Biesheuvel
2016-02-23 18:10               ` Catalin Marinas
2016-02-22 20:54 ` [RFC PATCH 06/10] arm64: mm: use __pa_symbol() not __pa() for section boundary symbols Ard Biesheuvel
2016-02-22 20:54 ` [RFC PATCH 07/10] arm64: mm: avoid __pa translations for idmap_pg_dir and swapper_pg_dir Ard Biesheuvel
2016-02-22 20:54 ` [RFC PATCH 08/10] arm64: vdso: avoid __pa translations Ard Biesheuvel
2016-02-22 20:54 ` [RFC PATCH 09/10] arm64: insn: " Ard Biesheuvel
2016-02-22 20:54 ` [RFC PATCH 10/10] arm64: mm: restrict __pa() translations to linear virtual addresses Ard Biesheuvel
2016-02-23 12:26   ` Will Deacon
2016-02-23 12:29     ` Ard Biesheuvel
2016-02-23 12:31       ` Will Deacon
2016-02-23 17:22       ` Catalin Marinas
2016-02-23 10:03 ` [RFC PATCH 00/10] arm64: restrict __pa translation " Ard Biesheuvel
