linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v2 0/3] arm64/mm: use the contiguous attribute for kernel mappings
@ 2016-10-11 12:39 Ard Biesheuvel
  2016-10-11 12:39 ` [PATCH v2 1/3] arm64: mm: BUG on unsupported manipulations of live " Ard Biesheuvel
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Ard Biesheuvel @ 2016-10-11 12:39 UTC (permalink / raw)
  To: linux-arm-kernel

This three-patch series is a follow-up to the single patch 'arm64: mmu: set
the contiguous bit for kernel mappings when appropriate' sent out on the
10th [0].

This v2 addresses the following issues:
- the contiguous attribute is also useful for contiguous PMD mappings on 16k
  granule kernels (i.e., 1 GB blocks)
- the function parameter 'block_mappings_allowed' does not clearly convey
  whether contiguous page mappings should be used, so it is renamed to
  'page_mappings_only', and its meaning inverted
- instead of BUGging on changes in the PTE_CONT attribute in PMD or PTE entries
  that have been populated already, BUG on any modification except for
  permission attributes, which don't require break-before-make when changed.

[0] http://marc.info/?l=linux-arm-kernel&m=147612332130714
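
As background, a minimal sketch of the eligibility test patch 3 applies
before setting PTE_CONT (the helper name is invented for illustration;
CONT_PTE_MASK and CONT_PTE_SIZE are the existing arm64 definitions):

  /*
   * Illustrative helper only -- the patches open-code this test. A run
   * of PTEs may carry PTE_CONT when the virtual and physical addresses
   * are both aligned to the contiguous range and the mapping covers at
   * least one full range.
   */
  static bool pte_cont_suitable(unsigned long addr, phys_addr_t phys,
                                unsigned long end)
  {
          return ((addr | phys) & ~CONT_PTE_MASK) == 0 &&
                 end - addr >= CONT_PTE_SIZE;
  }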

Ard Biesheuvel (3):
  arm64: mm: BUG on unsupported manipulations of live kernel mappings
  arm64: mm: replace 'block_mappings_allowed' with 'page_mappings_only'
  arm64: mm: set the contiguous bit for kernel mappings where
    appropriate

 arch/arm64/include/asm/mmu.h |   2 +-
 arch/arm64/kernel/efi.c      |   8 +-
 arch/arm64/mm/mmu.c          | 135 +++++++++++++-------
 3 files changed, 93 insertions(+), 52 deletions(-)

-- 
2.7.4

* [PATCH v2 1/3] arm64: mm: BUG on unsupported manipulations of live kernel mappings
  2016-10-11 12:39 [PATCH v2 0/3] arm64/mm: use the contiguous attribute for kernel mappings Ard Biesheuvel
@ 2016-10-11 12:39 ` Ard Biesheuvel
  2016-10-11 12:39 ` [PATCH v2 2/3] arm64: mm: replace 'block_mappings_allowed' with 'page_mappings_only' Ard Biesheuvel
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Ard Biesheuvel @ 2016-10-11 12:39 UTC (permalink / raw)
  To: linux-arm-kernel

Now that we take care not to manipulate the live kernel page tables in a
way that may lead to TLB conflicts, the case where a table mapping is
replaced by a block mapping can no longer occur. So remove the handling
of this at the PUD and PMD levels, and instead, BUG() on any occurrence
of live kernel page table manipulations that modify anything other than
the permission bits.
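
In essence, the rule enforced at each level is the following (a condensed
sketch; the patch open-codes this check rather than adding a helper):

  /*
   * Illustrative only. An update to a live entry is acceptable iff the
   * entry was previously empty, or only the permission attributes --
   * which may be changed without break-before-make -- differ.
   */
  static bool change_is_safe(pteval_t old, pteval_t new)
  {
          const pteval_t mask = PTE_PXN | PTE_RDONLY | PTE_WRITE;

          return old == 0 || ((old ^ new) & ~mask) == 0;
  }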

Since mark_rodata_ro() is the only caller where the kernel mappings that
are being manipulated are actually live, drop the various conditional
flush_tlb_all() invocations, and add a single flush_tlb_all() call at the
end of mark_rodata_ro() instead.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/mmu.c | 68 ++++++++++++--------
 1 file changed, 41 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 05615a3fdc6f..1cbadd20e741 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -28,8 +28,6 @@
 #include <linux/memblock.h>
 #include <linux/fs.h>
 #include <linux/io.h>
-#include <linux/slab.h>
-#include <linux/stop_machine.h>
 
 #include <asm/barrier.h>
 #include <asm/cputype.h>
@@ -95,6 +93,12 @@ static phys_addr_t __init early_pgtable_alloc(void)
 	return phys;
 }
 
+/*
+ * The following mapping attributes may be updated in live
+ * kernel mappings without the need for break-before-make.
+ */
+static const pteval_t modifiable_attr_mask = PTE_PXN | PTE_RDONLY | PTE_WRITE;
+
 static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
 				  unsigned long end, unsigned long pfn,
 				  pgprot_t prot,
@@ -115,8 +119,18 @@ static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
 
 	pte = pte_set_fixmap_offset(pmd, addr);
 	do {
+		pte_t old_pte = *pte;
+
 		set_pte(pte, pfn_pte(pfn, prot));
 		pfn++;
+
+		/*
+		 * After the PTE entry has been populated once, we
+		 * only allow updates to the permission attributes.
+		 */
+		BUG_ON(!pte_none(old_pte) &&
+		       ((pte_val(old_pte) ^ pte_val(*pte)) &
+			~modifiable_attr_mask) != 0);
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 
 	pte_clear_fixmap();
@@ -146,27 +160,28 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
 
 	pmd = pmd_set_fixmap_offset(pud, addr);
 	do {
+		pmd_t old_pmd = *pmd;
+
 		next = pmd_addr_end(addr, end);
+
 		/* try section mapping first */
 		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
 		      allow_block_mappings) {
-			pmd_t old_pmd =*pmd;
 			pmd_set_huge(pmd, phys, prot);
+
 			/*
-			 * Check for previous table entries created during
-			 * boot (__create_page_tables) and flush them.
+			 * After the PMD entry has been populated once, we
+			 * only allow updates to the permission attributes.
 			 */
-			if (!pmd_none(old_pmd)) {
-				flush_tlb_all();
-				if (pmd_table(old_pmd)) {
-					phys_addr_t table = pmd_page_paddr(old_pmd);
-					if (!WARN_ON_ONCE(slab_is_available()))
-						memblock_free(table, PAGE_SIZE);
-				}
-			}
+			BUG_ON(!pmd_none(old_pmd) &&
+			       ((pmd_val(old_pmd) ^ pmd_val(*pmd)) &
+				~modifiable_attr_mask) != 0);
 		} else {
 			alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys),
 				       prot, pgtable_alloc);
+
+			BUG_ON(!pmd_none(old_pmd) &&
+			       pmd_val(old_pmd) != pmd_val(*pmd));
 		}
 		phys += next - addr;
 	} while (pmd++, addr = next, addr != end);
@@ -204,33 +219,29 @@ static void alloc_init_pud(pgd_t *pgd, unsigned long addr, unsigned long end,
 
 	pud = pud_set_fixmap_offset(pgd, addr);
 	do {
+		pud_t old_pud = *pud;
+
 		next = pud_addr_end(addr, end);
 
 		/*
 		 * For 4K granule only, attempt to put down a 1GB block
 		 */
 		if (use_1G_block(addr, next, phys) && allow_block_mappings) {
-			pud_t old_pud = *pud;
 			pud_set_huge(pud, phys, prot);
 
 			/*
-			 * If we have an old value for a pud, it will
-			 * be pointing to a pmd table that we no longer
-			 * need (from swapper_pg_dir).
-			 *
-			 * Look up the old pmd table and free it.
+			 * After the PUD entry has been populated once, we
+			 * only allow updates to the permission attributes.
 			 */
-			if (!pud_none(old_pud)) {
-				flush_tlb_all();
-				if (pud_table(old_pud)) {
-					phys_addr_t table = pud_page_paddr(old_pud);
-					if (!WARN_ON_ONCE(slab_is_available()))
-						memblock_free(table, PAGE_SIZE);
-				}
-			}
+			BUG_ON(!pud_none(old_pud) &&
+			       ((pud_val(old_pud) ^ pud_val(*pud)) &
+				~modifiable_attr_mask) != 0);
 		} else {
 			alloc_init_pmd(pud, addr, next, phys, prot,
 				       pgtable_alloc, allow_block_mappings);
+
+			BUG_ON(!pud_none(old_pud) &&
+			       pud_val(old_pud) != pud_val(*pud));
 		}
 		phys += next - addr;
 	} while (pud++, addr = next, addr != end);
@@ -396,6 +407,9 @@ void mark_rodata_ro(void)
 	section_size = (unsigned long)__init_begin - (unsigned long)__start_rodata;
 	create_mapping_late(__pa(__start_rodata), (unsigned long)__start_rodata,
 			    section_size, PAGE_KERNEL_RO);
+
+	/* flush the TLBs after updating live kernel mappings */
+	flush_tlb_all();
 }
 
 static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
-- 
2.7.4

* [PATCH v2 2/3] arm64: mm: replace 'block_mappings_allowed' with 'page_mappings_only'
  2016-10-11 12:39 [PATCH v2 0/3] arm64/mm: use the contiguous attribute for kernel mappings Ard Biesheuvel
  2016-10-11 12:39 ` [PATCH v2 1/3] arm64: mm: BUG on unsupported manipulations of live " Ard Biesheuvel
@ 2016-10-11 12:39 ` Ard Biesheuvel
  2016-10-11 12:39 ` [PATCH v2 3/3] arm64: mm: set the contiguous bit for kernel mappings where appropriate Ard Biesheuvel
  2016-10-11 15:46 ` [PATCH v2 0/3] arm64/mm: use the contiguous attribute for kernel mappings Ard Biesheuvel
  3 siblings, 0 replies; 5+ messages in thread
From: Ard Biesheuvel @ 2016-10-11 12:39 UTC (permalink / raw)
  To: linux-arm-kernel

In preparation for adding support for contiguous PTE and PMD mappings,
let's replace 'block_mappings_allowed' with 'page_mappings_only', which
will be a more accurate description of the nature of the setting once we
add such contiguous mappings into the mix.
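
The inversion is mechanical at every call site; taking the
create_mapping_late() case from the diff below as an example:

  /* before: 'true' meant block (and soon contiguous) mappings are allowed */
  __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot,
                       NULL, !debug_pagealloc_enabled());

  /* after: 'true' forces page granularity, so each call site is negated */
  __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot,
                       NULL, debug_pagealloc_enabled());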

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/mmu.h |  2 +-
 arch/arm64/kernel/efi.c      |  8 ++---
 arch/arm64/mm/mmu.c          | 32 ++++++++++----------
 3 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 8d9fce037b2f..a81454ad5455 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -34,7 +34,7 @@ extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
 extern void init_mem_pgprot(void);
 extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 			       unsigned long virt, phys_addr_t size,
-			       pgprot_t prot, bool allow_block_mappings);
+			       pgprot_t prot, bool page_mappings_only);
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
 
 #endif
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index ba9bee389fd5..5d17f377d905 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -62,8 +62,8 @@ struct screen_info screen_info __section(.data);
 int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
 {
 	pteval_t prot_val = create_mapping_protection(md);
-	bool allow_block_mappings = (md->type != EFI_RUNTIME_SERVICES_CODE &&
-				     md->type != EFI_RUNTIME_SERVICES_DATA);
+	bool page_mappings_only = (md->type == EFI_RUNTIME_SERVICES_CODE ||
+				   md->type == EFI_RUNTIME_SERVICES_DATA);
 
 	if (!PAGE_ALIGNED(md->phys_addr) ||
 	    !PAGE_ALIGNED(md->num_pages << EFI_PAGE_SHIFT)) {
@@ -76,12 +76,12 @@ int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
 		 * from the MMU routines. So avoid block mappings altogether in
 		 * that case.
 		 */
-		allow_block_mappings = false;
+		page_mappings_only = true;
 	}
 
 	create_pgd_mapping(mm, md->phys_addr, md->virt_addr,
 			   md->num_pages << EFI_PAGE_SHIFT,
-			   __pgprot(prot_val | PTE_NG), allow_block_mappings);
+			   __pgprot(prot_val | PTE_NG), page_mappings_only);
 	return 0;
 }
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 1cbadd20e741..ee3cda6c41a7 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -139,7 +139,7 @@ static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
 static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
 				  phys_addr_t phys, pgprot_t prot,
 				  phys_addr_t (*pgtable_alloc)(void),
-				  bool allow_block_mappings)
+				  bool page_mappings_only)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -166,7 +166,7 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
 
 		/* try section mapping first */
 		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
-		      allow_block_mappings) {
+		      !page_mappings_only) {
 			pmd_set_huge(pmd, phys, prot);
 
 			/*
@@ -204,7 +204,7 @@ static inline bool use_1G_block(unsigned long addr, unsigned long next,
 static void alloc_init_pud(pgd_t *pgd, unsigned long addr, unsigned long end,
 				  phys_addr_t phys, pgprot_t prot,
 				  phys_addr_t (*pgtable_alloc)(void),
-				  bool allow_block_mappings)
+				  bool page_mappings_only)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -226,7 +226,7 @@ static void alloc_init_pud(pgd_t *pgd, unsigned long addr, unsigned long end,
 		/*
 		 * For 4K granule only, attempt to put down a 1GB block
 		 */
-		if (use_1G_block(addr, next, phys) && allow_block_mappings) {
+		if (use_1G_block(addr, next, phys) && !page_mappings_only) {
 			pud_set_huge(pud, phys, prot);
 
 			/*
@@ -238,7 +238,7 @@ static void alloc_init_pud(pgd_t *pgd, unsigned long addr, unsigned long end,
 				~modifiable_attr_mask) != 0);
 		} else {
 			alloc_init_pmd(pud, addr, next, phys, prot,
-				       pgtable_alloc, allow_block_mappings);
+				       pgtable_alloc, page_mappings_only);
 
 			BUG_ON(!pud_none(old_pud) &&
 			       pud_val(old_pud) != pud_val(*pud));
@@ -253,7 +253,7 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
 				 unsigned long virt, phys_addr_t size,
 				 pgprot_t prot,
 				 phys_addr_t (*pgtable_alloc)(void),
-				 bool allow_block_mappings)
+				 bool page_mappings_only)
 {
 	unsigned long addr, length, end, next;
 	pgd_t *pgd = pgd_offset_raw(pgdir, virt);
@@ -273,7 +273,7 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
 	do {
 		next = pgd_addr_end(addr, end);
 		alloc_init_pud(pgd, addr, next, phys, prot, pgtable_alloc,
-			       allow_block_mappings);
+			       page_mappings_only);
 		phys += next - addr;
 	} while (pgd++, addr = next, addr != end);
 }
@@ -302,17 +302,17 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
 			&phys, virt);
 		return;
 	}
-	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL, true);
+	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL, false);
 }
 
 void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 			       unsigned long virt, phys_addr_t size,
-			       pgprot_t prot, bool allow_block_mappings)
+			       pgprot_t prot, bool page_mappings_only)
 {
 	BUG_ON(mm == &init_mm);
 
 	__create_pgd_mapping(mm->pgd, phys, virt, size, prot,
-			     pgd_pgtable_alloc, allow_block_mappings);
+			     pgd_pgtable_alloc, page_mappings_only);
 }
 
 static void create_mapping_late(phys_addr_t phys, unsigned long virt,
@@ -325,7 +325,7 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
 	}
 
 	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot,
-			     NULL, !debug_pagealloc_enabled());
+			     NULL, debug_pagealloc_enabled());
 }
 
 static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end)
@@ -343,7 +343,7 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
 		__create_pgd_mapping(pgd, start, __phys_to_virt(start),
 				     end - start, PAGE_KERNEL,
 				     early_pgtable_alloc,
-				     !debug_pagealloc_enabled());
+				     debug_pagealloc_enabled());
 		return;
 	}
 
@@ -356,13 +356,13 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
 				     __phys_to_virt(start),
 				     kernel_start - start, PAGE_KERNEL,
 				     early_pgtable_alloc,
-				     !debug_pagealloc_enabled());
+				     debug_pagealloc_enabled());
 	if (kernel_end < end)
 		__create_pgd_mapping(pgd, kernel_end,
 				     __phys_to_virt(kernel_end),
 				     end - kernel_end, PAGE_KERNEL,
 				     early_pgtable_alloc,
-				     !debug_pagealloc_enabled());
+				     debug_pagealloc_enabled());
 
 	/*
 	 * Map the linear alias of the [_text, __init_begin) interval as
@@ -372,7 +372,7 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
 	 */
 	__create_pgd_mapping(pgd, kernel_start, __phys_to_virt(kernel_start),
 			     kernel_end - kernel_start, PAGE_KERNEL_RO,
-			     early_pgtable_alloc, !debug_pagealloc_enabled());
+			     early_pgtable_alloc, debug_pagealloc_enabled());
 }
 
 static void __init map_mem(pgd_t *pgd)
@@ -422,7 +422,7 @@ static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
 	BUG_ON(!PAGE_ALIGNED(size));
 
 	__create_pgd_mapping(pgd, pa_start, (unsigned long)va_start, size, prot,
-			     early_pgtable_alloc, !debug_pagealloc_enabled());
+			     early_pgtable_alloc, debug_pagealloc_enabled());
 
 	vma->addr	= va_start;
 	vma->phys_addr	= pa_start;
-- 
2.7.4

* [PATCH v2 3/3] arm64: mm: set the contiguous bit for kernel mappings where appropriate
  2016-10-11 12:39 [PATCH v2 0/3] arm64/mm: use the contiguous attribute for kernel mappings Ard Biesheuvel
  2016-10-11 12:39 ` [PATCH v2 1/3] arm64: mm: BUG on unsupported manipulations of live " Ard Biesheuvel
  2016-10-11 12:39 ` [PATCH v2 2/3] arm64: mm: replace 'block_mappings_allowed' with 'page_mappings_only' Ard Biesheuvel
@ 2016-10-11 12:39 ` Ard Biesheuvel
  2016-10-11 15:46 ` [PATCH v2 0/3] arm64/mm: use the contiguous attribute for kernel mappings Ard Biesheuvel
  3 siblings, 0 replies; 5+ messages in thread
From: Ard Biesheuvel @ 2016-10-11 12:39 UTC (permalink / raw)
  To: linux-arm-kernel

Now that we no longer allow live kernel PMDs to be split, it is safe to
start using the contiguous bit for kernel mappings. So set the contiguous
bit in the kernel page mappings for regions whose size and alignment are
suitable for this. This includes contiguous level 2 mappings for 16k
granule kernels.
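
For the 16k case, the 1 GB figure follows directly from the translation
regime; a worked computation (simplified from the actual arm64
definitions):

  /*
   * With 16 KB pages, a level-2 (PMD) block maps
   *   1 << (PAGE_SHIFT + (PAGE_SHIFT - 3)) = 1 << 25 = 32 MB,
   * and the contiguous hint groups 32 adjacent PMD entries, so a fully
   * populated contiguous span covers 32 * 32 MB = 1 GB.
   */
  #define PAGE_SHIFT    14                              /* 16 KB granule */
  #define PMD_SHIFT     (PAGE_SHIFT + (PAGE_SHIFT - 3)) /* 25 */
  #define CONT_PMDS     32
  #define CONT_PMD_SIZE ((unsigned long)CONT_PMDS << PMD_SHIFT) /* 1 GB */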

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/mmu.c | 35 +++++++++++++++++---
 1 file changed, 31 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ee3cda6c41a7..c2bf7396888b 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -102,8 +102,10 @@ static const pteval_t modifiable_attr_mask = PTE_PXN | PTE_RDONLY | PTE_WRITE;
 static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
 				  unsigned long end, unsigned long pfn,
 				  pgprot_t prot,
-				  phys_addr_t (*pgtable_alloc)(void))
+				  phys_addr_t (*pgtable_alloc)(void),
+				  bool page_mappings_only)
 {
+	pgprot_t __prot = prot;
 	pte_t *pte;
 
 	BUG_ON(pmd_sect(*pmd));
@@ -121,7 +123,18 @@ static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
 	do {
 		pte_t old_pte = *pte;
 
-		set_pte(pte, pfn_pte(pfn, prot));
+		/*
+		 * Set the contiguous bit for the subsequent group of PTEs if
+		 * its size and alignment are suitable.
+		 */
+		if (((addr | PFN_PHYS(pfn)) & ~CONT_PTE_MASK) == 0) {
+			if (!page_mappings_only && end - addr >= CONT_PTE_SIZE)
+				__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+			else
+				__prot = prot;
+		}
+
+		set_pte(pte, pfn_pte(pfn, __prot));
 		pfn++;
 
 		/*
@@ -141,6 +154,7 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
 				  phys_addr_t (*pgtable_alloc)(void),
 				  bool page_mappings_only)
 {
+	pgprot_t __prot = prot;
 	pmd_t *pmd;
 	unsigned long next;
 
@@ -164,10 +178,22 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
 
 		next = pmd_addr_end(addr, end);
 
+		/*
+		 * For 16K granule only, attempt to put down a 1 GB block by
+		 * stringing 32 PMD block mappings together.
+		 */
+		if (IS_ENABLED(CONFIG_ARM64_16K_PAGES) &&
+		    ((addr | phys) & ~CONT_PMD_MASK) == 0) {
+			if (!page_mappings_only && end - addr >= CONT_PMD_SIZE)
+				__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+			else
+				__prot = prot;
+		}
+
 		/* try section mapping first */
 		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
 		      !page_mappings_only) {
-			pmd_set_huge(pmd, phys, prot);
+			pmd_set_huge(pmd, phys, __prot);
 
 			/*
 			 * After the PMD entry has been populated once, we
@@ -178,7 +204,8 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
 				~modifiable_attr_mask) != 0);
 		} else {
 			alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys),
-				       prot, pgtable_alloc);
+				       prot, pgtable_alloc,
+				       page_mappings_only);
 
 			BUG_ON(!pmd_none(old_pmd) &&
 			       pmd_val(old_pmd) != pmd_val(*pmd));
-- 
2.7.4

* [PATCH v2 0/3] arm64/mm: use the contiguous attribute for kernel mappings
  2016-10-11 12:39 [PATCH v2 0/3] arm64/mm: use the contiguous attribute for kernel mappings Ard Biesheuvel
                   ` (2 preceding siblings ...)
  2016-10-11 12:39 ` [PATCH v2 3/3] arm64: mm: set the contiguous bit for kernel mappings where appropriate Ard Biesheuvel
@ 2016-10-11 15:46 ` Ard Biesheuvel
  3 siblings, 0 replies; 5+ messages in thread
From: Ard Biesheuvel @ 2016-10-11 15:46 UTC (permalink / raw)
  To: linux-arm-kernel

On 11 October 2016 at 13:39, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> This three-patch series is a follow-up to the single patch 'arm64: mmu: set
> the contiguous bit for kernel mappings when appropriate' sent out on the
> 10th [0].
>
> This v2 addresses the following issues:
> - the contiguous attribute is also useful for contiguous PMD mappings on 16k
>   granule kernels (i.e., 1 GB blocks)
> - the function parameter 'block_mappings_allowed' does not clearly convey
>   whether contiguous page mappings should be used, so it is renamed to
>   'page_mappings_only', and its meaning inverted
> - instead of BUGging on changes in the PTE_CONT attribute in PMD or PTE entries
>   that have been populated already, BUG on any modification except for
>   permission attributes, which don't require break-before-make when changed.
>
> [0] http://marc.info/?l=linux-arm-kernel&m=147612332130714
>

In addition to the fact, as mentioned by Will, that contiguous PMDs
are supported for 4k and 64k granules too, this series fails to deal
with folded PUDs and PMDs (which should result in contiguous PGDs for
2-level or 3-level configurations).

Since this is no longer a drive-by patch, I will take some time for the
v3 to address all the corner cases, and do some more elaborate
testing.

-- 
Ard.
