linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 0/3] Page protections for arm64
@ 2014-04-18  0:47 Laura Abbott
  2014-04-18  0:47 ` [PATCH 1/3] arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support Laura Abbott
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Laura Abbott @ 2014-04-18  0:47 UTC (permalink / raw)
  To: linux-arm-kernel

Hi,

These are a few semi-related patches that set memory on arm64 to something
other than read/write/execute everywhere. The CONFIG_DEBUG_SET_MODULE_RONX
patch is actually v2 of a previous patch[1], but it seemed reasonable to put
it with the other work to map regular memory with better protections. The
direction of the arm64 work is roughly based on arm.

Thanks,
Laura


[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-March/238246.html

Laura Abbott (3):
  arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support
  arm64: Treat handle_arch_irq as a function pointer
  arm64: WIP: add better page protections to arm64

 arch/arm64/Kconfig.debug            |  21 +++++
 arch/arm64/include/asm/cacheflush.h |   4 +
 arch/arm64/kernel/entry.S           |   4 +-
 arch/arm64/mm/Makefile              |   2 +-
 arch/arm64/mm/init.c                |   1 +
 arch/arm64/mm/mm.h                  |   2 +
 arch/arm64/mm/mmu.c                 | 173 ++++++++++++++++++++++++++++++++----
 arch/arm64/mm/pageattr.c            | 120 +++++++++++++++++++++++++
 8 files changed, 309 insertions(+), 18 deletions(-)
 create mode 100644 arch/arm64/mm/pageattr.c

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation


* [PATCH 1/3] arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support
  2014-04-18  0:47 [PATCH 0/3] Page protections for arm64 Laura Abbott
@ 2014-04-18  0:47 ` Laura Abbott
  2014-05-02 14:07   ` Steve Capper
  2014-04-18  0:47 ` [PATCH 2/3] arm64: Treat handle_arch_irq as a function pointer Laura Abbott
  2014-04-18  0:47 ` [PATCH 3/3] arm64: WIP: add better page protections to arm64 Laura Abbott
  2 siblings, 1 reply; 8+ messages in thread
From: Laura Abbott @ 2014-04-18  0:47 UTC (permalink / raw)
  To: linux-arm-kernel

In a similar fashion to other architectures, add the infrastructure
and Kconfig option to enable DEBUG_SET_MODULE_RONX support. When
enabled, module ranges will be marked read-only/no-execute as
appropriate.
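
As a rough sketch (not part of this patch), a caller such as the module
loader would be expected to use the new helpers along these lines; the
addresses and page counts here are hypothetical:

	/* hypothetical caller: lock down a freshly loaded module */
	int err;

	err = set_memory_ro(text_addr, text_pages);	/* text: no writes */
	if (!err)
		err = set_memory_nx(data_addr, data_pages);	/* data: no exec */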

Change-Id: I4251a0929b1fe6f43f84b14f0a64fed30769700e
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
---
 arch/arm64/Kconfig.debug            |  11 ++++
 arch/arm64/include/asm/cacheflush.h |   4 ++
 arch/arm64/mm/Makefile              |   2 +-
 arch/arm64/mm/pageattr.c            | 120 ++++++++++++++++++++++++++++++++++++
 4 files changed, 136 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/mm/pageattr.c

diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
index d10ec33..53979ac 100644
--- a/arch/arm64/Kconfig.debug
+++ b/arch/arm64/Kconfig.debug
@@ -37,4 +37,15 @@ config PID_IN_CONTEXTIDR
 	  instructions during context switch. Say Y here only if you are
 	  planning to use hardware trace tools with this kernel.
 
+config DEBUG_SET_MODULE_RONX
+        bool "Set loadable kernel module data as NX and text as RO"
+        depends on MODULES
+        help
+          This option helps catch unintended modifications to a loadable
+          kernel module's text and read-only data. It also prevents execution
+          of module data. Such protection may interfere with run-time code
+          patching and dynamic kernel tracing - but it may also protect
+          against certain classes of kernel exploits.
+          If in doubt, say "N".
+
 endmenu
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 4c60e64..c12f837 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -157,4 +157,8 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
 {
 }
 
+int set_memory_ro(unsigned long addr, int numpages);
+int set_memory_rw(unsigned long addr, int numpages);
+int set_memory_x(unsigned long addr, int numpages);
+int set_memory_nx(unsigned long addr, int numpages);
 #endif
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index b51d364..25b1114 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -1,5 +1,5 @@
 obj-y				:= dma-mapping.o extable.o fault.o init.o \
 				   cache.o copypage.o flush.o \
 				   ioremap.o mmap.o pgd.o mmu.o \
-				   context.o tlb.o proc.o
+				   context.o tlb.o proc.o pageattr.o
 obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
new file mode 100644
index 0000000..e48f980
--- /dev/null
+++ b/arch/arm64/mm/pageattr.c
@@ -0,0 +1,120 @@
+/*
+ * Copyright (c) 2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+
+#include <asm/pgtable.h>
+#include <asm/tlbflush.h>
+
+static pte_t clear_pte_bit(pte_t pte, pgprot_t prot)
+{
+	pte_val(pte) &= ~pgprot_val(prot);
+	return pte;
+}
+
+static pte_t set_pte_bit(pte_t pte, pgprot_t prot)
+{
+	pte_val(pte) |= pgprot_val(prot);
+	return pte;
+}
+
+static int __change_memory(pte_t *ptep, pgtable_t token, unsigned long addr,
+			pgprot_t prot, bool set)
+{
+	pte_t pte;
+
+	if (set)
+		pte = set_pte_bit(*ptep, prot);
+	else
+		pte = clear_pte_bit(*ptep, prot);
+	set_pte(ptep, pte);
+	return 0;
+}
+
+static int set_page_range(pte_t *ptep, pgtable_t token, unsigned long addr,
+			void *data)
+{
+	pgprot_t prot = (pgprot_t)data;
+
+	return __change_memory(ptep, token, addr, prot, true);
+}
+
+static int clear_page_range(pte_t *ptep, pgtable_t token, unsigned long addr,
+			void *data)
+{
+	pgprot_t prot = (pgprot_t)data;
+
+	return __change_memory(ptep, token, addr, prot, false);
+}
+
+static int change_memory_common(unsigned long addr, int numpages,
+				pgprot_t prot, bool set)
+{
+	unsigned long start = addr;
+	unsigned long size = PAGE_SIZE*numpages;
+	unsigned long end = start + size;
+	int ret;
+
+	if (start < MODULES_VADDR || start >= MODULES_END)
+		return -EINVAL;
+
+	if (end < MODULES_VADDR || end >= MODULES_END)
+		return -EINVAL;
+
+	if (set)
+		ret = apply_to_page_range(&init_mm, start, size,
+					set_page_range, (void *)prot);
+	else
+		ret = apply_to_page_range(&init_mm, start, size,
+					clear_page_range, (void *)prot);
+
+	flush_tlb_kernel_range(start, end);
+	return ret;
+}
+
+static int change_memory_set_bit(unsigned long addr, int numpages,
+					pgprot_t prot)
+{
+	return change_memory_common(addr, numpages, prot, true);
+}
+
+static int change_memory_clear_bit(unsigned long addr, int numpages,
+					pgprot_t prot)
+{
+	return change_memory_common(addr, numpages, prot, false);
+}
+
+int set_memory_ro(unsigned long addr, int numpages)
+{
+	return change_memory_set_bit(addr, numpages, __pgprot(PTE_RDONLY));
+}
+EXPORT_SYMBOL_GPL(set_memory_ro);
+
+int set_memory_rw(unsigned long addr, int numpages)
+{
+	return change_memory_clear_bit(addr, numpages, __pgprot(PTE_RDONLY));
+}
+EXPORT_SYMBOL_GPL(set_memory_rw);
+
+int set_memory_nx(unsigned long addr, int numpages)
+{
+	return change_memory_set_bit(addr, numpages, __pgprot(PTE_PXN));
+}
+EXPORT_SYMBOL_GPL(set_memory_nx);
+
+int set_memory_x(unsigned long addr, int numpages)
+{
+	return change_memory_clear_bit(addr, numpages, __pgprot(PTE_PXN));
+}
+EXPORT_SYMBOL_GPL(set_memory_x);
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation


* [PATCH 2/3] arm64: Treat handle_arch_irq as a function pointer
  2014-04-18  0:47 [PATCH 0/3] Page protections for arm64 Laura Abbott
  2014-04-18  0:47 ` [PATCH 1/3] arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support Laura Abbott
@ 2014-04-18  0:47 ` Laura Abbott
  2014-04-22 10:48   ` Will Deacon
  2014-04-18  0:47 ` [PATCH 3/3] arm64: WIP: add better page protections to arm64 Laura Abbott
  2 siblings, 1 reply; 8+ messages in thread
From: Laura Abbott @ 2014-04-18  0:47 UTC (permalink / raw)
  To: linux-arm-kernel

handle_arch_irq isn't actually text; it's just a function
pointer. It doesn't need to be stored in the text section,
and doing so causes problems if we ever want to make the
kernel text read-only. Move it to the data section.
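
In C terms the label is equivalent to something like the following
(sketch only; a matching extern declaration exists in the kernel's irq
code):

	/* a writable function pointer, i.e. data rather than code */
	void (*handle_arch_irq)(struct pt_regs *regs);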

Change-Id: I682ff82f5c8c429129dd81bbcc91571f278c62e2
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
---
 arch/arm64/kernel/entry.S | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 39ac630..5051f30 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -139,7 +139,8 @@ tsk	.req	x28		// current thread_info
  * Interrupt handling.
  */
 	.macro	irq_handler
-	ldr	x1, handle_arch_irq
+	ldr	x1, =handle_arch_irq
+	ldr	x1, [x1]
 	mov	x0, sp
 	blr	x1
 	.endm
@@ -678,5 +679,6 @@ ENTRY(sys_rt_sigreturn_wrapper)
 	b	sys_rt_sigreturn
 ENDPROC(sys_rt_sigreturn_wrapper)
 
+.data
 ENTRY(handle_arch_irq)
 	.quad	0
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation


* [PATCH 3/3] arm64: WIP: add better page protections to arm64
  2014-04-18  0:47 [PATCH 0/3] Page protections for arm64 Laura Abbott
  2014-04-18  0:47 ` [PATCH 1/3] arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support Laura Abbott
  2014-04-18  0:47 ` [PATCH 2/3] arm64: Treat handle_arch_irq as a function pointer Laura Abbott
@ 2014-04-18  0:47 ` Laura Abbott
  2 siblings, 0 replies; 8+ messages in thread
From: Laura Abbott @ 2014-04-18  0:47 UTC (permalink / raw)
  To: linux-arm-kernel

Add page protections for arm64 similar to those already in place or in
progress for arm. This is done for security reasons. The flow is
currently:

- Map all memory as either RWX or RW. We round to the nearest
  section to avoid creating page tables before everything is mapped;
  see the worked example below.
- Once everything is mapped, if either end of the RWX section should
  not be executable, we split the PMD and remap as necessary.
- When initmem is to be freed, we change the permissions back to
  RW, using stop_machine() if necessary to flush the TLB.
- If CONFIG_DEBUG_RODATA is set, the kernel text and rodata are marked
  read-only.
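
As a worked example of the rounding (assumed addresses; with 4K pages,
SECTION_SIZE is 2MB on arm64):

	/* assume _stext at PA 0x40081000 and __init_end at PA 0x40890000 */
	kernel_x_start = round_down(0x40081000UL, SECTION_SIZE); /* 0x40000000 */
	kernel_x_end   = round_up(0x40890000UL, SECTION_SIZE);   /* 0x40a00000 */

Everything in [kernel_x_start, kernel_x_end) can then be mapped with
executable section (PMD) entries, and the remaining memory is mapped
with PXN set.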

TODO:
- actually make init rodata ro?
- Kconfig option to align up to section boundary (ran into relocation
  truncation errors, need to debug more)
- Anything to do with ftrace/kprobes

Change-Id: I219b57fd628edc283da1a3e238fc4cc8185a686e
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
---
 arch/arm64/Kconfig.debug |  10 +++
 arch/arm64/mm/init.c     |   1 +
 arch/arm64/mm/mm.h       |   2 +
 arch/arm64/mm/mmu.c      | 173 ++++++++++++++++++++++++++++++++++++++++++-----
 4 files changed, 170 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
index 53979ac..bfb1aec 100644
--- a/arch/arm64/Kconfig.debug
+++ b/arch/arm64/Kconfig.debug
@@ -48,4 +48,14 @@ config DEBUG_SET_MODULE_RONX
           against certain classes of kernel exploits.
           If in doubt, say "N".
 
+config DEBUG_RODATA
+	bool "Make kernel text and rodata read-only"
+	help
+	  If this is set, kernel text and rodata will be made read-only. This
+	  is to help catch accidental or malicious attempts to change the
+	  kernel's executable code. It additionally splits rodata from kernel
+	  text so that rodata can be made explicitly non-executable.
+
+	  If in doubt, say Y.
+
 endmenu
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 51d5352..bc74a3a 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -325,6 +325,7 @@ void __init mem_init(void)
 
 void free_initmem(void)
 {
+	fixup_init();
 	free_initmem_default(0);
 }
 
diff --git a/arch/arm64/mm/mm.h b/arch/arm64/mm/mm.h
index d519f4f..82347d8 100644
--- a/arch/arm64/mm/mm.h
+++ b/arch/arm64/mm/mm.h
@@ -1,2 +1,4 @@
 extern void __init bootmem_init(void);
 extern void __init arm64_swiotlb_init(void);
+
+void fixup_init(void);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6b7e895..e94789c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -26,6 +26,7 @@
 #include <linux/memblock.h>
 #include <linux/fs.h>
 #include <linux/io.h>
+#include <linux/stop_machine.h>
 
 #include <asm/cputype.h>
 #include <asm/sections.h>
@@ -167,26 +168,67 @@ static void __init *early_alloc(unsigned long sz)
 	return ptr;
 }
 
-static void __init alloc_init_pte(pmd_t *pmd, unsigned long addr,
-				  unsigned long end, unsigned long pfn)
+/*
+ * remap a PMD into pages
+ */
+static noinline void __ref split_pmd(pmd_t *pmd, pgprot_t prot, bool early)
+{
+	pte_t *pte, *start_pte;
+	u64 val;
+	unsigned long pfn;
+	int i = 0;
+
+	val = pmd_val(*pmd);
+
+	if (early)
+		start_pte = pte = early_alloc(PTRS_PER_PTE*sizeof(pte_t));
+	else
+		start_pte = pte = (pte_t *)__get_free_page(PGALLOC_GFP);
+
+	BUG_ON(!pte);
+
+	pfn = __phys_to_pfn(val & PHYS_MASK);
+
+	do {
+		set_pte(pte, pfn_pte(pfn, PAGE_KERNEL_EXEC));
+		pfn++;
+	} while (pte++, i++, i < PTRS_PER_PTE);
+
+	__pmd_populate(pmd, __pa(start_pte), PMD_TYPE_TABLE);
+	flush_tlb_all();
+}
+
+static void __ref alloc_init_pte(pmd_t *pmd, unsigned long addr,
+				  unsigned long end, unsigned long pfn,
+				  pgprot_t prot, bool early)
 {
 	pte_t *pte;
 
 	if (pmd_none(*pmd)) {
-		pte = early_alloc(PTRS_PER_PTE * sizeof(pte_t));
+		if (early)
+			pte = early_alloc(PTRS_PER_PTE * sizeof(pte_t));
+		else
+			pte = (pte_t *)__get_free_page(PGALLOC_GFP);
+		BUG_ON(!pte);
 		__pmd_populate(pmd, __pa(pte), PMD_TYPE_TABLE);
 	}
-	BUG_ON(pmd_bad(*pmd));
+
+	if (pmd_bad(*pmd))
+		split_pmd(pmd, prot, early);
 
 	pte = pte_offset_kernel(pmd, addr);
 	do {
-		set_pte(pte, pfn_pte(pfn, PAGE_KERNEL_EXEC));
+		set_pte(pte, pfn_pte(pfn, prot));
 		pfn++;
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 }
 
-static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
-				  unsigned long end, phys_addr_t phys)
+static void __ref alloc_init_pmd(pud_t *pud, unsigned long addr,
+				  unsigned long end, phys_addr_t phys,
+				  pgprot_t sect_prot, pgprot_t pte_prot,
+				  bool early)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -195,7 +237,11 @@ static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
 	 * Check for initial section mappings in the pgd/pud and remove them.
 	 */
 	if (pud_none(*pud) || pud_bad(*pud)) {
-		pmd = early_alloc(PTRS_PER_PMD * sizeof(pmd_t));
+		if (early)
+			pmd = early_alloc(PTRS_PER_PMD * sizeof(pmd_t));
+		else
+			pmd = pmd_alloc_one(&init_mm, addr);
+		BUG_ON(!pmd);
 		pud_populate(&init_mm, pud, pmd);
 	}
 
@@ -213,21 +259,25 @@ static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
 			if (!pmd_none(old_pmd))
 				flush_tlb_all();
 		} else {
-			alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys));
+			alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys),
+					pte_prot, early);
 		}
 		phys += next - addr;
 	} while (pmd++, addr = next, addr != end);
 }
 
-static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
-				  unsigned long end, unsigned long phys)
+static void __ref alloc_init_pud(pgd_t *pgd, unsigned long addr,
+				  unsigned long end, unsigned long phys,
+				  pgprot_t sect_prot, pgprot_t pte_prot,
+				  bool early)
 {
 	pud_t *pud = pud_offset(pgd, addr);
 	unsigned long next;
 
 	do {
 		next = pud_addr_end(addr, end);
-		alloc_init_pmd(pud, addr, next, phys);
+		alloc_init_pmd(pud, addr, next, phys, sect_prot, pte_prot,
+				early);
 		phys += next - addr;
 	} while (pud++, addr = next, addr != end);
 }
@@ -236,8 +286,10 @@ static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
  * Create the page directory entries and any necessary page tables for the
  * mapping specified by 'md'.
  */
-static void __init create_mapping(phys_addr_t phys, unsigned long virt,
-				  phys_addr_t size)
+static void __ref __create_mapping(phys_addr_t phys, unsigned long virt,
+				  phys_addr_t size,
+				  pgprot_t sect_prot, pgprot_t pte_prot,
+				  bool early)
 {
 	unsigned long addr, length, end, next;
 	pgd_t *pgd;
@@ -255,15 +307,37 @@ static void __init create_mapping(phys_addr_t phys, unsigned long virt,
 	end = addr + length;
 	do {
 		next = pgd_addr_end(addr, end);
-		alloc_init_pud(pgd, addr, next, phys);
+		alloc_init_pud(pgd, addr, next, phys, sect_prot, pte_prot,
+				early);
 		phys += next - addr;
 	} while (pgd++, addr = next, addr != end);
 }
 
+static void __ref create_mapping(phys_addr_t phys, unsigned long virt,
+				  phys_addr_t size,
+				  pgprot_t sect_prot, pgprot_t pte_prot)
+{
+	return __create_mapping(phys, virt, size, sect_prot, pte_prot, true);
+}
+
+static void __ref create_mapping_late(phys_addr_t phys, unsigned long virt,
+				  phys_addr_t size,
+				  pgprot_t sect_prot, pgprot_t pte_prot)
+{
+	return __create_mapping(phys, virt, size, sect_prot, pte_prot, false);
+}
+
 static void __init map_mem(void)
 {
 	struct memblock_region *reg;
 	phys_addr_t limit;
+	/*
+	 * Set up the executable regions using the existing section mappings
+	 * for now. This will get more fine-grained later once all memory
+	 * is mapped.
+	 */
+	unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
+	unsigned long kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
 
 	/*
 	 * Temporarily limit the memblock range. We need to do this as
@@ -301,13 +375,79 @@ static void __init map_mem(void)
 		}
 #endif
 
-		create_mapping(start, __phys_to_virt(start), end - start);
+		if (end < kernel_x_start) {
+			create_mapping(start, __phys_to_virt(start), end - start,
+				prot_sect_kernel, pgprot_default);
+		} else if (start >= kernel_x_end) {
+			create_mapping(start, __phys_to_virt(start), end - start,
+				prot_sect_kernel | PMD_SECT_PXN, pgprot_default | PTE_PXN);
+		} else {
+			if (start < kernel_x_start)
+				create_mapping(start, __phys_to_virt(start), kernel_x_start - start,
+					prot_sect_kernel | PMD_SECT_PXN, pgprot_default | PTE_PXN);
+			create_mapping(kernel_x_start, __phys_to_virt(kernel_x_start), kernel_x_end - kernel_x_start,
+				prot_sect_kernel, pgprot_default);
+			if (kernel_x_end < end)
+				create_mapping(kernel_x_end, __phys_to_virt(kernel_x_end), end - kernel_x_end,
+					prot_sect_kernel | PMD_SECT_PXN, pgprot_default | PTE_PXN);
+		}
 	}
 
 	/* Limit no longer required. */
 	memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
 }
 
+void __init fixup_executable(void)
+{
+	/* now that we are fully mapped, make the start/end more fine-grained */
+	if (!IS_ALIGNED((unsigned long)_stext, SECTION_SIZE)) {
+		unsigned long aligned_start = round_down(__pa(_stext), SECTION_SIZE);
+
+		create_mapping(aligned_start, __phys_to_virt(aligned_start),
+				__pa(_stext) - aligned_start,
+				prot_sect_kernel | PMD_SECT_PXN,
+				pgprot_default | PTE_PXN);
+	}
+
+	if (!IS_ALIGNED((unsigned long)__init_end, SECTION_SIZE)) {
+		unsigned long aligned_end = round_up(__pa(__init_end), SECTION_SIZE);
+		create_mapping(__pa(__init_end), (unsigned long)__init_end,
+				aligned_end - __pa(__init_end),
+				prot_sect_kernel | PMD_SECT_PXN, pgprot_default | PTE_PXN);
+	}
+}
+
+#ifdef CONFIG_DEBUG_RODATA
+void mark_rodata_ro(void)
+{
+	create_mapping_late(__pa(_stext), (unsigned long)_stext, (unsigned long)_etext - (unsigned long)_stext,
+				prot_sect_kernel | PMD_SECT_RDONLY,
+				pgprot_default | PTE_RDONLY);
+}
+#endif
+
+static int __flush_mappings(void *unused)
+{
+	flush_tlb_kernel_range((unsigned long)__init_begin, (unsigned long)__init_end);
+	return 0;
+}
+
+void __ref fixup_init(void)
+{
+	phys_addr_t start = __pa(__init_begin);
+	phys_addr_t end = __pa(__init_end);
+
+	create_mapping_late(start, (unsigned long)__init_begin,
+			end - start,
+			prot_sect_kernel | PMD_SECT_PXN, pgprot_default | PTE_PXN);
+	if (!IS_ALIGNED(start, SECTION_SIZE) || !IS_ALIGNED(end, SECTION_SIZE))
+		stop_machine(__flush_mappings, NULL, NULL);
+}
+
 /*
  * paging_init() sets up the page tables, initialises the zone memory
  * maps and sets up the zero page.
@@ -317,6 +457,7 @@ void __init paging_init(void)
 	void *zero_page;
 
 	map_mem();
+	fixup_executable();
 
 	/*
 	 * Finally flush the caches and tlb to ensure that we're in a
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation


* [PATCH 2/3] arm64: Treat handle_arch_irq as a function pointer
  2014-04-18  0:47 ` [PATCH 2/3] arm64: Treat handle_arch_irq as a function pointer Laura Abbott
@ 2014-04-22 10:48   ` Will Deacon
  0 siblings, 0 replies; 8+ messages in thread
From: Will Deacon @ 2014-04-22 10:48 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Apr 18, 2014 at 01:47:02AM +0100, Laura Abbott wrote:
> handle_arch_irq isn't actually text; it's just a function
> pointer. It doesn't need to be stored in the text section,
> and doing so causes problems if we ever want to make the
> kernel text read-only. Move it to the data section.
> 
> Change-Id: I682ff82f5c8c429129dd81bbcc91571f278c62e2
> Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
> ---
>  arch/arm64/kernel/entry.S | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 39ac630..5051f30 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -139,7 +139,8 @@ tsk	.req	x28		// current thread_info
>   * Interrupt handling.
>   */
>  	.macro	irq_handler
> -	ldr	x1, handle_arch_irq
> +	ldr	x1, =handle_arch_irq
> +	ldr	x1, [x1]
>  	mov	x0, sp
>  	blr	x1
>  	.endm
> @@ -678,5 +679,6 @@ ENTRY(sys_rt_sigreturn_wrapper)
>  	b	sys_rt_sigreturn
>  ENDPROC(sys_rt_sigreturn_wrapper)
>  
> +.data
>  ENTRY(handle_arch_irq)

Probably worth dropping the ENTRY directive if we're putting this in .data
and using an explicit .align instead.

Will


* [PATCH 1/3] arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support
  2014-04-18  0:47 ` [PATCH 1/3] arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support Laura Abbott
@ 2014-05-02 14:07   ` Steve Capper
  2014-05-02 15:30     ` Will Deacon
  0 siblings, 1 reply; 8+ messages in thread
From: Steve Capper @ 2014-05-02 14:07 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Apr 17, 2014 at 05:47:01PM -0700, Laura Abbott wrote:
> In a similar fashion to other architectures, add the infrastructure
> and Kconfig option to enable DEBUG_SET_MODULE_RONX support. When
> enabled, module ranges will be marked read-only/no-execute as
> appropriate.
> 
> Change-Id: I4251a0929b1fe6f43f84b14f0a64fed30769700e
> Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
> ---

[ ... ]

> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> new file mode 100644
> index 0000000..e48f980
> --- /dev/null
> +++ b/arch/arm64/mm/pageattr.c

[ ... ]

> +static int change_memory_common(unsigned long addr, int numpages,
> +				pgprot_t prot, bool set)
> +{
> +	unsigned long start = addr;
> +	unsigned long size = PAGE_SIZE*numpages;
> +	unsigned long end = start + size;
> +	int ret;
> +
> +	if (start < MODULES_VADDR || start >= MODULES_END)
> +		return -EINVAL;
> +
> +	if (end < MODULES_VADDR || end >= MODULES_END)
> +		return -EINVAL;
> +
> +	if (set)
> +		ret = apply_to_page_range(&init_mm, start, size,
> +					set_page_range, (void *)prot);
> +	else
> +		ret = apply_to_page_range(&init_mm, start, size,
> +					clear_page_range, (void *)prot);
> +
> +	flush_tlb_kernel_range(start, end);

Could you please add an isb() here? (We're about to nuke the one in
flush_tlb_kernel_range).
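
i.e. something like this (sketch of the suggested placement):

	flush_tlb_kernel_range(start, end);
	isb();

	return ret;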

Cheers,
-- 
Steve


* [PATCH 1/3] arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support
  2014-05-02 14:07   ` Steve Capper
@ 2014-05-02 15:30     ` Will Deacon
  2014-05-06 17:53       ` Laura Abbott
  0 siblings, 1 reply; 8+ messages in thread
From: Will Deacon @ 2014-05-02 15:30 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, May 02, 2014 at 03:07:11PM +0100, Steve Capper wrote:
> On Thu, Apr 17, 2014 at 05:47:01PM -0700, Laura Abbott wrote:
> > In a similar fashion to other architectures, add the infrastructure
> > and Kconfig option to enable DEBUG_SET_MODULE_RONX support. When
> > enabled, module ranges will be marked read-only/no-execute as
> > appropriate.
> > 
> > Change-Id: I4251a0929b1fe6f43f84b14f0a64fed30769700e
> > Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
> > ---
> 
> [ ... ]
> 
> > diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> > new file mode 100644
> > index 0000000..e48f980
> > --- /dev/null
> > +++ b/arch/arm64/mm/pageattr.c
> 
> [ ... ]
> 
> > +static int change_memory_common(unsigned long addr, int numpages,
> > +				pgprot_t prot, bool set)
> > +{
> > +	unsigned long start = addr;
> > +	unsigned long size = PAGE_SIZE*numpages;
> > +	unsigned long end = start + size;
> > +	int ret;
> > +
> > +	if (start < MODULES_VADDR || start >= MODULES_END)
> > +		return -EINVAL;
> > +
> > +	if (end < MODULES_VADDR || end >= MODULES_END)
> > +		return -EINVAL;
> > +
> > +	if (set)
> > +		ret = apply_to_page_range(&init_mm, start, size,
> > +					set_page_range, (void *)prot);
> > +	else
> > +		ret = apply_to_page_range(&init_mm, start, size,
> > +					clear_page_range, (void *)prot);
> > +
> > +	flush_tlb_kernel_range(start, end);
> 
> Could you please add an isb() here? (We're about to nuke the one in
> flush_tlb_kernel_range).

Thinking about this even more (too much?), how does this work with SMP
anyway? You need each CPU to execute an isb(), so is this just a race
that is dealt with already (probably treated as benign)?

Will


* [PATCH 1/3] arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support
  2014-05-02 15:30     ` Will Deacon
@ 2014-05-06 17:53       ` Laura Abbott
  0 siblings, 0 replies; 8+ messages in thread
From: Laura Abbott @ 2014-05-06 17:53 UTC (permalink / raw)
  To: linux-arm-kernel

On 5/2/2014 8:30 AM, Will Deacon wrote:
> On Fri, May 02, 2014 at 03:07:11PM +0100, Steve Capper wrote:
>> On Thu, Apr 17, 2014 at 05:47:01PM -0700, Laura Abbott wrote:
>>> In a similar fashion to other architectures, add the infrastructure
>>> and Kconfig option to enable DEBUG_SET_MODULE_RONX support. When
>>> enabled, module ranges will be marked read-only/no-execute as
>>> appropriate.
>>>
>>> Change-Id: I4251a0929b1fe6f43f84b14f0a64fed30769700e
>>> Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
>>> ---
>>
>> [ ... ]
>>
>>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>>> new file mode 100644
>>> index 0000000..e48f980
>>> --- /dev/null
>>> +++ b/arch/arm64/mm/pageattr.c
>>
>> [ ... ]
>>
>>> +static int change_memory_common(unsigned long addr, int numpages,
>>> +				pgprot_t prot, bool set)
>>> +{
>>> +	unsigned long start = addr;
>>> +	unsigned long size = PAGE_SIZE*numpages;
>>> +	unsigned long end = start + size;
>>> +	int ret;
>>> +
>>> +	if (start < MODULES_VADDR || start >= MODULES_END)
>>> +		return -EINVAL;
>>> +
>>> +	if (end < MODULES_VADDR || end >= MODULES_END)
>>> +		return -EINVAL;
>>> +
>>> +	if (set)
>>> +		ret = apply_to_page_range(&init_mm, start, size,
>>> +					set_page_range, (void *)prot);
>>> +	else
>>> +		ret = apply_to_page_range(&init_mm, start, size,
>>> +					clear_page_range, (void *)prot);
>>> +
>>> +	flush_tlb_kernel_range(start, end);
>>
>> Could you please add an isb() here? (We're about to nuke the one in
>> flush_tlb_kernel_range).
> 
> Thinking about this even more (too much?), how does this work with SMP
> anyway? You need each CPU to execute an isb(), so this just a race that
> is dealt with already (probably treated as benign)?
>

Yes, unless we want to IPI an isb I think this should be a mostly benign
race. I say 'mostly' only because this is a security/debug feature, so
there could be a hole to take advantage of. Then again, because we map
memory first and set permissions later, there is always a chance of a
race.
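
(For illustration only, broadcasting the context synchronisation would
look something like the sketch below; do_sync_cpu is a made-up helper:)

	static void do_sync_cpu(void *unused)
	{
		isb();
	}

	/* run an isb() on every online CPU and wait for completion */
	on_each_cpu(do_sync_cpu, NULL, 1);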

I'll add the isb for v2 based on Will's patch set.

Laura

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation

