linuxppc-dev.lists.ozlabs.org archive mirror
* [RFC PATCH V1 0/8] KASAN ppc64 support
@ 2015-08-17  6:36 Aneesh Kumar K.V
  2015-08-17  6:36 ` [RFC PATCH V1 1/8] powerpc/mm: Add virt_to_pfn and use this instead of opencoding Aneesh Kumar K.V
                   ` (8 more replies)
  0 siblings, 9 replies; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-17  6:36 UTC (permalink / raw)
  To: benh, paulus, mpe, ryabinin.a.a; +Cc: linuxppc-dev, Aneesh Kumar K.V

Hi,

This patchset implements the kernel address sanitizer (KASAN) for ppc64.
Since the ppc64 virtual address range is divided into different regions,
we can't have one contiguous area for the kasan shadow range. Hence
we don't support INLINE kasan instrumentation. With OUTLINE
instrumentation, we override the shadow_to_mem and mem_to_shadow
callbacks so that we map only the kernel linear range (ie,
the region with ID 0xc). For the regions with IDs 0xd and 0xf (vmalloc
and vmemmap) we return the address of the zero page. This
works because kasan doesn't track the vmemmap and vmalloc addresses.

Aneesh Kumar K.V (8):
  powerpc/mm: Add virt_to_pfn and use this instead of opencoding
  kasan: MODULE_VADDR is not available on all archs
  kasan: Rename kasan_enabled to kasan_report_enabled
  kasan: Don't use kasan shadow pointer in generic functions
  kasan: Enable arch to hook into kasan callbacks.
  kasan: Allow arch to override kasan shadow offsets
  powerpc/mm: kasan: Add kasan support for ppc64
  powerpc: Disable kasan for kernel/ and mm/ directory

-aneesh

 arch/powerpc/include/asm/kasan.h         | 74 ++++++++++++++++++++++++++++++++
 arch/powerpc/include/asm/page.h          |  5 ++-
 arch/powerpc/include/asm/pgtable-ppc64.h |  1 +
 arch/powerpc/include/asm/ppc_asm.h       | 10 +++++
 arch/powerpc/include/asm/string.h        | 13 ++++++
 arch/powerpc/kernel/Makefile             |  2 +
 arch/powerpc/kernel/prom_init_check.sh   |  2 +-
 arch/powerpc/kernel/setup_64.c           |  3 ++
 arch/powerpc/lib/mem_64.S                |  6 ++-
 arch/powerpc/lib/memcpy_64.S             |  3 +-
 arch/powerpc/lib/ppc_ksyms.c             | 10 +++++
 arch/powerpc/mm/Makefile                 |  4 ++
 arch/powerpc/mm/kasan_init.c             | 44 +++++++++++++++++++
 arch/powerpc/mm/slb_low.S                |  4 ++
 arch/powerpc/platforms/Kconfig.cputype   |  1 +
 include/linux/kasan.h                    |  3 ++
 mm/kasan/kasan.c                         |  9 ++++
 mm/kasan/kasan.h                         | 20 ++++++++-
 mm/kasan/report.c                        | 19 ++++----
 19 files changed, 216 insertions(+), 17 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kasan.h
 create mode 100644 arch/powerpc/mm/kasan_init.c

-- 
2.5.0

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [RFC PATCH V1 1/8] powerpc/mm: Add virt_to_pfn and use this instead of opencoding
  2015-08-17  6:36 [RFC PATCH V1 0/8] KASAN ppc64 support Aneesh Kumar K.V
@ 2015-08-17  6:36 ` Aneesh Kumar K.V
  2015-08-17  6:36 ` [RFC PATCH V1 2/8] kasan: MODULE_VADDR is not available on all archs Aneesh Kumar K.V
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-17  6:36 UTC (permalink / raw)
  To: benh, paulus, mpe, ryabinin.a.a; +Cc: linuxppc-dev, Aneesh Kumar K.V

This adds a virt_to_pfn() helper and removes the open-coded usage of
the same.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/page.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 71294a6e976e..168ca67e39b3 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -127,9 +127,10 @@ extern long long virt_phys_offset;
 #define pfn_valid(pfn)		((pfn) >= ARCH_PFN_OFFSET && (pfn) < max_mapnr)
 #endif
 
-#define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
+#define virt_to_pfn(kaddr)	(__pa(kaddr) >> PAGE_SHIFT)
+#define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
-#define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
+#define virt_addr_valid(kaddr)	pfn_valid(virt_to_pfn(kaddr))
 
 /*
  * On Book-E parts we need __va to parse the device tree and we can't
-- 
2.5.0


* [RFC PATCH V1 2/8] kasan: MODULE_VADDR is not available on all archs
  2015-08-17  6:36 [RFC PATCH V1 0/8] KASAN ppc64 support Aneesh Kumar K.V
  2015-08-17  6:36 ` [RFC PATCH V1 1/8] powerpc/mm: Add virt_to_pfn and use this instead of opencoding Aneesh Kumar K.V
@ 2015-08-17  6:36 ` Aneesh Kumar K.V
  2015-08-17  6:36 ` [RFC PATCH V1 3/8] kasan: Rename kasan_enabled to kasan_report_enabled Aneesh Kumar K.V
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-17  6:36 UTC (permalink / raw)
  To: benh, paulus, mpe, ryabinin.a.a; +Cc: linuxppc-dev, Aneesh Kumar K.V

Conditionalize the MODULES_VADDR check using #ifdef, since
MODULES_VADDR is not available on all architectures.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 mm/kasan/report.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index e07c94fbd0ac..71ce7548d914 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -85,9 +85,14 @@ static void print_error_description(struct kasan_access_info *info)
 
 static inline bool kernel_or_module_addr(const void *addr)
 {
-	return (addr >= (void *)_stext && addr < (void *)_end)
-		|| (addr >= (void *)MODULES_VADDR
-			&& addr < (void *)MODULES_END);
+	if (addr >= (void *)_stext && addr < (void *)_end)
+		return true;
+#if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
+	if (addr >= (void *)MODULES_VADDR
+			&& addr < (void *)MODULES_END)
+		return true;
+#endif
+	return false;
 }
 
 static inline bool init_task_stack_addr(const void *addr)
-- 
2.5.0


* [RFC PATCH V1 3/8] kasan: Rename kasan_enabled to kasan_report_enabled
  2015-08-17  6:36 [RFC PATCH V1 0/8] KASAN ppc64 support Aneesh Kumar K.V
  2015-08-17  6:36 ` [RFC PATCH V1 1/8] powerpc/mm: Add virt_to_pfn and use this instead of opencoding Aneesh Kumar K.V
  2015-08-17  6:36 ` [RFC PATCH V1 2/8] kasan: MODULE_VADDR is not available on all archs Aneesh Kumar K.V
@ 2015-08-17  6:36 ` Aneesh Kumar K.V
  2015-08-17  6:36 ` [RFC PATCH V1 4/8] kasan: Don't use kasan shadow pointer in generic functions Aneesh Kumar K.V
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-17  6:36 UTC (permalink / raw)
  To: benh, paulus, mpe, ryabinin.a.a; +Cc: linuxppc-dev, Aneesh Kumar K.V

The function only disables/enables reporting. In a later patch
we will be adding a kasan early enable/disable. Rename kasan_enabled
to kasan_report_enabled to properly reflect its function.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 mm/kasan/kasan.h  | 2 +-
 mm/kasan/report.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index c242adf6bc85..a6b46cc94907 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -63,7 +63,7 @@ static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
 		<< KASAN_SHADOW_SCALE_SHIFT);
 }
 
-static inline bool kasan_enabled(void)
+static inline bool kasan_report_enabled(void)
 {
 	return !current->kasan_depth;
 }
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 71ce7548d914..d19d01823a68 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -225,7 +225,7 @@ void kasan_report(unsigned long addr, size_t size,
 {
 	struct kasan_access_info info;
 
-	if (likely(!kasan_enabled()))
+	if (likely(!kasan_report_enabled()))
 		return;
 
 	info.access_addr = (void *)addr;
-- 
2.5.0


* [RFC PATCH V1 4/8] kasan: Don't use kasan shadow pointer in generic functions
  2015-08-17  6:36 [RFC PATCH V1 0/8] KASAN ppc64 support Aneesh Kumar K.V
                   ` (2 preceding siblings ...)
  2015-08-17  6:36 ` [RFC PATCH V1 3/8] kasan: Rename kasan_enabled to kasan_report_enabled Aneesh Kumar K.V
@ 2015-08-17  6:36 ` Aneesh Kumar K.V
  2015-08-17 11:36   ` Andrey Ryabinin
  2015-08-17  6:36 ` [RFC PATCH V1 5/8] kasan: Enable arch to hook into kasan callbacks Aneesh Kumar K.V
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-17  6:36 UTC (permalink / raw)
  To: benh, paulus, mpe, ryabinin.a.a; +Cc: linuxppc-dev, Aneesh Kumar K.V

We can't use generic functions like print_hex_dump() to access the
kasan shadow region. That would require us to set up another kasan
shadow region for the address passed (the kasan shadow address), which
most architectures won't be able to do. Hence remove the kasan shadow
region dump. If we really want to do this, we will have to add a
kasan-internal implementation of print_hex_dump for which address
sanitizer instrumentation is disabled.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 mm/kasan/report.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index d19d01823a68..79fbc5d14bd2 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -170,12 +170,6 @@ static void print_shadow_for_address(const void *addr)
 		snprintf(buffer, sizeof(buffer),
 			(i == 0) ? ">%p: " : " %p: ", kaddr);
 
-		kasan_disable_current();
-		print_hex_dump(KERN_ERR, buffer,
-			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
-			shadow_row, SHADOW_BYTES_PER_ROW, 0);
-		kasan_enable_current();
-
 		if (row_is_guilty(shadow_row, shadow))
 			pr_err("%*c\n",
 				shadow_pointer_offset(shadow_row, shadow),
-- 
2.5.0


* [RFC PATCH V1 5/8] kasan: Enable arch to hook into kasan callbacks.
  2015-08-17  6:36 [RFC PATCH V1 0/8] KASAN ppc64 support Aneesh Kumar K.V
                   ` (3 preceding siblings ...)
  2015-08-17  6:36 ` [RFC PATCH V1 4/8] kasan: Don't use kasan shadow pointer in generic functions Aneesh Kumar K.V
@ 2015-08-17  6:36 ` Aneesh Kumar K.V
  2015-08-17  6:36 ` [RFC PATCH V1 6/8] kasan: Allow arch to override kasan shadow offsets Aneesh Kumar K.V
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-17  6:36 UTC (permalink / raw)
  To: benh, paulus, mpe, ryabinin.a.a; +Cc: linuxppc-dev, Aneesh Kumar K.V

This patch adds enable/disable callbacks which architectures can
implement. We will use this in later patches for architectures like
ppc64 that cannot have an early zero-page-based kasan shadow region
for the entire virtual address space. Such architectures also cannot
use inline kasan support.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 mm/kasan/kasan.c |  9 +++++++++
 mm/kasan/kasan.h | 15 +++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 7b28e9cdf1c7..e4d33afd0eaf 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -43,6 +43,9 @@ static void kasan_poison_shadow(const void *address, size_t size, u8 value)
 {
 	void *shadow_start, *shadow_end;
 
+	if (!kasan_enabled())
+		return;
+
 	shadow_start = kasan_mem_to_shadow(address);
 	shadow_end = kasan_mem_to_shadow(address + size);
 
@@ -51,6 +54,9 @@ static void kasan_poison_shadow(const void *address, size_t size, u8 value)
 
 void kasan_unpoison_shadow(const void *address, size_t size)
 {
+	if (!kasan_enabled())
+		return;
+
 	kasan_poison_shadow(address, size, 0);
 
 	if (size & KASAN_SHADOW_MASK) {
@@ -238,6 +244,9 @@ static __always_inline void check_memory_region(unsigned long addr,
 {
 	struct kasan_access_info info;
 
+	if (!kasan_enabled())
+		return;
+
 	if (unlikely(size == 0))
 		return;
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index a6b46cc94907..deb547d5a916 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -68,6 +68,21 @@ static inline bool kasan_report_enabled(void)
 	return !current->kasan_depth;
 }
 
+#ifndef kasan_enabled
+/*
+ * Some archs may want to disable kasan callbacks.
+ */
+static inline bool kasan_enabled(void)
+{
+	return true;
+}
+#define kasan_enabled kasan_enabled
+#else
+#ifdef CONFIG_KASAN_INLINE
+#error "Kasan inline support cannot work with KASAN arch hooks"
+#endif
+#endif
+
 void kasan_report(unsigned long addr, size_t size,
 		bool is_write, unsigned long ip);
 
-- 
2.5.0


* [RFC PATCH V1 6/8] kasan: Allow arch to override kasan shadow offsets
  2015-08-17  6:36 [RFC PATCH V1 0/8] KASAN ppc64 support Aneesh Kumar K.V
                   ` (4 preceding siblings ...)
  2015-08-17  6:36 ` [RFC PATCH V1 5/8] kasan: Enable arch to hook into kasan callbacks Aneesh Kumar K.V
@ 2015-08-17  6:36 ` Aneesh Kumar K.V
  2015-08-17  6:36 ` [RFC PATCH V1 7/8] powerpc/mm: kasan: Add kasan support for ppc64 Aneesh Kumar K.V
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-17  6:36 UTC (permalink / raw)
  To: benh, paulus, mpe, ryabinin.a.a; +Cc: linuxppc-dev, Aneesh Kumar K.V

Some architectures may want to map an address to its kasan shadow with
something other than the single constant KASAN_SHADOW_OFFSET. Even
though such architectures cannot use inline kasan instrumentation,
they can still work with outline kasan support.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 include/linux/kasan.h | 3 +++
 mm/kasan/kasan.h      | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 5486d777b706..e458ca64cdaf 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -15,11 +15,14 @@ struct vm_struct;
 #include <asm/kasan.h>
 #include <linux/sched.h>
 
+#ifndef kasan_mem_to_shadow
 static inline void *kasan_mem_to_shadow(const void *addr)
 {
 	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
 		+ KASAN_SHADOW_OFFSET;
 }
+#define kasan_mem_to_shadow kasan_mem_to_shadow
+#endif
 
 /* Enable reporting bugs after kasan_disable_current() */
 static inline void kasan_enable_current(void)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index deb547d5a916..c0686f2b1224 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -57,11 +57,14 @@ struct kasan_global {
 void kasan_report_error(struct kasan_access_info *info);
 void kasan_report_user_access(struct kasan_access_info *info);
 
+#ifndef kasan_shadow_to_mem
 static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
 {
 	return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
 		<< KASAN_SHADOW_SCALE_SHIFT);
 }
+#define kasan_shadow_to_mem kasan_shadow_to_mem
+#endif
 
 static inline bool kasan_report_enabled(void)
 {
-- 
2.5.0


* [RFC PATCH V1 7/8] powerpc/mm: kasan: Add kasan support for ppc64
  2015-08-17  6:36 [RFC PATCH V1 0/8] KASAN ppc64 support Aneesh Kumar K.V
                   ` (5 preceding siblings ...)
  2015-08-17  6:36 ` [RFC PATCH V1 6/8] kasan: Allow arch to override kasan shadow offsets Aneesh Kumar K.V
@ 2015-08-17  6:36 ` Aneesh Kumar K.V
  2015-08-17 12:13   ` Andrey Ryabinin
  2015-08-17  6:36 ` [RFC PATCH V1 8/8] powerpc: Disable kasan for kernel/ and mm/ directory Aneesh Kumar K.V
  2015-08-17  6:54 ` [RFC PATCH V1 0/8] KASAN ppc64 support Benjamin Herrenschmidt
  8 siblings, 1 reply; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-17  6:36 UTC (permalink / raw)
  To: benh, paulus, mpe, ryabinin.a.a; +Cc: linuxppc-dev, Aneesh Kumar K.V

We use the region with region ID 0xe as the kasan shadow region. Since
we use a hash page table, we can't have the early zero-page-based
shadow region support. Hence we disable kasan in the early code and
enable it at runtime. We could improve the condition using static keys
(but that is for a later patch). We also can't support inline
instrumentation because our kernel mapping doesn't give us a large
enough free window to map the entire range. For the VMALLOC and
VMEMMAP regions we just return a zero page instead of having a
translation bolted into the htab. This simplifies handling the VMALLOC
and VMEMMAP areas; kasan is not tracking either region as of now.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/kasan.h         | 74 ++++++++++++++++++++++++++++++++
 arch/powerpc/include/asm/pgtable-ppc64.h |  1 +
 arch/powerpc/include/asm/ppc_asm.h       | 10 +++++
 arch/powerpc/include/asm/string.h        | 13 ++++++
 arch/powerpc/kernel/Makefile             |  1 +
 arch/powerpc/kernel/prom_init_check.sh   |  2 +-
 arch/powerpc/kernel/setup_64.c           |  3 ++
 arch/powerpc/lib/mem_64.S                |  6 ++-
 arch/powerpc/lib/memcpy_64.S             |  3 +-
 arch/powerpc/lib/ppc_ksyms.c             | 10 +++++
 arch/powerpc/mm/Makefile                 |  3 ++
 arch/powerpc/mm/kasan_init.c             | 44 +++++++++++++++++++
 arch/powerpc/mm/slb_low.S                |  4 ++
 arch/powerpc/platforms/Kconfig.cputype   |  1 +
 14 files changed, 171 insertions(+), 4 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kasan.h
 create mode 100644 arch/powerpc/mm/kasan_init.c

diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
new file mode 100644
index 000000000000..51e76e698bb9
--- /dev/null
+++ b/arch/powerpc/include/asm/kasan.h
@@ -0,0 +1,74 @@
+#ifndef __ASM_KASAN_H
+#define __ASM_KASAN_H
+
+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_KASAN
+/*
+ * KASAN_SHADOW_START: We use a new region for kasan mapping
+ * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/8 of kernel virtual addresses.
+ */
+#define KASAN_SHADOW_START      (KASAN_REGION_ID << REGION_SHIFT)
+#define KASAN_SHADOW_END        (KASAN_SHADOW_START + (1UL << (PGTABLE_RANGE - 3)))
+/*
+ * This value is used to map an address to the corresponding shadow
+ * address by the following formula:
+ *     shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
+ *
+ * This applies to the linear mapping.
+ * Hence 0xc000000000000000 -> 0xe000000000000000
+ * We use an internal zero page as the shadow address for the vmalloc and vmemmap
+ * region, since we don't track both of them now.
+ *
+ */
+#define KASAN_SHADOW_KERNEL_OFFSET	((KASAN_REGION_ID << REGION_SHIFT) - \
+					 (KERNEL_REGION_ID << (REGION_SHIFT - 3)))
+
+extern unsigned char kasan_zero_page[PAGE_SIZE];
+#define kasan_mem_to_shadow kasan_mem_to_shadow
+static inline void *kasan_mem_to_shadow(const void *addr)
+{
+	unsigned long offset = 0;
+
+	switch (REGION_ID(addr)) {
+	case KERNEL_REGION_ID:
+		offset = KASAN_SHADOW_KERNEL_OFFSET;
+		break;
+	default:
+		return (void *)kasan_zero_page;
+	}
+	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+		+ offset;
+}
+
+#define kasan_shadow_to_mem kasan_shadow_to_mem
+static inline void *kasan_shadow_to_mem(const void *shadow_addr)
+{
+	unsigned long offset = 0;
+
+	switch (REGION_ID(shadow_addr)) {
+	case KASAN_REGION_ID:
+		offset = KASAN_SHADOW_KERNEL_OFFSET;
+		break;
+	default:
+		pr_err("Shadow memory whose origin not found %p\n", shadow_addr);
+		BUG();
+	}
+	return (void *)(((unsigned long)shadow_addr - offset)
+			<< KASAN_SHADOW_SCALE_SHIFT);
+}
+
+#define kasan_enabled kasan_enabled
+extern bool __kasan_enabled;
+static inline bool kasan_enabled(void)
+{
+	return __kasan_enabled;
+}
+
+void kasan_init(void);
+#else
+static inline void kasan_init(void) { }
+#endif
+
+#endif
+#endif
diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
index 3bb7488bd24b..369ce5442aa6 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64.h
@@ -80,6 +80,7 @@
 #define KERNEL_REGION_ID	(REGION_ID(PAGE_OFFSET))
 #define VMEMMAP_REGION_ID	(0xfUL)	/* Server only */
 #define USER_REGION_ID		(0UL)
+#define KASAN_REGION_ID		(0xeUL) /* Server only */
 
 /*
  * Defines the address of the vmemap area, in its own region on
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index dd0fc18d8103..e75ae67e804e 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -226,6 +226,11 @@ name:
 
 #define DOTSYM(a)	a
 
+#define KASAN_OVERRIDE(x, y) \
+	.weak x;	     \
+	.set x, y
+
+
 #else
 
 #define XGLUE(a,b) a##b
@@ -263,6 +268,11 @@ GLUE(.,name):
 
 #define DOTSYM(a)	GLUE(.,a)
 
+#define KASAN_OVERRIDE(x, y)	\
+	.weak x;		\
+	.set x, y;		\
+	.weak DOTSYM(x);	\
+	.set DOTSYM(x), DOTSYM(y)
 #endif
 
 #else /* 32-bit */
diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
index e40010abcaf1..b10a4c01cdbf 100644
--- a/arch/powerpc/include/asm/string.h
+++ b/arch/powerpc/include/asm/string.h
@@ -27,6 +27,19 @@ extern void * memmove(void *,const void *,__kernel_size_t);
 extern int memcmp(const void *,const void *,__kernel_size_t);
 extern void * memchr(const void *,int,__kernel_size_t);
 
+extern void * __memset(void *, int, __kernel_size_t);
+extern void * __memcpy(void *, const void *, __kernel_size_t);
+extern void * __memmove(void *, const void *, __kernel_size_t);
+
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we
+ * should use not instrumented version of mem* functions.
+ */
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
 #endif /* __KERNEL__ */
 
 #endif	/* _ASM_POWERPC_STRING_H */
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 12868b1c4e05..a158a5fd82c2 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -26,6 +26,7 @@ CFLAGS_REMOVE_ftrace.o = -pg -mno-sched-epilog
 CFLAGS_REMOVE_time.o = -pg -mno-sched-epilog
 endif
 
+KASAN_SANITIZE_prom_init.o := n
 obj-y				:= cputable.o ptrace.o syscalls.o \
 				   irq.o align.o signal_32.o pmc.o vdso.o \
 				   process.o systbl.o idle.o \
diff --git a/arch/powerpc/kernel/prom_init_check.sh b/arch/powerpc/kernel/prom_init_check.sh
index 12640f7e726b..e25777956123 100644
--- a/arch/powerpc/kernel/prom_init_check.sh
+++ b/arch/powerpc/kernel/prom_init_check.sh
@@ -17,7 +17,7 @@
 # it to the list below:
 
 WHITELIST="add_reloc_offset __bss_start __bss_stop copy_and_flush
-_end enter_prom memcpy memset reloc_offset __secondary_hold
+_end enter_prom __memcpy __memset memcpy memset reloc_offset __secondary_hold
 __secondary_hold_acknowledge __secondary_hold_spinloop __start
 strcmp strcpy strlcpy strlen strncmp strstr logo_linux_clut224
 reloc_got2 kernstart_addr memstart_addr linux_banner _stext
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index bdcbb716f4d6..4b766638ead9 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -69,6 +69,7 @@
 #include <asm/kvm_ppc.h>
 #include <asm/hugetlb.h>
 #include <asm/epapr_hcalls.h>
+#include <asm/kasan.h>
 
 #ifdef DEBUG
 #define DBG(fmt...) udbg_printf(fmt)
@@ -708,6 +709,8 @@ void __init setup_arch(char **cmdline_p)
 	/* Initialize the MMU context management stuff */
 	mmu_context_init();
 
+	kasan_init();
+
 	/* Interrupt code needs to be 64K-aligned */
 	if ((unsigned long)_stext & 0xffff)
 		panic("Kernelbase not 64K-aligned (0x%lx)!\n",
diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S
index 43435c6892fb..0e1f811babdd 100644
--- a/arch/powerpc/lib/mem_64.S
+++ b/arch/powerpc/lib/mem_64.S
@@ -12,7 +12,8 @@
 #include <asm/errno.h>
 #include <asm/ppc_asm.h>
 
-_GLOBAL(memset)
+KASAN_OVERRIDE(memset,__memset)
+_GLOBAL(__memset)
 	neg	r0,r3
 	rlwimi	r4,r4,8,16,23
 	andi.	r0,r0,7			/* # bytes to be 8-byte aligned */
@@ -77,7 +78,8 @@ _GLOBAL(memset)
 	stb	r4,0(r6)
 	blr
 
-_GLOBAL_TOC(memmove)
+KASAN_OVERRIDE(memmove,__memmove)
+_GLOBAL_TOC(__memmove)
 	cmplw	0,r3,r4
 	bgt	backwards_memcpy
 	b	memcpy
diff --git a/arch/powerpc/lib/memcpy_64.S b/arch/powerpc/lib/memcpy_64.S
index 32a06ec395d2..396b44181ec1 100644
--- a/arch/powerpc/lib/memcpy_64.S
+++ b/arch/powerpc/lib/memcpy_64.S
@@ -10,7 +10,8 @@
 #include <asm/ppc_asm.h>
 
 	.align	7
-_GLOBAL_TOC(memcpy)
+KASAN_OVERRIDE(memcpy,__memcpy)
+_GLOBAL_TOC(__memcpy)
 BEGIN_FTR_SECTION
 #ifdef __LITTLE_ENDIAN__
 	cmpdi	cr7,r5,0
diff --git a/arch/powerpc/lib/ppc_ksyms.c b/arch/powerpc/lib/ppc_ksyms.c
index c7f8e9586316..3a27b08bee26 100644
--- a/arch/powerpc/lib/ppc_ksyms.c
+++ b/arch/powerpc/lib/ppc_ksyms.c
@@ -9,6 +9,16 @@ EXPORT_SYMBOL(memmove);
 EXPORT_SYMBOL(memcmp);
 EXPORT_SYMBOL(memchr);
 
+#ifdef CONFIG_PPC64
+/*
+ * These symbols are needed with kasan. We only
+ * have that enabled for ppc64 now.
+ */
+EXPORT_SYMBOL(__memcpy);
+EXPORT_SYMBOL(__memset);
+EXPORT_SYMBOL(__memmove);
+#endif
+
 EXPORT_SYMBOL(strcpy);
 EXPORT_SYMBOL(strncpy);
 EXPORT_SYMBOL(strcat);
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index 3eb73a38220d..ffe8f8d92883 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -37,3 +37,6 @@ obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o
 obj-$(CONFIG_HIGHMEM)		+= highmem.o
 obj-$(CONFIG_PPC_COPRO_BASE)	+= copro_fault.o
 obj-$(CONFIG_SPAPR_TCE_IOMMU)	+= mmu_context_iommu.o
+
+obj-$(CONFIG_KASAN)		+= kasan_init.o
+KASAN_SANITIZE_kasan_init.o	:= n
diff --git a/arch/powerpc/mm/kasan_init.c b/arch/powerpc/mm/kasan_init.c
new file mode 100644
index 000000000000..9deba6019fbf
--- /dev/null
+++ b/arch/powerpc/mm/kasan_init.c
@@ -0,0 +1,44 @@
+#define pr_fmt(fmt) "kasan: " fmt
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/kasan.h>
+
+bool __kasan_enabled = false;
+unsigned char kasan_zero_page[PAGE_SIZE] __page_aligned_bss;
+void __init kasan_init(void)
+{
+	unsigned long k_start, k_end;
+	struct memblock_region *reg;
+	unsigned long page_size = 1 << mmu_psize_defs[mmu_vmemmap_psize].shift;
+
+
+	for_each_memblock(memory, reg) {
+		void *p;
+		void *start = __va(reg->base);
+		void *end = __va(reg->base + reg->size);
+		int node = pfn_to_nid(virt_to_pfn(start));
+
+		if (start >= end)
+			break;
+
+		k_start = (unsigned long)kasan_mem_to_shadow(start);
+		k_end = (unsigned long)kasan_mem_to_shadow(end);
+		for (; k_start < k_end; k_start += page_size) {
+			p = vmemmap_alloc_block(page_size, node);
+			if (!p) {
+				pr_info("Disabled Kasan, for lack of free mem\n");
+				/* Free the stuff or panic ? */
+				return;
+			}
+			htab_bolt_mapping(k_start, k_start + page_size,
+					  __pa(p), pgprot_val(PAGE_KERNEL),
+					  mmu_vmemmap_psize, mmu_kernel_ssize);
+		}
+	}
+	/*
+	 * At this point kasan is fully initialized. Enable error messages
+	 */
+	init_task.kasan_depth = 0;
+	__kasan_enabled = true;
+	pr_info("Kernel address sanitizer initialized\n");
+}
diff --git a/arch/powerpc/mm/slb_low.S b/arch/powerpc/mm/slb_low.S
index 736d18b3cefd..154bd8a0b437 100644
--- a/arch/powerpc/mm/slb_low.S
+++ b/arch/powerpc/mm/slb_low.S
@@ -80,11 +80,15 @@ END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
 	/* Check virtual memmap region. To be patches at kernel boot */
 	cmpldi	cr0,r9,0xf
 	bne	1f
+2:
 .globl slb_miss_kernel_load_vmemmap
 slb_miss_kernel_load_vmemmap:
 	li	r11,0
 	b	6f
 1:
+	/* Kasan region same as vmemmap mapping */
+	cmpldi	cr0,r9,0xe
+	beq	2b
 #endif /* CONFIG_SPARSEMEM_VMEMMAP */
 
 	/* vmalloc mapping gets the encoding from the PACA as the mapping
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index c140e94c7c72..7a7c9d54f80e 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -75,6 +75,7 @@ config PPC_BOOK3S_64
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE if PPC_64K_PAGES
 	select ARCH_SUPPORTS_NUMA_BALANCING
 	select IRQ_WORK
+	select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP
 
 config PPC_BOOK3E_64
 	bool "Embedded processors"
-- 
2.5.0


* [RFC PATCH V1 8/8] powerpc: Disable kasan for kernel/ and mm/ directory
  2015-08-17  6:36 [RFC PATCH V1 0/8] KASAN ppc64 support Aneesh Kumar K.V
                   ` (6 preceding siblings ...)
  2015-08-17  6:36 ` [RFC PATCH V1 7/8] powerpc/mm: kasan: Add kasan support for ppc64 Aneesh Kumar K.V
@ 2015-08-17  6:36 ` Aneesh Kumar K.V
  2015-08-17  6:54 ` [RFC PATCH V1 0/8] KASAN ppc64 support Benjamin Herrenschmidt
  8 siblings, 0 replies; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-17  6:36 UTC (permalink / raw)
  To: benh, paulus, mpe, ryabinin.a.a; +Cc: linuxppc-dev, Aneesh Kumar K.V

Without this we hit a boot hang, which I am still looking into. Until
then, disable kasan for the kernel/ and mm/ directories.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/Makefile | 1 +
 arch/powerpc/mm/Makefile     | 1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index a158a5fd82c2..b3b0079bccad 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -3,6 +3,7 @@
 #
 
 CFLAGS_ptrace.o		+= -DUTS_MACHINE='"$(UTS_MACHINE)"'
+KASAN_SANITIZE := n
 
 subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror
 
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index ffe8f8d92883..2be2a6998ecb 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -3,6 +3,7 @@
 #
 
 subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror
+KASAN_SANITIZE := n
 
 ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
 
-- 
2.5.0


* Re: [RFC PATCH V1 0/8] KASAN ppc64 support
  2015-08-17  6:36 [RFC PATCH V1 0/8] KASAN ppc64 support Aneesh Kumar K.V
                   ` (7 preceding siblings ...)
  2015-08-17  6:36 ` [RFC PATCH V1 8/8] powerpc: Disable kasan for kernel/ and mm/ directory Aneesh Kumar K.V
@ 2015-08-17  6:54 ` Benjamin Herrenschmidt
  2015-08-17  9:50   ` Aneesh Kumar K.V
  8 siblings, 1 reply; 27+ messages in thread
From: Benjamin Herrenschmidt @ 2015-08-17  6:54 UTC (permalink / raw)
  To: Aneesh Kumar K.V, paulus, mpe, ryabinin.a.a; +Cc: linuxppc-dev

On Mon, 2015-08-17 at 12:06 +0530, Aneesh Kumar K.V wrote:
> Hi,
> 
> This patchset implements the kernel address sanitizer (KASAN) for ppc64.
> Since the ppc64 virtual address range is divided into different regions,
> we can't have one contiguous area for the kasan shadow range. Hence
> we don't support INLINE kasan instrumentation. With OUTLINE
> instrumentation, we override the shadow_to_mem and mem_to_shadow
> callbacks so that we map only the kernel linear range (ie,
> the region with ID 0xc). For the regions with IDs 0xd and 0xf (vmalloc
> and vmemmap) we return the address of the zero page. This
> works because kasan doesn't track the vmemmap and vmalloc addresses.
So bear with me, I don't know anything about KASAN, but if you want a
shadow, can't you just add a fixed offset to the address and thus
effectively shadow each region independently while keeping the inline
helpers?

Cheers,
Ben.

> Aneesh Kumar K.V (8):
>   powerpc/mm: Add virt_to_pfn and use this instead of opencoding
>   kasan: MODULE_VADDR is not available on all archs
>   kasan: Rename kasan_enabled to kasan_report_enabled
>   kasan: Don't use kasan shadow pointer in generic functions
>   kasan: Enable arch to hook into kasan callbacks.
>   kasan: Allow arch to override kasan shadow offsets
>   powerpc/mm: kasan: Add kasan support for ppc64
>   powerpc: Disable kasan for kernel/ and mm/ directory
> 
> -aneesh
> 
>  arch/powerpc/include/asm/kasan.h         | 74 ++++++++++++++++++++++++++++++++
>  arch/powerpc/include/asm/page.h          |  5 ++-
>  arch/powerpc/include/asm/pgtable-ppc64.h |  1 +
>  arch/powerpc/include/asm/ppc_asm.h       | 10 +++++
>  arch/powerpc/include/asm/string.h        | 13 ++++++
>  arch/powerpc/kernel/Makefile             |  2 +
>  arch/powerpc/kernel/prom_init_check.sh   |  2 +-
>  arch/powerpc/kernel/setup_64.c           |  3 ++
>  arch/powerpc/lib/mem_64.S                |  6 ++-
>  arch/powerpc/lib/memcpy_64.S             |  3 +-
>  arch/powerpc/lib/ppc_ksyms.c             | 10 +++++
>  arch/powerpc/mm/Makefile                 |  4 ++
>  arch/powerpc/mm/kasan_init.c             | 44 +++++++++++++++++++
>  arch/powerpc/mm/slb_low.S                |  4 ++
>  arch/powerpc/platforms/Kconfig.cputype   |  1 +
>  include/linux/kasan.h                    |  3 ++
>  mm/kasan/kasan.c                         |  9 ++++
>  mm/kasan/kasan.h                         | 20 ++++++++-
>  mm/kasan/report.c                        | 19 ++++----
>  19 files changed, 216 insertions(+), 17 deletions(-)
>  create mode 100644 arch/powerpc/include/asm/kasan.h
>  create mode 100644 arch/powerpc/mm/kasan_init.c
> 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH V1 0/8] KASAN ppc64 support
  2015-08-17  6:54 ` [RFC PATCH V1 0/8] KASAN ppc64 support Benjamin Herrenschmidt
@ 2015-08-17  9:50   ` Aneesh Kumar K.V
  2015-08-17 10:01     ` Benjamin Herrenschmidt
  2015-08-17 11:29     ` Andrey Ryabinin
  0 siblings, 2 replies; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-17  9:50 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, paulus, mpe, ryabinin.a.a; +Cc: linuxppc-dev

Benjamin Herrenschmidt <benh@kernel.crashing.org> writes:

> On Mon, 2015-08-17 at 12:06 +0530, Aneesh Kumar K.V wrote:
>> Hi,
>> 
>> This patchset implements the kernel address sanitizer for ppc64.
>> Since the ppc64 virtual address range is divided into different regions,
>> we can't have one contiguous area for the kasan shadow range. Hence
>> we don't support inline kasan instrumentation. With outline
>> instrumentation, we override the shadow_to_mem and mem_to_shadow
>> callbacks so that we map only the kernel linear range (i.e., the
>> region with ID 0xc). For the regions with IDs 0xd and 0xf (vmalloc
>> and vmemmap) we return the address of the zero page. This
>> works because kasan doesn't track vmemmap or vmalloc addresses.
> So bear with me, I don't know anything about KASAN, but if you want a
> shadow, can't you just add a fixed offset to the address and thus
> effectively shadow each region independently while keeping the inline
> helpers ?
>

For kernel linear mapping, our address space looks like
0xc000000000000000 - 0xc0003fffffffffff  (64TB)

We can't have virtual addresses (effective addresses) above that range
in the 0xc region. Hence, in order to shadow the linear mapping, I am using
region 0xe; i.e., the shadow mapping now looks like

0xc000000000000000 -> 0xe000000000000000 

i.e., the shadow offset is 0xc800000000000000

For inline instrumentation, we need to have a constant shadow offset. We
can't use the same shadow offset for regions 0xd and 0xf because the
mapping would then end up as

vmalloc:
0xc800000000000000 + (0xd000000000000000ULL >> 3) = 0xe200000000000000

vmemmap:
0xc800000000000000 + (0xf000000000000000ULL >> 3) = 0xe600000000000000

and we can't have those effective address ranges, because our valid
ranges are

0xc000000000000000 - 0xc0003fffffffffff 
0xd000000000000000 - 0xd0003fffffffffff 
0xe000000000000000 - 0xe0003fffffffffff 
0xf000000000000000 - 0xf0003fffffffffff 
 
Because of the above I concluded that we may not be able to do
inline instrumentation. Now, if we are not doing inline instrumentation,
we can simplify kasan support by not creating a shadow mapping at all
for the vmalloc and vmemmap regions. Hence the idea of returning the
address of a zero page for anything other than the kernel linear map
region.

Another reason why inline instrumentation is difficult is that, for
inline instrumentation to work, we need to create a mapping for the
_possible_ virtual address space before kasan is fully initialized. That is,
we need to create page table entries for the shadow of the entire 64TB
range, with the zero page, even though we have less RAM. We definitely
can't bolt those entries. I have yet to get the shadow for the kernel
linear mapping to work without bolting. Also, we will have to get page
tables allocated for that, because we can't share page table entries: our
fault path uses pte entries for storing the hash slot index.


NOTE:
If we are OK with stealing part of that 64TB range for the kasan mapping,
i.e., making the shadow of each region part of the same region, maybe we
can get inline instrumentation to work. But that still doesn't solve the
page table allocation overhead issue mentioned above.

-aneesh

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH V1 0/8] KASAN ppc64 support
  2015-08-17  9:50   ` Aneesh Kumar K.V
@ 2015-08-17 10:01     ` Benjamin Herrenschmidt
  2015-08-17 10:50       ` Aneesh Kumar K.V
  2015-08-17 11:29     ` Andrey Ryabinin
  1 sibling, 1 reply; 27+ messages in thread
From: Benjamin Herrenschmidt @ 2015-08-17 10:01 UTC (permalink / raw)
  To: Aneesh Kumar K.V, paulus, mpe, ryabinin.a.a; +Cc: linuxppc-dev

On Mon, 2015-08-17 at 15:20 +0530, Aneesh Kumar K.V wrote:

> For kernel linear mapping, our address space looks like
> 0xc000000000000000 - 0xc0003fffffffffff  (64TB)
> 
> We can't have virtual addresses (effective addresses) above that range
> in the 0xc region. Hence, in order to shadow the linear mapping, I am
> using region 0xe; i.e., the shadow mapping now looks like
> 
> 0xc000000000000000 -> 0xe000000000000000 

Why? I.e., why can't you put the shadow at address +64T and have it work
for everything?
.../...

> Another reason why inline instrumentation is difficult is that, for
> inline instrumentation to work, we need to create a mapping for the
> _possible_ virtual address space before kasan is fully initialized.
> That is, we need to create page table entries for the shadow of the
> entire 64TB range, with the zero page, even though we have less RAM.
> We definitely can't bolt those entries. I have yet to get the shadow
> for the kernel linear mapping to work without bolting. Also, we will
> have to get page tables allocated for that, because we can't share
> page table entries: our fault path uses pte entries for storing the
> hash slot index.

Hrm, that means we might want to start considering a page table to
cover the linear mapping...

> If we are ok to steal part of that 64TB range, for kasan mapping , ie
> we make shadow of each region part of the same region, may be we can
> get inline instrumentation to work. But that still doesn't solve the
> page table allocation overhead issue mentioned above.
> 
> -aneesh

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH V1 0/8] KASAN ppc64 support
  2015-08-17 10:01     ` Benjamin Herrenschmidt
@ 2015-08-17 10:50       ` Aneesh Kumar K.V
  2015-08-17 11:21         ` Benjamin Herrenschmidt
  0 siblings, 1 reply; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-17 10:50 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, paulus, mpe, ryabinin.a.a; +Cc: linuxppc-dev

Benjamin Herrenschmidt <benh@kernel.crashing.org> writes:

> On Mon, 2015-08-17 at 15:20 +0530, Aneesh Kumar K.V wrote:
>
>> For kernel linear mapping, our address space looks like
>> 0xc000000000000000 - 0xc0003fffffffffff  (64TB)
>> 
>> We can't have virtual addresses (effective addresses) above that range
>> in the 0xc region. Hence, in order to shadow the linear mapping, I am
>> using region 0xe; i.e., the shadow mapping now looks like
>> 
>> 0xc000000000000000 -> 0xe000000000000000 
>
> Why ? IE. Why can't you put the shadow at address +64T and have it work
> for everything ?
> .../...

Above +64TB? How will that work? We have checks in different parts of the
code, like the one below, where we check that each region's top address is
within the 64TB range.

PGTABLE_RANGE and (ESID_BITS + SID_SHIFT) are all dependent on the 64TB
range (46 bits).


static inline unsigned long get_vsid(unsigned long context, unsigned long ea,
				     int ssize)
{
	/*
	 * Bad address. We return VSID 0 for that
	 */
	if ((ea & ~REGION_MASK) >= PGTABLE_RANGE)
		return 0;

	if (ssize == MMU_SEGSIZE_256M)
		return vsid_scramble((context << ESID_BITS)
				     | (ea >> SID_SHIFT), 256M);
	return vsid_scramble((context << ESID_BITS_1T)
			     | (ea >> SID_SHIFT_1T), 1T);
}



>> Another reason why inline instrumentation is difficult is that, for
>> inline instrumentation to work, we need to create a mapping for the
>> _possible_ virtual address space before kasan is fully initialized.
>> That is, we need to create page table entries for the shadow of the
>> entire 64TB range, with the zero page, even though we have less RAM.
>> We definitely can't bolt those entries. I have yet to get the shadow
>> for the kernel linear mapping to work without bolting. Also, we will
>> have to get page tables allocated for that, because we can't share
>> page table entries: our fault path uses pte entries for storing the
>> hash slot index.
>
> Hrm, that means we might want to start considering a page table to
> cover the linear mapping...

But that would require us to get a large zero page? Are you suggesting
using a 16G page?


>
>> If we are ok to steal part of that 64TB range, for kasan mapping , ie
>> we make shadow of each region part of the same region, may be we can
>> get inline instrumentation to work. But that still doesn't solve the
>> page table allocation overhead issue mentioned above.
>> 

-aneesh

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH V1 0/8] KASAN ppc64 support
  2015-08-17 10:50       ` Aneesh Kumar K.V
@ 2015-08-17 11:21         ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 27+ messages in thread
From: Benjamin Herrenschmidt @ 2015-08-17 11:21 UTC (permalink / raw)
  To: Aneesh Kumar K.V, paulus, mpe, ryabinin.a.a; +Cc: linuxppc-dev

On Mon, 2015-08-17 at 16:20 +0530, Aneesh Kumar K.V wrote:
> Benjamin Herrenschmidt <benh@kernel.crashing.org> writes:
> 
> > On Mon, 2015-08-17 at 15:20 +0530, Aneesh Kumar K.V wrote:
> > 
> > > For kernel linear mapping, our address space looks like
> > > 0xc000000000000000 - 0xc0003fffffffffff  (64TB)
> > > 
> > > We can't have virtual addresses (effective addresses) above that range
> > > in the 0xc region. Hence, in order to shadow the linear mapping, I am
> > > using region 0xe; i.e., the shadow mapping now looks like
> > > 
> > > 0xc000000000000000 -> 0xe000000000000000 
> > 
> > Why ? IE. Why can't you put the shadow at address +64T and have it 
> > work
> > for everything ?
> > .../...
> 
> Above +64TB? How will that work? We have checks in different parts of
> the code, like the one below, where we check that each region's top
> address is within the 64TB range.
> 
> PGTABLE_RANGE and (ESID_BITS + SID_SHIFT) are all dependent on the
> 64TB range (46 bits).

For the VSID we could just mask the address with 64T-1. It depends on
whether it's some place we want to actually bounds-check or not. In
general, though, we can safely assume that a region will never be bigger
than PGTABLE_RANGE, so having another PGTABLE_RANGE zone for the kasan
bits somewhat makes sense. Or, if you want kasan to actually use page
tables, make it PGTABLE_RANGE/2 and use the upper half. I don't
understand enough of what kasan does ...


> static inline unsigned long get_vsid(unsigned long context, unsigned 
> long ea,
> 				     int ssize)
> {
> 	/*
> 	 * Bad address. We return VSID 0 for that
> 	 */
> 	if ((ea & ~REGION_MASK) >= PGTABLE_RANGE)
> 		return 0;
> 
> 	if (ssize == MMU_SEGSIZE_256M)
> 		return vsid_scramble((context << ESID_BITS)
> 				     | (ea >> SID_SHIFT), 256M);
> 	return vsid_scramble((context << ESID_BITS_1T)
> 			     | (ea >> SID_SHIFT_1T), 1T);
> }
> 
> 
> 
> > > Another reason why inline instrumentation is difficult is that, for
> > > inline instrumentation to work, we need to create a mapping for the
> > > _possible_ virtual address space before kasan is fully initialized.
> > > That is, we need to create page table entries for the shadow of the
> > > entire 64TB range, with the zero page, even though we have less RAM.
> > > We definitely can't bolt those entries. I have yet to get the shadow
> > > for the kernel linear mapping to work without bolting. Also, we will
> > > have to get page tables allocated for that, because we can't share
> > > page table entries: our fault path uses pte entries for storing the
> > > hash slot index.
> > 
> > Hrm, that means we might want to start considering a page table to
> > cover the linear mapping...
> 
> But that would require us to get a large zero page? Are you suggesting
> using a 16G page?
> 
> 
> > 
> > > If we are OK with stealing part of that 64TB range for the kasan
> > > mapping, i.e., making the shadow of each region part of the same
> > > region, maybe we can get inline instrumentation to work. But that
> > > still doesn't solve the page table allocation overhead issue
> > > mentioned above.
> > > 
> 
> -aneesh

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH V1 0/8] KASAN ppc64 support
  2015-08-17  9:50   ` Aneesh Kumar K.V
  2015-08-17 10:01     ` Benjamin Herrenschmidt
@ 2015-08-17 11:29     ` Andrey Ryabinin
  2015-08-18  5:42       ` Aneesh Kumar K.V
  1 sibling, 1 reply; 27+ messages in thread
From: Andrey Ryabinin @ 2015-08-17 11:29 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: Benjamin Herrenschmidt, paulus, mpe, linuxppc-dev

2015-08-17 12:50 GMT+03:00 Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>:
> Because of the above I concluded that we may not be able to do
> inline instrumentation. Now if we are not doing inline instrumentation,
> we can simplify kasan support by not creating a shadow mapping at all
> for vmalloc and vmemmap region. Hence the idea of returning the address
> of a zero page for anything other than kernel linear map region.
>

Yes, mapping the zero page is needed only for inline instrumentation.
You simply don't need to check shadow for vmalloc/vmemmap.

So, instead of redefining kasan_mem_to_shadow() I'd suggest to
add one more arch hook. Something like:

bool kasan_tracks_vaddr(unsigned long addr)
{
     return REGION_ID(addr) == KERNEL_REGION_ID;
}

And in check_memory_region():
       if (!(kasan_enabled() && kasan_tracks_vaddr(addr)))
               return;

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH V1 4/8] kasan: Don't use kasan shadow pointer in generic functions
  2015-08-17  6:36 ` [RFC PATCH V1 4/8] kasan: Don't use kasan shadow pointer in generic functions Aneesh Kumar K.V
@ 2015-08-17 11:36   ` Andrey Ryabinin
  2015-08-18  5:29     ` Aneesh Kumar K.V
  0 siblings, 1 reply; 27+ messages in thread
From: Andrey Ryabinin @ 2015-08-17 11:36 UTC (permalink / raw)
  To: Aneesh Kumar K.V, Benjamin Herrenschmidt, paulus, mpe; +Cc: linuxppc-dev

On 08/17/2015 09:36 AM, Aneesh Kumar K.V wrote:
> We can't use generic functions like print_hex_dump to access the kasan
> shadow region. This requires us to set up another kasan shadow region
> for the address passed (the kasan shadow address). Most architectures
> won't be able to do that. Hence remove the kasan shadow region dump. If
> we really want to do this, we will have to have a kasan-internal
> implementation of print_hex_dump for which we will disable address
> sanitizer operation.
>

I didn't understand that.
Yes, you don't have shadow for the shadow. But for shadow addresses you
return (void *)kasan_zero_page in kasan_mem_to_shadow(), so we
should be fine to access the shadow in generic code.

And with kasan_tracks_vaddr(), this should work too.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH V1 7/8] powerpc/mm: kasan: Add kasan support for ppc64
  2015-08-17  6:36 ` [RFC PATCH V1 7/8] powerpc/mm: kasan: Add kasan support for ppc64 Aneesh Kumar K.V
@ 2015-08-17 12:13   ` Andrey Ryabinin
  2015-08-17 12:17     ` Andrey Ryabinin
  2015-08-18  5:34     ` Aneesh Kumar K.V
  0 siblings, 2 replies; 27+ messages in thread
From: Andrey Ryabinin @ 2015-08-17 12:13 UTC (permalink / raw)
  To: Aneesh Kumar K.V, benh, paulus, mpe; +Cc: linuxppc-dev

On 08/17/2015 09:36 AM, Aneesh Kumar K.V wrote:
> We use the region with region ID 0xe as the kasan shadow region. Since
> we use a hash page table, we can't have the early zero-page-based shadow
> region support. Hence we disable kasan in the early code and enable it
> at runtime. We could improve the condition using static keys (but
> that is for a later patch). We also can't support inline instrumentation
> because our kernel mapping doesn't give us a large enough free window
> to map the entire range. For the VMALLOC and VMEMMAP regions we just
> return a zero page instead of having a translation bolted into the
> htab. This simplifies handling the VMALLOC and VMEMMAP areas. Kasan is
> not tracking either region as of now.
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/kasan.h         | 74 ++++++++++++++++++++++++++++++++
>  arch/powerpc/include/asm/pgtable-ppc64.h |  1 +
>  arch/powerpc/include/asm/ppc_asm.h       | 10 +++++
>  arch/powerpc/include/asm/string.h        | 13 ++++++
>  arch/powerpc/kernel/Makefile             |  1 +
>  arch/powerpc/kernel/prom_init_check.sh   |  2 +-
>  arch/powerpc/kernel/setup_64.c           |  3 ++
>  arch/powerpc/lib/mem_64.S                |  6 ++-
>  arch/powerpc/lib/memcpy_64.S             |  3 +-
>  arch/powerpc/lib/ppc_ksyms.c             | 10 +++++
>  arch/powerpc/mm/Makefile                 |  3 ++
>  arch/powerpc/mm/kasan_init.c             | 44 +++++++++++++++++++
>  arch/powerpc/mm/slb_low.S                |  4 ++
>  arch/powerpc/platforms/Kconfig.cputype   |  1 +
>  14 files changed, 171 insertions(+), 4 deletions(-)
>  create mode 100644 arch/powerpc/include/asm/kasan.h
>  create mode 100644 arch/powerpc/mm/kasan_init.c
> 

Did you disable stack instrumentation (in scripts/Makefile.kasan),
or does your version of gcc not support it (e.g. 4.9.x on x86)?

Because this can't work with stack instrumentation, as you don't have shadow for the stack in early code.

But this should be doable, I think. All you need is to set up the shadow for the init task's
stack before executing any instrumented function. 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH V1 7/8] powerpc/mm: kasan: Add kasan support for ppc64
  2015-08-17 12:13   ` Andrey Ryabinin
@ 2015-08-17 12:17     ` Andrey Ryabinin
  2015-08-18  5:36       ` Aneesh Kumar K.V
  2015-08-18  5:34     ` Aneesh Kumar K.V
  1 sibling, 1 reply; 27+ messages in thread
From: Andrey Ryabinin @ 2015-08-17 12:17 UTC (permalink / raw)
  To: Aneesh Kumar K.V, Benjamin Herrenschmidt, paulus, mpe; +Cc: linuxppc-dev

2015-08-17 15:13 GMT+03:00 Andrey Ryabinin <ryabinin.a.a@gmail.com>:
>
> Did you disable stack instrumentation (in scripts/Makefile.kasan),
> or does your version of gcc not support it (e.g. 4.9.x on x86)?
>
> Because this can't work with stack instrumentation, as you don't have shadow for the stack in early code.
>
> But this should be doable, I think. All you need is to set up the shadow for the init task's
> stack before executing any instrumented function.

And you also need to define CONFIG_KASAN_SHADOW_OFFSET, so it will be
passed to GCC via the -fasan-shadow-offset= option.
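
For reference, this is roughly how scripts/Makefile.kasan of this era
wires the option in (a sketch from memory, not verbatim; check the tree
for the exact flags) — without CONFIG_KASAN_SHADOW_OFFSET the cc-option
test fails and the build falls back to the minimal, no-stack/no-globals
configuration:

```make
CFLAGS_KASAN_MINIMAL := -fsanitize=kernel-address

CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
		--param asan-stack=1 --param asan-globals=1 \
		--param asan-instrumentation-with-call-threshold=$(call_threshold))
```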

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH V1 4/8] kasan: Don't use kasan shadow pointer in generic functions
  2015-08-17 11:36   ` Andrey Ryabinin
@ 2015-08-18  5:29     ` Aneesh Kumar K.V
  2015-08-18  9:12       ` Andrey Ryabinin
  0 siblings, 1 reply; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-18  5:29 UTC (permalink / raw)
  To: Andrey Ryabinin, Benjamin Herrenschmidt, paulus, mpe; +Cc: linuxppc-dev

Andrey Ryabinin <ryabinin.a.a@gmail.com> writes:

> On 08/17/2015 09:36 AM, Aneesh Kumar K.V wrote:
>> We can't use generic functions like print_hex_dump to access the kasan
>> shadow region. This requires us to set up another kasan shadow region
>> for the address passed (the kasan shadow address). Most architectures
>> won't be able to do that. Hence remove the kasan shadow region dump. If
>> we really want to do this, we will have to have a kasan-internal
>> implementation of print_hex_dump for which we will disable address
>> sanitizer operation.
>>
>
> I didn't understand that.
> Yes, you don't have shadow for the shadow. But for shadow addresses you
> return (void *)kasan_zero_page in kasan_mem_to_shadow(), so we
> should be fine to access the shadow in generic code.
>

But in general, IMHO, it is not correct to pass a shadow address to generic
functions, because that requires the arch to set up shadow for the shadow.
With one of the initial implementations of ppc64 support, I had page
table entries set up for the vmalloc and vmemmap shadow, and that is when I
hit the issue. We cannot expect an arch to set up shadow regions like what is
expected here. If we really need to print the shadow memory content, we
could possibly make a copy of print_hex_dump in kasan_init.c. Let me
know whether you think printing the shadow area content is needed.

-aneesh

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH V1 7/8] powerpc/mm: kasan: Add kasan support for ppc64
  2015-08-17 12:13   ` Andrey Ryabinin
  2015-08-17 12:17     ` Andrey Ryabinin
@ 2015-08-18  5:34     ` Aneesh Kumar K.V
  1 sibling, 0 replies; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-18  5:34 UTC (permalink / raw)
  To: Andrey Ryabinin, benh, paulus, mpe; +Cc: linuxppc-dev

Andrey Ryabinin <ryabinin.a.a@gmail.com> writes:

> On 08/17/2015 09:36 AM, Aneesh Kumar K.V wrote:
>> We use the region with region ID 0xe as the kasan shadow region. Since
>> we use a hash page table, we can't have the early zero-page-based shadow
>> region support. Hence we disable kasan in the early code and enable it
>> at runtime. We could improve the condition using static keys (but
>> that is for a later patch). We also can't support inline instrumentation
>> because our kernel mapping doesn't give us a large enough free window
>> to map the entire range. For the VMALLOC and VMEMMAP regions we just
>> return a zero page instead of having a translation bolted into the
>> htab. This simplifies handling the VMALLOC and VMEMMAP areas. Kasan is
>> not tracking either region as of now.
>> 
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> ---
>>  arch/powerpc/include/asm/kasan.h         | 74 ++++++++++++++++++++++++++++++++
>>  arch/powerpc/include/asm/pgtable-ppc64.h |  1 +
>>  arch/powerpc/include/asm/ppc_asm.h       | 10 +++++
>>  arch/powerpc/include/asm/string.h        | 13 ++++++
>>  arch/powerpc/kernel/Makefile             |  1 +
>>  arch/powerpc/kernel/prom_init_check.sh   |  2 +-
>>  arch/powerpc/kernel/setup_64.c           |  3 ++
>>  arch/powerpc/lib/mem_64.S                |  6 ++-
>>  arch/powerpc/lib/memcpy_64.S             |  3 +-
>>  arch/powerpc/lib/ppc_ksyms.c             | 10 +++++
>>  arch/powerpc/mm/Makefile                 |  3 ++
>>  arch/powerpc/mm/kasan_init.c             | 44 +++++++++++++++++++
>>  arch/powerpc/mm/slb_low.S                |  4 ++
>>  arch/powerpc/platforms/Kconfig.cputype   |  1 +
>>  14 files changed, 171 insertions(+), 4 deletions(-)
>>  create mode 100644 arch/powerpc/include/asm/kasan.h
>>  create mode 100644 arch/powerpc/mm/kasan_init.c
>> 
>
> Did you disable stack instrumentation (in scripts/Makefile.kasan),
> or does your version of gcc not support it (e.g. 4.9.x on x86)?

I guess the latter, because I do see this during compile:

scripts/Makefile.kasan:23: CONFIG_KASAN: compiler does not support all options. Trying minimal configuration
scripts/kconfig/conf  --silentoldconfig Kconfig


> Because this can't work with stack instrumentation, as you don't have shadow for the stack in early code.
>
> But this should be doable, I think. All you need is to set up the shadow for the init task's
> stack before executing any instrumented function. 

I still need to look at stack and global support. So that is not yet
there.

-aneesh

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH V1 7/8] powerpc/mm: kasan: Add kasan support for ppc64
  2015-08-17 12:17     ` Andrey Ryabinin
@ 2015-08-18  5:36       ` Aneesh Kumar K.V
  2015-08-18  8:40         ` Andrey Ryabinin
  0 siblings, 1 reply; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-18  5:36 UTC (permalink / raw)
  To: Andrey Ryabinin, Benjamin Herrenschmidt, paulus, mpe; +Cc: linuxppc-dev

Andrey Ryabinin <ryabinin.a.a@gmail.com> writes:

> 2015-08-17 15:13 GMT+03:00 Andrey Ryabinin <ryabinin.a.a@gmail.com>:
>>
>> Did you disable stack instrumentation (in scripts/Makefile.kasan),
>> or does your version of gcc not support it (e.g. 4.9.x on x86)?
>>
>> Because this can't work with stack instrumentation, as you don't have shadow for the stack in early code.
>>
>> But this should be doable, I think. All you need is to set up the shadow for the init task's
>> stack before executing any instrumented function.
>
> And you also need to define CONFIG_KASAN_SHADOW_OFFSET, so it will be
> passed to GCC
> via -fasan-shadow-offset= option.

I am using the KASAN minimal config, hence this was not needed. Do we need
to pass that option for outline instrumentation? If not, it would be a
good idea to split that out and make it depend on KASAN_INLINE.

-aneesh

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH V1 0/8] KASAN ppc64 support
  2015-08-17 11:29     ` Andrey Ryabinin
@ 2015-08-18  5:42       ` Aneesh Kumar K.V
  2015-08-18  8:50         ` Andrey Ryabinin
  0 siblings, 1 reply; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-18  5:42 UTC (permalink / raw)
  To: Andrey Ryabinin; +Cc: Benjamin Herrenschmidt, paulus, mpe, linuxppc-dev

Andrey Ryabinin <ryabinin.a.a@gmail.com> writes:

> 2015-08-17 12:50 GMT+03:00 Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>:
>> Because of the above I concluded that we may not be able to do
>> inline instrumentation. Now if we are not doing inline instrumentation,
>> we can simplify kasan support by not creating a shadow mapping at all
>> for vmalloc and vmemmap region. Hence the idea of returning the address
>> of a zero page for anything other than kernel linear map region.
>>
>
> Yes, mapping zero page needed only for inline instrumentation.
> You simply don't need to check shadow for vmalloc/vmemmap.
>
> So, instead of redefining kasan_mem_to_shadow() I'd suggest to
> add one more arch hook. Something like:
>
> bool kasan_tracks_vaddr(unsigned long addr)
> {
>      return REGION_ID(addr) == KERNEL_REGION_ID;
> }
>
> And in check_memory_region():
>        if (!(kasan_enabled() && kasan_tracks_vaddr(addr)))
>                return;


But that is introducing conditionals in core code for no real benefit.
This will also break when we eventually end up tracking vmalloc.
In that case our mem_to_shadow will essentially be a switch
statement returning different offsets for the kernel region and the
vmalloc region. As far as core kernel code is concerned, it just needs to
ask the arch for the shadow address of a memory location, and instead of
adding conditionals in core, my suggestion is that we handle this in an
arch function.

-aneesh

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH V1 7/8] powerpc/mm: kasan: Add kasan support for ppc64
  2015-08-18  5:36       ` Aneesh Kumar K.V
@ 2015-08-18  8:40         ` Andrey Ryabinin
  0 siblings, 0 replies; 27+ messages in thread
From: Andrey Ryabinin @ 2015-08-18  8:40 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: Benjamin Herrenschmidt, paulus, mpe, linuxppc-dev

2015-08-18 8:36 GMT+03:00 Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>:
> Andrey Ryabinin <ryabinin.a.a@gmail.com> writes:
>
>> 2015-08-17 15:13 GMT+03:00 Andrey Ryabinin <ryabinin.a.a@gmail.com>:
>>>
>>> Did you disable stack instrumentation (in scripts/Makefile.kasan),
>>> or does your version of gcc not support it (e.g. 4.9.x on x86)?
>>>
>>> Because this can't work with stack instrumentation, as you don't have shadow for the stack in early code.
>>>
>>> But this should be doable, I think. All you need is to set up the shadow for the init task's
>>> stack before executing any instrumented function.
>>
>> And you also need to define CONFIG_KASAN_SHADOW_OFFSET, so it will be
>> passed to GCC via the -fasan-shadow-offset= option.
>
> I am using the KASAN minimal config, hence this was not needed. Do we need
> to pass that option for outline instrumentation? If not, it would be a
> good idea to split that out and make it depend on KASAN_INLINE.
>

We need to pass this for stack instrumentation too.

> -aneesh
>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH V1 0/8] KASAN ppc64 support
  2015-08-18  5:42       ` Aneesh Kumar K.V
@ 2015-08-18  8:50         ` Andrey Ryabinin
  2015-08-18  9:21           ` Aneesh Kumar K.V
  0 siblings, 1 reply; 27+ messages in thread
From: Andrey Ryabinin @ 2015-08-18  8:50 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: Benjamin Herrenschmidt, paulus, mpe, linuxppc-dev

2015-08-18 8:42 GMT+03:00 Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>:
> Andrey Ryabinin <ryabinin.a.a@gmail.com> writes:
>
>> 2015-08-17 12:50 GMT+03:00 Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>:
>>> Because of the above I concluded that we may not be able to do
>>> inline instrumentation. Now if we are not doing inline instrumentation,
>>> we can simplify kasan support by not creating a shadow mapping at all
>>> for vmalloc and vmemmap region. Hence the idea of returning the address
>>> of a zero page for anything other than kernel linear map region.
>>>
>>
>> Yes, mapping zero page needed only for inline instrumentation.
>> You simply don't need to check shadow for vmalloc/vmemmap.
>>
>> So, instead of redefining kasan_mem_to_shadow() I'd suggest to
>> add one more arch hook. Something like:
>>
>> bool kasan_tracks_vaddr(unsigned long addr)
>> {
>>      return REGION_ID(addr) == KERNEL_REGION_ID;
>> }
>>
>> And in check_memory_region():
>>        if (!(kasan_enabled() && kasan_tracks_vaddr(addr)))
>>                return;
>
>
> But that is introducing conditionals in core code for no real benefit.
> This will also break when we eventually end up tracking vmalloc.

Ok, that's a very good reason to not do this.

I see one potential problem in the way you use kasan_zero_page, though.
memset/memcpy of large portions of memory (> 8 * PAGE_SIZE) will end up
overflowing kasan_zero_page when we check the shadow in memory_is_poisoned_n().

> In that case our mem_to_shadow will essentially be a switch
> statement returning different offsets for the kernel region and the
> vmalloc region. As far as core kernel code is concerned, it just needs to
> ask the arch for the shadow address of a memory location, and instead of
> adding conditionals in core, my suggestion is that we handle this in an
> arch function.
>
> -aneesh
>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH V1 4/8] kasan: Don't use kasan shadow pointer in generic functions
  2015-08-18  5:29     ` Aneesh Kumar K.V
@ 2015-08-18  9:12       ` Andrey Ryabinin
  0 siblings, 0 replies; 27+ messages in thread
From: Andrey Ryabinin @ 2015-08-18  9:12 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: Benjamin Herrenschmidt, paulus, mpe, linuxppc-dev

2015-08-18 8:29 GMT+03:00 Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>:
> Andrey Ryabinin <ryabinin.a.a@gmail.com> writes:
>
>> On 08/17/2015 09:36 AM, Aneesh Kumar K.V wrote:
>>> We can't use generic functions like print_hex_dump to access the kasan
>>> shadow region. This requires us to set up another kasan shadow region
>>> for the address passed (the kasan shadow address). Most architectures
>>> won't be able to do that. Hence remove the kasan shadow region dump. If
>>> we really want to do this, we will have to have a kasan-internal
>>> implementation of print_hex_dump for which we will disable address
>>> sanitizer operation.
>>>
>>
>> I didn't understand that.
>> Yes, you don't have shadow for the shadow. But for shadow addresses you
>> return (void *)kasan_zero_page in kasan_mem_to_shadow(), so we
>> should be fine to access the shadow in generic code.
>>
>
> But in general IMHO it is not correct to pass shadow addresses to generic
> functions, because that requires the arch to set up shadow for the shadow.

Yes, we have this shadow for shadow in x86_64/arm64.

> With one of the initial implementations of ppc64 support, I had page
> table entries set up for the vmalloc and vmemmap shadow, and that is when I
> hit the issue. We cannot expect an arch to set up shadow regions like what is
> expected here. If we really need to print the shadow memory content, we
> could possibly make a copy of print_hex_dump in kasan_init.c. Let me
> know whether you think printing the shadow area content is needed.
>

It was quite useful sometimes, so I think we should keep it.
But I agree with you that it would be better to avoid accesses to shadow memory
in generic code.
Another way to deal with this would be to copy the shadow content into a buffer,
and then print_hex_dump() it.

> -aneesh
>


* Re: [RFC PATCH V1 0/8] KASAN ppc64 support
  2015-08-18  8:50         ` Andrey Ryabinin
@ 2015-08-18  9:21           ` Aneesh Kumar K.V
  2015-08-18  9:30             ` Andrey Ryabinin
  0 siblings, 1 reply; 27+ messages in thread
From: Aneesh Kumar K.V @ 2015-08-18  9:21 UTC (permalink / raw)
  To: Andrey Ryabinin; +Cc: Benjamin Herrenschmidt, paulus, mpe, linuxppc-dev

Andrey Ryabinin <ryabinin.a.a@gmail.com> writes:

> 2015-08-18 8:42 GMT+03:00 Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>:
>> Andrey Ryabinin <ryabinin.a.a@gmail.com> writes:
>>
>>
>> But that is introducing conditionals in core code for no real benefit.
>> This will also break when we eventually end up tracking vmalloc?
>
> Ok, that's a very good reason to not do this.
>
> I see one potential problem in the way you use kasan_zero_page, though.
> memset/memcpy of large portions of memory (> 8 * PAGE_SIZE) will end up
> overflowing kasan_zero_page when we check the shadow in memory_is_poisoned_n().
>

Any suggestion on how to fix that? I guess we definitely don't want to
check for addr and size in memset/memcpy. The other option is to do
zero-page mapping as is done for other architectures, i.e. we map
a zero page via page tables. But we still have the issue of the memory
needed to map the entire vmalloc range (page table memory). I was hoping to
avoid all those complexities.


-aneesh


* Re: [RFC PATCH V1 0/8] KASAN ppc64 support
  2015-08-18  9:21           ` Aneesh Kumar K.V
@ 2015-08-18  9:30             ` Andrey Ryabinin
  0 siblings, 0 replies; 27+ messages in thread
From: Andrey Ryabinin @ 2015-08-18  9:30 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: Benjamin Herrenschmidt, paulus, mpe, linuxppc-dev

2015-08-18 12:21 GMT+03:00 Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>:
> Andrey Ryabinin <ryabinin.a.a@gmail.com> writes:
>
>> 2015-08-18 8:42 GMT+03:00 Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>:
>>> Andrey Ryabinin <ryabinin.a.a@gmail.com> writes:
>>>
>>>
>>> But that is introducing conditionals in core code for no real benefit.
>>> This will also break when we eventually end up tracking vmalloc?
>>
>> Ok, that's a very good reason to not do this.
>>
>> I see one potential problem in the way you use kasan_zero_page, though.
>> memset/memcpy of large portions of memory (> 8 * PAGE_SIZE) will end up
>> overflowing kasan_zero_page when we check the shadow in memory_is_poisoned_n().
>>
>
> Any suggestion on how to fix that? I guess we definitely don't want to

Wait, I was wrong, we should be fine.
In memory_is_poisoned_n():

ret = memory_is_zero(kasan_mem_to_shadow((void *)addr),
            kasan_mem_to_shadow((void *)addr + size - 1) + 1);

So this will be: memory_is_zero(kasan_zero_page, (char *)kasan_zero_page + 1);
which means we will access only one byte of kasan_zero_page.


> check for addr and size in memset/memcpy. The other option is to
> do zero page mapping as is done for other architectures. That is we map
> via page table a zero page. But we still have the issue of memory we
> need to map the entire vmalloc range (page table memory). I was hoping to
> avoid all those complexities.
>
>
> -aneesh
>


end of thread, other threads:[~2015-08-18  9:30 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-08-17  6:36 [RFC PATCH V1 0/8] KASAN ppc64 support Aneesh Kumar K.V
2015-08-17  6:36 ` [RFC PATCH V1 1/8] powerpc/mm: Add virt_to_pfn and use this instead of opencoding Aneesh Kumar K.V
2015-08-17  6:36 ` [RFC PATCH V1 2/8] kasan: MODULE_VADDR is not available on all archs Aneesh Kumar K.V
2015-08-17  6:36 ` [RFC PATCH V1 3/8] kasan: Rename kasan_enabled to kasan_report_enabled Aneesh Kumar K.V
2015-08-17  6:36 ` [RFC PATCH V1 4/8] kasan: Don't use kasan shadow pointer in generic functions Aneesh Kumar K.V
2015-08-17 11:36   ` Andrey Ryabinin
2015-08-18  5:29     ` Aneesh Kumar K.V
2015-08-18  9:12       ` Andrey Ryabinin
2015-08-17  6:36 ` [RFC PATCH V1 5/8] kasan: Enable arch to hook into kasan callbacks Aneesh Kumar K.V
2015-08-17  6:36 ` [RFC PATCH V1 6/8] kasan: Allow arch to overrride kasan shadow offsets Aneesh Kumar K.V
2015-08-17  6:36 ` [RFC PATCH V1 7/8] powerpc/mm: kasan: Add kasan support for ppc64 Aneesh Kumar K.V
2015-08-17 12:13   ` Andrey Ryabinin
2015-08-17 12:17     ` Andrey Ryabinin
2015-08-18  5:36       ` Aneesh Kumar K.V
2015-08-18  8:40         ` Andrey Ryabinin
2015-08-18  5:34     ` Aneesh Kumar K.V
2015-08-17  6:36 ` [RFC PATCH V1 8/8] powerpc: Disable kasan for kernel/ and mm/ directory Aneesh Kumar K.V
2015-08-17  6:54 ` [RFC PATCH V1 0/8] KASAN ppc64 support Benjamin Herrenschmidt
2015-08-17  9:50   ` Aneesh Kumar K.V
2015-08-17 10:01     ` Benjamin Herrenschmidt
2015-08-17 10:50       ` Aneesh Kumar K.V
2015-08-17 11:21         ` Benjamin Herrenschmidt
2015-08-17 11:29     ` Andrey Ryabinin
2015-08-18  5:42       ` Aneesh Kumar K.V
2015-08-18  8:50         ` Andrey Ryabinin
2015-08-18  9:21           ` Aneesh Kumar K.V
2015-08-18  9:30             ` Andrey Ryabinin
