* [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector.
@ 2014-07-09 11:29 Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure Andrey Ryabinin
` (24 more replies)
0 siblings, 25 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:29 UTC (permalink / raw)
To: linux-arm-kernel
Hi all.
(Sorry, I screwed up with the CC list in previous mails, so I'm resending.)
This patch set introduces address sanitizer for linux kernel (kasan).
Address sanitizer is a dynamic memory error detector. It detects:
- Use-after-free bugs.
- Out-of-bounds reads/writes in kmalloc'ed memory.
The following are possible, but not yet implemented or not included in this patch series:
- Global buffer overflow
- Stack buffer overflow
- Use after return
This patch set contains kasan for the x86/x86_64/arm architectures, for the buddy and SLUB allocators.
Patches are based on next-20140704 and also available in git:
git://github.com/aryabinin/linux.git --branch=kasan/kasan_v1
The main idea was borrowed from https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel.
The original implementation (x86_64 only and only for SLAB) by Andrey Konovalov can be
found here: http://github.com/xairy/linux. Some of the code in these patches was taken from there.
To use this feature you need a pretty fresh GCC (revision r211699 from 2014-06-16 or
above).
To enable kasan, configure the kernel with:
CONFIG_KASAN = y
and
CONFIG_KASAN_SANITIZE_ALL = y
Currently KASAN works only with the SLUB allocator. It is highly recommended to run KASAN with
CONFIG_SLUB_DEBUG=y and 'slub_debug=U' in the boot cmdline to enable user tracking
(free and alloc stacktraces).
Basic concept of kasan:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.
Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
mapping with a scale and offset to translate a memory address to its corresponding
shadow address.
Here is the function that translates an address to its corresponding shadow address:
unsigned long kasan_mem_to_shadow(unsigned long addr)
{
return ((addr) >> KASAN_SHADOW_SCALE_SHIFT)
+ kasan_shadow_start - (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT);
}
where KASAN_SHADOW_SCALE_SHIFT = 3.
So for every 8 bytes of low memory there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte word is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.
These functions check whether the memory region is valid to access by checking the
corresponding shadow memory. If the access is not valid, an error is printed.
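For illustration, here is a condensed sketch of the check behind a 1-byte load
(it folds together check_memory_region() and the shadow test from mm/kasan/kasan.c
in this series; the real __asan_load1() also checks kasan_initialized/kasan_depth
and user-space addresses before reporting):

void __asan_load1(unsigned long addr)
{
	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);

	if (unlikely(shadow_value != 0)) {
		/* offset of the accessed byte inside its 8-byte granule */
		s8 offset = addr & KASAN_SHADOW_MASK;

		/* bad if the access reaches the poisoned part of the granule */
		if (offset >= shadow_value) {
			struct access_info info = {
				.access_addr = addr,
				.access_size = 1,
				.is_write = false,
				.ip = _RET_IP_,
			};
			kasan_report_error(&info);
		}
	}
}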
TODO:
- Optimizations: __asan_load*/__asan_store* are called for every memory access, so it's
important to make them as fast as possible.
This patch set introduces only a reference design of the memory checking algorithm. It's
slow but very simple, so anyone can easily understand the basic concept.
In future versions I'll try to bring optimized versions with some numbers.
- It seems like the guard page introduced in c0a32f (mm: more intensive memory corruption debugging)
could easily be reused for kasan as well.
- get rid of the kasan_disable_local()/kasan_enable_local() functions. kasan_enable/kasan_disable are
used in some rare cases when we need to validly access poisoned areas. These functions might be a
stumbling block for inline instrumentation (see below).
TODO probably not for these series:
- Quarantine for slub. For stronger use-after-free detection we need to delay reuse of freed
slabs. So we need something similar to guard pages in the buddy allocator. Such a quarantine might
be useful even without kasan.
- Inline instrumentation. Inline instrumentation means that the fast path of the __asan_load*/__asan_store*
calls will be implemented in the compiler: instead of inserting function calls, the compiler will
insert this fast path directly (a sketch follows this list). To be able to do this we need (at least):
a) get rid of kasan_disable()/kasan_enable() (see above)
b) get rid of the kasan_initialized flag. The main reason we have this flag now is that we don't
have any shadow during the early stages of boot.
Konstantin Khlebnikov suggested a way to solve this issue:
We could reserve virtual address space for the shadow and map pages at a very early stage of the
boot process (for x86_64 I think it should be done somewhere in x86_64_start_kernel).
So we would have shadow all the time and the kasan_initialized flag would no longer be required.
- Stack instrumentation (currently not supported in mainline GCC, though it is possible)
- Global variables instrumentation
- Use after return
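For reference, here is a sketch of roughly what the inline fast path for a 1-byte
load could look like (this follows the userspace ASan scheme, adapted to the names
used in this series; the compiler does not emit this for the kernel yet, and the
__asan_report_load1() callback name is taken from the userspace runtime):

	s8 *shadow = (s8 *)((addr >> KASAN_SHADOW_SCALE_SHIFT) + kasan_shadow_offset);

	if (unlikely(*shadow && ((s8)(addr & KASAN_SHADOW_MASK) >= *shadow)))
		__asan_report_load1(addr);	/* out-of-line reporting slow path */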
[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
List of already fixed bugs found by address sanitizer:
aab515d (fib_trie: remove potential out of bound access)
984f173 ([SCSI] sd: Fix potential out-of-bounds access)
5e9ae2e (aio: fix use-after-free in aio_migratepage)
2811eba (ipv6: udp packets following an UFO enqueued packet need also be handled by UFO)
057db84 (tracing: Fix potential out-of-bounds in trace_get_user())
9709674 (ipv4: fix a race in ip4_datagram_release_cb())
4e8d213 (ext4: fix use-after-free in ext4_mb_new_blocks)
624483f (mm: rmap: fix use-after-free in __put_anon_vma)
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Alexey Preobrazhensky <preobr@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <linux-kbuild@vger.kernel.org>
Cc: <linux-arm-kernel@lists.infradead.org>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>
Andrey Ryabinin (21):
Add kernel address sanitizer infrastructure.
init: main: initialize kasan's shadow area on boot
x86: add kasan hooks for memcpy/memmove/memset functions
x86: boot: vdso: disable instrumentation for code not linked with
kernel
x86: cpu: don't sanitize early stages of a secondary CPU boot
x86: mm: init: allocate shadow memory for kasan
x86: Kconfig: enable kernel address sanitizer
mm: page_alloc: add kasan hooks on alloc and free paths
mm: Makefile: kasan: don't instrument slub.c and slab_common.c files
mm: slab: share virt_to_cache() between slab and slub
mm: slub: share slab_err and object_err functions
mm: util: move krealloc/kzfree to slab_common.c
mm: slub: add allocation size field to struct kmem_cache
mm: slub: kasan: disable kasan when touching unaccessible memory
mm: slub: add kernel address sanitizer hooks to slub allocator
arm: boot: compressed: disable kasan's instrumentation
arm: add kasan hooks for memcpy/memmove/memset functions
arm: mm: reserve shadow memory for kasan
arm: Kconfig: enable kernel address sanitizer
fs: dcache: manually unpoison dname after allocation to shut up
kasan's reports
lib: add kmalloc_bug_test module
Documentation/kasan.txt | 224 ++++++++++++++++++++
Makefile | 8 +-
arch/arm/Kconfig | 1 +
arch/arm/boot/compressed/Makefile | 2 +
arch/arm/include/asm/string.h | 30 +++
arch/arm/mm/init.c | 3 +
arch/x86/Kconfig | 1 +
arch/x86/boot/Makefile | 2 +
arch/x86/boot/compressed/Makefile | 2 +
arch/x86/include/asm/string_32.h | 28 +++
arch/x86/include/asm/string_64.h | 24 +++
arch/x86/kernel/cpu/Makefile | 3 +
arch/x86/lib/Makefile | 2 +
arch/x86/mm/init.c | 3 +
arch/x86/realmode/Makefile | 2 +-
arch/x86/realmode/rm/Makefile | 1 +
arch/x86/vdso/Makefile | 1 +
commit | 3 +
fs/dcache.c | 3 +
include/linux/kasan.h | 61 ++++++
include/linux/sched.h | 4 +
include/linux/slab.h | 19 +-
include/linux/slub_def.h | 5 +
init/main.c | 3 +-
lib/Kconfig.debug | 10 +
lib/Kconfig.kasan | 22 ++
lib/Makefile | 1 +
lib/test_kmalloc_bugs.c | 254 +++++++++++++++++++++++
mm/Makefile | 5 +
mm/kasan/Makefile | 3 +
mm/kasan/kasan.c | 420 ++++++++++++++++++++++++++++++++++++++
mm/kasan/kasan.h | 42 ++++
mm/kasan/report.c | 187 +++++++++++++++++
mm/page_alloc.c | 4 +
mm/slab.c | 6 -
mm/slab.h | 25 ++-
mm/slab_common.c | 96 +++++++++
mm/slub.c | 50 ++++-
mm/util.c | 91 ---------
scripts/Makefile.lib | 10 +
40 files changed, 1550 insertions(+), 111 deletions(-)
create mode 100644 Documentation/kasan.txt
create mode 100644 commit
create mode 100644 include/linux/kasan.h
create mode 100644 lib/Kconfig.kasan
create mode 100644 lib/test_kmalloc_bugs.c
create mode 100644 mm/kasan/Makefile
create mode 100644 mm/kasan/kasan.c
create mode 100644 mm/kasan/kasan.h
create mode 100644 mm/kasan/report.c
--
1.8.5.5
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
@ 2014-07-09 11:29 ` Andrey Ryabinin
2014-07-09 14:26 ` Christoph Lameter
` (5 more replies)
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 02/21] init: main: initialize kasan's shadow area on boot Andrey Ryabinin
` (23 subsequent siblings)
24 siblings, 6 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:29 UTC (permalink / raw)
To: linux-arm-kernel
Address sanitizer for kernel (kasan) is a dynamic memory error detector.
The main features of kasan are:
- it is based on compiler instrumentation (fast),
- it detects out-of-bounds accesses for both writes and reads,
- it provides use-after-free detection.
This patch only adds the infrastructure for the kernel address sanitizer. It's not
available for use yet. The idea and some code were borrowed from [1].
This feature requires a pretty fresh GCC (revision r211699 from 2014-06-16 or
later).
Implementation details:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.
Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
mapping with a scale and offset to translate a memory address to its corresponding
shadow address.
Here is the function that translates an address to its corresponding shadow address:
unsigned long kasan_mem_to_shadow(unsigned long addr)
{
return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
+ kasan_shadow_start;
}
where KASAN_SHADOW_SCALE_SHIFT = 3.
So for every 8 bytes of low memory there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte word is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.
These functions check whether the memory region is valid to access by checking the
corresponding shadow memory. If the access is not valid, an error is printed.
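As a worked example (assuming the usual 32-bit x86 PAGE_OFFSET of 0xC0000000, which
matches the sample report in Documentation/kasan.txt below): for addr = 0xC6006F1B
the shadow byte lives at ((0xC6006F1B - 0xC0000000) >> 3) + kasan_shadow_start =
kasan_shadow_start + 0xC00DE3, and the offset of the accessed byte inside its
8-byte granule is 0xC6006F1B & 7 = 3.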
[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
Documentation/kasan.txt | 224 +++++++++++++++++++++++++++++++++++++
Makefile | 8 +-
commit | 3 +
include/linux/kasan.h | 33 ++++++
include/linux/sched.h | 4 +
lib/Kconfig.debug | 2 +
lib/Kconfig.kasan | 20 ++++
mm/Makefile | 1 +
mm/kasan/Makefile | 3 +
mm/kasan/kasan.c | 292 ++++++++++++++++++++++++++++++++++++++++++++++++
mm/kasan/kasan.h | 36 ++++++
mm/kasan/report.c | 157 ++++++++++++++++++++++++++
scripts/Makefile.lib | 10 ++
13 files changed, 792 insertions(+), 1 deletion(-)
create mode 100644 Documentation/kasan.txt
create mode 100644 commit
create mode 100644 include/linux/kasan.h
create mode 100644 lib/Kconfig.kasan
create mode 100644 mm/kasan/Makefile
create mode 100644 mm/kasan/kasan.c
create mode 100644 mm/kasan/kasan.h
create mode 100644 mm/kasan/report.c
diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..141391ba
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,224 @@
+Kernel address sanitizer
+========================
+
+0. Overview
+===========
+
+Address sanitizer for kernel (KASAN) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
+
+KASAN is better than CONFIG_DEBUG_PAGEALLOC because it:
+ - is based on compiler instrumentation (fast),
+ - detects OOB for both writes and reads,
+ - provides UAF detection,
+ - prints informative reports.
+
+KASAN uses compiler instrumentation to check every memory access; therefore you
+will need a special compiler: GCC >= 4.10.0.
+
+Currently KASAN is supported on the x86/x86_64/arm architectures and requires the kernel
+to be built with the SLUB allocator.
+
+1. Usage
+=========
+
+KASAN requires the kernel to be built with a special compiler (GCC >= 4.10.0).
+
+To enable KASAN, configure the kernel with:
+
+ CONFIG_KASAN = y
+
+and, to instrument the entire kernel:
+
+ CONFIG_KASAN_SANITIZE_ALL = y
+
+Currently KASAN works only with SLUB. It is highly recommended to run KASAN with
+CONFIG_SLUB_DEBUG=y and 'slub_debug=U'. This enables user tracking (free and alloc traces).
+There is no need to enable redzoning, since KASAN detects accesses to the user tracking structs,
+so they effectively act as redzones.
+
+To enable instrumentation for only specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+ For a single file (e.g. main.o):
+ KASAN_SANITIZE_main.o := y
+
+ For all files in one directory:
+ KASAN_SANITIZE := y
+
+To exclude files from being instrumented even when CONFIG_KASAN_SANITIZE_ALL
+is specified, use:
+
+ KASAN_SANITIZE_main.o := n
+ and:
+ KASAN_SANITIZE := n
+
+Only files which are linked to the main kernel image or are compiled as
+kernel modules are supported by this mechanism.
+
+
+1.1 Error reports
+==========
+
+A typical buffer overflow report looks like this:
+
+==================================================================
+AddressSanitizer: buffer overflow in kasan_kmalloc_oob_rigth+0x6a/0x7a at addr c6006f1b
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kasan_kmalloc_oob_rigth+0x2c/0x7a age=5 cpu=0 pid=1
+ __slab_alloc.constprop.72+0x64f/0x680
+ kmem_cache_alloc+0xa8/0xe0
+ kasan_kmalloc_oob_rigth+0x2c/0x7a
+ kasan_tests_init+0x8/0xc
+ do_one_initcall+0x85/0x1a0
+ kernel_init_freeable+0x1f1/0x279
+ kernel_init+0x8/0xd0
+ ret_from_kernel_thread+0x21/0x30
+INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
+INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
+
+Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00 .m..............
+Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+CPU: 0 PID: 1 Comm: swapper/0 Tainted: G B 3.16.0-rc3-next-20140704+ #216
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
+ 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
+ c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
+ c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
+Call Trace:
+ [<c1c4446f>] dump_stack+0x4b/0x75
+ [<c11c3f32>] print_trailer+0xf2/0x180
+ [<c11c4ff5>] object_err+0x25/0x30
+ [<c11ccb78>] kasan_report_error+0xf8/0x380
+ [<c1c57940>] ? need_resched+0x21/0x25
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c1f82763>] ? kasan_kmalloc_oob_rigth+0x7a/0x7a
+ [<c11cbacc>] __asan_store1+0x9c/0xa0
+ [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f8276b>] kasan_tests_init+0x8/0xc
+ [<c1000435>] do_one_initcall+0x85/0x1a0
+ [<c1f6f508>] ? repair_env_string+0x23/0x66
+ [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
+ [<c10c9883>] ? parse_args+0x33/0x450
+ [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
+ [<c1000558>] kernel_init+0x8/0xd0
+ [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
+ [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
+Write of size 1 by thread T1:
+Memory state around the buggy address:
+ c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
+>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
+ ^
+ c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
+ c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
+ c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
+ c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
+ c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
+==================================================================
+
+In the last section the report shows the memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,
+partially addressable, freed or they can be part of a redzone.
+If bytes are marked as addressable that means that they belong to some
+allocated memory block and it is possible to read or modify any of these
+bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
+When only the first N bytes of KASAN_SHADOW_SCALE_SIZE belong to an allocated
+memory block, these bytes are partially addressable and marked by 'N'.
+
+Markers of inaccessible bytes can be found in the mm/kasan/kasan.h header:
+
+#define KASAN_FREE_PAGE 0xFF /* page was freed */
+#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_REDZONE 0xFD /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE 0xFA /* free slab page */
+#define KASAN_SHADOW_GAP 0xF9 /* address belongs to shadow memory */
+
+In the report above the arrow points to the shadow byte 03, which means that the
+accessed address is partially addressable.
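+
+For example (a sketch, using the encoding above): kmalloc(123) makes 123 bytes
+addressable; since 123 = 15 * 8 + 3, the first 15 shadow bytes of the object
+become 00 and the 16th becomes 03. An access at offset 123 or beyond then lands
+in a granule whose in-granule offset (3..7) is not below the shadow value 03 and
+gets reported; this is the same kind of 03 marker shown in the report above.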
+
+
+2. Implementation details
+========================
+
+2.1. Shadow memory
+==================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use instrumentation to check the shadow memory on each memory
+access.
+
+AddressSanitizer dedicates one-eighth of the low memory to its shadow
+memory and uses direct mapping with a scale and offset to translate a memory
+address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+ return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_START;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+The figure below shows the address space layout. The memory is split
+into two parts (low and high) which map to the corresponding shadow regions.
+Applying the shadow mapping to addresses in the shadow region gives us
+addresses in the Bad region.
+
+|--------| |--------|
+| Memory |---- | Memory |
+|--------| \ |--------|
+| Shadow |-- -->| Shadow |
+|--------| \ |--------|
+| Bad | ---->| Bad |
+|--------| / |--------|
+| Shadow |-- -->| Shadow |
+|--------| / |--------|
+| Memory |---- | Memory |
+|--------| |--------|
+
+Each shadow byte corresponds to 8 bytes of the main memory. We use the
+following encoding for each shadow byte: 0 means that all 8 bytes of the
+corresponding memory region are addressable; k (1 <= k <= 7) means that
+the first k bytes are addressable, and other (8 - k) bytes are not;
+any negative value indicates that the entire 8-byte word is unaddressable.
+We use different negative values to distinguish between different kinds of
+unaddressable memory (redzones, freed memory) (see mm/kasan/kasan.h).
+
+Poisoning or unpoisoning a byte in the main memory means writing some special
+value into the corresponding shadow memory. This value indicates whether the
+byte is addressable or not.
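+
+As a sketch (mirroring unpoison_shadow() in mm/kasan/kasan.c, assuming an 8-byte
+aligned address): marking n bytes starting at addr as addressable boils down to:
+
+	memset((void *)kasan_mem_to_shadow(addr), 0, n >> KASAN_SHADOW_SCALE_SHIFT);
+	if (n & KASAN_SHADOW_MASK)
+		*(u8 *)kasan_mem_to_shadow(addr + n) = n & KASAN_SHADOW_MASK;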
+
+
+2.2. Instrumentation
+====================
+
+Since some functions which access memory (such as memset, memmove and memcpy)
+are written in assembly, the compiler can't instrument them.
+Therefore we replace these functions with our own instrumented functions
+(kasan_memset, kasan_memcpy, kasan_memmove).
+In some circumstances you may need to use the original functions;
+in such cases insert #undef KASAN_HOOKS before the includes.
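+
+For example (a sketch): a file that must keep using the raw, uninstrumented
+functions can start with:
+
+	#undef KASAN_HOOKS	/* use the original memcpy/memset/memmove */
+	#include <linux/string.h>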
+
diff --git a/Makefile b/Makefile
index 64ab7b3..08a07f2 100644
--- a/Makefile
+++ b/Makefile
@@ -384,6 +384,12 @@ LDFLAGS_MODULE =
CFLAGS_KERNEL =
AFLAGS_KERNEL =
CFLAGS_GCOV = -fprofile-arcs -ftest-coverage
+CFLAGS_KASAN = -fsanitize=address --param asan-stack=0 \
+ --param asan-use-after-return=0 \
+ --param asan-globals=0 \
+ --param asan-memintrin=0 \
+ --param asan-instrumentation-with-call-threshold=0 \
+ -DKASAN_HOOKS
# Use USERINCLUDE when you must reference the UAPI directories only.
@@ -428,7 +434,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL UTS_MACHINE
export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
diff --git a/commit b/commit
new file mode 100644
index 0000000..134f4dd
--- /dev/null
+++ b/commit
@@ -0,0 +1,3 @@
+
+I'm working on address sanitizer for kernel.
+fuck this bloody.
\ No newline at end of file
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..7efc3eb
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,33 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+
+void unpoison_shadow(const void *address, size_t size);
+
+void kasan_enable_local(void);
+void kasan_disable_local(void);
+
+/* Reserves shadow memory. */
+void kasan_alloc_shadow(void);
+void kasan_init_shadow(void);
+
+#else /* CONFIG_KASAN */
+
+static inline void unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+/* Reserves shadow memory. */
+static inline void kasan_init_shadow(void) {}
+static inline void kasan_alloc_shadow(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 322d4fc..286650a 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1471,6 +1471,10 @@ struct task_struct {
gfp_t lockdep_reclaim_gfp;
#endif
+#ifdef CONFIG_KASAN
+ int kasan_depth;
+#endif
+
/* journalling filesystem info */
void *journal_info;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index cf9cf82..67a4dfc 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -611,6 +611,8 @@ config DEBUG_STACKOVERFLOW
source "lib/Kconfig.kmemcheck"
+source "lib/Kconfig.kasan"
+
endmenu # "Memory Debugging"
config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..2bfff78
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,20 @@
+config HAVE_ARCH_KASAN
+ bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+ bool "AddressSanitizer: dynamic memory error detector"
+ default n
+ help
+ Enables AddressSanitizer - dynamic memory error detector,
+ that finds out-of-bounds and use-after-free bugs.
+
+config KASAN_SANITIZE_ALL
+ bool "Instrument entire kernel"
+ depends on KASAN
+ default y
+ help
+ This enables compiler instrumentation for the entire kernel.
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index e4a97bd..dbe9a22 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -64,3 +64,4 @@ obj-$(CONFIG_ZPOOL) += zpool.o
obj-$(CONFIG_ZSMALLOC) += zsmalloc.o
obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
obj-$(CONFIG_CMA) += cma.o
+obj-$(CONFIG_KASAN) += kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..46d44bb
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,3 @@
+KASAN_SANITIZE := n
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..e2cd345
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,292 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+#include <linux/memcontrol.h>
+
+#include "kasan.h"
+#include "../slab.h"
+
+static bool __read_mostly kasan_initialized;
+
+unsigned long kasan_shadow_start;
+unsigned long kasan_shadow_end;
+
+/* equals to (kasan_shadow_start - PAGE_OFFSET/KASAN_SHADOW_SCALE_SIZE) */
+unsigned long __read_mostly kasan_shadow_offset; /* it's not a very good name for this variable */
+
+
+static inline bool addr_is_in_mem(unsigned long addr)
+{
+ return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
+}
+
+void kasan_enable_local(void)
+{
+ if (likely(kasan_initialized))
+ current->kasan_depth--;
+}
+
+void kasan_disable_local(void)
+{
+ if (likely(kasan_initialized))
+ current->kasan_depth++;
+}
+
+static inline bool kasan_enabled(void)
+{
+ return likely(kasan_initialized
+ && !current->kasan_depth);
+}
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void poison_shadow(const void *address, size_t size, u8 value)
+{
+ unsigned long shadow_start, shadow_end;
+ unsigned long addr = (unsigned long)address;
+
+ shadow_start = kasan_mem_to_shadow(addr);
+ shadow_end = kasan_mem_to_shadow(addr + size);
+
+ memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void unpoison_shadow(const void *address, size_t size)
+{
+ poison_shadow(address, size, 0);
+
+ if (size & KASAN_SHADOW_MASK) {
+ u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+ + size);
+ *shadow = size & KASAN_SHADOW_MASK;
+ }
+}
+
+static __always_inline bool address_is_poisoned(unsigned long addr)
+{
+ s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+ if (shadow_value != 0) {
+ s8 last_byte = addr & KASAN_SHADOW_MASK;
+ return last_byte >= shadow_value;
+ }
+ return false;
+}
+
+static __always_inline unsigned long memory_is_poisoned(unsigned long addr,
+ size_t size)
+{
+ unsigned long end = addr + size;
+ for (; addr < end; addr++)
+ if (unlikely(address_is_poisoned(addr)))
+ return addr;
+ return 0;
+}
+
+static __always_inline void check_memory_region(unsigned long addr,
+ size_t size, bool write)
+{
+ unsigned long access_addr;
+ struct access_info info;
+
+ if (!kasan_enabled())
+ return;
+
+ if (unlikely(addr < TASK_SIZE)) {
+ info.access_addr = addr;
+ info.access_size = size;
+ info.is_write = write;
+ info.ip = _RET_IP_;
+ kasan_report_user_access(&info);
+ return;
+ }
+
+ if (!addr_is_in_mem(addr))
+ return;
+
+ access_addr = memory_is_poisoned(addr, size);
+ if (likely(access_addr == 0))
+ return;
+
+ info.access_addr = access_addr;
+ info.access_size = size;
+ info.is_write = write;
+ info.ip = _RET_IP_;
+ kasan_report_error(&info);
+}
+
+void __init kasan_alloc_shadow(void)
+{
+ unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
+ unsigned long shadow_size;
+ phys_addr_t shadow_phys_start;
+
+ shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;
+
+ shadow_phys_start = memblock_alloc(shadow_size, PAGE_SIZE);
+ if (!shadow_phys_start) {
+ pr_err("Unable to reserve shadow memory\n");
+ return;
+ }
+
+ kasan_shadow_start = (unsigned long)phys_to_virt(shadow_phys_start);
+ kasan_shadow_end = kasan_shadow_start + shadow_size;
+
+ pr_info("reserved shadow memory: [0x%lx - 0x%lx]\n",
+ kasan_shadow_start, kasan_shadow_end);
+ kasan_shadow_offset = kasan_shadow_start -
+ (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT);
+}
+
+void __init kasan_init_shadow(void)
+{
+ if (kasan_shadow_start) {
+ unpoison_shadow((void *)PAGE_OFFSET,
+ (size_t)(kasan_shadow_start - PAGE_OFFSET));
+ poison_shadow((void *)kasan_shadow_start,
+ kasan_shadow_end - kasan_shadow_start,
+ KASAN_SHADOW_GAP);
+ unpoison_shadow((void *)kasan_shadow_end,
+ (size_t)(high_memory - kasan_shadow_end));
+ kasan_initialized = true;
+ pr_info("shadow memory initialized\n");
+ }
+}
+
+void *kasan_memcpy(void *dst, const void *src, size_t len)
+{
+ if (unlikely(len == 0))
+ return dst;
+
+ check_memory_region((unsigned long)src, len, false);
+ check_memory_region((unsigned long)dst, len, true);
+
+ return memcpy(dst, src, len);
+}
+EXPORT_SYMBOL(kasan_memcpy);
+
+void *kasan_memset(void *ptr, int val, size_t len)
+{
+ if (unlikely(len == 0))
+ return ptr;
+
+ check_memory_region((unsigned long)ptr, len, true);
+
+ return memset(ptr, val, len);
+}
+EXPORT_SYMBOL(kasan_memset);
+
+void *kasan_memmove(void *dst, const void *src, size_t len)
+{
+ if (unlikely(len == 0))
+ return dst;
+
+ check_memory_region((unsigned long)src, len, false);
+ check_memory_region((unsigned long)dst, len, true);
+
+ return memmove(dst, src, len);
+}
+EXPORT_SYMBOL(kasan_memmove);
+
+void __asan_load1(unsigned long addr)
+{
+ check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+ check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+ check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+ check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+ check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+ check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+ check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+ check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+ check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+ check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+ check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+ check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_init_v3(void) {}
+EXPORT_SYMBOL(__asan_init_v3);
+
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..711ae4f
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,36 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP 0xF9 /* address belongs to shadow memory */
+
+struct access_info {
+ unsigned long access_addr;
+ size_t access_size;
+ bool is_write;
+ unsigned long ip;
+};
+
+extern unsigned long kasan_shadow_start;
+extern unsigned long kasan_shadow_end;
+extern unsigned long kasan_shadow_offset;
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+ return (addr >> KASAN_SHADOW_SCALE_SHIFT)
+ + kasan_shadow_offset;
+}
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+ return ((shadow_addr - kasan_shadow_start)
+ << KASAN_SHADOW_SCALE_SHIFT) + PAGE_OFFSET;
+}
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..2430e05
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,157 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ * Andrey Konovalov <andreyknvl@google.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/kasan.h>
+#include <linux/memcontrol.h> /* for ../slab.h */
+
+#include "kasan.h"
+#include "../slab.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_start, void *x)
+{
+ return x - ((x - slab_start) % s->size);
+}
+
+static void print_error_description(struct access_info *info)
+{
+ const char *bug_type = "unknown crash";
+ u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
+
+ switch (shadow_val) {
+ case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+ bug_type = "buffer overflow";
+ break;
+ case KASAN_SHADOW_GAP:
+ bug_type = "wild memory access";
+ break;
+ }
+
+ pr_err("AddressSanitizer: %s in %pS at addr %p\n",
+ bug_type, (void *)info->ip,
+ (void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+ void *object;
+ struct kmem_cache *cache;
+ void *slab_start;
+ struct page *page;
+ u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
+
+ page = virt_to_page(info->access_addr);
+
+ switch (shadow_val) {
+ case KASAN_SHADOW_GAP:
+ pr_err("No metainfo is available for this access.\n");
+ dump_stack();
+ break;
+ default:
+ WARN_ON(1);
+ }
+
+ pr_err("%s of size %zu by thread T%d:\n",
+ info->is_write ? "Write" : "Read",
+ info->access_size, current->pid);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+ return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static void print_shadow_pointer(unsigned long row, unsigned long shadow,
+ char *output)
+{
+ /* The length of ">ff00ff00ff00ff00: " is 3 + (BITS_PER_LONG/8)*2 chars. */
+ unsigned long space_count = 3 + (BITS_PER_LONG >> 2) + (shadow - row)*2 +
+ (shadow - row) / SHADOW_BYTES_PER_BLOCK;
+ unsigned long i;
+
+ for (i = 0; i < space_count; i++)
+ output[i] = ' ';
+ output[space_count] = '^';
+ output[space_count + 1] = '\0';
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+ int i;
+ unsigned long shadow = kasan_mem_to_shadow(addr);
+ unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+ - SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+ pr_err("Memory state around the buggy address:\n");
+
+ for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+ unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+ char buffer[100];
+
+ snprintf(buffer, sizeof(buffer),
+ (i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+ print_hex_dump(KERN_ERR, buffer,
+ DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+ (void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+
+ if (row_is_guilty(aligned_shadow, shadow)) {
+ print_shadow_pointer(aligned_shadow, shadow, buffer);
+ pr_err("%s\n", buffer);
+ }
+ aligned_shadow += SHADOW_BYTES_PER_ROW;
+ }
+}
+
+void kasan_report_error(struct access_info *info)
+{
+ kasan_disable_local();
+ pr_err("================================="
+ "=================================\n");
+ print_error_description(info);
+ print_address_description(info);
+ print_shadow_for_address(info->access_addr);
+ pr_err("================================="
+ "=================================\n");
+ kasan_enable_local();
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+ kasan_disable_local();
+ pr_err("================================="
+ "=================================\n");
+ pr_err("AddressSanitizer: user-memory-access on address %lx\n",
+ info->access_addr);
+ pr_err("%s of size %zu by thread T%d:\n",
+ info->is_write ? "Write" : "Read",
+ info->access_size, current->pid);
+ dump_stack();
+ pr_err("================================="
+ "=================================\n");
+ kasan_enable_local();
+}
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 260bf8a..2bec69e 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
$(CFLAGS_GCOV))
endif
+#
+# Enable address sanitizer flags for kernel except some files or directories
+# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+ $(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN_SANITIZE_ALL)), \
+ $(CFLAGS_KASAN))
+endif
+
# If building the kernel in a separate objtree expand all occurrences
# of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 02/21] init: main: initialize kasan's shadow area on boot
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure Andrey Ryabinin
@ 2014-07-09 11:29 ` Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 03/21] x86: add kasan hooks for memcpy/memmove/memset functions Andrey Ryabinin
` (22 subsequent siblings)
24 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:29 UTC (permalink / raw)
To: linux-arm-kernel
This patch initializes the shadow area after it has been allocated by arch code.
All low memory is marked as accessible, except the shadow area itself.
Later free_all_bootmem() will release the pages to the buddy allocator
and these pages will be marked as inaccessible until somebody
allocates them.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
init/main.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/init/main.c b/init/main.c
index bb1aed9..d06a636 100644
--- a/init/main.c
+++ b/init/main.c
@@ -78,6 +78,7 @@
#include <linux/context_tracking.h>
#include <linux/random.h>
#include <linux/list.h>
+#include <linux/kasan.h>
#include <asm/io.h>
#include <asm/bugs.h>
@@ -549,7 +550,7 @@ asmlinkage __visible void __init start_kernel(void)
set_init_arg);
jump_label_init();
-
+ kasan_init_shadow();
/*
* These use large bootmem allocations and must precede
* kmem_cache_init()
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 03/21] x86: add kasan hooks for memcpy/memmove/memset functions
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 02/21] init: main: initialize kasan's shadow area on boot Andrey Ryabinin
@ 2014-07-09 11:29 ` Andrey Ryabinin
2014-07-09 19:31 ` Andi Kleen
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 04/21] x86: boot: vdso: disable instrumentation for code not linked with kernel Andrey Ryabinin
` (21 subsequent siblings)
24 siblings, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:29 UTC (permalink / raw)
To: linux-arm-kernel
Since the functions memset, memmove and memcpy are written in assembly,
the compiler can't instrument memory accesses inside them.
This patch replaces these functions with our own instrumented
functions (kasan_mem*) for CONFIG_KASAN = y.
In rare circumstances you may need to use the original functions;
in such cases put #undef KASAN_HOOKS before the includes.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/x86/include/asm/string_32.h | 28 ++++++++++++++++++++++++++++
arch/x86/include/asm/string_64.h | 24 ++++++++++++++++++++++++
arch/x86/lib/Makefile | 2 ++
3 files changed, 54 insertions(+)
diff --git a/arch/x86/include/asm/string_32.h b/arch/x86/include/asm/string_32.h
index 3d3e835..a86615a 100644
--- a/arch/x86/include/asm/string_32.h
+++ b/arch/x86/include/asm/string_32.h
@@ -321,6 +321,32 @@ void *__constant_c_and_count_memset(void *s, unsigned long pattern,
: __memset_generic((s), (c), (count)))
#define __HAVE_ARCH_MEMSET
+
+#if defined(CONFIG_KASAN) && defined(KASAN_HOOKS)
+
+/*
+ * Since some of the following functions (memset, memmove, memcpy)
+ * are written in assembly, compiler can't instrument memory accesses
+ * inside them.
+ *
+ * To solve this issue we replace these functions with our own instrumented
+ * functions (kasan_mem*)
+ *
+ * In rare circumstances you may need to use the original functions,
+ * in such case put #undef KASAN_HOOKS before includes.
+ */
+
+#undef memcpy
+void *kasan_memset(void *ptr, int val, size_t len);
+void *kasan_memcpy(void *dst, const void *src, size_t len);
+void *kasan_memmove(void *dst, const void *src, size_t len);
+
+#define memcpy(dst, src, len) kasan_memcpy((dst), (src), (len))
+#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
+#define memmove(dst, src, len) kasan_memmove((dst), (src), (len))
+
+#else /* CONFIG_KASAN && KASAN_HOOKS */
+
#if (__GNUC__ >= 4)
#define memset(s, c, count) __builtin_memset(s, c, count)
#else
@@ -331,6 +357,8 @@ void *__constant_c_and_count_memset(void *s, unsigned long pattern,
: __memset((s), (c), (count)))
#endif
+#endif /* CONFIG_KASAN && KASAN_HOOKS */
+
/*
* find the first occurrence of byte 'c', or 1 past the area if none
*/
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 19e2c46..2af2dbe 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -63,6 +63,30 @@ char *strcpy(char *dest, const char *src);
char *strcat(char *dest, const char *src);
int strcmp(const char *cs, const char *ct);
+#if defined(CONFIG_KASAN) && defined(KASAN_HOOKS)
+
+/*
+ * Since some of the following functions (memset, memmove, memcpy)
+ * are written in assembly, compiler can't instrument memory accesses
+ * inside them.
+ *
+ * To solve this issue we replace these functions with our own instrumented
+ * functions (kasan_mem*)
+ *
+ * In rare circumstances you may need to use the original functions,
+ * in such case put #undef KASAN_HOOKS before includes.
+ */
+
+void *kasan_memset(void *ptr, int val, size_t len);
+void *kasan_memcpy(void *dst, const void *src, size_t len);
+void *kasan_memmove(void *dst, const void *src, size_t len);
+
+#define memcpy(dst, src, len) kasan_memcpy((dst), (src), (len))
+#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
+#define memmove(dst, src, len) kasan_memmove((dst), (src), (len))
+
+#endif /* CONFIG_KASAN && KASAN_HOOKS */
+
#endif /* __KERNEL__ */
#endif /* _ASM_X86_STRING_64_H */
diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
index 4d4f96a..d82bc35 100644
--- a/arch/x86/lib/Makefile
+++ b/arch/x86/lib/Makefile
@@ -2,6 +2,8 @@
# Makefile for x86 specific library files.
#
+KASAN_SANITIZE_memcpy_32.o := n
+
inat_tables_script = $(srctree)/arch/x86/tools/gen-insn-attr-x86.awk
inat_tables_maps = $(srctree)/arch/x86/lib/x86-opcode-map.txt
quiet_cmd_inat_tables = GEN $@
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 04/21] x86: boot: vdso: disable instrumentation for code not linked with kernel
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (2 preceding siblings ...)
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 03/21] x86: add kasan hooks for memcpy/memmove/memset functions Andrey Ryabinin
@ 2014-07-09 11:29 ` Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 05/21] x86: cpu: don't sanitize early stages of a secondary CPU boot Andrey Ryabinin
` (20 subsequent siblings)
24 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:29 UTC (permalink / raw)
To: linux-arm-kernel
To avoid build errors, the compiler's instrumentation must be disabled
for code not linked with the kernel image.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/x86/boot/Makefile | 2 ++
arch/x86/boot/compressed/Makefile | 2 ++
arch/x86/realmode/Makefile | 2 +-
arch/x86/realmode/rm/Makefile | 1 +
arch/x86/vdso/Makefile | 1 +
5 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index dbe8dd2..9204cc0 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
# Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
# The number is the same as you would ordinarily press at bootup.
+KASAN_SANITIZE := n
+
SVGA_MODE := -DSVGA_MODE=NORMAL_VGA
targets := vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 0fcd913..64a92b3 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -4,6 +4,8 @@
# create a compressed vmlinux image from the original vmlinux
#
+KASAN_SANITIZE := n
+
targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
# for more details.
#
#
-
+KASAN_SANITIZE := n
subdir- := rm
obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
# for more details.
#
#
+KASAN_SANITIZE := n
always := realmode.bin realmode.relocs
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 61b04fe..90daad6 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
#
KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
VDSO64-$(CONFIG_X86_64) := y
VDSOX32-$(CONFIG_X86_X32_ABI) := y
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 05/21] x86: cpu: don't sanitize early stages of a secondary CPU boot
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (3 preceding siblings ...)
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 04/21] x86: boot: vdso: disable instrumentation for code not linked with kernel Andrey Ryabinin
@ 2014-07-09 11:29 ` Andrey Ryabinin
2014-07-09 19:33 ` Andi Kleen
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 06/21] x86: mm: init: allocate shadow memory for kasan Andrey Ryabinin
` (19 subsequent siblings)
24 siblings, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:29 UTC (permalink / raw)
To: linux-arm-kernel
Instrumentation of these files may result in an unbootable machine.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/x86/kernel/cpu/Makefile | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index 7fd54f0..a7bb360 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -8,6 +8,9 @@ CFLAGS_REMOVE_common.o = -pg
CFLAGS_REMOVE_perf_event.o = -pg
endif
+KASAN_SANITIZE_common.o := n
+KASAN_SANITIZE_perf_event.o := n
+
# Make sure load_percpu_segment has no stackprotector
nostackp := $(call cc-option, -fno-stack-protector)
CFLAGS_common.o := $(nostackp)
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 06/21] x86: mm: init: allocate shadow memory for kasan
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (4 preceding siblings ...)
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 05/21] x86: cpu: don't sanitize early stages of a secondary CPU boot Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 07/21] x86: Kconfig: enable kernel address sanitizer Andrey Ryabinin
` (18 subsequent siblings)
24 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/x86/mm/init.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index f971306..d9925ee 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -4,6 +4,7 @@
#include <linux/swap.h>
#include <linux/memblock.h>
#include <linux/bootmem.h> /* for max_low_pfn */
+#include <linux/kasan.h>
#include <asm/cacheflush.h>
#include <asm/e820.h>
@@ -678,5 +679,7 @@ void __init zone_sizes_init(void)
#endif
free_area_init_nodes(max_zone_pfns);
+
+ kasan_alloc_shadow();
}
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 07/21] x86: Kconfig: enable kernel address sanitizer
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (5 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 06/21] x86: mm: init: allocate shadow memory for kasan Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 08/21] mm: page_alloc: add kasan hooks on alloc and free paths Andrey Ryabinin
` (17 subsequent siblings)
24 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
Now everything in the x86 code is ready for kasan. Enable it.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 8657c06..f9863b3 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -132,6 +132,7 @@ config X86
select HAVE_CC_STACKPROTECTOR
select GENERIC_CPU_AUTOPROBE
select HAVE_ARCH_AUDITSYSCALL
+ select HAVE_ARCH_KASAN
config INSTRUCTION_DECODER
def_bool y
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 08/21] mm: page_alloc: add kasan hooks on alloc and free paths
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (6 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 07/21] x86: Kconfig: enable kernel address sanitizer Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-15 5:52 ` Joonsoo Kim
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 09/21] mm: Makefile: kasan: don't instrument slub.c and slab_common.c files Andrey Ryabinin
` (16 subsequent siblings)
24 siblings, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
Add kernel address sanitizer hooks to mark allocated pages' addresses
as accessible in the corresponding shadow region.
Mark freed pages as inaccessible.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
include/linux/kasan.h | 6 ++++++
mm/Makefile | 2 ++
mm/kasan/kasan.c | 18 ++++++++++++++++++
mm/kasan/kasan.h | 1 +
mm/kasan/report.c | 7 +++++++
mm/page_alloc.c | 4 ++++
6 files changed, 38 insertions(+)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 7efc3eb..4adc0a1 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -17,6 +17,9 @@ void kasan_disable_local(void);
void kasan_alloc_shadow(void);
void kasan_init_shadow(void);
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
#else /* CONFIG_KASAN */
static inline void unpoison_shadow(const void *address, size_t size) {}
@@ -28,6 +31,9 @@ static inline void kasan_disable_local(void) {}
static inline void kasan_init_shadow(void) {}
static inline void kasan_alloc_shadow(void) {}
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
#endif /* CONFIG_KASAN */
#endif /* LINUX_KASAN_H */
diff --git a/mm/Makefile b/mm/Makefile
index dbe9a22..6a9c3f8 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,8 @@
# Makefile for the linux memory manager.
#
+KASAN_SANITIZE_page_alloc.o := n
+
mmu-y := nommu.o
mmu-$(CONFIG_MMU) := gup.o highmem.o madvise.o memory.o mincore.o \
mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index e2cd345..109478e 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -177,6 +177,24 @@ void __init kasan_init_shadow(void)
}
}
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+ if (unlikely(!kasan_initialized))
+ return;
+
+ if (likely(page && !PageHighMem(page)))
+ unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+ if (unlikely(!kasan_initialized))
+ return;
+
+ if (likely(!PageHighMem(page)))
+ poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_FREE_PAGE);
+}
+
void *kasan_memcpy(void *dst, const void *src, size_t len)
{
if (unlikely(len == 0))
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 711ae4f..be9597e 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -5,6 +5,7 @@
#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
#define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1)
+#define KASAN_FREE_PAGE 0xFF /* page was freed */
#define KASAN_SHADOW_GAP 0xF9 /* address belongs to shadow memory */
struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 2430e05..6ef9e57 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -46,6 +46,9 @@ static void print_error_description(struct access_info *info)
case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
bug_type = "buffer overflow";
break;
+ case KASAN_FREE_PAGE:
+ bug_type = "use after free";
+ break;
case KASAN_SHADOW_GAP:
bug_type = "wild memory access";
break;
@@ -67,6 +70,10 @@ static void print_address_description(struct access_info *info)
page = virt_to_page(info->access_addr);
switch (shadow_val) {
+ case KASAN_FREE_PAGE:
+ dump_page(page, "kasan error");
+ dump_stack();
+ break;
case KASAN_SHADOW_GAP:
pr_err("No metainfo is available for this access.\n");
dump_stack();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8c9eeec..67833d1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -61,6 +61,7 @@
#include <linux/page-debug-flags.h>
#include <linux/hugetlb.h>
#include <linux/sched/rt.h>
+#include <linux/kasan.h>
#include <asm/sections.h>
#include <asm/tlbflush.h>
@@ -747,6 +748,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
trace_mm_page_free(page, order);
kmemcheck_free_shadow(page, order);
+ kasan_free_pages(page, order);
if (PageAnon(page))
page->mapping = NULL;
@@ -2807,6 +2809,7 @@ out:
if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie)))
goto retry_cpuset;
+ kasan_alloc_pages(page, order);
return page;
}
EXPORT_SYMBOL(__alloc_pages_nodemask);
@@ -6415,6 +6418,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
if (end != outer_end)
free_contig_range(end, outer_end - end);
+ kasan_alloc_pages(pfn_to_page(start), end - start);
done:
undo_isolate_page_range(pfn_max_align_down(start),
pfn_max_align_up(end), migratetype);
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 09/21] mm: Makefile: kasan: don't instrument slub.c and slab_common.c files
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (7 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 08/21] mm: page_alloc: add kasan hooks on alloc and free paths Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 10/21] mm: slab: share virt_to_cache() between slab and slub Andrey Ryabinin
` (15 subsequent siblings)
24 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
Code in the slub.c and slab_common.c files can validly access an object's
redzones, so instrumentation of these files is disabled.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
mm/Makefile | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/Makefile b/mm/Makefile
index 6a9c3f8..59cc184 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -3,6 +3,8 @@
#
KASAN_SANITIZE_page_alloc.o := n
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
mmu-y := nommu.o
mmu-$(CONFIG_MMU) := gup.o highmem.o madvise.o memory.o mincore.o \
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 10/21] mm: slab: share virt_to_cache() between slab and slub
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (8 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 09/21] mm: Makefile: kasan: don't instrument slub.c and slab_common.c files Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-15 5:53 ` Joonsoo Kim
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions Andrey Ryabinin
` (14 subsequent siblings)
24 siblings, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
This patch shares virt_to_cache() between slab and slub; it is now used
in cache_from_obj().
Later virt_to_cache() will also be used by the kernel address sanitizer.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
mm/slab.c | 6 ------
mm/slab.h | 10 +++++++---
2 files changed, 7 insertions(+), 9 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index e7763db..fa4f840 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -433,12 +433,6 @@ static inline void set_obj_status(struct page *page, int idx, int val) {}
static int slab_max_order = SLAB_MAX_ORDER_LO;
static bool slab_max_order_set __initdata;
-static inline struct kmem_cache *virt_to_cache(const void *obj)
-{
- struct page *page = virt_to_head_page(obj);
- return page->slab_cache;
-}
-
static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
unsigned int idx)
{
diff --git a/mm/slab.h b/mm/slab.h
index 84c160a..1257ade 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -260,10 +260,15 @@ static inline void memcg_uncharge_slab(struct kmem_cache *s, int order)
}
#endif
+static inline struct kmem_cache *virt_to_cache(const void *obj)
+{
+ struct page *page = virt_to_head_page(obj);
+ return page->slab_cache;
+}
+
static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
{
struct kmem_cache *cachep;
- struct page *page;
/*
* When kmemcg is not being used, both assignments should return the
@@ -275,8 +280,7 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
if (!memcg_kmem_enabled() && !unlikely(s->flags & SLAB_DEBUG_FREE))
return s;
- page = virt_to_head_page(x);
- cachep = page->slab_cache;
+ cachep = virt_to_cache(x);
if (slab_equal_or_root(cachep, s))
return cachep;
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (9 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 10/21] mm: slab: share virt_to_cache() between slab and slub Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-09 14:29 ` Christoph Lameter
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 12/21] mm: util: move krealloc/kzfree to slab_common.c Andrey Ryabinin
` (13 subsequent siblings)
24 siblings, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
Remove static and add function declarations to mm/slab.h so they
could be used by kernel address sanitizer.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
mm/slab.h | 5 +++++
mm/slub.c | 4 ++--
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/slab.h b/mm/slab.h
index 1257ade..912af7f 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -339,5 +339,10 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
void *slab_next(struct seq_file *m, void *p, loff_t *pos);
void slab_stop(struct seq_file *m, void *p);
+void slab_err(struct kmem_cache *s, struct page *page,
+ const char *fmt, ...);
+void object_err(struct kmem_cache *s, struct page *page,
+ u8 *object, char *reason);
+
#endif /* MM_SLAB_H */
diff --git a/mm/slub.c b/mm/slub.c
index 6641a8f..3bdd9ac 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -635,14 +635,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
dump_stack();
}
-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
u8 *object, char *reason)
{
slab_bug(s, "%s", reason);
print_trailer(s, page, object);
}
-static void slab_err(struct kmem_cache *s, struct page *page,
+void slab_err(struct kmem_cache *s, struct page *page,
const char *fmt, ...)
{
va_list args;
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 12/21] mm: util: move krealloc/kzfree to slab_common.c
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (10 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-09 14:32 ` Christoph Lameter
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 13/21] mm: slub: add allocation size field to struct kmem_cache Andrey Ryabinin
` (12 subsequent siblings)
24 siblings, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
To avoid false positive reports in the kernel address sanitizer, the
krealloc/kzfree functions shouldn't be instrumented. Since we want to
instrument other functions in mm/util.c, krealloc/kzfree are moved to
slab_common.c, which is not instrumented.
Unfortunately we can't completely disable instrumentation for a single function.
We could disable the compiler's instrumentation for one function by using
__attribute__((no_sanitize_address)),
but the problem is that the memset call would still be replaced by the
instrumented version kasan_memset, since memset is currently implemented
as a define:
#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
mm/slab_common.c | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
mm/util.c | 91 --------------------------------------------------------
2 files changed, 91 insertions(+), 91 deletions(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index d31c4ba..8df59b09 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -787,3 +787,94 @@ static int __init slab_proc_init(void)
}
module_init(slab_proc_init);
#endif /* CONFIG_SLABINFO */
+
+static __always_inline void *__do_krealloc(const void *p, size_t new_size,
+ gfp_t flags)
+{
+ void *ret;
+ size_t ks = 0;
+
+ if (p)
+ ks = ksize(p);
+
+ if (ks >= new_size)
+ return (void *)p;
+
+ ret = kmalloc_track_caller(new_size, flags);
+ if (ret && p)
+ memcpy(ret, p, ks);
+
+ return ret;
+}
+
+/**
+ * __krealloc - like krealloc() but don't free @p.
+ * @p: object to reallocate memory for.
+ * @new_size: how many bytes of memory are required.
+ * @flags: the type of memory to allocate.
+ *
+ * This function is like krealloc() except it never frees the originally
+ * allocated buffer. Use this if you don't want to free the buffer immediately
+ * like, for example, with RCU.
+ */
+void *__krealloc(const void *p, size_t new_size, gfp_t flags)
+{
+ if (unlikely(!new_size))
+ return ZERO_SIZE_PTR;
+
+ return __do_krealloc(p, new_size, flags);
+
+}
+EXPORT_SYMBOL(__krealloc);
+
+/**
+ * krealloc - reallocate memory. The contents will remain unchanged.
+ * @p: object to reallocate memory for.
+ * @new_size: how many bytes of memory are required.
+ * @flags: the type of memory to allocate.
+ *
+ * The contents of the object pointed to are preserved up to the
+ * lesser of the new and old sizes. If @p is %NULL, krealloc()
+ * behaves exactly like kmalloc(). If @new_size is 0 and @p is not a
+ * %NULL pointer, the object pointed to is freed.
+ */
+void *krealloc(const void *p, size_t new_size, gfp_t flags)
+{
+ void *ret;
+
+ if (unlikely(!new_size)) {
+ kfree(p);
+ return ZERO_SIZE_PTR;
+ }
+
+ ret = __do_krealloc(p, new_size, flags);
+ if (ret && p != ret)
+ kfree(p);
+
+ return ret;
+}
+EXPORT_SYMBOL(krealloc);
+
+/**
+ * kzfree - like kfree but zero memory
+ * @p: object to free memory of
+ *
+ * The memory of the object @p points to is zeroed before freed.
+ * If @p is %NULL, kzfree() does nothing.
+ *
+ * Note: this function zeroes the whole allocated buffer which can be a good
+ * deal bigger than the requested buffer size passed to kmalloc(). So be
+ * careful when using this function in performance sensitive code.
+ */
+void kzfree(const void *p)
+{
+ size_t ks;
+ void *mem = (void *)p;
+
+ if (unlikely(ZERO_OR_NULL_PTR(mem)))
+ return;
+ ks = ksize(mem);
+ memset(mem, 0, ks);
+ kfree(mem);
+}
+EXPORT_SYMBOL(kzfree);
diff --git a/mm/util.c b/mm/util.c
index 8f326ed..2992e16 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -142,97 +142,6 @@ void *memdup_user(const void __user *src, size_t len)
}
EXPORT_SYMBOL(memdup_user);
-static __always_inline void *__do_krealloc(const void *p, size_t new_size,
- gfp_t flags)
-{
- void *ret;
- size_t ks = 0;
-
- if (p)
- ks = ksize(p);
-
- if (ks >= new_size)
- return (void *)p;
-
- ret = kmalloc_track_caller(new_size, flags);
- if (ret && p)
- memcpy(ret, p, ks);
-
- return ret;
-}
-
-/**
- * __krealloc - like krealloc() but don't free @p.
- * @p: object to reallocate memory for.
- * @new_size: how many bytes of memory are required.
- * @flags: the type of memory to allocate.
- *
- * This function is like krealloc() except it never frees the originally
- * allocated buffer. Use this if you don't want to free the buffer immediately
- * like, for example, with RCU.
- */
-void *__krealloc(const void *p, size_t new_size, gfp_t flags)
-{
- if (unlikely(!new_size))
- return ZERO_SIZE_PTR;
-
- return __do_krealloc(p, new_size, flags);
-
-}
-EXPORT_SYMBOL(__krealloc);
-
-/**
- * krealloc - reallocate memory. The contents will remain unchanged.
- * @p: object to reallocate memory for.
- * @new_size: how many bytes of memory are required.
- * @flags: the type of memory to allocate.
- *
- * The contents of the object pointed to are preserved up to the
- * lesser of the new and old sizes. If @p is %NULL, krealloc()
- * behaves exactly like kmalloc(). If @new_size is 0 and @p is not a
- * %NULL pointer, the object pointed to is freed.
- */
-void *krealloc(const void *p, size_t new_size, gfp_t flags)
-{
- void *ret;
-
- if (unlikely(!new_size)) {
- kfree(p);
- return ZERO_SIZE_PTR;
- }
-
- ret = __do_krealloc(p, new_size, flags);
- if (ret && p != ret)
- kfree(p);
-
- return ret;
-}
-EXPORT_SYMBOL(krealloc);
-
-/**
- * kzfree - like kfree but zero memory
- * @p: object to free memory of
- *
- * The memory of the object @p points to is zeroed before freed.
- * If @p is %NULL, kzfree() does nothing.
- *
- * Note: this function zeroes the whole allocated buffer which can be a good
- * deal bigger than the requested buffer size passed to kmalloc(). So be
- * careful when using this function in performance sensitive code.
- */
-void kzfree(const void *p)
-{
- size_t ks;
- void *mem = (void *)p;
-
- if (unlikely(ZERO_OR_NULL_PTR(mem)))
- return;
- ks = ksize(mem);
- memset(mem, 0, ks);
- kfree(mem);
-}
-EXPORT_SYMBOL(kzfree);
-
/*
* strndup_user - duplicate an existing string from user space
* @s: The string to duplicate
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 13/21] mm: slub: add allocation size field to struct kmem_cache
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (11 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 12/21] mm: util: move krealloc/kzfree to slab_common.c Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-09 14:33 ` Christoph Lameter
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory Andrey Ryabinin
` (11 subsequent siblings)
24 siblings, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
When a caller creates a new kmem_cache, the requested object size
will be stored in alloc_size. Later alloc_size will be used by the
kernel address sanitizer to mark alloc_size bytes of a slab object as
accessible and the rest of its size as a redzone.
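As a toy illustration of how the new field relates to the existing ones (the struct and the numbers below are made up; only the relationship between the three sizes matters):

#include <stdio.h>
#include <stddef.h>

/* stripped-down stand-in for struct kmem_cache */
struct cache_model {
	int object_size;	/* payload size, without metadata */
	int size;		/* full slot size, including metadata */
	int alloc_size;		/* what kasan will unpoison on allocation */
};

static void kasan_set_alloc_size(struct cache_model *s, size_t size)
{
	s->alloc_size = (int)size;
}

int main(void)
{
	/* pretend SLUB added 36 bytes of metadata/padding per object */
	struct cache_model s = { .object_size = 100, .size = 136 };

	kasan_set_alloc_size(&s, 100);
	printf("accessible: %d bytes, redzone: %d bytes per object\n",
	       s.alloc_size, s.size - s.alloc_size);
	return 0;
}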
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
include/linux/slub_def.h | 5 +++++
mm/slab.h | 10 ++++++++++
mm/slab_common.c | 2 ++
mm/slub.c | 1 +
4 files changed, 18 insertions(+)
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d82abd4..b8b8154 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -68,6 +68,11 @@ struct kmem_cache {
int object_size; /* The size of an object without meta data */
int offset; /* Free pointer offset. */
int cpu_partial; /* Number of per cpu partial objects to keep around */
+
+#ifdef CONFIG_KASAN
+ int alloc_size; /* actual allocation size kmem_cache_create */
+#endif
+
struct kmem_cache_order_objects oo;
/* Allocation and freeing of slabs */
diff --git a/mm/slab.h b/mm/slab.h
index 912af7f..cb2e776 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -260,6 +260,16 @@ static inline void memcg_uncharge_slab(struct kmem_cache *s, int order)
}
#endif
+#ifdef CONFIG_KASAN
+static inline void kasan_set_alloc_size(struct kmem_cache *s, size_t size)
+{
+ s->alloc_size = size;
+}
+#else
+static inline void kasan_set_alloc_size(struct kmem_cache *s, size_t size) { }
+#endif
+
+
static inline struct kmem_cache *virt_to_cache(const void *obj)
{
struct page *page = virt_to_head_page(obj);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 8df59b09..f5b52f0 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -147,6 +147,7 @@ do_kmem_cache_create(char *name, size_t object_size, size_t size, size_t align,
s->name = name;
s->object_size = object_size;
s->size = size;
+ kasan_set_alloc_size(s, object_size);
s->align = align;
s->ctor = ctor;
@@ -409,6 +410,7 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t siz
s->name = name;
s->size = s->object_size = size;
+ kasan_set_alloc_size(s, size);
s->align = calculate_alignment(flags, ARCH_KMALLOC_MINALIGN, size);
err = __kmem_cache_create(s, flags);
diff --git a/mm/slub.c b/mm/slub.c
index 3bdd9ac..6ddedf9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3724,6 +3724,7 @@ __kmem_cache_alias(const char *name, size_t size, size_t align,
* the complete object on kzalloc.
*/
s->object_size = max(s->object_size, (int)size);
+ kasan_set_alloc_size(s, max(s->alloc_size, (int)size));
s->inuse = max_t(int, s->inuse, ALIGN(size, sizeof(void *)));
for_each_memcg_cache_index(i) {
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (12 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 13/21] mm: slub: add allocation size field to struct kmem_cache Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-15 6:04 ` Joonsoo Kim
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator Andrey Ryabinin
` (10 subsequent siblings)
24 siblings, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
Some code in slub can validly touch memory marked by kasan as inaccessible.
Even though slub.c itself is not instrumented, functions called from it are,
so to avoid false positive reports such places are protected by
kasan_disable_local()/kasan_enable_local() calls.
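kasan_enable_local()/kasan_disable_local() themselves are introduced earlier in the series; below is a minimal userspace sketch of the idea, assuming a simple per-thread nesting counter (the real implementation may differ):

#include <stdio.h>
#include <stdbool.h>

/* a per-task field in the kernel; a thread-local stands in for it here */
static __thread int kasan_depth;

static void kasan_disable_local(void) { kasan_depth++; }
static void kasan_enable_local(void)  { kasan_depth--; }

/* every shadow check would bail out while the depth is non-zero */
static bool kasan_checks_enabled(void) { return kasan_depth == 0; }

int main(void)
{
	kasan_disable_local();
	kasan_disable_local();			/* nesting must be allowed */
	printf("checks enabled: %d\n", kasan_checks_enabled());	/* 0 */

	kasan_enable_local();
	kasan_enable_local();
	printf("checks enabled: %d\n", kasan_checks_enabled());	/* 1 */
	return 0;
}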
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
mm/slub.c | 21 +++++++++++++++++++--
1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 6ddedf9..c8dbea7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -560,8 +560,10 @@ static void print_tracking(struct kmem_cache *s, void *object)
if (!(s->flags & SLAB_STORE_USER))
return;
+ kasan_disable_local();
print_track("Allocated", get_track(s, object, TRACK_ALLOC));
print_track("Freed", get_track(s, object, TRACK_FREE));
+ kasan_enable_local();
}
static void print_page_info(struct page *page)
@@ -604,6 +606,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
unsigned int off; /* Offset of last byte */
u8 *addr = page_address(page);
+ kasan_disable_local();
+
print_tracking(s, p);
print_page_info(page);
@@ -632,6 +636,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
/* Beginning of the filler is the free pointer */
print_section("Padding ", p + off, s->size - off);
+ kasan_enable_local();
+
dump_stack();
}
@@ -1012,6 +1018,8 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
struct page *page,
void *object, unsigned long addr)
{
+
+ kasan_disable_local();
if (!check_slab(s, page))
goto bad;
@@ -1028,6 +1036,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
set_track(s, object, TRACK_ALLOC, addr);
trace(s, page, object, 1);
init_object(s, object, SLUB_RED_ACTIVE);
+ kasan_enable_local();
return 1;
bad:
@@ -1041,6 +1050,7 @@ bad:
page->inuse = page->objects;
page->freelist = NULL;
}
+ kasan_enable_local();
return 0;
}
@@ -1052,6 +1062,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
spin_lock_irqsave(&n->list_lock, *flags);
slab_lock(page);
+ kasan_disable_local();
if (!check_slab(s, page))
goto fail;
@@ -1088,6 +1099,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
trace(s, page, object, 0);
init_object(s, object, SLUB_RED_INACTIVE);
out:
+ kasan_enable_local();
slab_unlock(page);
/*
* Keep node_lock to preserve integrity
@@ -1096,6 +1108,7 @@ out:
return n;
fail:
+ kasan_enable_local();
slab_unlock(page);
spin_unlock_irqrestore(&n->list_lock, *flags);
slab_fix(s, "Object at 0x%p not freed", object);
@@ -1371,8 +1384,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
void *object)
{
setup_object_debug(s, page, object);
- if (unlikely(s->ctor))
+ if (unlikely(s->ctor)) {
+ kasan_disable_local();
s->ctor(object);
+ kasan_enable_local();
+ }
}
static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1425,11 +1441,12 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
if (kmem_cache_debug(s)) {
void *p;
-
+ kasan_disable_local();
slab_pad_check(s, page);
for_each_object(p, s, page_address(page),
page->objects)
check_object(s, page, p, SLUB_RED_INACTIVE);
+ kasan_enable_local();
}
kmemcheck_free_shadow(page, compound_order(page));
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (13 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-09 14:48 ` Christoph Lameter
2014-07-15 6:09 ` Joonsoo Kim
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 16/21] arm: boot: compressed: disable kasan's instrumentation Andrey Ryabinin
` (9 subsequent siblings)
24 siblings, 2 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
With this patch kasan will be able to catch bugs in memory allocated
by slub.
When a slab page is allocated, the whole page is marked as inaccessible
in the corresponding shadow memory.
On allocation of a slub object, the requested allocation size is marked as
accessible, and the rest of the object (including slub's metadata) is
marked as a redzone (inaccessible).
We also mark an object as accessible if ksize() was called for it.
There are some places in the kernel where ksize() is called to inquire the
size of the actually allocated area. Such callers may validly access the whole
allocated memory, so it should be marked as accessible by a kasan_krealloc() call.
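A worked example of the rounding kasan_kmalloc() performs below (plain userspace arithmetic; the 32-byte cache size is an assumed figure):

#include <stdio.h>

#define KASAN_SHADOW_SCALE_SHIFT 3
#define KASAN_SHADOW_SCALE_SIZE  (1UL << KASAN_SHADOW_SCALE_SHIFT)

static unsigned long round_up_ul(unsigned long x, unsigned long align)
{
	return (x + align - 1) & ~(align - 1);
}

int main(void)
{
	unsigned long object = 0x1000;	/* pretend object address */
	unsigned long size = 17;	/* kmalloc(17, GFP_KERNEL) */
	unsigned long cache_size = 32;	/* slot size of the backing cache */

	unsigned long redzone_start = round_up_ul(object + size,
						  KASAN_SHADOW_SCALE_SIZE);
	unsigned long redzone_end = object + cache_size;

	/*
	 * Bytes [0, 17) of the object are unpoisoned; the partial 8-byte
	 * granule at [16, 24) gets shadow value 1 (only its first byte is
	 * valid); [24, 32) is poisoned as KASAN_KMALLOC_REDZONE.
	 */
	printf("unpoison %lu bytes, redzone covers [%lu, %lu)\n",
	       size, redzone_start - object, redzone_end - object);
	return 0;
}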
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
include/linux/kasan.h | 22 ++++++++++
include/linux/slab.h | 19 +++++++--
lib/Kconfig.kasan | 2 +
mm/kasan/kasan.c | 110 ++++++++++++++++++++++++++++++++++++++++++++++++++
mm/kasan/kasan.h | 5 +++
mm/kasan/report.c | 23 +++++++++++
mm/slab.h | 2 +-
mm/slab_common.c | 9 +++--
mm/slub.c | 24 ++++++++++-
9 files changed, 208 insertions(+), 8 deletions(-)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 4adc0a1..583c011 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -20,6 +20,17 @@ void kasan_init_shadow(void);
void kasan_alloc_pages(struct page *page, unsigned int order);
void kasan_free_pages(struct page *page, unsigned int order);
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
+
+void kasan_alloc_slab_pages(struct page *page, int order);
+void kasan_free_slab_pages(struct page *page, int order);
+
#else /* CONFIG_KASAN */
static inline void unpoison_shadow(const void *address, size_t size) {}
@@ -34,6 +45,17 @@ static inline void kasan_alloc_shadow(void) {}
static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
+
+static inline void kasan_alloc_slab_pages(struct page *page, int order) {}
+static inline void kasan_free_slab_pages(struct page *page, int order) {}
+
#endif /* CONFIG_KASAN */
#endif /* LINUX_KASAN_H */
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 68b1feab..a9513e9 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
(unsigned long)ZERO_SIZE_PTR)
#include <linux/kmemleak.h>
+#include <linux/kasan.h>
struct mem_cgroup;
/*
@@ -444,6 +445,8 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
*/
static __always_inline void *kmalloc(size_t size, gfp_t flags)
{
+ void *ret;
+
if (__builtin_constant_p(size)) {
if (size > KMALLOC_MAX_CACHE_SIZE)
return kmalloc_large(size, flags);
@@ -454,8 +457,12 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
if (!index)
return ZERO_SIZE_PTR;
- return kmem_cache_alloc_trace(kmalloc_caches[index],
+ ret = kmem_cache_alloc_trace(kmalloc_caches[index],
flags, size);
+
+ kasan_kmalloc(kmalloc_caches[index], ret, size);
+
+ return ret;
}
#endif
}
@@ -485,6 +492,8 @@ static __always_inline int kmalloc_size(int n)
static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
{
#ifndef CONFIG_SLOB
+ void *ret;
+
if (__builtin_constant_p(size) &&
size <= KMALLOC_MAX_CACHE_SIZE && !(flags & GFP_DMA)) {
int i = kmalloc_index(size);
@@ -492,8 +501,12 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
if (!i)
return ZERO_SIZE_PTR;
- return kmem_cache_alloc_node_trace(kmalloc_caches[i],
- flags, node, size);
+ ret = kmem_cache_alloc_node_trace(kmalloc_caches[i],
+ flags, node, size);
+
+ kasan_kmalloc(kmalloc_caches[i], ret, size);
+
+ return ret;
}
#endif
return __kmalloc_node(size, flags, node);
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 2bfff78..289a624 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,8 @@ if HAVE_ARCH_KASAN
config KASAN
bool "AddressSanitizer: dynamic memory error detector"
+ depends on SLUB
+ select STACKTRACE
default n
help
Enables AddressSanitizer - dynamic memory error detector,
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 109478e..9b5182a 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -177,6 +177,116 @@ void __init kasan_init_shadow(void)
}
}
+void kasan_alloc_slab_pages(struct page *page, int order)
+{
+ if (unlikely(!kasan_initialized))
+ return;
+
+ poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_REDZONE);
+}
+
+void kasan_free_slab_pages(struct page *page, int order)
+{
+ if (unlikely(!kasan_initialized))
+ return;
+
+ poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_FREE);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+ if (unlikely(!kasan_initialized))
+ return;
+
+ if (unlikely(object == NULL))
+ return;
+
+ poison_shadow(object, cache->size, KASAN_KMALLOC_REDZONE);
+ unpoison_shadow(object, cache->alloc_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+ unsigned long size = cache->size;
+ unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+ if (unlikely(!kasan_initialized))
+ return;
+
+ poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+ unsigned long redzone_start;
+ unsigned long redzone_end;
+
+ if (unlikely(!kasan_initialized))
+ return;
+
+ if (unlikely(object == NULL))
+ return;
+
+ redzone_start = round_up((unsigned long)(object + size),
+ KASAN_SHADOW_SCALE_SIZE);
+ redzone_end = (unsigned long)object + cache->size;
+
+ unpoison_shadow(object, size);
+ poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+ KASAN_KMALLOC_REDZONE);
+
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+ struct page *page;
+ unsigned long redzone_start;
+ unsigned long redzone_end;
+
+ if (unlikely(!kasan_initialized))
+ return;
+
+ if (unlikely(ptr == NULL))
+ return;
+
+ page = virt_to_page(ptr);
+ redzone_start = round_up((unsigned long)(ptr + size),
+ KASAN_SHADOW_SCALE_SIZE);
+ redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+ unpoison_shadow(ptr, size);
+ poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+ KASAN_PAGE_REDZONE);
+}
+EXPORT_SYMBOL(kasan_kmalloc_large);
+
+void kasan_krealloc(const void *object, size_t size)
+{
+ struct page *page;
+
+ if (unlikely(object == ZERO_SIZE_PTR))
+ return;
+
+ page = virt_to_head_page(object);
+
+ if (unlikely(!PageSlab(page)))
+ kasan_kmalloc_large(object, size);
+ else
+ kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+ struct page *page;
+
+ if (unlikely(!kasan_initialized))
+ return;
+
+ page = virt_to_page(ptr);
+ poison_shadow(ptr, PAGE_SIZE << compound_order(page), KASAN_FREE_PAGE);
+}
+
void kasan_alloc_pages(struct page *page, unsigned int order)
{
if (unlikely(!kasan_initialized))
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index be9597e..f925d03 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,11 @@
#define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1)
#define KASAN_FREE_PAGE 0xFF /* page was freed */
+#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_REDZONE 0xFD /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE 0xFA /* free slab page */
#define KASAN_SHADOW_GAP 0xF9 /* address belongs to shadow memory */
struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 6ef9e57..6d829af 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -43,10 +43,15 @@ static void print_error_description(struct access_info *info)
u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
switch (shadow_val) {
+ case KASAN_PAGE_REDZONE:
+ case KASAN_SLAB_REDZONE:
+ case KASAN_KMALLOC_REDZONE:
case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
bug_type = "buffer overflow";
break;
case KASAN_FREE_PAGE:
+ case KASAN_SLAB_FREE:
+ case KASAN_KMALLOC_FREE:
bug_type = "use after free";
break;
case KASAN_SHADOW_GAP:
@@ -70,7 +75,25 @@ static void print_address_description(struct access_info *info)
page = virt_to_page(info->access_addr);
switch (shadow_val) {
+ case KASAN_SLAB_REDZONE:
+ cache = virt_to_cache((void *)info->access_addr);
+ slab_err(cache, page, "access to slab redzone");
+ dump_stack();
+ break;
+ case KASAN_KMALLOC_FREE:
+ case KASAN_KMALLOC_REDZONE:
+ case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
+ if (PageSlab(page)) {
+ cache = virt_to_cache((void *)info->access_addr);
+ slab_start = page_address(virt_to_head_page((void *)info->access_addr));
+ object = virt_to_obj(cache, slab_start,
+ (void *)info->access_addr);
+ object_err(cache, page, object, "kasan error");
+ break;
+ }
+ case KASAN_PAGE_REDZONE:
case KASAN_FREE_PAGE:
+ case KASAN_SLAB_FREE:
dump_page(page, "kasan error");
dump_stack();
break;
diff --git a/mm/slab.h b/mm/slab.h
index cb2e776..b22ed8b 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -353,6 +353,6 @@ void slab_err(struct kmem_cache *s, struct page *page,
const char *fmt, ...);
void object_err(struct kmem_cache *s, struct page *page,
u8 *object, char *reason);
-
+size_t __ksize(const void *obj);
#endif /* MM_SLAB_H */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f5b52f0..313e270 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -625,6 +625,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
page = alloc_kmem_pages(flags, order);
ret = page ? page_address(page) : NULL;
kmemleak_alloc(ret, size, 1, flags);
+ kasan_kmalloc_large(ret, size);
return ret;
}
EXPORT_SYMBOL(kmalloc_order);
@@ -797,10 +798,12 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
size_t ks = 0;
if (p)
- ks = ksize(p);
+ ks = __ksize(p);
- if (ks >= new_size)
+ if (ks >= new_size) {
+ kasan_krealloc((void *)p, new_size);
return (void *)p;
+ }
ret = kmalloc_track_caller(new_size, flags);
if (ret && p)
@@ -875,7 +878,7 @@ void kzfree(const void *p)
if (unlikely(ZERO_OR_NULL_PTR(mem)))
return;
- ks = ksize(mem);
+ ks = __ksize(mem);
memset(mem, 0, ks);
kfree(mem);
}
diff --git a/mm/slub.c b/mm/slub.c
index c8dbea7..87d2198 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
#include <linux/stacktrace.h>
#include <linux/prefetch.h>
#include <linux/memcontrol.h>
+#include <linux/kasan.h>
#include <trace/events/kmem.h>
@@ -1245,11 +1246,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
{
kmemleak_alloc(ptr, size, 1, flags);
+ kasan_kmalloc_large(ptr, size);
}
static inline void kfree_hook(const void *x)
{
kmemleak_free(x);
+ kasan_kfree_large(x);
}
static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
@@ -1267,11 +1270,13 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
flags &= gfp_allowed_mask;
kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
+ kasan_slab_alloc(s, object);
}
static inline void slab_free_hook(struct kmem_cache *s, void *x)
{
kmemleak_free_recursive(x, s->flags);
+ kasan_slab_free(s, x);
/*
* Trouble is that we may no longer disable interrupts in the fast path
@@ -1371,6 +1376,8 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
if (!page)
return NULL;
+ kasan_alloc_slab_pages(page, oo_order(oo));
+
page->objects = oo_objects(oo);
mod_zone_page_state(page_zone(page),
(s->flags & SLAB_RECLAIM_ACCOUNT) ?
@@ -1450,6 +1457,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
}
kmemcheck_free_shadow(page, compound_order(page));
+ kasan_free_slab_pages(page, compound_order(page));
mod_zone_page_state(page_zone(page),
(s->flags & SLAB_RECLAIM_ACCOUNT) ?
@@ -2907,6 +2915,7 @@ static void early_kmem_cache_node_alloc(int node)
init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
init_tracking(kmem_cache_node, n);
#endif
+ kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
init_kmem_cache_node(n);
inc_slabs_node(kmem_cache_node, node, page->objects);
@@ -3289,6 +3298,8 @@ void *__kmalloc(size_t size, gfp_t flags)
trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
+ kasan_kmalloc(s, ret, size);
+
return ret;
}
EXPORT_SYMBOL(__kmalloc);
@@ -3332,12 +3343,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
+ kasan_kmalloc(s, ret, size);
+
return ret;
}
EXPORT_SYMBOL(__kmalloc_node);
#endif
-size_t ksize(const void *object)
+size_t __ksize(const void *object)
{
struct page *page;
@@ -3353,6 +3366,15 @@ size_t ksize(const void *object)
return slab_ksize(page->slab_cache);
}
+
+size_t ksize(const void *object)
+{
+ size_t size = __ksize(object);
+ /* We assume that ksize callers could use the whole allocated area,
+ so we need to unpoison this area. */
+ kasan_krealloc(object, size);
+ return size;
+}
EXPORT_SYMBOL(ksize);
void kfree(const void *x)
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 16/21] arm: boot: compressed: disable kasan's instrumentation
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (14 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 17/21] arm: add kasan hooks for memcpy/memmove/memset functions Andrey Ryabinin
` (8 subsequent siblings)
24 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
To avoid build errors, the compiler instrumentation used for the kernel
address sanitizer must be disabled for code that is not linked with the kernel.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/boot/compressed/Makefile | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index 76a50ec..03f2976 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -4,6 +4,8 @@
# create a compressed vmlinuz image from the original vmlinux
#
+KASAN_SANITIZE := n
+
OBJS =
# Ensure that MMCIF loader code appears early in the image
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 17/21] arm: add kasan hooks for memcpy/memmove/memset functions
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (15 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 16/21] arm: boot: compressed: disable kasan's instrumentation Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 18/21] arm: mm: reserve shadow memory for kasan Andrey Ryabinin
` (7 subsequent siblings)
24 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
Since the memset, memmove and memcpy functions are written in assembly,
the compiler can't instrument memory accesses inside them.
This patch replaces these functions with our own instrumented
functions (kasan_mem*) when CONFIG_KASAN=y.
In rare circumstances you may need to use the original functions;
in such cases put #undef KASAN_HOOKS before the includes.
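A simplified, self-contained sketch of what such a wrapper does (this is not the actual mm/kasan/kasan.c code; check_memory_region() here is a stub standing in for the real shadow walk):

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* stub: the real code walks the shadow and reports bad accesses */
static bool check_memory_region(const void *addr, size_t len, bool write)
{
	(void)addr; (void)len; (void)write;
	return true;
}

static void *kasan_memcpy(void *dst, const void *src, size_t len)
{
	if (len == 0)
		return dst;

	/* validate the whole source and destination ranges up front ... */
	check_memory_region(src, len, false);
	check_memory_region(dst, len, true);

	/* ... then delegate to the uninstrumented routine */
	return memcpy(dst, src, len);
}

int main(void)
{
	char src[8] = "abcdefg", dst[8];

	kasan_memcpy(dst, src, sizeof(src));
	printf("%s\n", dst);
	return 0;
}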
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/include/asm/string.h | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)
diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index cf4f3aa..3cbe47f 100644
--- a/arch/arm/include/asm/string.h
+++ b/arch/arm/include/asm/string.h
@@ -38,4 +38,34 @@ extern void __memzero(void *ptr, __kernel_size_t n);
(__p); \
})
+
+#if defined(CONFIG_KASAN) && defined(KASAN_HOOKS)
+
+/*
+ * Since some of the following functions (memset, memmove, memcpy)
+ * are written in assembly, compiler can't instrument memory accesses
+ * inside them.
+ *
+ * To solve this issue we replace these functions with our own instrumented
+ * functions (kasan_mem*)
+ *
+ * In case any of the mem*() functions are written in C, we use our instrumented
+ * functions for performance reasons: it should be faster to check the whole
+ * accessed memory range at once than to do a check at each memory access.
+ *
+ * In rare circumstances you may need to use the original functions;
+ * in such cases #undef KASAN_HOOKS before the includes.
+ */
+#undef memset
+
+void *kasan_memset(void *ptr, int val, size_t len);
+void *kasan_memcpy(void *dst, const void *src, size_t len);
+void *kasan_memmove(void *dst, const void *src, size_t len);
+
+#define memcpy(dst, src, len) kasan_memcpy((dst), (src), (len))
+#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
+#define memmove(dst, src, len) kasan_memmove((dst), (src), (len))
+
+#endif /* CONFIG_KASAN && KASAN_HOOKS */
+
#endif
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 18/21] arm: mm: reserve shadow memory for kasan
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (16 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 17/21] arm: add kasan hooks for memcpy/memmove/memset functions Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 19/21] arm: Kconfig: enable kernel address sanitizer Andrey Ryabinin
` (6 subsequent siblings)
24 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/mm/init.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 659c75d..02fce2c 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -22,6 +22,7 @@
#include <linux/memblock.h>
#include <linux/dma-contiguous.h>
#include <linux/sizes.h>
+#include <linux/kasan.h>
#include <asm/cp15.h>
#include <asm/mach-types.h>
@@ -324,6 +325,8 @@ void __init arm_memblock_init(const struct machine_desc *mdesc)
*/
dma_contiguous_reserve(min(arm_dma_limit, arm_lowmem_limit));
+ kasan_alloc_shadow();
+
arm_memblock_steal_permitted = false;
memblock_dump_all();
}
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 19/21] arm: Kconfig: enable kernel address sanitizer
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (17 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 18/21] arm: mm: reserve shadow memory for kasan Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports Andrey Ryabinin
` (5 subsequent siblings)
24 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
Now everything in arm code is ready for kasan. Enable it.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c52d1ca..c62db6c 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -26,6 +26,7 @@ config ARM
select HARDIRQS_SW_RESEND
select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL
+ select HAVE_ARCH_KASAN
select HAVE_ARCH_KGDB
select HAVE_ARCH_SECCOMP_FILTER if (AEABI && !OABI_COMPAT)
select HAVE_ARCH_TRACEHOOK
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (18 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 19/21] arm: Kconfig: enable kernel address sanitizer Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-15 6:12 ` Joonsoo Kim
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 21/21] lib: add kmalloc_bug_test module Andrey Ryabinin
` (4 subsequent siblings)
24 siblings, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
We need to manually unpoison the rounded-up allocation size for dname
to avoid kasan reports in __d_lookup_rcu.
__d_lookup_rcu may validly read a little beyond the allocated size.
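The arithmetic in question, as a standalone example (assuming a 64-bit kernel where sizeof(unsigned long) is 8):

#include <stdio.h>

/* same rounding the patch does with roundup() */
#define ROUND_UP(x, y)	((((x) + (y) - 1) / (y)) * (y))

int main(void)
{
	unsigned long len = 5;				/* name->len */
	unsigned long word = sizeof(unsigned long);	/* 8 on 64-bit */

	/* 6 bytes were requested for the name, but 8 are unpoisoned so that
	   the word-at-a-time compare in __d_lookup_rcu stays silent */
	printf("allocated %lu, unpoisoned %lu\n",
	       len + 1, ROUND_UP(len + 1, word));
	return 0;
}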
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
fs/dcache.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/dcache.c b/fs/dcache.c
index b7e8b20..dff64f2 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,7 @@
#include <linux/prefetch.h>
#include <linux/ratelimit.h>
#include <linux/list_lru.h>
+#include <linux/kasan.h>
#include "internal.h"
#include "mount.h"
@@ -1412,6 +1413,8 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
kmem_cache_free(dentry_cache, dentry);
return NULL;
}
+ unpoison_shadow(dname,
+ roundup(name->len + 1, sizeof(unsigned long)));
} else {
dname = dentry->d_iname;
}
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 21/21] lib: add kmalloc_bug_test module
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (19 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports Andrey Ryabinin
@ 2014-07-09 11:30 ` Andrey Ryabinin
2014-07-09 21:19 ` [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Dave Hansen
` (3 subsequent siblings)
24 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
To: linux-arm-kernel
This is a test module doing various nasty things like out-of-bounds
accesses and use-after-free. It is useful for testing kernel debugging
features like the kernel address sanitizer.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
lib/Kconfig.debug | 8 ++
lib/Makefile | 1 +
lib/test_kmalloc_bugs.c | 254 ++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 263 insertions(+)
create mode 100644 lib/test_kmalloc_bugs.c
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 67a4dfc..64fd9e6 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -609,6 +609,14 @@ config DEBUG_STACKOVERFLOW
If in doubt, say "N".
+config KMALLOC_BUG_TEST
+ tristate "Module for testing bugs detection in sl[auo]b"
+ default n
+ help
+ This is a test module doing various nasty things like
+ out-of-bounds accesses and use-after-free. It is useful for testing
+ kernel debugging features like the kernel address sanitizer.
+
source "lib/Kconfig.kmemcheck"
source "lib/Kconfig.kasan"
diff --git a/lib/Makefile b/lib/Makefile
index e48067c..af68259 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -34,6 +34,7 @@ obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o
obj-$(CONFIG_TEST_MODULE) += test_module.o
obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
obj-$(CONFIG_TEST_BPF) += test_bpf.o
+obj-$(CONFIG_KMALLOC_BUG_TEST) += test_kmalloc_bugs.o
ifeq ($(CONFIG_DEBUG_KOBJECT),y)
CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kmalloc_bugs.c b/lib/test_kmalloc_bugs.c
new file mode 100644
index 0000000..04cd11b
--- /dev/null
+++ b/lib/test_kmalloc_bugs.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kmalloc bug test: " fmt
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+void __init kmalloc_oob_right(void)
+{
+ char *ptr;
+ size_t size = 123;
+
+ pr_info("out-of-bounds to right\n");
+ ptr = kmalloc(size , GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ ptr[size] = 'x';
+ kfree(ptr);
+}
+
+void __init kmalloc_oob_left(void)
+{
+ char *ptr;
+ size_t size = 15;
+
+ pr_info("out-of-bounds to left\n");
+ ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ *ptr = *(ptr - 1);
+ kfree(ptr);
+}
+
+void __init kmalloc_node_oob_right(void)
+{
+ char *ptr;
+ size_t size = 4096;
+
+ pr_info("kmalloc_node(): out-of-bounds to right\n");
+ ptr = kmalloc_node(size , GFP_KERNEL, 0);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ ptr[size] = 0;
+ kfree(ptr);
+}
+
+void __init kmalloc_large_oob_right(void)
+{
+ char *ptr;
+ size_t size = PAGE_SIZE*3 - 10;
+
+ pr_info("kmalloc large allocation: out-of-bounds to right\n");
+ ptr = kmalloc(size , GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ ptr[size] = 0;
+ kfree(ptr);
+}
+
+void __init kmalloc_oob_krealloc_more(void)
+{
+ char *ptr1, *ptr2;
+ size_t size1 = 17;
+ size_t size2 = 19;
+
+ pr_info("out-of-bounds after krealloc more\n");
+ ptr1 = kmalloc(size1, GFP_KERNEL);
+ ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+ if (!ptr1 || !ptr2) {
+ pr_err("Allocation failed\n");
+ kfree(ptr1);
+ return;
+ }
+
+ ptr2[size2] = 'x';
+ kfree(ptr2);
+}
+
+void __init kmalloc_oob_krealloc_less(void)
+{
+ char *ptr1, *ptr2;
+ size_t size1 = 17;
+ size_t size2 = 15;
+
+ pr_info("out-of-bounds after krealloc less\n");
+ ptr1 = kmalloc(size1, GFP_KERNEL);
+ ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+ if (!ptr1 || !ptr2) {
+ pr_err("Allocation failed\n");
+ kfree(ptr1);
+ return;
+ }
+ ptr2[size1] = 'x';
+ kfree(ptr2);
+}
+
+void __init kmalloc_oob_16(void)
+{
+ struct {
+ u64 words[2];
+ } *ptr1, *ptr2;
+
+ pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+ ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+ ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+ if (!ptr1 || !ptr2) {
+ pr_err("Allocation failed\n");
+ kfree(ptr1);
+ kfree(ptr2);
+ return;
+ }
+ *ptr1 = *ptr2;
+ kfree(ptr1);
+ kfree(ptr2);
+}
+
+void __init kmalloc_oob_in_memset(void)
+{
+ char *ptr;
+ size_t size = 666;
+
+ pr_info("out-of-bounds in memset\n");
+ ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ memset(ptr, 0, size+5);
+ kfree(ptr);
+}
+
+void __init kmalloc_uaf(void)
+{
+ char *ptr;
+ size_t size = 10;
+
+ pr_info("use-after-free\n");
+ ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ kfree(ptr);
+ *ptr = 'x';
+}
+
+void __init kmalloc_uaf_memset(void)
+{
+ char *ptr;
+ size_t size = 33;
+
+ pr_info("use-after-free in memset\n");
+ ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ kfree(ptr);
+ memset(ptr, 0, size);
+}
+
+void __init kmalloc_uaf2(void)
+{
+ char *ptr1, *ptr2;
+ size_t size = 43;
+
+ pr_info("use-after-free after another kmalloc\n");
+ ptr1 = kmalloc(size, GFP_KERNEL);
+ if (!ptr1) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ kfree(ptr1);
+ ptr2 = kmalloc(size, GFP_KERNEL);
+ if (!ptr2) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ ptr1[0] = 'x';
+ kfree(ptr2);
+}
+
+void __init kmem_cache_oob(void)
+{
+ char *p;
+ size_t size = 200;
+ struct kmem_cache *cache = kmem_cache_create("test_cache",
+ size, 0,
+ 0, NULL);
+ if (!cache) {
+ pr_err("Cache allocation failed\n");
+ return;
+ }
+ pr_info("out-of-bounds in kmem_cache_alloc\n");
+ p = kmem_cache_alloc(cache, GFP_KERNEL);
+ if (!p) {
+ pr_err("Allocation failed\n");
+ kmem_cache_destroy(cache);
+ return;
+ }
+
+ *p = p[size];
+ kmem_cache_free(cache, p);
+ kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+ kmalloc_oob_right();
+ kmalloc_oob_left();
+ kmalloc_node_oob_right();
+ kmalloc_large_oob_right();
+ kmalloc_oob_krealloc_more();
+ kmalloc_oob_krealloc_less();
+ kmalloc_oob_16();
+ kmalloc_oob_in_memset();
+ kmalloc_uaf();
+ kmalloc_uaf_memset();
+ kmalloc_uaf2();
+ kmem_cache_oob();
+ return 0;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
--
1.8.5.5
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure Andrey Ryabinin
@ 2014-07-09 14:26 ` Christoph Lameter
2014-07-10 7:31 ` Andrey Ryabinin
2014-07-09 19:29 ` Andi Kleen
` (4 subsequent siblings)
5 siblings, 1 reply; 80+ messages in thread
From: Christoph Lameter @ 2014-07-09 14:26 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
> +
> +Markers of unaccessible bytes could be found in mm/kasan/kasan.h header:
> +
> +#define KASAN_FREE_PAGE 0xFF /* page was freed */
> +#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
> +#define KASAN_SLAB_REDZONE 0xFD /* Slab page redzone, does not belong to any slub object */
We call these zones "PADDING". Redzones are associated with an object.
Padding is there because bytes are left over, unusable or necessary for
alignment.
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions Andrey Ryabinin
@ 2014-07-09 14:29 ` Christoph Lameter
2014-07-10 7:41 ` Andrey Ryabinin
0 siblings, 1 reply; 80+ messages in thread
From: Christoph Lameter @ 2014-07-09 14:29 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
> Remove static and add function declarations to mm/slab.h so they
> could be used by kernel address sanitizer.
Hmmm... This is allocator specific. At some future point it would be good
to move error reporting to slab_common.c and use those from all
allocators.
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
> mm/slab.h | 5 +++++
> mm/slub.c | 4 ++--
> 2 files changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 1257ade..912af7f 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -339,5 +339,10 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
>
> void *slab_next(struct seq_file *m, void *p, loff_t *pos);
> void slab_stop(struct seq_file *m, void *p);
> +void slab_err(struct kmem_cache *s, struct page *page,
> + const char *fmt, ...);
> +void object_err(struct kmem_cache *s, struct page *page,
> + u8 *object, char *reason);
> +
>
> #endif /* MM_SLAB_H */
> diff --git a/mm/slub.c b/mm/slub.c
> index 6641a8f..3bdd9ac 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -635,14 +635,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
> dump_stack();
> }
>
> -static void object_err(struct kmem_cache *s, struct page *page,
> +void object_err(struct kmem_cache *s, struct page *page,
> u8 *object, char *reason)
> {
> slab_bug(s, "%s", reason);
> print_trailer(s, page, object);
> }
>
> -static void slab_err(struct kmem_cache *s, struct page *page,
> +void slab_err(struct kmem_cache *s, struct page *page,
> const char *fmt, ...)
> {
> va_list args;
>
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 12/21] mm: util: move krealloc/kzfree to slab_common.c
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 12/21] mm: util: move krealloc/kzfree to slab_common.c Andrey Ryabinin
@ 2014-07-09 14:32 ` Christoph Lameter
2014-07-10 7:43 ` Andrey Ryabinin
0 siblings, 1 reply; 80+ messages in thread
From: Christoph Lameter @ 2014-07-09 14:32 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
> To avoid false positive reports in kernel address sanitizer krealloc/kzfree
> functions shouldn't be instrumented. Since we want to instrument other
> functions in mm/util.c, krealloc/kzfree moved to slab_common.c which is not
> instrumented.
>
> Unfortunately we can't completely disable instrumentation for one function.
> We could disable compiler's instrumentation for one function by using
> __attribute__((no_sanitize_address)).
> But the problem here is that the memset call will be replaced by the instrumented
> version kasan_memset since currently it's implemented as a define:
Looks good to me and useful regardless of the sanitizer going in.
Acked-by: Christoph Lameter <cl@linux.com>
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 13/21] mm: slub: add allocation size field to struct kmem_cache
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 13/21] mm: slub: add allocation size field to struct kmem_cache Andrey Ryabinin
@ 2014-07-09 14:33 ` Christoph Lameter
2014-07-10 8:44 ` Andrey Ryabinin
0 siblings, 1 reply; 80+ messages in thread
From: Christoph Lameter @ 2014-07-09 14:33 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
> When caller creates new kmem_cache, requested size of kmem_cache
> will be stored in alloc_size. Later alloc_size will be used by
> kernel address sanitizer to mark alloc_size of a slab object as
> accessible and the rest of its size as redzone.
I think this patch is not needed since object_size == alloc_size right?
* [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator Andrey Ryabinin
@ 2014-07-09 14:48 ` Christoph Lameter
2014-07-10 9:24 ` Andrey Ryabinin
2014-07-15 6:09 ` Joonsoo Kim
1 sibling, 1 reply; 80+ messages in thread
From: Christoph Lameter @ 2014-07-09 14:48 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
> With this patch kasan will be able to catch bugs in memory allocated
> by slub.
> Allocated slab page, this whole page marked as unaccessible
> in corresponding shadow memory.
> On allocation of slub object requested allocation size marked as
> accessible, and the rest of the object (including slub's metadata)
> marked as redzone (unaccessible).
>
> We also mark object as accessible if ksize was called for this object.
> There is some places in kernel where ksize function is called to inquire
> size of really allocated area. Such callers could validly access whole
> allocated memory, so it should be marked as accessible by kasan_krealloc call.
Do you really need to go through all of this? Add the hooks to
kmem_cache_alloc_trace() instead and use the existing instrumentation
that is there for other purposes?
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure Andrey Ryabinin
2014-07-09 14:26 ` Christoph Lameter
@ 2014-07-09 19:29 ` Andi Kleen
2014-07-09 20:40 ` Yuri Gribov
2014-07-10 12:10 ` Andrey Ryabinin
2014-07-09 20:26 ` Dave Hansen
` (3 subsequent siblings)
5 siblings, 2 replies; 80+ messages in thread
From: Andi Kleen @ 2014-07-09 19:29 UTC (permalink / raw)
To: linux-arm-kernel
Andrey Ryabinin <a.ryabinin@samsung.com> writes:
Seems like a useful facility. Thanks for working on it. Overall the code
looks fairly good. Some comments below.
> +
> +Address sanitizer for kernel (KASAN) is a dynamic memory error detector. It provides
> +fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
> +
> +KASAN is better than all of CONFIG_DEBUG_PAGEALLOC, because it:
> + - is based on compiler instrumentation (fast),
> + - detects OOB for both writes and reads,
> + - provides UAF detection,
Please expand the acronym.
> +
> +|--------| |--------|
> +| Memory |---- | Memory |
> +|--------| \ |--------|
> +| Shadow |-- -->| Shadow |
> +|--------| \ |--------|
> +| Bad | ---->| Bad |
> +|--------| / |--------|
> +| Shadow |-- -->| Shadow |
> +|--------| / |--------|
> +| Memory |---- | Memory |
> +|--------| |--------|
I guess this implies it's incompatible with memory hotplug, as the
shadow couldn't be extended?
That's fine, but you should exclude that in Kconfig.
There are likely more exclude dependencies for Kconfig too.
Needs dependencies on the right sparsemem options?
Does it work with kmemcheck? If not exclude.
Perhaps try to boot it with all other debug options and see which ones break.
> diff --git a/Makefile b/Makefile
> index 64ab7b3..08a07f2 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -384,6 +384,12 @@ LDFLAGS_MODULE =
> CFLAGS_KERNEL =
> AFLAGS_KERNEL =
> CFLAGS_GCOV = -fprofile-arcs -ftest-coverage
> +CFLAGS_KASAN = -fsanitize=address --param asan-stack=0 \
> + --param asan-use-after-return=0 \
> + --param asan-globals=0 \
> + --param asan-memintrin=0 \
> + --param asan-instrumentation-with-call-threshold=0 \
Hardcoding --param is not very nice. They can change from compiler
to compiler version. Need some version checking?
Also you should probably have some check that the compiler supports it
(and print some warning if not)
Otherwise randconfig builds will be broken if the compiler doesn't.
Also does the kernel really build/work without the other patches?
If not please move this patchkit to the end of the series, to keep
the patchkit bisectable (this may need moving parts of the includes
into a separate patch)
> diff --git a/commit b/commit
> new file mode 100644
> index 0000000..134f4dd
> --- /dev/null
> +++ b/commit
> @@ -0,0 +1,3 @@
> +
> +I'm working on address sanitizer for kernel.
> +fuck this bloody.
> \ No newline at end of file
Heh. Please remove.
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> new file mode 100644
> index 0000000..2bfff78
> --- /dev/null
> +++ b/lib/Kconfig.kasan
> @@ -0,0 +1,20 @@
> +config HAVE_ARCH_KASAN
> + bool
> +
> +if HAVE_ARCH_KASAN
> +
> +config KASAN
> + bool "AddressSanitizer: dynamic memory error detector"
> + default n
> + help
> + Enables AddressSanitizer - dynamic memory error detector,
> + that finds out-of-bounds and use-after-free bugs.
Needs much more description.
> +
> +config KASAN_SANITIZE_ALL
> + bool "Instrument entire kernel"
> + depends on KASAN
> + default y
> + help
> + This enables compiler intrumentation for entire kernel
> +
Same.
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> new file mode 100644
> index 0000000..e2cd345
> --- /dev/null
> +++ b/mm/kasan/kasan.c
> @@ -0,0 +1,292 @@
> +/*
> + *
Add one line here what the file does. Same for other files.
> + * Copyright (c) 2014 Samsung Electronics Co., Ltd.
> + * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> +#include "kasan.h"
> +#include "../slab.h"
That's ugly, but ok.
> +
> +static bool __read_mostly kasan_initialized;
It would be better to use a static_key, but I guess your initialization
is too early?
Of course the proposal to move it into start_kernel and get rid of the
flag would be best.
> +
> +unsigned long kasan_shadow_start;
> +unsigned long kasan_shadow_end;
> +
> +/* equals to (kasan_shadow_start - PAGE_OFFSET/KASAN_SHADOW_SCALE_SIZE) */
> +unsigned long __read_mostly kasan_shadow_offset; /* it's not a very good name for this variable */
Do these all need to be global?
> +
> +
> +static inline bool addr_is_in_mem(unsigned long addr)
> +{
> + return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
> +}
Of course there are lots of cases where this doesn't work (like large
holes), but I assume this has been checked elsewhere?
> +
> +void kasan_enable_local(void)
> +{
> + if (likely(kasan_initialized))
> + current->kasan_depth--;
> +}
> +
> +void kasan_disable_local(void)
> +{
> + if (likely(kasan_initialized))
> + current->kasan_depth++;
> +}
Couldn't this be done without checking the flag?
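A flag-free variant, assuming current->kasan_depth is always safe to touch
by the time these are called (which is what the question hinges on), would
simply be:

void kasan_enable_local(void)
{
	current->kasan_depth--;
}

void kasan_disable_local(void)
{
	current->kasan_depth++;
}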
> + return;
> +
> + if (unlikely(addr < TASK_SIZE)) {
> + info.access_addr = addr;
> + info.access_size = size;
> + info.is_write = write;
> + info.ip = _RET_IP_;
> + kasan_report_user_access(&info);
> + return;
> + }
How about vsyscall pages here?
> +
> + if (!addr_is_in_mem(addr))
> + return;
> +
> + access_addr = memory_is_poisoned(addr, size);
> + if (likely(access_addr == 0))
> + return;
> +
> + info.access_addr = access_addr;
> + info.access_size = size;
> + info.is_write = write;
> + info.ip = _RET_IP_;
> + kasan_report_error(&info);
> +}
> +
> +void __init kasan_alloc_shadow(void)
> +{
> + unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
> + unsigned long shadow_size;
> + phys_addr_t shadow_phys_start;
> +
> + shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;
> +
> + shadow_phys_start = memblock_alloc(shadow_size, PAGE_SIZE);
> + if (!shadow_phys_start) {
> + pr_err("Unable to reserve shadow memory\n");
> + return;
Wouldn't this crash&burn later? panic?
> +void *kasan_memcpy(void *dst, const void *src, size_t len)
> +{
> + if (unlikely(len == 0))
> + return dst;
> +
> + check_memory_region((unsigned long)src, len, false);
> + check_memory_region((unsigned long)dst, len, true);
I assume this handles negative len?
Also check for overlaps?
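For illustration, such checks could be bolted on roughly like this (a
sketch, not part of the patch; the trailing plain memcpy() call is assumed
from context, since mm/kasan itself is not instrumented):

void *kasan_memcpy(void *dst, const void *src, size_t len)
{
	if (unlikely(len == 0))
		return dst;

	/* a "negative" length wraps around to a huge size_t value */
	WARN_ON_ONCE((long)len < 0);
	/* overlapping regions are undefined behaviour for memcpy() */
	WARN_ON_ONCE((unsigned long)dst < (unsigned long)src + len &&
		     (unsigned long)src < (unsigned long)dst + len);

	check_memory_region((unsigned long)src, len, false);
	check_memory_region((unsigned long)dst, len, true);

	return memcpy(dst, src, len);
}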
> +
> +static inline void *virt_to_obj(struct kmem_cache *s, void *slab_start, void *x)
> +{
> + return x - ((x - slab_start) % s->size);
> +}
This should be in the respective slab headers, not hard coded.
> +void kasan_report_error(struct access_info *info)
> +{
> + kasan_disable_local();
> + pr_err("================================="
> + "=================================\n");
> + print_error_description(info);
> + print_address_description(info);
> + print_shadow_for_address(info->access_addr);
> + pr_err("================================="
> + "=================================\n");
> + kasan_enable_local();
> +}
> +
> +void kasan_report_user_access(struct access_info *info)
> +{
> + kasan_disable_local();
Should print the same prefix oopses use, a lot of log grep tools
look for that.
Also you may want some lock to prevent multiple
reports mixing.
-Andi
--
ak at linux.intel.com -- Speaking for myself only
* [RFC/PATCH RESEND -next 03/21] x86: add kasan hooks fort memcpy/memmove/memset functions
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 03/21] x86: add kasan hooks fort memcpy/memmove/memset functions Andrey Ryabinin
@ 2014-07-09 19:31 ` Andi Kleen
2014-07-10 13:54 ` Andrey Ryabinin
0 siblings, 1 reply; 80+ messages in thread
From: Andi Kleen @ 2014-07-09 19:31 UTC (permalink / raw)
To: linux-arm-kernel
Andrey Ryabinin <a.ryabinin@samsung.com> writes:
> +
> +#undef memcpy
> +void *kasan_memset(void *ptr, int val, size_t len);
> +void *kasan_memcpy(void *dst, const void *src, size_t len);
> +void *kasan_memmove(void *dst, const void *src, size_t len);
> +
> +#define memcpy(dst, src, len) kasan_memcpy((dst), (src), (len))
> +#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
> +#define memmove(dst, src, len) kasan_memmove((dst), (src), (len))
I don't think just a define is enough; gcc can call these functions
implicitly too (both with and without __), for example for a struct copy.
You need to have true linker-level aliases.
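A linker-level alias along those lines could look roughly like this (a
sketch only; it assumes the architecture provides an uninstrumented
__memcpy(), as x86 does, so the alias does not recurse into itself):

void *kasan_memcpy(void *dst, const void *src, size_t len)
{
	check_memory_region((unsigned long)src, len, false);
	check_memory_region((unsigned long)dst, len, true);
	return __memcpy(dst, src, len);	/* raw, uninstrumented copy */
}

/* catch both explicit and compiler-generated memcpy() calls */
void *memcpy(void *dst, const void *src, size_t len)
	__alias(kasan_memcpy);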
-Andi
--
ak at linux.intel.com -- Speaking for myself only
* [RFC/PATCH RESEND -next 05/21] x86: cpu: don't sanitize early stages of a secondary CPU boot
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 05/21] x86: cpu: don't sanitize early stages of a secondary CPU boot Andrey Ryabinin
@ 2014-07-09 19:33 ` Andi Kleen
2014-07-10 13:15 ` Andrey Ryabinin
0 siblings, 1 reply; 80+ messages in thread
From: Andi Kleen @ 2014-07-09 19:33 UTC (permalink / raw)
To: linux-arm-kernel
Andrey Ryabinin <a.ryabinin@samsung.com> writes:
> Instrumentation of this files may result in unbootable machine.
This doesn't make sense. Is the code not NMI safe?
If yes, that would need to be fixed. Please debug more.
perf is a common source of bugs (see Vince Weaver's fuzzer results),
so it would be good to have this functionality for it.
-Andi
--
ak at linux.intel.com -- Speaking for myself only
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure Andrey Ryabinin
2014-07-09 14:26 ` Christoph Lameter
2014-07-09 19:29 ` Andi Kleen
@ 2014-07-09 20:26 ` Dave Hansen
2014-07-10 12:12 ` Andrey Ryabinin
2014-07-09 20:37 ` Dave Hansen
` (2 subsequent siblings)
5 siblings, 1 reply; 80+ messages in thread
From: Dave Hansen @ 2014-07-09 20:26 UTC (permalink / raw)
To: linux-arm-kernel
On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
> mapping with a scale and offset to translate a memory address to its corresponding
> shadow address.
>
> Here is function to translate address to corresponding shadow address:
>
> unsigned long kasan_mem_to_shadow(unsigned long addr)
> {
> return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
> + kasan_shadow_start;
> }
How does this interact with vmalloc() addresses or those from a kmap()?
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure Andrey Ryabinin
` (2 preceding siblings ...)
2014-07-09 20:26 ` Dave Hansen
@ 2014-07-09 20:37 ` Dave Hansen
2014-07-09 20:38 ` Dave Hansen
2014-07-10 11:55 ` Sasha Levin
5 siblings, 0 replies; 80+ messages in thread
From: Dave Hansen @ 2014-07-09 20:37 UTC (permalink / raw)
To: linux-arm-kernel
On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
> +void __init kasan_alloc_shadow(void)
> +{
> + unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
> + unsigned long shadow_size;
> + phys_addr_t shadow_phys_start;
> +
> + shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;
This calculation is essentially meaningless, and it's going to break
when we have sparse memory situations like having big holes. This code
attempts to allocate non-sparse data for backing what might be very
sparse memory ranges.
It's quite OK for us to handle configurations today where we have 2GB of
RAM with 1GB at 0x0 and 1GB at 0x10000000000. This code would attempt
to allocate a 128GB shadow area for this configuration with 2GB of RAM. :)
You're probably going to get stuck doing something similar to the
sparsemem-vmemmap code does. You could handle this for normal sparsemem
by adding a shadow area pointer to the memory section.
Or, just vmalloc() (get_vm_area() really) the virtual space and then
make sure to allocate the backing store before you need it (handling the
faults would probably get too tricky).
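A very rough sketch of that last option (illustrative only; it assumes this
runs late enough for vmalloc to be usable, and populating the backing pages
for present memory is left out):

void __init kasan_alloc_shadow(void)
{
	unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
	unsigned long shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;
	struct vm_struct *shadow;

	/* reserve virtual space only, no physical backing yet */
	shadow = get_vm_area(shadow_size, VM_ALLOC);
	if (!shadow)
		panic("kasan: unable to reserve shadow virtual area\n");

	kasan_shadow_start = (unsigned long)shadow->addr;
	kasan_shadow_end = kasan_shadow_start + shadow_size;
	/* backing pages would then be allocated and mapped only for the
	 * shadow of memory ranges that are actually present */
}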
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure Andrey Ryabinin
` (3 preceding siblings ...)
2014-07-09 20:37 ` Dave Hansen
@ 2014-07-09 20:38 ` Dave Hansen
2014-07-10 11:55 ` Sasha Levin
5 siblings, 0 replies; 80+ messages in thread
From: Dave Hansen @ 2014-07-09 20:38 UTC (permalink / raw)
To: linux-arm-kernel
On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
> +config KASAN
> + bool "AddressSanitizer: dynamic memory error detector"
> + default n
> + help
> + Enables AddressSanitizer - dynamic memory error detector,
> + that finds out-of-bounds and use-after-free bugs.
This definitely needs some more text like "This option eats boatloads of
memory and will slow your system down enough that it should never be
used in production unless you are crazy".
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-09 19:29 ` Andi Kleen
@ 2014-07-09 20:40 ` Yuri Gribov
2014-07-10 12:10 ` Andrey Ryabinin
1 sibling, 0 replies; 80+ messages in thread
From: Yuri Gribov @ 2014-07-09 20:40 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jul 9, 2014 at 11:29 PM, Andi Kleen <andi@firstfloor.org> wrote:
> Hardcoding --param is not very nice. They can change from compiler
> to compiler version. Need some version checking?
We plan to address this soon. CFLAGS will look more like
-fsanitize=kernel-address but this flag is not yet in gcc.
-Y
* [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector.
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
` (20 preceding siblings ...)
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 21/21] lib: add kmalloc_bug_test module Andrey Ryabinin
@ 2014-07-09 21:19 ` Dave Hansen
2014-07-09 21:44 ` Andi Kleen
[not found] ` <1421859105-25253-1-git-send-email-a.ryabinin@samsung.com>
` (2 subsequent siblings)
24 siblings, 1 reply; 80+ messages in thread
From: Dave Hansen @ 2014-07-09 21:19 UTC (permalink / raw)
To: linux-arm-kernel
This is totally self-serving (and employer-serving), but has anybody
thought about this large collection of memory debugging tools that we
are growing? It helps to have them all in the same places in the menus
(thanks for adding it to Memory Debugging, btw!).
But, this gives us at least four things that overlap with kasan's
features on some level. Each of these has its own advantages and
disadvantages, of course:
1. DEBUG_PAGEALLOC
2. SLUB debugging / DEBUG_OBJECTS
3. kmemcheck
4. kasan
... and there are surely more coming down the pike, like Intel MPX:
> https://software.intel.com/en-us/articles/introduction-to-intel-memory-protection-extensions
Or, do we just keep adding these overlapping tools and their associated
code over and over and fragment their user bases?
You're also claiming that "KASAN is better than all of
CONFIG_DEBUG_PAGEALLOC". So should we just disallow (or hide)
DEBUG_PAGEALLOC on kernels where KASAN is available?
Maybe we just need to keep these out of mainline and make Andrew carry
it in -mm until the end of time. :)
* [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector.
2014-07-09 21:19 ` [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Dave Hansen
@ 2014-07-09 21:44 ` Andi Kleen
2014-07-09 21:59 ` Vegard Nossum
0 siblings, 1 reply; 80+ messages in thread
From: Andi Kleen @ 2014-07-09 21:44 UTC (permalink / raw)
To: linux-arm-kernel
Dave Hansen <dave.hansen@intel.com> writes:
>
> You're also claiming that "KASAN is better than all of
better as in finding more bugs, but surely not better as in
"do so with less overhead"
> CONFIG_DEBUG_PAGEALLOC". So should we just disallow (or hide)
> DEBUG_PAGEALLOC on kernels where KASAN is available?
I don't think DEBUG_PAGEALLOC/SLUB debug and kasan really conflict.
DEBUG_PAGEALLOC/SLUB is "much lower overhead but less bugs found".
KASAN is "slow but thorough". There are niches for both.
But I could see KASAN eventually deprecating kmemcheck, which
is just incredible slow.
-Andi
--
ak at linux.intel.com -- Speaking for myself only
* [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector.
2014-07-09 21:44 ` Andi Kleen
@ 2014-07-09 21:59 ` Vegard Nossum
2014-07-09 23:33 ` Dave Hansen
` (2 more replies)
0 siblings, 3 replies; 80+ messages in thread
From: Vegard Nossum @ 2014-07-09 21:59 UTC (permalink / raw)
To: linux-arm-kernel
On 9 July 2014 23:44, Andi Kleen <andi@firstfloor.org> wrote:
> Dave Hansen <dave.hansen@intel.com> writes:
>>
>> You're also claiming that "KASAN is better than all of
>
> better as in finding more bugs, but surely not better as in
> "do so with less overhead"
>
>> CONFIG_DEBUG_PAGEALLOC". So should we just disallow (or hide)
>> DEBUG_PAGEALLOC on kernels where KASAN is available?
>
> I don't think DEBUG_PAGEALLOC/SLUB debug and kasan really conflict.
>
> DEBUG_PAGEALLOC/SLUB is "much lower overhead but less bugs found".
> KASAN is "slow but thorough" There are niches for both.
>
> But I could see KASAN eventually deprecating kmemcheck, which
> is just incredible slow.
FWIW, I definitely agree with this -- if KASAN can do everything that
kmemcheck can, it is no doubt the right way forward.
Vegard
* [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector.
2014-07-09 21:59 ` Vegard Nossum
@ 2014-07-09 23:33 ` Dave Hansen
2014-07-10 0:03 ` Andi Kleen
2014-07-10 13:59 ` Andrey Ryabinin
2 siblings, 0 replies; 80+ messages in thread
From: Dave Hansen @ 2014-07-09 23:33 UTC (permalink / raw)
To: linux-arm-kernel
On 07/09/2014 02:59 PM, Vegard Nossum wrote:
>> > But I could see KASAN eventually deprecating kmemcheck, which
>> > is just incredible slow.
> FWIW, I definitely agree with this -- if KASAN can do everything that
> kmemcheck can, it is no doubt the right way forward.
That's very cool. For what it's worth, the per-arch work does appear to
be pretty minimal and the things like the string function replacements
_should_ be able to be made generic. Aren't the x86_32/x86_64 and arm
hooks pretty much copied-and-pasted?
* [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector.
2014-07-09 21:59 ` Vegard Nossum
2014-07-09 23:33 ` Dave Hansen
@ 2014-07-10 0:03 ` Andi Kleen
2014-07-10 13:59 ` Andrey Ryabinin
2 siblings, 0 replies; 80+ messages in thread
From: Andi Kleen @ 2014-07-10 0:03 UTC (permalink / raw)
To: linux-arm-kernel
> FWIW, I definitely agree with this -- if KASAN can do everything that
> kmemcheck can, it is no doubt the right way forward.
Thanks
BTW I didn't want to sound like I'm against kmemcheck. It is a very
useful tool and was impressive work given the constraints (no help from
the compiler)
-andi
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-09 14:26 ` Christoph Lameter
@ 2014-07-10 7:31 ` Andrey Ryabinin
0 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 7:31 UTC (permalink / raw)
To: linux-arm-kernel
On 07/09/14 18:26, Christoph Lameter wrote:
> On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
>
>> +
>> +Markers of unaccessible bytes could be found in mm/kasan/kasan.h header:
>> +
>> +#define KASAN_FREE_PAGE 0xFF /* page was freed */
>> +#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
>> +#define KASAN_SLAB_REDZONE 0xFD /* Slab page redzone, does not belong to any slub object */
>
> We call these zones "PADDING". Redzones are associated with an object.
> Padding is there because bytes are left over, unusable or necessary for
> alignment.
>
Good point. I will change the name to make it less confusing.
* [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions
2014-07-09 14:29 ` Christoph Lameter
@ 2014-07-10 7:41 ` Andrey Ryabinin
2014-07-10 14:07 ` Christoph Lameter
0 siblings, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 7:41 UTC (permalink / raw)
To: linux-arm-kernel
On 07/09/14 18:29, Christoph Lameter wrote:
> On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
>
>> Remove static and add function declarations to mm/slab.h so they
>> could be used by kernel address sanitizer.
>
> Hmmm... This is allocator specific. At some future point it would be good
> to move error reporting to slab_common.c and use those from all
> allocators.
>
I could move declarations to kasan internals, but it will look ugly too.
I also had an idea about unifying SLAB_DEBUG and SLUB_DEBUG at some point in the future.
I can't tell right now how hard it will be, but it seems doable.
* [RFC/PATCH RESEND -next 12/21] mm: util: move krealloc/kzfree to slab_common.c
2014-07-09 14:32 ` Christoph Lameter
@ 2014-07-10 7:43 ` Andrey Ryabinin
2014-07-10 14:08 ` Christoph Lameter
0 siblings, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 7:43 UTC (permalink / raw)
To: linux-arm-kernel
On 07/09/14 18:32, Christoph Lameter wrote:
> On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
>
>> To avoid false positive reports in kernel address sanitizer krealloc/kzfree
>> functions shouldn't be instrumented. Since we want to instrument other
>> functions in mm/util.c, krealloc/kzfree moved to slab_common.c which is not
>> instrumented.
>>
>> Unfortunately we can't completely disable instrumentation for one function.
>> We could disable compiler's instrumentation for one function by using
>> __atribute__((no_sanitize_address)).
>> But the problem here is that memset call will be replaced by instumented
>> version kasan_memset since currently it's implemented as define:
>
> Looks good to me and useful regardless of the sanitizer going in.
>
> Acked-by: Christoph Lameter <cl@linux.com>
>
I also noticed in mm/util.c:
/* Tracepoints definitions. */
EXPORT_TRACEPOINT_SYMBOL(kmalloc);
EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
EXPORT_TRACEPOINT_SYMBOL(kmalloc_node);
EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc_node);
EXPORT_TRACEPOINT_SYMBOL(kfree);
EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free);
Should I send another patch to move this to slab_common.c?
* [RFC/PATCH RESEND -next 13/21] mm: slub: add allocation size field to struct kmem_cache
2014-07-09 14:33 ` Christoph Lameter
@ 2014-07-10 8:44 ` Andrey Ryabinin
0 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 8:44 UTC (permalink / raw)
To: linux-arm-kernel
On 07/09/14 18:33, Christoph Lameter wrote:
> On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
>
>> When caller creates new kmem_cache, requested size of kmem_cache
>> will be stored in alloc_size. Later alloc_size will be used by
>> kerenel address sanitizer to mark alloc_size of slab object as
>> accessible and the rest of its size as redzone.
>
> I think this patch is not needed since object_size == alloc_size right?
>
I vaguely remember there was a reason for this patch, but I can't see/recall it now.
Probably I misunderstood something. I'll drop this patch
* [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator
2014-07-09 14:48 ` Christoph Lameter
@ 2014-07-10 9:24 ` Andrey Ryabinin
0 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 9:24 UTC (permalink / raw)
To: linux-arm-kernel
On 07/09/14 18:48, Christoph Lameter wrote:
> On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
>
>> With this patch kasan will be able to catch bugs in memory allocated
>> by slub.
>> Allocated slab page, this whole page marked as unaccessible
>> in corresponding shadow memory.
>> On allocation of slub object requested allocation size marked as
>> accessible, and the rest of the object (including slub's metadata)
>> marked as redzone (unaccessible).
>>
>> We also mark object as accessible if ksize was called for this object.
>> There is some places in kernel where ksize function is called to inquire
>> size of really allocated area. Such callers could validly access whole
>> allocated memory, so it should be marked as accessible by kasan_krealloc call.
>
> Do you really need to go through all of this? Add the hooks to
> kmem_cache_alloc_trace() instead and use the existing instrumentation
> that is there for other purposes?
>
I could move the kasan_kmalloc hooks to kmem_cache_alloc_trace(), and I think it will look better.
However, it will require two hooks instead of one (for CONFIG_TRACING=y and CONFIG_TRACING=n).
Btw, it seems I broke CONFIG_SL[AO]B configurations in this patch by introducing the __ksize function
which is now used in krealloc.
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure Andrey Ryabinin
` (4 preceding siblings ...)
2014-07-09 20:38 ` Dave Hansen
@ 2014-07-10 11:55 ` Sasha Levin
2014-07-10 13:01 ` Andrey Ryabinin
5 siblings, 1 reply; 80+ messages in thread
From: Sasha Levin @ 2014-07-10 11:55 UTC (permalink / raw)
To: linux-arm-kernel
On 07/09/2014 07:29 AM, Andrey Ryabinin wrote:
> Address sanitizer for kernel (kasan) is a dynamic memory error detector.
>
> The main features of kasan is:
> - is based on compiler instrumentation (fast),
> - detects out of bounds for both writes and reads,
> - provides use after free detection,
>
> This patch only adds infrastructure for kernel address sanitizer. It's not
> available for use yet. The idea and some code was borrowed from [1].
>
> This feature requires pretty fresh GCC (revision r211699 from 2014-06-16 or
> latter).
>
> Implementation details:
> The main idea of KASAN is to use shadow memory to record whether each byte of memory
> is safe to access or not, and use compiler's instrumentation to check the shadow memory
> on each memory access.
>
> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
> mapping with a scale and offset to translate a memory address to its corresponding
> shadow address.
>
> Here is function to translate address to corresponding shadow address:
>
> unsigned long kasan_mem_to_shadow(unsigned long addr)
> {
> return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
> + kasan_shadow_start;
> }
>
> where KASAN_SHADOW_SCALE_SHIFT = 3.
>
> So for every 8 bytes of lowmemory there is one corresponding byte of shadow memory.
> The following encoding used for each shadow byte: 0 means that all 8 bytes of the
> corresponding memory region are valid for access; k (1 <= k <= 7) means that
> the first k bytes are valid for access, and other (8 - k) bytes are not;
> Any negative value indicates that the entire 8-bytes are unaccessible.
> Different negative values used to distinguish between different kinds of
> unaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
>
> To be able to detect accesses to bad memory we need a special compiler.
> Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
> before each memory access of size 1, 2, 4, 8 or 16.
>
> These functions check whether memory region is valid to access or not by checking
> corresponding shadow memory. If access is not valid an error printed.
>
> [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
I gave it a spin, and it seems that it fails for what you might call a "regular"
memory size these days, in my case it was 18G:
[ 0.000000] Kernel panic - not syncing: ERROR: Failed to allocate 0xe0c00000 bytes below 0x0.
[ 0.000000]
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.16.0-rc4-next-20140710-sasha-00044-gb7b0579-dirty #784
[ 0.000000] ffffffffb9c2d3c8 cd9ce91adea4379a 0000000000000000 ffffffffb9c2d3c8
[ 0.000000] ffffffffb9c2d330 ffffffffb7fe89b7 ffffffffb93c8c28 ffffffffb9c2d3b8
[ 0.000000] ffffffffb7fcff1d 0000000000000018 ffffffffb9c2d3c8 ffffffffb9c2d360
[ 0.000000] Call Trace:
[ 0.000000] <UNK> dump_stack (lib/dump_stack.c:52)
[ 0.000000] panic (kernel/panic.c:119)
[ 0.000000] memblock_alloc_base (mm/memblock.c:1092)
[ 0.000000] memblock_alloc (mm/memblock.c:1097)
[ 0.000000] kasan_alloc_shadow (mm/kasan/kasan.c:151)
[ 0.000000] zone_sizes_init (arch/x86/mm/init.c:684)
[ 0.000000] paging_init (arch/x86/mm/init_64.c:677)
[ 0.000000] setup_arch (arch/x86/kernel/setup.c:1168)
[ 0.000000] ? printk (kernel/printk/printk.c:1839)
[ 0.000000] start_kernel (include/linux/mm_types.h:462 init/main.c:533)
[ 0.000000] ? early_idt_handlers (arch/x86/kernel/head_64.S:344)
[ 0.000000] x86_64_start_reservations (arch/x86/kernel/head64.c:194)
[ 0.000000] x86_64_start_kernel (arch/x86/kernel/head64.c:183)
It got better when I reduced memory to 1GB, but then my system just failed to boot
at all because that's not enough to bring everything up.
Thanks,
Sasha
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-09 19:29 ` Andi Kleen
2014-07-09 20:40 ` Yuri Gribov
@ 2014-07-10 12:10 ` Andrey Ryabinin
1 sibling, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 12:10 UTC (permalink / raw)
To: linux-arm-kernel
On 07/09/14 23:29, Andi Kleen wrote:
> Andrey Ryabinin <a.ryabinin@samsung.com> writes:
>
> Seems like a useful facility. Thanks for working on it. Overall the code
> looks fairly good. Some comments below.
>
>
>> +
>> +Address sanitizer for kernel (KASAN) is a dynamic memory error detector. It provides
>> +fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
>> +
>> +KASAN is better than all of CONFIG_DEBUG_PAGEALLOC, because it:
>> + - is based on compiler instrumentation (fast),
>> + - detects OOB for both writes and reads,
>> + - provides UAF detection,
>
> Please expand the acronym.
>
Sure, will do.
>> +
>> +|--------| |--------|
>> +| Memory |---- | Memory |
>> +|--------| \ |--------|
>> +| Shadow |-- -->| Shadow |
>> +|--------| \ |--------|
>> +| Bad | ---->| Bad |
>> +|--------| / |--------|
>> +| Shadow |-- -->| Shadow |
>> +|--------| / |--------|
>> +| Memory |---- | Memory |
>> +|--------| |--------|
>
> I guess this implies it's incompatible with memory hotplug, as the
> shadow couldn't be extended?
>
> That's fine, but you should exclude that in Kconfig.
>
> There are likely more exclude dependencies for Kconfig too.
> Neds dependencies on the right sparse mem options?
> Does it work with kmemcheck? If not exclude.
>
> Perhaps try to boot it with all other debug options and see which ones break.
>
Besides Kconfig dependencies I might need to disable instrumentation in some places.
For example kasan doesn't play well with kmemleak. Kmemleak may look for pointers inside redzones
and kasan treats this as an error.
>> diff --git a/Makefile b/Makefile
>> index 64ab7b3..08a07f2 100644
>> --- a/Makefile
>> +++ b/Makefile
>> @@ -384,6 +384,12 @@ LDFLAGS_MODULE =
>> CFLAGS_KERNEL =
>> AFLAGS_KERNEL =
>> CFLAGS_GCOV = -fprofile-arcs -ftest-coverage
>> +CFLAGS_KASAN = -fsanitize=address --param asan-stack=0 \
>> + --param asan-use-after-return=0 \
>> + --param asan-globals=0 \
>> + --param asan-memintrin=0 \
>> + --param asan-instrumentation-with-call-threshold=0 \
>
> Hardcoding --param is not very nice. They can change from compiler
> to compiler version. Need some version checking?
>
> Also you should probably have some check that the compiler supports it
> (and print some warning if not)
> Otherwise randconfig builds will be broken if the compiler doesn't.
>
> Also does the kernel really build/work without the other patches?
> If not please move this patchkit to the end of the series, to keep
> the patchkit bisectable (this may need moving parts of the includes
> into a separate patch)
>
It's buildable. At this point you can't select CONFIG_KASAN=y because there is no
arch that supports kasan (the HAVE_ARCH_KASAN config). But after the x86 patches the kernel can be
built and run with kasan. At that point kasan will be able to catch only "wild" memory
accesses (when someone outside mm/kasan/* tries to access shadow memory).
>> diff --git a/commit b/commit
>> new file mode 100644
>> index 0000000..134f4dd
>> --- /dev/null
>> +++ b/commit
>> @@ -0,0 +1,3 @@
>> +
>> +I'm working on address sanitizer for kernel.
>> +fuck this bloody.
>> \ No newline at end of file
>
> Heh. Please remove.
>
Oops. No idea how it got there :)
>> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
>> new file mode 100644
>> index 0000000..2bfff78
>> --- /dev/null
>> +++ b/lib/Kconfig.kasan
>> @@ -0,0 +1,20 @@
>> +config HAVE_ARCH_KASAN
>> + bool
>> +
>> +if HAVE_ARCH_KASAN
>> +
>> +config KASAN
>> + bool "AddressSanitizer: dynamic memory error detector"
>> + default n
>> + help
>> + Enables AddressSanitizer - dynamic memory error detector,
>> + that finds out-of-bounds and use-after-free bugs.
>
> Needs much more description.
>
>> +
>> +config KASAN_SANITIZE_ALL
>> + bool "Instrument entire kernel"
>> + depends on KASAN
>> + default y
>> + help
>> + This enables compiler intrumentation for entire kernel
>> +
>
> Same.
>
>
>> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
>> new file mode 100644
>> index 0000000..e2cd345
>> --- /dev/null
>> +++ b/mm/kasan/kasan.c
>> @@ -0,0 +1,292 @@
>> +/*
>> + *
>
> Add one line here what the file does. Same for other files.
>
>> + * Copyright (c) 2014 Samsung Electronics Co., Ltd.
>> + * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> +#include "kasan.h"
>> +#include "../slab.h"
>
> That's ugly, but ok.
Hm... "../slab.h" is not needed in this file. linux/slab.h is enough here.
>
>> +
>> +static bool __read_mostly kasan_initialized;
>
> It would be better to use a static_key, but I guess your initialization
> is too early?
No, not too early. kasan_init_shadow, which switches this flag, is called just after jump_label_init,
so it's not a problem for static_key, but there is another one.
I tried a static key here. It works really well for arm, but it has some problems on x86.
While switching the static key by calling static_key_slow_inc, the first byte of the static key's branch is replaced with
a breakpoint (look at text_poke_bp()). After that, on the first memory access __asan_load/__asan_store is called and
we end up executing this breakpoint from the code that is trying to update that instruction.
text_poke_bp()
{
....
//replace first byte with breakpoint
....
___asan_load*()
....
if (static_key_false(&kasan_initialized)) <-- static key update still in progress
....
//patching code done
}
To make static_key work on x86 I need to disable instrumentation in text_poke_bp() and in any other functions that are called from it.
It might be a big problem if text_poke_bp uses some very generic functions.
A better option would be to get rid of the kasan_initialized check in kasan_enabled():
static inline bool kasan_enabled(void)
{
return likely(kasan_initialized
&& !current->kasan_depth);
}
>
> Of course the proposal to move it into start_kernel and get rid of the
> flag would be best.
>
That's the plan for the future.
>> +
>> +unsigned long kasan_shadow_start;
>> +unsigned long kasan_shadow_end;
>> +
>> +/* equals to (kasan_shadow_start - PAGE_OFFSET/KASAN_SHADOW_SCALE_SIZE) */
>> +unsigned long __read_mostly kasan_shadow_offset; /* it's not a very good name for this variable */
>
> Do these all need to be global?
>
For now only kasan_shadow_start and kasan_shadow_offset need to be global.
It should also be possible to get rid of using kasan_shadow_start in kasan_shadow_to_mem() and make it static.
>> +
>> +
>> +static inline bool addr_is_in_mem(unsigned long addr)
>> +{
>> + return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
>> +}
>
> Of course there are lots of cases where this doesn't work (like large
> holes), but I assume this has been checked elsewhere?
>
Seems I need to do some work for sparsemem configurations.
>
>> +
>> +void kasan_enable_local(void)
>> +{
>> + if (likely(kasan_initialized))
>> + current->kasan_depth--;
>> +}
>> +
>> +void kasan_disable_local(void)
>> +{
>> + if (likely(kasan_initialized))
>> + current->kasan_depth++;
>> +}
>
> Couldn't this be done without checking the flag?
>
Not sure. Do we always have current available? I assume it should be initialized at some point of the boot process.
I will check that.
>
>> + return;
>> +
>> + if (unlikely(addr < TASK_SIZE)) {
>> + info.access_addr = addr;
>> + info.access_size = size;
>> + info.is_write = write;
>> + info.ip = _RET_IP_;
>> + kasan_report_user_access(&info);
>> + return;
>> + }
>
> How about vsyscall pages here?
>
Not sure what you mean. Could you please elaborate?
>> +
>> + if (!addr_is_in_mem(addr))
>> + return;
>> +
>> + access_addr = memory_is_poisoned(addr, size);
>> + if (likely(access_addr == 0))
>> + return;
>> +
>> + info.access_addr = access_addr;
>> + info.access_size = size;
>> + info.is_write = write;
>> + info.ip = _RET_IP_;
>> + kasan_report_error(&info);
>> +}
>> +
>> +void __init kasan_alloc_shadow(void)
>> +{
>> + unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
>> + unsigned long shadow_size;
>> + phys_addr_t shadow_phys_start;
>> +
>> + shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;
>> +
>> + shadow_phys_start = memblock_alloc(shadow_size, PAGE_SIZE);
>> + if (!shadow_phys_start) {
>> + pr_err("Unable to reserve shadow memory\n");
>> + return;
>
> Wouldn't this crash&burn later? panic?
>
As Sasha already reported, it will panic in memblock_alloc.
>> +void *kasan_memcpy(void *dst, const void *src, size_t len)
>> +{
>> + if (unlikely(len == 0))
>> + return dst;
>> +
>> + check_memory_region((unsigned long)src, len, false);
>> + check_memory_region((unsigned long)dst, len, true);
>
> I assume this handles negative len?
> Also check for overlaps?
>
Will do.
>> +
>> +static inline void *virt_to_obj(struct kmem_cache *s, void *slab_start, void *x)
>> +{
>> + return x - ((x - slab_start) % s->size);
>> +}
>
> This should be in the respective slab headers, not hard coded.
>
Agreed.
>> +void kasan_report_error(struct access_info *info)
>> +{
>> + kasan_disable_local();
>> + pr_err("================================="
>> + "=================================\n");
>> + print_error_description(info);
>> + print_address_description(info);
>> + print_shadow_for_address(info->access_addr);
>> + pr_err("================================="
>> + "=================================\n");
>> + kasan_enable_local();
>> +}
>> +
>> +void kasan_report_user_access(struct access_info *info)
>> +{
>> + kasan_disable_local();
>
> Should print the same prefix oopses use, a lot of log grep tools
> look for that.
>
Ok
> Also you may want some lock to prevent multiple
> reports mixing.
I think hiding it in
if (spin_trylock) { ... }
would be enough.
I think it might be a good idea to add an option for reporting only the first error.
It would be useful in some cases (for example, strlen on a non-null-terminated string makes kasan go crazy).
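A sketch of that spin_trylock idea (names are illustrative, not from the
patch):

static DEFINE_SPINLOCK(kasan_report_lock);

void kasan_report_error(struct access_info *info)
{
	unsigned long flags;

	/* drop reports that race with one already being printed */
	if (!spin_trylock_irqsave(&kasan_report_lock, flags))
		return;

	kasan_disable_local();
	print_error_description(info);
	print_address_description(info);
	print_shadow_for_address(info->access_addr);
	kasan_enable_local();

	spin_unlock_irqrestore(&kasan_report_lock, flags);
}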
Thanks for the review.
>
> -Andi
>
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-09 20:26 ` Dave Hansen
@ 2014-07-10 12:12 ` Andrey Ryabinin
2014-07-10 15:55 ` Dave Hansen
0 siblings, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 12:12 UTC (permalink / raw)
To: linux-arm-kernel
On 07/10/14 00:26, Dave Hansen wrote:
> On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>> mapping with a scale and offset to translate a memory address to its corresponding
>> shadow address.
>>
>> Here is function to translate address to corresponding shadow address:
>>
>> unsigned long kasan_mem_to_shadow(unsigned long addr)
>> {
>> return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>> + kasan_shadow_start;
>> }
>
> How does this interact with vmalloc() addresses or those from a kmap()?
>
It's used only for lowmem:
static inline bool addr_is_in_mem(unsigned long addr)
{
return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
}
static __always_inline void check_memory_region(unsigned long addr,
size_t size, bool write)
{
....
if (!addr_is_in_mem(addr))
return;
// check shadow here
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-10 11:55 ` Sasha Levin
@ 2014-07-10 13:01 ` Andrey Ryabinin
2014-07-10 13:31 ` Sasha Levin
0 siblings, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 13:01 UTC (permalink / raw)
To: linux-arm-kernel
On 07/10/14 15:55, Sasha Levin wrote:
> On 07/09/2014 07:29 AM, Andrey Ryabinin wrote:
>> Address sanitizer for kernel (kasan) is a dynamic memory error detector.
>>
>> The main features of kasan is:
>> - is based on compiler instrumentation (fast),
>> - detects out of bounds for both writes and reads,
>> - provides use after free detection,
>>
>> This patch only adds infrastructure for kernel address sanitizer. It's not
>> available for use yet. The idea and some code was borrowed from [1].
>>
>> This feature requires pretty fresh GCC (revision r211699 from 2014-06-16 or
>> latter).
>>
>> Implementation details:
>> The main idea of KASAN is to use shadow memory to record whether each byte of memory
>> is safe to access or not, and use compiler's instrumentation to check the shadow memory
>> on each memory access.
>>
>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>> mapping with a scale and offset to translate a memory address to its corresponding
>> shadow address.
>>
>> Here is function to translate address to corresponding shadow address:
>>
>> unsigned long kasan_mem_to_shadow(unsigned long addr)
>> {
>> return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>> + kasan_shadow_start;
>> }
>>
>> where KASAN_SHADOW_SCALE_SHIFT = 3.
>>
>> So for every 8 bytes of lowmemory there is one corresponding byte of shadow memory.
>> The following encoding used for each shadow byte: 0 means that all 8 bytes of the
>> corresponding memory region are valid for access; k (1 <= k <= 7) means that
>> the first k bytes are valid for access, and other (8 - k) bytes are not;
>> Any negative value indicates that the entire 8-bytes are unaccessible.
>> Different negative values used to distinguish between different kinds of
>> unaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
>>
>> To be able to detect accesses to bad memory we need a special compiler.
>> Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
>> before each memory access of size 1, 2, 4, 8 or 16.
>>
>> These functions check whether memory region is valid to access or not by checking
>> corresponding shadow memory. If access is not valid an error printed.
>>
>> [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
>>
>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>
> I gave it a spin, and it seems that it fails for what you might call a "regular"
> memory size these days, in my case it was 18G:
>
> [ 0.000000] Kernel panic - not syncing: ERROR: Failed to allocate 0xe0c00000 bytes below 0x0.
> [ 0.000000]
> [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.16.0-rc4-next-20140710-sasha-00044-gb7b0579-dirty #784
> [ 0.000000] ffffffffb9c2d3c8 cd9ce91adea4379a 0000000000000000 ffffffffb9c2d3c8
> [ 0.000000] ffffffffb9c2d330 ffffffffb7fe89b7 ffffffffb93c8c28 ffffffffb9c2d3b8
> [ 0.000000] ffffffffb7fcff1d 0000000000000018 ffffffffb9c2d3c8 ffffffffb9c2d360
> [ 0.000000] Call Trace:
> [ 0.000000] <UNK> dump_stack (lib/dump_stack.c:52)
> [ 0.000000] panic (kernel/panic.c:119)
> [ 0.000000] memblock_alloc_base (mm/memblock.c:1092)
> [ 0.000000] memblock_alloc (mm/memblock.c:1097)
> [ 0.000000] kasan_alloc_shadow (mm/kasan/kasan.c:151)
> [ 0.000000] zone_sizes_init (arch/x86/mm/init.c:684)
> [ 0.000000] paging_init (arch/x86/mm/init_64.c:677)
> [ 0.000000] setup_arch (arch/x86/kernel/setup.c:1168)
> [ 0.000000] ? printk (kernel/printk/printk.c:1839)
> [ 0.000000] start_kernel (include/linux/mm_types.h:462 init/main.c:533)
> [ 0.000000] ? early_idt_handlers (arch/x86/kernel/head_64.S:344)
> [ 0.000000] x86_64_start_reservations (arch/x86/kernel/head64.c:194)
> [ 0.000000] x86_64_start_kernel (arch/x86/kernel/head64.c:183)
>
> It got better when I reduced memory to 1GB, but then my system just failed to boot
> at all because that's not enough to bring everything up.
>
Thanks.
I think memory size is not a problem here. I tested on my desktop with 16G.
Seems it's the problem with memory holes that Dave cited.
kasan tries to allocate ~3.5G. That means lowmem size is ~28G in your case.
>
> Thanks,
> Sasha
>
* [RFC/PATCH RESEND -next 05/21] x86: cpu: don't sanitize early stages of a secondary CPU boot
2014-07-09 19:33 ` Andi Kleen
@ 2014-07-10 13:15 ` Andrey Ryabinin
0 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 13:15 UTC (permalink / raw)
To: linux-arm-kernel
On 07/09/14 23:33, Andi Kleen wrote:
> Andrey Ryabinin <a.ryabinin@samsung.com> writes:
>
>> Instrumentation of this files may result in unbootable machine.
>
> This doesn't make sense. Is the code not NMI safe?
> If yes that would need to be fixed because
>
> Please debug more.
>
Sure.
It turns out that KASAN_SANITIZE_perf_event.o := n is not needed here.
The problem is only with common.c.
> perf is a common source of bugs (see Vice Weaver's fuzzer results),
> so it would be good to have this functionality for it.
>
> -Andi
>
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-10 13:01 ` Andrey Ryabinin
@ 2014-07-10 13:31 ` Sasha Levin
2014-07-10 13:39 ` Andrey Ryabinin
2014-07-10 13:50 ` Andrey Ryabinin
0 siblings, 2 replies; 80+ messages in thread
From: Sasha Levin @ 2014-07-10 13:31 UTC (permalink / raw)
To: linux-arm-kernel
On 07/10/2014 09:01 AM, Andrey Ryabinin wrote:
> On 07/10/14 15:55, Sasha Levin wrote:
>> > On 07/09/2014 07:29 AM, Andrey Ryabinin wrote:
>>> >> Address sanitizer for kernel (kasan) is a dynamic memory error detector.
>>> >>
>>> >> The main features of kasan is:
>>> >> - is based on compiler instrumentation (fast),
>>> >> - detects out of bounds for both writes and reads,
>>> >> - provides use after free detection,
>>> >>
>>> >> This patch only adds infrastructure for kernel address sanitizer. It's not
>>> >> available for use yet. The idea and some code was borrowed from [1].
>>> >>
>>> >> This feature requires pretty fresh GCC (revision r211699 from 2014-06-16 or
>>> >> latter).
>>> >>
>>> >> Implementation details:
>>> >> The main idea of KASAN is to use shadow memory to record whether each byte of memory
>>> >> is safe to access or not, and use compiler's instrumentation to check the shadow memory
>>> >> on each memory access.
>>> >>
>>> >> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>>> >> mapping with a scale and offset to translate a memory address to its corresponding
>>> >> shadow address.
>>> >>
>>> >> Here is function to translate address to corresponding shadow address:
>>> >>
>>> >> unsigned long kasan_mem_to_shadow(unsigned long addr)
>>> >> {
>>> >> return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>> >> + kasan_shadow_start;
>>> >> }
>>> >>
>>> >> where KASAN_SHADOW_SCALE_SHIFT = 3.
>>> >>
>>> >> So for every 8 bytes of lowmemory there is one corresponding byte of shadow memory.
>>> >> The following encoding used for each shadow byte: 0 means that all 8 bytes of the
>>> >> corresponding memory region are valid for access; k (1 <= k <= 7) means that
>>> >> the first k bytes are valid for access, and other (8 - k) bytes are not;
>>> >> Any negative value indicates that the entire 8-bytes are unaccessible.
>>> >> Different negative values used to distinguish between different kinds of
>>> >> unaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
>>> >>
>>> >> To be able to detect accesses to bad memory we need a special compiler.
>>> >> Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
>>> >> before each memory access of size 1, 2, 4, 8 or 16.
>>> >>
>>> >> These functions check whether memory region is valid to access or not by checking
>>> >> corresponding shadow memory. If access is not valid an error printed.
>>> >>
>>> >> [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
>>> >>
>>> >> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> >
>> > I gave it a spin, and it seems that it fails for what you might call a "regular"
>> > memory size these days, in my case it was 18G:
>> >
>> > [ 0.000000] Kernel panic - not syncing: ERROR: Failed to allocate 0xe0c00000 bytes below 0x0.
>> > [ 0.000000]
>> > [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.16.0-rc4-next-20140710-sasha-00044-gb7b0579-dirty #784
>> > [ 0.000000] ffffffffb9c2d3c8 cd9ce91adea4379a 0000000000000000 ffffffffb9c2d3c8
>> > [ 0.000000] ffffffffb9c2d330 ffffffffb7fe89b7 ffffffffb93c8c28 ffffffffb9c2d3b8
>> > [ 0.000000] ffffffffb7fcff1d 0000000000000018 ffffffffb9c2d3c8 ffffffffb9c2d360
>> > [ 0.000000] Call Trace:
>> > [ 0.000000] <UNK> dump_stack (lib/dump_stack.c:52)
>> > [ 0.000000] panic (kernel/panic.c:119)
>> > [ 0.000000] memblock_alloc_base (mm/memblock.c:1092)
>> > [ 0.000000] memblock_alloc (mm/memblock.c:1097)
>> > [ 0.000000] kasan_alloc_shadow (mm/kasan/kasan.c:151)
>> > [ 0.000000] zone_sizes_init (arch/x86/mm/init.c:684)
>> > [ 0.000000] paging_init (arch/x86/mm/init_64.c:677)
>> > [ 0.000000] setup_arch (arch/x86/kernel/setup.c:1168)
>> > [ 0.000000] ? printk (kernel/printk/printk.c:1839)
>> > [ 0.000000] start_kernel (include/linux/mm_types.h:462 init/main.c:533)
>> > [ 0.000000] ? early_idt_handlers (arch/x86/kernel/head_64.S:344)
>> > [ 0.000000] x86_64_start_reservations (arch/x86/kernel/head64.c:194)
>> > [ 0.000000] x86_64_start_kernel (arch/x86/kernel/head64.c:183)
>> >
>> > It got better when I reduced memory to 1GB, but then my system just failed to boot
>> > at all because that's not enough to bring everything up.
>> >
> Thanks.
> I think memory size is not a problem here. I tested on my desktop with 16G.
> Seems it's a problem with memory holes cited by Dave.
> kasan tries to allocate ~3.5G. It means that lowmemsize is 28G in your case.
That's correct (I've mistyped and got 18 instead of 28 above).
However, I'm a bit confused here, I thought highmem/lowmem split was a 32bit
thing, so I'm not sure how it applies here.
Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
get KASAN running on my machine?
Thanks,
Sasha
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-10 13:31 ` Sasha Levin
@ 2014-07-10 13:39 ` Andrey Ryabinin
2014-07-10 14:02 ` Sasha Levin
2014-07-10 13:50 ` Andrey Ryabinin
1 sibling, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 13:39 UTC (permalink / raw)
To: linux-arm-kernel
On 07/10/14 17:31, Sasha Levin wrote:
> On 07/10/2014 09:01 AM, Andrey Ryabinin wrote:
>> On 07/10/14 15:55, Sasha Levin wrote:
>>>> On 07/09/2014 07:29 AM, Andrey Ryabinin wrote:
>>>>>> Address sanitizer for kernel (kasan) is a dynamic memory error detector.
>>>>>>
>>>>>> The main features of kasan is:
>>>>>> - is based on compiler instrumentation (fast),
>>>>>> - detects out of bounds for both writes and reads,
>>>>>> - provides use after free detection,
>>>>>>
>>>>>> This patch only adds infrastructure for kernel address sanitizer. It's not
>>>>>> available for use yet. The idea and some code was borrowed from [1].
>>>>>>
>>>>>> This feature requires pretty fresh GCC (revision r211699 from 2014-06-16 or
>>>>>> latter).
>>>>>>
>>>>>> Implementation details:
>>>>>> The main idea of KASAN is to use shadow memory to record whether each byte of memory
>>>>>> is safe to access or not, and use compiler's instrumentation to check the shadow memory
>>>>>> on each memory access.
>>>>>>
>>>>>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>>>>>> mapping with a scale and offset to translate a memory address to its corresponding
>>>>>> shadow address.
>>>>>>
>>>>>> Here is function to translate address to corresponding shadow address:
>>>>>>
>>>>>> unsigned long kasan_mem_to_shadow(unsigned long addr)
>>>>>> {
>>>>>> return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>>>>> + kasan_shadow_start;
>>>>>> }
>>>>>>
>>>>>> where KASAN_SHADOW_SCALE_SHIFT = 3.
>>>>>>
>>>>>> So for every 8 bytes of lowmemory there is one corresponding byte of shadow memory.
>>>>>> The following encoding used for each shadow byte: 0 means that all 8 bytes of the
>>>>>> corresponding memory region are valid for access; k (1 <= k <= 7) means that
>>>>>> the first k bytes are valid for access, and other (8 - k) bytes are not;
>>>>>> Any negative value indicates that the entire 8-bytes are unaccessible.
>>>>>> Different negative values used to distinguish between different kinds of
>>>>>> unaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
>>>>>>
>>>>>> To be able to detect accesses to bad memory we need a special compiler.
>>>>>> Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
>>>>>> before each memory access of size 1, 2, 4, 8 or 16.
>>>>>>
>>>>>> These functions check whether memory region is valid to access or not by checking
>>>>>> corresponding shadow memory. If access is not valid an error printed.
>>>>>>
>>>>>> [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
>>>>>>
>>>>>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>>>>
>>>> I gave it a spin, and it seems that it fails for what you might call a "regular"
>>>> memory size these days, in my case it was 18G:
>>>>
>>>> [ 0.000000] Kernel panic - not syncing: ERROR: Failed to allocate 0xe0c00000 bytes below 0x0.
>>>> [ 0.000000]
>>>> [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.16.0-rc4-next-20140710-sasha-00044-gb7b0579-dirty #784
>>>> [ 0.000000] ffffffffb9c2d3c8 cd9ce91adea4379a 0000000000000000 ffffffffb9c2d3c8
>>>> [ 0.000000] ffffffffb9c2d330 ffffffffb7fe89b7 ffffffffb93c8c28 ffffffffb9c2d3b8
>>>> [ 0.000000] ffffffffb7fcff1d 0000000000000018 ffffffffb9c2d3c8 ffffffffb9c2d360
>>>> [ 0.000000] Call Trace:
>>>> [ 0.000000] <UNK> dump_stack (lib/dump_stack.c:52)
>>>> [ 0.000000] panic (kernel/panic.c:119)
>>>> [ 0.000000] memblock_alloc_base (mm/memblock.c:1092)
>>>> [ 0.000000] memblock_alloc (mm/memblock.c:1097)
>>>> [ 0.000000] kasan_alloc_shadow (mm/kasan/kasan.c:151)
>>>> [ 0.000000] zone_sizes_init (arch/x86/mm/init.c:684)
>>>> [ 0.000000] paging_init (arch/x86/mm/init_64.c:677)
>>>> [ 0.000000] setup_arch (arch/x86/kernel/setup.c:1168)
>>>> [ 0.000000] ? printk (kernel/printk/printk.c:1839)
>>>> [ 0.000000] start_kernel (include/linux/mm_types.h:462 init/main.c:533)
>>>> [ 0.000000] ? early_idt_handlers (arch/x86/kernel/head_64.S:344)
>>>> [ 0.000000] x86_64_start_reservations (arch/x86/kernel/head64.c:194)
>>>> [ 0.000000] x86_64_start_kernel (arch/x86/kernel/head64.c:183)
>>>>
>>>> It got better when I reduced memory to 1GB, but then my system just failed to boot
>>>> at all because that's not enough to bring everything up.
>>>>
>> Thanks.
>> I think memory size is not a problem here. I tested on my desktop with 16G.
>> Seems it's a problem with memory holes cited by Dave.
>> kasan tries to allocate ~3.5G. It means that lowmemsize is 28G in your case.
>
> That's correct (I've mistyped and got 18 instead of 28 above).
>
> However, I'm a bit confused here, I thought highmem/lowmem split was a 32bit
> thing, so I'm not sure how it applies here.
>
Right. By lowmemsize here I mean the size of the direct
mapping of all physical memory (which is usually called lowmem on 32-bit systems).
> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
> get KASAN running on my machine?
>
Could you share your .config? I'll try to boot it myself. It could be that some options conflict with kasan.
Also the boot cmdline might help.
>
> Thanks,
> Sasha
>
>
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-10 13:31 ` Sasha Levin
2014-07-10 13:39 ` Andrey Ryabinin
@ 2014-07-10 13:50 ` Andrey Ryabinin
1 sibling, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 13:50 UTC (permalink / raw)
To: linux-arm-kernel
On 07/10/14 17:31, Sasha Levin wrote:
> On 07/10/2014 09:01 AM, Andrey Ryabinin wrote:
>> On 07/10/14 15:55, Sasha Levin wrote:
>>>> On 07/09/2014 07:29 AM, Andrey Ryabinin wrote:
>>>>>> Address sanitizer for kernel (kasan) is a dynamic memory error detector.
>>>>>>
>>>>>> The main features of kasan is:
>>>>>> - is based on compiler instrumentation (fast),
>>>>>> - detects out of bounds for both writes and reads,
>>>>>> - provides use after free detection,
>>>>>>
>>>>>> This patch only adds infrastructure for kernel address sanitizer. It's not
>>>>>> available for use yet. The idea and some code was borrowed from [1].
>>>>>>
>>>>>> This feature requires pretty fresh GCC (revision r211699 from 2014-06-16 or
>>>>>> latter).
>>>>>>
>>>>>> Implementation details:
>>>>>> The main idea of KASAN is to use shadow memory to record whether each byte of memory
>>>>>> is safe to access or not, and use compiler's instrumentation to check the shadow memory
>>>>>> on each memory access.
>>>>>>
>>>>>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>>>>>> mapping with a scale and offset to translate a memory address to its corresponding
>>>>>> shadow address.
>>>>>>
>>>>>> Here is function to translate address to corresponding shadow address:
>>>>>>
>>>>>> unsigned long kasan_mem_to_shadow(unsigned long addr)
>>>>>> {
>>>>>> return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>>>>> + kasan_shadow_start;
>>>>>> }
>>>>>>
>>>>>> where KASAN_SHADOW_SCALE_SHIFT = 3.
>>>>>>
>>>>>> So for every 8 bytes of lowmemory there is one corresponding byte of shadow memory.
>>>>>> The following encoding used for each shadow byte: 0 means that all 8 bytes of the
>>>>>> corresponding memory region are valid for access; k (1 <= k <= 7) means that
>>>>>> the first k bytes are valid for access, and other (8 - k) bytes are not;
>>>>>> Any negative value indicates that the entire 8-bytes are unaccessible.
>>>>>> Different negative values used to distinguish between different kinds of
>>>>>> unaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
>>>>>>
>>>>>> To be able to detect accesses to bad memory we need a special compiler.
>>>>>> Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
>>>>>> before each memory access of size 1, 2, 4, 8 or 16.
>>>>>>
>>>>>> These functions check whether memory region is valid to access or not by checking
>>>>>> corresponding shadow memory. If access is not valid an error printed.
>>>>>>
>>>>>> [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
>>>>>>
>>>>>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>>>>
>>>> I gave it a spin, and it seems that it fails for what you might call a "regular"
>>>> memory size these days, in my case it was 18G:
>>>>
>>>> [ 0.000000] Kernel panic - not syncing: ERROR: Failed to allocate 0xe0c00000 bytes below 0x0.
>>>> [ 0.000000]
>>>> [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.16.0-rc4-next-20140710-sasha-00044-gb7b0579-dirty #784
>>>> [ 0.000000] ffffffffb9c2d3c8 cd9ce91adea4379a 0000000000000000 ffffffffb9c2d3c8
>>>> [ 0.000000] ffffffffb9c2d330 ffffffffb7fe89b7 ffffffffb93c8c28 ffffffffb9c2d3b8
>>>> [ 0.000000] ffffffffb7fcff1d 0000000000000018 ffffffffb9c2d3c8 ffffffffb9c2d360
>>>> [ 0.000000] Call Trace:
>>>> [ 0.000000] <UNK> dump_stack (lib/dump_stack.c:52)
>>>> [ 0.000000] panic (kernel/panic.c:119)
>>>> [ 0.000000] memblock_alloc_base (mm/memblock.c:1092)
>>>> [ 0.000000] memblock_alloc (mm/memblock.c:1097)
>>>> [ 0.000000] kasan_alloc_shadow (mm/kasan/kasan.c:151)
>>>> [ 0.000000] zone_sizes_init (arch/x86/mm/init.c:684)
>>>> [ 0.000000] paging_init (arch/x86/mm/init_64.c:677)
>>>> [ 0.000000] setup_arch (arch/x86/kernel/setup.c:1168)
>>>> [ 0.000000] ? printk (kernel/printk/printk.c:1839)
>>>> [ 0.000000] start_kernel (include/linux/mm_types.h:462 init/main.c:533)
>>>> [ 0.000000] ? early_idt_handlers (arch/x86/kernel/head_64.S:344)
>>>> [ 0.000000] x86_64_start_reservations (arch/x86/kernel/head64.c:194)
>>>> [ 0.000000] x86_64_start_kernel (arch/x86/kernel/head64.c:183)
>>>>
>>>> It got better when I reduced memory to 1GB, but then my system just failed to boot
>>>> at all because that's not enough to bring everything up.
>>>>
>> Thanks.
>> I think memory size is not a problem here. I tested on my desktop with 16G.
>> Seems it's a problem with memory holes cited by Dave.
>> kasan tries to allocate ~3.5G. It means that lowmemsize is 28G in your case.
>
> That's correct (I've mistyped and got 18 instead of 28 above).
>
> However, I'm a bit confused here, I thought highmem/lowmem split was a 32bit
> thing, so I'm not sure how it applies here.
>
> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
> get KASAN running on my machine?
>
Does it not boot with the same 'Failed to allocate' error?
>
> Thanks,
> Sasha
>
>
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 03/21] x86: add kasan hooks fort memcpy/memmove/memset functions
2014-07-09 19:31 ` Andi Kleen
@ 2014-07-10 13:54 ` Andrey Ryabinin
0 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 13:54 UTC (permalink / raw)
To: linux-arm-kernel
On 07/09/14 23:31, Andi Kleen wrote:
> Andrey Ryabinin <a.ryabinin@samsung.com> writes:
>> +
>> +#undef memcpy
>> +void *kasan_memset(void *ptr, int val, size_t len);
>> +void *kasan_memcpy(void *dst, const void *src, size_t len);
>> +void *kasan_memmove(void *dst, const void *src, size_t len);
>> +
>> +#define memcpy(dst, src, len) kasan_memcpy((dst), (src), (len))
>> +#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
>> +#define memmove(dst, src, len) kasan_memmove((dst), (src), (len))
>
> I don't think just define is enough, gcc can call these functions
> implicitely too (both with and without __). For example for a struct copy.
>
> You need to have true linker level aliases.
>
That's true, but the problem with linker aliases is that they cannot be disabled for the files
we don't want to instrument.
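For illustration, a minimal sketch of the implicit-call case Andi describes, using a hypothetical struct and function (plain C, not from the patches):

struct foo {
	char buf[64];
};

void copy_foo(struct foo *dst, const struct foo *src)
{
	/* gcc may lower this struct assignment to an out-of-line call
	 * to memcpy(), which never goes through the #define wrapper;
	 * only a linker-level alias would redirect it to kasan_memcpy(). */
	*dst = *src;
}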
> -Andi
>
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector.
2014-07-09 21:59 ` Vegard Nossum
2014-07-09 23:33 ` Dave Hansen
2014-07-10 0:03 ` Andi Kleen
@ 2014-07-10 13:59 ` Andrey Ryabinin
2 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 13:59 UTC (permalink / raw)
To: linux-arm-kernel
On 07/10/14 01:59, Vegard Nossum wrote:
> On 9 July 2014 23:44, Andi Kleen <andi@firstfloor.org> wrote:
>> Dave Hansen <dave.hansen@intel.com> writes:
>>>
>>> You're also claiming that "KASAN is better than all of
>>
>> better as in finding more bugs, but surely not better as in
>> "do so with less overhead"
>>
>>> CONFIG_DEBUG_PAGEALLOC". So should we just disallow (or hide)
>>> DEBUG_PAGEALLOC on kernels where KASAN is available?
>>
>> I don't think DEBUG_PAGEALLOC/SLUB debug and kasan really conflict.
>>
>> DEBUG_PAGEALLOC/SLUB is "much lower overhead but less bugs found".
>> KASAN is "slow but thorough" There are niches for both.
>>
>> But I could see KASAN eventually deprecating kmemcheck, which
>> is just incredible slow.
>
> FWIW, I definitely agree with this -- if KASAN can do everything that
> kmemcheck can, it is no doubt the right way forward.
>
AFAIK kmemcheck can catch reads of uninitialized memory.
KASAN can't do that now, but it should be possible to implement.
There is such a tool for userspace - https://code.google.com/p/memory-sanitizer/wiki/MemorySanitizer
However, detecting reads of uninitialized memory would require a different
shadow encoding, so I think it would be better to make it a separate feature, incompatible with kasan.
>
> Vegard
>
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-10 13:39 ` Andrey Ryabinin
@ 2014-07-10 14:02 ` Sasha Levin
2014-07-10 19:04 ` Andrey Ryabinin
0 siblings, 1 reply; 80+ messages in thread
From: Sasha Levin @ 2014-07-10 14:02 UTC (permalink / raw)
To: linux-arm-kernel
On 07/10/2014 09:39 AM, Andrey Ryabinin wrote:
>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>> > get KASAN running on my machine?
>> >
> Could you share your .config? I'll try to boot it myself. It could be that some options conflict with kasan.
> Also the boot cmdline might help.
>
Sure. It's the .config I use for fuzzing so it's rather big (attached).
The cmdline is:
[ 0.000000] Command line: noapic noacpi pci=conf1 reboot=k panic=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 console=ttyS0 earlyprintk=serial i8042.noaux=1 numa=fake=32 init=/virt/init zcache ftrace_dump_on_oops debugpat kvm.mmu_audit=1 slub_debug=FZPU rcutorture.rcutorture_runnable=0 loop.max_loop=64 zram.num_devices=4 rcutorture.nreaders=8 oops=panic nr_hugepages=1000 numa_balancing=enable softlockup_all_cpu_backtrace=1 root=/dev/root rw rootflags=rw,trans=virtio,version=9p2000.L rootfstype=9p init=/virt/init
And the memory map:
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000ffffe] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000cfffffff] usable
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000705ffffff] usable
On 07/10/2014 09:50 AM, Andrey Ryabinin wrote:
>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>> > get KASAN running on my machine?
>> >
> It's not boot with the same Failed to allocate error?
I think I misunderstood your question here. With >1GB it triggers a panic() when
KASAN fails the memblock allocation. With <=1GB it fails a bit later in boot just
because 1GB isn't enough to load everything - so it fails in some other random
spot as it runs out of memory.
Thanks,
Sasha
-------------- next part --------------
A non-text attachment was scrubbed...
Name: config.sasha.gz
Type: application/gzip
Size: 40233 bytes
Desc: not available
URL: <http://lists.infradead.org/pipermail/linux-arm-kernel/attachments/20140710/19f1c49f/attachment-0001.gz>
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions
2014-07-10 7:41 ` Andrey Ryabinin
@ 2014-07-10 14:07 ` Christoph Lameter
0 siblings, 0 replies; 80+ messages in thread
From: Christoph Lameter @ 2014-07-10 14:07 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, 10 Jul 2014, Andrey Ryabinin wrote:
> On 07/09/14 18:29, Christoph Lameter wrote:
> > On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
> >
> >> Remove static and add function declarations to mm/slab.h so they
> >> could be used by kernel address sanitizer.
> >
> > Hmmm... This is allocator specific. At some future point it would be good
> > to move error reporting to slab_common.c and use those from all
> > allocators.
> >
>
> I could move declarations to kasan internals, but it will look ugly too.
> I also had an idea about unifying SLAB_DEBUG and SLUB_DEBUG at some future.
> I can't tell right now how hard it will be, but it seems doable.
Well, the simple approach is to first unify the reporting functions and
then work your way up to the higher levels. The reporting functions could also
be generalized to be more useful for multiple checking tools.
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 12/21] mm: util: move krealloc/kzfree to slab_common.c
2014-07-10 7:43 ` Andrey Ryabinin
@ 2014-07-10 14:08 ` Christoph Lameter
0 siblings, 0 replies; 80+ messages in thread
From: Christoph Lameter @ 2014-07-10 14:08 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, 10 Jul 2014, Andrey Ryabinin wrote:
> Should I send another patch to move this to slab_common.c?
Send one patch, separate from this patchset, to all the slab
maintainers and include my ack.
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-10 12:12 ` Andrey Ryabinin
@ 2014-07-10 15:55 ` Dave Hansen
2014-07-10 19:48 ` Andrey Ryabinin
0 siblings, 1 reply; 80+ messages in thread
From: Dave Hansen @ 2014-07-10 15:55 UTC (permalink / raw)
To: linux-arm-kernel
On 07/10/2014 05:12 AM, Andrey Ryabinin wrote:
> On 07/10/14 00:26, Dave Hansen wrote:
>> On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
>>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>>> mapping with a scale and offset to translate a memory address to its corresponding
>>> shadow address.
>>>
>>> Here is function to translate address to corresponding shadow address:
>>>
>>> unsigned long kasan_mem_to_shadow(unsigned long addr)
>>> {
>>> return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>> + kasan_shadow_start;
>>> }
>>
>> How does this interact with vmalloc() addresses or those from a kmap()?
>>
> It's used only for lowmem:
>
> static inline bool addr_is_in_mem(unsigned long addr)
> {
> return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
> }
That's fine, and definitely covers the common cases. Could you make
sure to call this out explicitly? Also, there's nothing to _keep_ this
approach working for things out of the direct map, right? It would just
be a matter of updating the shadow memory to have entries for the other
virtual address ranges.
addr_is_in_mem() is a pretty bad name for what it's doing. :)
I'd probably call it something like kasan_tracks_vaddr().
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-10 14:02 ` Sasha Levin
@ 2014-07-10 19:04 ` Andrey Ryabinin
0 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 19:04 UTC (permalink / raw)
To: linux-arm-kernel
2014-07-10 18:02 GMT+04:00 Sasha Levin <sasha.levin@oracle.com>:
> On 07/10/2014 09:39 AM, Andrey Ryabinin wrote:
>>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>>> > get KASAN running on my machine?
>>> >
>> Could you share your .config? I'll try to boot it myself. It could be that some options conflict with kasan.
>> Also the boot cmdline might help.
>>
>
> Sure. It's the .config I use for fuzzing so it's rather big (attached).
>
> The cmdline is:
>
> [ 0.000000] Command line: noapic noacpi pci=conf1 reboot=k panic=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 console=ttyS0 earlyprintk=serial i8042.noaux=1 numa=fake=32 init=/virt/init zcache ftrace_dump_on_oops debugpat kvm.mmu_audit=1 slub_debug=FZPU rcutorture.rcutorture_runnable=0 loop.max_loop=64 zram.num_devices=4 rcutorture.nreaders=8 oops=panic nr_hugepages=1000 numa_balancing=enable softlockup_all_cpu_backtrace=1 root=/dev/root rw rootflags=rw,trans=virtio,version=9p2000.L rootfstype=9p init=/virt/init
>
> And the memory map:
>
> [ 0.000000] e820: BIOS-provided physical RAM map:
> [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
> [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
> [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000ffffe] reserved
> [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000cfffffff] usable
> [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000705ffffff] usable
>
>
> On 07/10/2014 09:50 AM, Andrey Ryabinin wrote:
>>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>>> > get KASAN running on my machine?
>>> >
>> Does it not boot with the same 'Failed to allocate' error?
>
> I think I misunderstood your question here. With >1GB it triggers a panic() when
> KASAN fails the memblock allocation. With <=1GB it fails a bit later in boot just
> because 1GB isn't enough to load everything - so it fails in some other random
> spot as it runs out of memory.
>
>
> Thanks,
> Sasha
Looks like I found where the problem is: memblock_alloc cannot allocate
across NUMA nodes, therefore kasan fails for numa=fake>=8.
You should succeed with numa=fake=7 or less.
--
Best regards,
Andrey Ryabinin
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-10 15:55 ` Dave Hansen
@ 2014-07-10 19:48 ` Andrey Ryabinin
2014-07-10 20:04 ` Dave Hansen
0 siblings, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 19:48 UTC (permalink / raw)
To: linux-arm-kernel
2014-07-10 19:55 GMT+04:00 Dave Hansen <dave.hansen@intel.com>:
> On 07/10/2014 05:12 AM, Andrey Ryabinin wrote:
>> On 07/10/14 00:26, Dave Hansen wrote:
>>> On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
>>>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>>>> mapping with a scale and offset to translate a memory address to its corresponding
>>>> shadow address.
>>>>
>>>> Here is function to translate address to corresponding shadow address:
>>>>
>>>> unsigned long kasan_mem_to_shadow(unsigned long addr)
>>>> {
>>>> return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>>> + kasan_shadow_start;
>>>> }
>>>
>>> How does this interact with vmalloc() addresses or those from a kmap()?
>>>
>> It's used only for lowmem:
>>
>> static inline bool addr_is_in_mem(unsigned long addr)
>> {
>> return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
>> }
>
> That's fine, and definitely covers the common cases. Could you make
> sure to call this out explicitly? Also, there's nothing to _keep_ this
> approach working for things out of the direct map, right? It would just
> be a matter of updating the shadow memory to have entries for the other
> virtual address ranges.
Why do you want shadow for things outside the direct map?
If you want to catch use-after-free in vmalloc, then DEBUG_PAGEALLOC
will be enough.
If you want to catch out-of-bounds in vmalloc you don't need anything,
because vmalloc allocates a guard hole at the end.
Or do you want something else?
>
> addr_is_in_mem() is a pretty bad name for what it's doing. :)
>
> I'd probably call it something like kasan_tracks_vaddr().
>
Agree
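For reference, a minimal sketch of the rename being agreed on here; the body is the same as the quoted addr_is_in_mem(), only the name changes:

static inline bool kasan_tracks_vaddr(unsigned long addr)
{
	/* True only for the direct mapping (lowmem), which is the only
	 * region the shadow currently covers. */
	return likely(addr >= PAGE_OFFSET &&
		      addr < (unsigned long)high_memory);
}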
--
Best regards,
Andrey Ryabinin
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
2014-07-10 19:48 ` Andrey Ryabinin
@ 2014-07-10 20:04 ` Dave Hansen
0 siblings, 0 replies; 80+ messages in thread
From: Dave Hansen @ 2014-07-10 20:04 UTC (permalink / raw)
To: linux-arm-kernel
On 07/10/2014 12:48 PM, Andrey Ryabinin wrote:
>>>> How does this interact with vmalloc() addresses or those from a kmap()?
>>>>
>>> It's used only for lowmem:
>>>
>>> static inline bool addr_is_in_mem(unsigned long addr)
>>> {
>>> return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
>>> }
>>
>> That's fine, and definitely covers the common cases. Could you make
>> sure to call this out explicitly? Also, there's nothing to _keep_ this
>> approach working for things out of the direct map, right? It would just
>> be a matter of updating the shadow memory to have entries for the other
>> virtual address ranges.
>
> Why do you want shadow for things outside the direct map? If you want
> to catch use-after-free in vmalloc, then DEBUG_PAGEALLOC will be
> enough. If you want to catch out-of-bounds in vmalloc you don't need
> anything, because vmalloc allocates a guard hole at the end. Or do
> you want something else?
That's all true for page-size accesses. Address sanitizer's biggest
advantage over using the page tables is that it can do checks at
sub-page granularity. But, we don't have any APIs that I can think of
that _care_ about <PAGE_SIZE outside of the direct map (maybe zsmalloc,
but that's pretty obscure).
So I guess it doesn't matter.
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 08/21] mm: page_alloc: add kasan hooks on alloc and free pathes
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 08/21] mm: page_alloc: add kasan hooks on alloc and free pathes Andrey Ryabinin
@ 2014-07-15 5:52 ` Joonsoo Kim
2014-07-15 6:54 ` Andrey Ryabinin
0 siblings, 1 reply; 80+ messages in thread
From: Joonsoo Kim @ 2014-07-15 5:52 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jul 09, 2014 at 03:30:02PM +0400, Andrey Ryabinin wrote:
> Add kernel address sanitizer hooks to mark allocated page's addresses
> as accessible in corresponding shadow region.
> Mark freed pages as unaccessible.
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
> include/linux/kasan.h | 6 ++++++
> mm/Makefile | 2 ++
> mm/kasan/kasan.c | 18 ++++++++++++++++++
> mm/kasan/kasan.h | 1 +
> mm/kasan/report.c | 7 +++++++
> mm/page_alloc.c | 4 ++++
> 6 files changed, 38 insertions(+)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 7efc3eb..4adc0a1 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -17,6 +17,9 @@ void kasan_disable_local(void);
> void kasan_alloc_shadow(void);
> void kasan_init_shadow(void);
>
> +void kasan_alloc_pages(struct page *page, unsigned int order);
> +void kasan_free_pages(struct page *page, unsigned int order);
> +
> #else /* CONFIG_KASAN */
>
> static inline void unpoison_shadow(const void *address, size_t size) {}
> @@ -28,6 +31,9 @@ static inline void kasan_disable_local(void) {}
> static inline void kasan_init_shadow(void) {}
> static inline void kasan_alloc_shadow(void) {}
>
> +static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> +static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> +
> #endif /* CONFIG_KASAN */
>
> #endif /* LINUX_KASAN_H */
> diff --git a/mm/Makefile b/mm/Makefile
> index dbe9a22..6a9c3f8 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -2,6 +2,8 @@
> # Makefile for the linux memory manager.
> #
>
> +KASAN_SANITIZE_page_alloc.o := n
> +
> mmu-y := nommu.o
> mmu-$(CONFIG_MMU) := gup.o highmem.o madvise.o memory.o mincore.o \
> mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index e2cd345..109478e 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -177,6 +177,24 @@ void __init kasan_init_shadow(void)
> }
> }
>
> +void kasan_alloc_pages(struct page *page, unsigned int order)
> +{
> + if (unlikely(!kasan_initialized))
> + return;
> +
> + if (likely(page && !PageHighMem(page)))
> + unpoison_shadow(page_address(page), PAGE_SIZE << order);
> +}
> +
> +void kasan_free_pages(struct page *page, unsigned int order)
> +{
> + if (unlikely(!kasan_initialized))
> + return;
> +
> + if (likely(!PageHighMem(page)))
> + poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_FREE_PAGE);
> +}
> +
> void *kasan_memcpy(void *dst, const void *src, size_t len)
> {
> if (unlikely(len == 0))
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 711ae4f..be9597e 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -5,6 +5,7 @@
> #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
> #define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1)
>
> +#define KASAN_FREE_PAGE 0xFF /* page was freed */
> #define KASAN_SHADOW_GAP 0xF9 /* address belongs to shadow memory */
>
> struct access_info {
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 2430e05..6ef9e57 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -46,6 +46,9 @@ static void print_error_description(struct access_info *info)
> case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
> bug_type = "buffer overflow";
> break;
> + case KASAN_FREE_PAGE:
> + bug_type = "use after free";
> + break;
> case KASAN_SHADOW_GAP:
> bug_type = "wild memory access";
> break;
> @@ -67,6 +70,10 @@ static void print_address_description(struct access_info *info)
> page = virt_to_page(info->access_addr);
>
> switch (shadow_val) {
> + case KASAN_FREE_PAGE:
> + dump_page(page, "kasan error");
> + dump_stack();
> + break;
> case KASAN_SHADOW_GAP:
> pr_err("No metainfo is available for this access.\n");
> dump_stack();
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 8c9eeec..67833d1 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -61,6 +61,7 @@
> #include <linux/page-debug-flags.h>
> #include <linux/hugetlb.h>
> #include <linux/sched/rt.h>
> +#include <linux/kasan.h>
>
> #include <asm/sections.h>
> #include <asm/tlbflush.h>
> @@ -747,6 +748,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
>
> trace_mm_page_free(page, order);
> kmemcheck_free_shadow(page, order);
> + kasan_free_pages(page, order);
>
> if (PageAnon(page))
> page->mapping = NULL;
> @@ -2807,6 +2809,7 @@ out:
> if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie)))
> goto retry_cpuset;
>
> + kasan_alloc_pages(page, order);
> return page;
> }
> EXPORT_SYMBOL(__alloc_pages_nodemask);
> @@ -6415,6 +6418,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
> if (end != outer_end)
> free_contig_range(end, outer_end - end);
>
> + kasan_alloc_pages(pfn_to_page(start), end - start);
> done:
> undo_isolate_page_range(pfn_max_align_down(start),
> pfn_max_align_up(end), migratetype);
Hello,
I don't think that this is the right place for this hook.
There is a function, __isolate_free_page(), which steals buddy pages
from the page allocator, so you should put this hook in that function.
alloc_contig_range() uses that function through the call path below, so
adding the hook there solves your issue here:
alloc_contig_range() -> isolate_freepages_range() ->
isolate_freepages_block() -> split_free_page() -> __isolate_free_page()
And this also solves the marking issue in the compaction logic, since
compaction also steals buddy pages from the page allocator through
isolate_freepages() -> isolate_freepages_block() -> split_free_page()
-> __isolate_free_page().
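A rough sketch of that placement, assuming __isolate_free_page() keeps its current shape (hypothetical, not code from the posted patches; the existing body is elided):

/* mm/page_alloc.c */
int __isolate_free_page(struct page *page, unsigned int order)
{
	/* ... existing checks and removal from the buddy free lists ... */

	/* Unpoison the stolen range here, so that both
	 * alloc_contig_range() and compaction are covered by a single
	 * hook on their common path through split_free_page(). */
	kasan_alloc_pages(page, order);

	return 1UL << order;	/* number of pages isolated */
}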
Thanks.
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 10/21] mm: slab: share virt_to_cache() between slab and slub
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 10/21] mm: slab: share virt_to_cache() between slab and slub Andrey Ryabinin
@ 2014-07-15 5:53 ` Joonsoo Kim
2014-07-15 6:56 ` Andrey Ryabinin
0 siblings, 1 reply; 80+ messages in thread
From: Joonsoo Kim @ 2014-07-15 5:53 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jul 09, 2014 at 03:30:04PM +0400, Andrey Ryabinin wrote:
> This patch shares virt_to_cache() between slab and slub and
> it used in cache_from_obj() now.
> Later virt_to_cache() will be kernel address sanitizer also.
I think that this patch won't be needed.
See comment in 15/21.
Thanks.
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory Andrey Ryabinin
@ 2014-07-15 6:04 ` Joonsoo Kim
2014-07-15 7:37 ` Andrey Ryabinin
0 siblings, 1 reply; 80+ messages in thread
From: Joonsoo Kim @ 2014-07-15 6:04 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote:
> Some code in slub could validly touch memory marked by kasan as unaccessible.
> Even though slub.c doesn't instrumented, functions called in it are instrumented,
> so to avoid false positive reports such places are protected by
> kasan_disable_local()/kasan_enable_local() calls.
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
> mm/slub.c | 21 +++++++++++++++++++--
> 1 file changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 6ddedf9..c8dbea7 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -560,8 +560,10 @@ static void print_tracking(struct kmem_cache *s, void *object)
> if (!(s->flags & SLAB_STORE_USER))
> return;
>
> + kasan_disable_local();
> print_track("Allocated", get_track(s, object, TRACK_ALLOC));
> print_track("Freed", get_track(s, object, TRACK_FREE));
> + kasan_enable_local();
I don't think that this is needed, since print_track() doesn't call an
external function with the object pointer. print_track() calls pr_err(), but,
before calling it, it retrieves t->addrs[i], so the memory access only occurs
in slub.c.
> }
>
> static void print_page_info(struct page *page)
> @@ -604,6 +606,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
> unsigned int off; /* Offset of last byte */
> u8 *addr = page_address(page);
>
> + kasan_disable_local();
> +
> print_tracking(s, p);
>
> print_page_info(page);
> @@ -632,6 +636,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
> /* Beginning of the filler is the free pointer */
> print_section("Padding ", p + off, s->size - off);
>
> + kasan_enable_local();
> +
> dump_stack();
> }
And I recommend that you put this hook in the right place.
At a glance, the problematic function is print_section(), which has an
external function call, print_hex_dump(), with the object pointer.
If you disable kasan in print_section(), none of the changes below will be
needed, I guess.
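A minimal sketch of that suggestion, assuming print_section() keeps its current arguments in mm/slub.c (not code from the posted patches):

static void print_section(char *text, u8 *addr, unsigned int length)
{
	/* Only the external print_hex_dump() call dereferences the
	 * object, so disable kasan just around it. */
	kasan_disable_local();
	print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
			length, 1);
	kasan_enable_local();
}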
Thanks.
>
> @@ -1012,6 +1018,8 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
> struct page *page,
> void *object, unsigned long addr)
> {
> +
> + kasan_disable_local();
> if (!check_slab(s, page))
> goto bad;
>
> @@ -1028,6 +1036,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
> set_track(s, object, TRACK_ALLOC, addr);
> trace(s, page, object, 1);
> init_object(s, object, SLUB_RED_ACTIVE);
> + kasan_enable_local();
> return 1;
>
> bad:
> @@ -1041,6 +1050,7 @@ bad:
> page->inuse = page->objects;
> page->freelist = NULL;
> }
> + kasan_enable_local();
> return 0;
> }
>
> @@ -1052,6 +1062,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
>
> spin_lock_irqsave(&n->list_lock, *flags);
> slab_lock(page);
> + kasan_disable_local();
>
> if (!check_slab(s, page))
> goto fail;
> @@ -1088,6 +1099,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
> trace(s, page, object, 0);
> init_object(s, object, SLUB_RED_INACTIVE);
> out:
> + kasan_enable_local();
> slab_unlock(page);
> /*
> * Keep node_lock to preserve integrity
> @@ -1096,6 +1108,7 @@ out:
> return n;
>
> fail:
> + kasan_enable_local();
> slab_unlock(page);
> spin_unlock_irqrestore(&n->list_lock, *flags);
> slab_fix(s, "Object at 0x%p not freed", object);
> @@ -1371,8 +1384,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
> void *object)
> {
> setup_object_debug(s, page, object);
> - if (unlikely(s->ctor))
> + if (unlikely(s->ctor)) {
> + kasan_disable_local();
> s->ctor(object);
> + kasan_enable_local();
> + }
> }
> static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
> @@ -1425,11 +1441,12 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>
> if (kmem_cache_debug(s)) {
> void *p;
> -
> + kasan_disable_local();
> slab_pad_check(s, page);
> for_each_object(p, s, page_address(page),
> page->objects)
> check_object(s, page, p, SLUB_RED_INACTIVE);
> + kasan_enable_local();
> }
>
> kmemcheck_free_shadow(page, compound_order(page));
> --
> 1.8.5.5
>
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
2014-07-15 6:12 ` Joonsoo Kim
@ 2014-07-15 6:08 ` Dmitry Vyukov
2014-07-15 9:34 ` Andrey Ryabinin
1 sibling, 0 replies; 80+ messages in thread
From: Dmitry Vyukov @ 2014-07-15 6:08 UTC (permalink / raw)
To: linux-arm-kernel
On Tue, Jul 15, 2014 at 10:12 AM, Joonsoo Kim <iamjoonsoo.kim@lge.com> wrote:
> On Wed, Jul 09, 2014 at 03:30:14PM +0400, Andrey Ryabinin wrote:
>> We need to manually unpoison rounded up allocation size for dname
>> to avoid kasan's reports in __d_lookup_rcu.
>> __d_lookup_rcu may validly read a little beyound allocated size.
>
> If it reads a little beyond the allocated size, IMHO, it is better to
> allocate the correct size:
>
> kmalloc(name->len + 1, GFP_KERNEL); -->
> kmalloc(roundup(name->len + 1, sizeof(unsigned long)), GFP_KERNEL);
>
> Isn't it?
I absolutely agree!
> Thanks.
>
>>
>> Reported-by: Dmitry Vyukov <dvyukov@google.com>
>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> ---
>> fs/dcache.c | 3 +++
>> 1 file changed, 3 insertions(+)
>>
>> diff --git a/fs/dcache.c b/fs/dcache.c
>> index b7e8b20..dff64f2 100644
>> --- a/fs/dcache.c
>> +++ b/fs/dcache.c
>> @@ -38,6 +38,7 @@
>> #include <linux/prefetch.h>
>> #include <linux/ratelimit.h>
>> #include <linux/list_lru.h>
>> +#include <linux/kasan.h>
>> #include "internal.h"
>> #include "mount.h"
>>
>> @@ -1412,6 +1413,8 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
>> kmem_cache_free(dentry_cache, dentry);
>> return NULL;
>> }
>> + unpoison_shadow(dname,
>> + roundup(name->len + 1, sizeof(unsigned long)));
>> } else {
>> dname = dentry->d_iname;
>> }
>> --
>> 1.8.5.5
>>
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator Andrey Ryabinin
2014-07-09 14:48 ` Christoph Lameter
@ 2014-07-15 6:09 ` Joonsoo Kim
2014-07-15 7:45 ` Andrey Ryabinin
1 sibling, 1 reply; 80+ messages in thread
From: Joonsoo Kim @ 2014-07-15 6:09 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jul 09, 2014 at 03:30:09PM +0400, Andrey Ryabinin wrote:
> With this patch kasan will be able to catch bugs in memory allocated
> by slub.
> Allocated slab page, this whole page marked as unaccessible
> in corresponding shadow memory.
> On allocation of slub object requested allocation size marked as
> accessible, and the rest of the object (including slub's metadata)
> marked as redzone (unaccessible).
>
> We also mark object as accessible if ksize was called for this object.
> There is some places in kernel where ksize function is called to inquire
> size of really allocated area. Such callers could validly access whole
> allocated memory, so it should be marked as accessible by kasan_krealloc call.
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
> include/linux/kasan.h | 22 ++++++++++
> include/linux/slab.h | 19 +++++++--
> lib/Kconfig.kasan | 2 +
> mm/kasan/kasan.c | 110 ++++++++++++++++++++++++++++++++++++++++++++++++++
> mm/kasan/kasan.h | 5 +++
> mm/kasan/report.c | 23 +++++++++++
> mm/slab.h | 2 +-
> mm/slab_common.c | 9 +++--
> mm/slub.c | 24 ++++++++++-
> 9 files changed, 208 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 4adc0a1..583c011 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -20,6 +20,17 @@ void kasan_init_shadow(void);
> void kasan_alloc_pages(struct page *page, unsigned int order);
> void kasan_free_pages(struct page *page, unsigned int order);
>
> +void kasan_kmalloc_large(const void *ptr, size_t size);
> +void kasan_kfree_large(const void *ptr);
> +void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
> +void kasan_krealloc(const void *object, size_t new_size);
> +
> +void kasan_slab_alloc(struct kmem_cache *s, void *object);
> +void kasan_slab_free(struct kmem_cache *s, void *object);
> +
> +void kasan_alloc_slab_pages(struct page *page, int order);
> +void kasan_free_slab_pages(struct page *page, int order);
> +
> #else /* CONFIG_KASAN */
>
> static inline void unpoison_shadow(const void *address, size_t size) {}
> @@ -34,6 +45,17 @@ static inline void kasan_alloc_shadow(void) {}
> static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> static inline void kasan_free_pages(struct page *page, unsigned int order) {}
>
> +static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
> +static inline void kasan_kfree_large(const void *ptr) {}
> +static inline void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size) {}
> +static inline void kasan_krealloc(const void *object, size_t new_size) {}
> +
> +static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
> +static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
> +
> +static inline void kasan_alloc_slab_pages(struct page *page, int order) {}
> +static inline void kasan_free_slab_pages(struct page *page, int order) {}
> +
> #endif /* CONFIG_KASAN */
>
> #endif /* LINUX_KASAN_H */
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 68b1feab..a9513e9 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -104,6 +104,7 @@
> (unsigned long)ZERO_SIZE_PTR)
>
> #include <linux/kmemleak.h>
> +#include <linux/kasan.h>
>
> struct mem_cgroup;
> /*
> @@ -444,6 +445,8 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
> */
> static __always_inline void *kmalloc(size_t size, gfp_t flags)
> {
> + void *ret;
> +
> if (__builtin_constant_p(size)) {
> if (size > KMALLOC_MAX_CACHE_SIZE)
> return kmalloc_large(size, flags);
> @@ -454,8 +457,12 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
> if (!index)
> return ZERO_SIZE_PTR;
>
> - return kmem_cache_alloc_trace(kmalloc_caches[index],
> + ret = kmem_cache_alloc_trace(kmalloc_caches[index],
> flags, size);
> +
> + kasan_kmalloc(kmalloc_caches[index], ret, size);
> +
> + return ret;
> }
> #endif
> }
> @@ -485,6 +492,8 @@ static __always_inline int kmalloc_size(int n)
> static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
> {
> #ifndef CONFIG_SLOB
> + void *ret;
> +
> if (__builtin_constant_p(size) &&
> size <= KMALLOC_MAX_CACHE_SIZE && !(flags & GFP_DMA)) {
> int i = kmalloc_index(size);
> @@ -492,8 +501,12 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
> if (!i)
> return ZERO_SIZE_PTR;
>
> - return kmem_cache_alloc_node_trace(kmalloc_caches[i],
> - flags, node, size);
> + ret = kmem_cache_alloc_node_trace(kmalloc_caches[i],
> + flags, node, size);
> +
> + kasan_kmalloc(kmalloc_caches[i], ret, size);
> +
> + return ret;
> }
> #endif
> return __kmalloc_node(size, flags, node);
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index 2bfff78..289a624 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -5,6 +5,8 @@ if HAVE_ARCH_KASAN
>
> config KASAN
> bool "AddressSanitizer: dynamic memory error detector"
> + depends on SLUB
> + select STACKTRACE
> default n
> help
> Enables AddressSanitizer - dynamic memory error detector,
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index 109478e..9b5182a 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -177,6 +177,116 @@ void __init kasan_init_shadow(void)
> }
> }
>
> +void kasan_alloc_slab_pages(struct page *page, int order)
> +{
> + if (unlikely(!kasan_initialized))
> + return;
> +
> + poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_REDZONE);
> +}
> +
> +void kasan_free_slab_pages(struct page *page, int order)
> +{
> + if (unlikely(!kasan_initialized))
> + return;
> +
> + poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_FREE);
> +}
> +
> +void kasan_slab_alloc(struct kmem_cache *cache, void *object)
> +{
> + if (unlikely(!kasan_initialized))
> + return;
> +
> + if (unlikely(object == NULL))
> + return;
> +
> + poison_shadow(object, cache->size, KASAN_KMALLOC_REDZONE);
> + unpoison_shadow(object, cache->alloc_size);
> +}
> +
> +void kasan_slab_free(struct kmem_cache *cache, void *object)
> +{
> + unsigned long size = cache->size;
> + unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
> +
> + if (unlikely(!kasan_initialized))
> + return;
> +
> + poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
> +}
> +
> +void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
> +{
> + unsigned long redzone_start;
> + unsigned long redzone_end;
> +
> + if (unlikely(!kasan_initialized))
> + return;
> +
> + if (unlikely(object == NULL))
> + return;
> +
> + redzone_start = round_up((unsigned long)(object + size),
> + KASAN_SHADOW_SCALE_SIZE);
> + redzone_end = (unsigned long)object + cache->size;
> +
> + unpoison_shadow(object, size);
> + poison_shadow((void *)redzone_start, redzone_end - redzone_start,
> + KASAN_KMALLOC_REDZONE);
> +
> +}
> +EXPORT_SYMBOL(kasan_kmalloc);
> +
> +void kasan_kmalloc_large(const void *ptr, size_t size)
> +{
> + struct page *page;
> + unsigned long redzone_start;
> + unsigned long redzone_end;
> +
> + if (unlikely(!kasan_initialized))
> + return;
> +
> + if (unlikely(ptr == NULL))
> + return;
> +
> + page = virt_to_page(ptr);
> + redzone_start = round_up((unsigned long)(ptr + size),
> + KASAN_SHADOW_SCALE_SIZE);
> + redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
> +
> + unpoison_shadow(ptr, size);
> + poison_shadow((void *)redzone_start, redzone_end - redzone_start,
> + KASAN_PAGE_REDZONE);
> +}
> +EXPORT_SYMBOL(kasan_kmalloc_large);
> +
> +void kasan_krealloc(const void *object, size_t size)
> +{
> + struct page *page;
> +
> + if (unlikely(object == ZERO_SIZE_PTR))
> + return;
> +
> + page = virt_to_head_page(object);
> +
> + if (unlikely(!PageSlab(page)))
> + kasan_kmalloc_large(object, size);
> + else
> + kasan_kmalloc(page->slab_cache, object, size);
> +}
> +
> +void kasan_kfree_large(const void *ptr)
> +{
> + struct page *page;
> +
> + if (unlikely(!kasan_initialized))
> + return;
> +
> + page = virt_to_page(ptr);
> + poison_shadow(ptr, PAGE_SIZE << compound_order(page), KASAN_FREE_PAGE);
> +}
> +
> void kasan_alloc_pages(struct page *page, unsigned int order)
> {
> if (unlikely(!kasan_initialized))
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index be9597e..f925d03 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -6,6 +6,11 @@
> #define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1)
>
> #define KASAN_FREE_PAGE 0xFF /* page was freed */
> +#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
> +#define KASAN_SLAB_REDZONE 0xFD /* Slab page redzone, does not belong to any slub object */
> +#define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */
> +#define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */
> +#define KASAN_SLAB_FREE 0xFA /* free slab page */
> #define KASAN_SHADOW_GAP 0xF9 /* address belongs to shadow memory */
>
> struct access_info {
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 6ef9e57..6d829af 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -43,10 +43,15 @@ static void print_error_description(struct access_info *info)
> u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
>
> switch (shadow_val) {
> + case KASAN_PAGE_REDZONE:
> + case KASAN_SLAB_REDZONE:
> + case KASAN_KMALLOC_REDZONE:
> case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
> bug_type = "buffer overflow";
> break;
> case KASAN_FREE_PAGE:
> + case KASAN_SLAB_FREE:
> + case KASAN_KMALLOC_FREE:
> bug_type = "use after free";
> break;
> case KASAN_SHADOW_GAP:
> @@ -70,7 +75,25 @@ static void print_address_description(struct access_info *info)
> page = virt_to_page(info->access_addr);
>
> switch (shadow_val) {
> + case KASAN_SLAB_REDZONE:
> + cache = virt_to_cache((void *)info->access_addr);
> + slab_err(cache, page, "access to slab redzone");
We need the head page of the invalid access address for slab_err(), since the head
page has all the metadata of this slab. So, instead of virt_to_cache(),
use virt_to_head_page() and page->slab_cache.
> + dump_stack();
> + break;
> + case KASAN_KMALLOC_FREE:
> + case KASAN_KMALLOC_REDZONE:
> + case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
> + if (PageSlab(page)) {
> + cache = virt_to_cache((void *)info->access_addr);
> + slab_start = page_address(virt_to_head_page((void *)info->access_addr));
> + object = virt_to_obj(cache, slab_start,
> + (void *)info->access_addr);
> + object_err(cache, page, object, "kasan error");
> + break;
> + }
Same here, the page should be the head page.
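A short sketch of the suggested change, reusing the local variables from the quoted hunk (assuming cache and page are declared as in mm/kasan/report.c):

	/* Resolve the head page first: slab_err()/object_err() need the
	 * slab's head page, which carries all of the slab metadata. */
	page = virt_to_head_page((void *)info->access_addr);
	cache = page->slab_cache;
	slab_err(cache, page, "access to slab redzone");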
Thanks.
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports Andrey Ryabinin
@ 2014-07-15 6:12 ` Joonsoo Kim
2014-07-15 6:08 ` Dmitry Vyukov
2014-07-15 9:34 ` Andrey Ryabinin
0 siblings, 2 replies; 80+ messages in thread
From: Joonsoo Kim @ 2014-07-15 6:12 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jul 09, 2014 at 03:30:14PM +0400, Andrey Ryabinin wrote:
> We need to manually unpoison rounded up allocation size for dname
> to avoid kasan's reports in __d_lookup_rcu.
> __d_lookup_rcu may validly read a little beyound allocated size.
If it reads a little beyond the allocated size, IMHO, it is better to
allocate the correct size:
kmalloc(name->len + 1, GFP_KERNEL); -->
kmalloc(roundup(name->len + 1, sizeof(unsigned long)), GFP_KERNEL);
Isn't it?
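A sketch of what that would look like in __d_alloc(), assuming the surrounding code stays as in the quoted hunk (hypothetical, not the posted patch):

	if (name->len > DNAME_INLINE_LEN - 1) {
		/* Round up so __d_lookup_rcu()'s word-at-a-time reads stay
		 * inside the allocation; no manual shadow unpoisoning needed. */
		dname = kmalloc(roundup(name->len + 1, sizeof(unsigned long)),
				GFP_KERNEL);
		if (!dname) {
			kmem_cache_free(dentry_cache, dentry);
			return NULL;
		}
	} else {
		dname = dentry->d_iname;
	}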
Thanks.
>
> Reported-by: Dmitry Vyukov <dvyukov@google.com>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
> fs/dcache.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/fs/dcache.c b/fs/dcache.c
> index b7e8b20..dff64f2 100644
> --- a/fs/dcache.c
> +++ b/fs/dcache.c
> @@ -38,6 +38,7 @@
> #include <linux/prefetch.h>
> #include <linux/ratelimit.h>
> #include <linux/list_lru.h>
> +#include <linux/kasan.h>
> #include "internal.h"
> #include "mount.h"
>
> @@ -1412,6 +1413,8 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
> kmem_cache_free(dentry_cache, dentry);
> return NULL;
> }
> + unpoison_shadow(dname,
> + roundup(name->len + 1, sizeof(unsigned long)));
> } else {
> dname = dentry->d_iname;
> }
> --
> 1.8.5.5
>
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 08/21] mm: page_alloc: add kasan hooks on alloc and free pathes
2014-07-15 5:52 ` Joonsoo Kim
@ 2014-07-15 6:54 ` Andrey Ryabinin
0 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-15 6:54 UTC (permalink / raw)
To: linux-arm-kernel
On 07/15/14 09:52, Joonsoo Kim wrote:
> On Wed, Jul 09, 2014 at 03:30:02PM +0400, Andrey Ryabinin wrote:
>> Add kernel address sanitizer hooks to mark allocated page's addresses
>> as accessible in corresponding shadow region.
>> Mark freed pages as unaccessible.
>>
>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> ---
>> include/linux/kasan.h | 6 ++++++
>> mm/Makefile | 2 ++
>> mm/kasan/kasan.c | 18 ++++++++++++++++++
>> mm/kasan/kasan.h | 1 +
>> mm/kasan/report.c | 7 +++++++
>> mm/page_alloc.c | 4 ++++
>> 6 files changed, 38 insertions(+)
>>
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index 7efc3eb..4adc0a1 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -17,6 +17,9 @@ void kasan_disable_local(void);
>> void kasan_alloc_shadow(void);
>> void kasan_init_shadow(void);
>>
>> +void kasan_alloc_pages(struct page *page, unsigned int order);
>> +void kasan_free_pages(struct page *page, unsigned int order);
>> +
>> #else /* CONFIG_KASAN */
>>
>> static inline void unpoison_shadow(const void *address, size_t size) {}
>> @@ -28,6 +31,9 @@ static inline void kasan_disable_local(void) {}
>> static inline void kasan_init_shadow(void) {}
>> static inline void kasan_alloc_shadow(void) {}
>>
>> +static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
>> +static inline void kasan_free_pages(struct page *page, unsigned int order) {}
>> +
>> #endif /* CONFIG_KASAN */
>>
>> #endif /* LINUX_KASAN_H */
>> diff --git a/mm/Makefile b/mm/Makefile
>> index dbe9a22..6a9c3f8 100644
>> --- a/mm/Makefile
>> +++ b/mm/Makefile
>> @@ -2,6 +2,8 @@
>> # Makefile for the linux memory manager.
>> #
>>
>> +KASAN_SANITIZE_page_alloc.o := n
>> +
>> mmu-y := nommu.o
>> mmu-$(CONFIG_MMU) := gup.o highmem.o madvise.o memory.o mincore.o \
>> mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
>> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
>> index e2cd345..109478e 100644
>> --- a/mm/kasan/kasan.c
>> +++ b/mm/kasan/kasan.c
>> @@ -177,6 +177,24 @@ void __init kasan_init_shadow(void)
>> }
>> }
>>
>> +void kasan_alloc_pages(struct page *page, unsigned int order)
>> +{
>> + if (unlikely(!kasan_initialized))
>> + return;
>> +
>> + if (likely(page && !PageHighMem(page)))
>> + unpoison_shadow(page_address(page), PAGE_SIZE << order);
>> +}
>> +
>> +void kasan_free_pages(struct page *page, unsigned int order)
>> +{
>> + if (unlikely(!kasan_initialized))
>> + return;
>> +
>> + if (likely(!PageHighMem(page)))
>> + poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_FREE_PAGE);
>> +}
>> +
>> void *kasan_memcpy(void *dst, const void *src, size_t len)
>> {
>> if (unlikely(len == 0))
>> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
>> index 711ae4f..be9597e 100644
>> --- a/mm/kasan/kasan.h
>> +++ b/mm/kasan/kasan.h
>> @@ -5,6 +5,7 @@
>> #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
>> #define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1)
>>
>> +#define KASAN_FREE_PAGE 0xFF /* page was freed */
>> #define KASAN_SHADOW_GAP 0xF9 /* address belongs to shadow memory */
>>
>> struct access_info {
>> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
>> index 2430e05..6ef9e57 100644
>> --- a/mm/kasan/report.c
>> +++ b/mm/kasan/report.c
>> @@ -46,6 +46,9 @@ static void print_error_description(struct access_info *info)
>> case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
>> bug_type = "buffer overflow";
>> break;
>> + case KASAN_FREE_PAGE:
>> + bug_type = "use after free";
>> + break;
>> case KASAN_SHADOW_GAP:
>> bug_type = "wild memory access";
>> break;
>> @@ -67,6 +70,10 @@ static void print_address_description(struct access_info *info)
>> page = virt_to_page(info->access_addr);
>>
>> switch (shadow_val) {
>> + case KASAN_FREE_PAGE:
>> + dump_page(page, "kasan error");
>> + dump_stack();
>> + break;
>> case KASAN_SHADOW_GAP:
>> pr_err("No metainfo is available for this access.\n");
>> dump_stack();
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 8c9eeec..67833d1 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -61,6 +61,7 @@
>> #include <linux/page-debug-flags.h>
>> #include <linux/hugetlb.h>
>> #include <linux/sched/rt.h>
>> +#include <linux/kasan.h>
>>
>> #include <asm/sections.h>
>> #include <asm/tlbflush.h>
>> @@ -747,6 +748,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
>>
>> trace_mm_page_free(page, order);
>> kmemcheck_free_shadow(page, order);
>> + kasan_free_pages(page, order);
>>
>> if (PageAnon(page))
>> page->mapping = NULL;
>> @@ -2807,6 +2809,7 @@ out:
>> if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie)))
>> goto retry_cpuset;
>>
>> + kasan_alloc_pages(page, order);
>> return page;
>> }
>> EXPORT_SYMBOL(__alloc_pages_nodemask);
>> @@ -6415,6 +6418,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>> if (end != outer_end)
>> free_contig_range(end, outer_end - end);
>>
>> + kasan_alloc_pages(pfn_to_page(start), end - start);
>> done:
>> undo_isolate_page_range(pfn_max_align_down(start),
>> pfn_max_align_up(end), migratetype);
>
> Hello,
>
> I don't think that this is the right place for this hook.
>
I also made a stupid mistake here: kasan_alloc_pages() expects a page order here,
not a count of pages.
> There is a function, __isolate_free_page(), which steals buddy pages
> from the page allocator, so you should put this hook in that function.
>
> alloc_contig_range() uses that function through the call path below, so
> adding the hook there solves your issue here:
>
> alloc_contig_range() -> isolate_freepages_range() ->
> isolate_freepages_block() -> split_free_page() -> __isolate_free_page()
>
> And this also solves the marking issue in the compaction logic, since
> compaction also steals buddy pages from the page allocator through
> isolate_freepages() -> isolate_freepages_block() -> split_free_page()
> -> __isolate_free_page().
>
Yep, I've seen some false positives when compaction
was enabled, and just yesterday I fixed it as you suggested.
I'm also going to move the kasan_alloc_pages() hook from __alloc_pages_nodemask()
to prep_new_page(). I think that is a better place for such a hook, and it will
make it possible to enable instrumentation for page_alloc.c.
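A rough sketch of that move, assuming prep_new_page()'s 3.16-era arguments (hypothetical placement, not a posted patch; the existing body is elided):

static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags)
{
	/* ... existing checks, arch_alloc_page(), kernel_map_pages() ... */

	/* Every page handed out by the allocator passes through here,
	 * so the shadow can be unpoisoned in one place instead of in
	 * __alloc_pages_nodemask(). */
	kasan_alloc_pages(page, order);

	return 0;
}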
Thanks
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 10/21] mm: slab: share virt_to_cache() between slab and slub
2014-07-15 5:53 ` Joonsoo Kim
@ 2014-07-15 6:56 ` Andrey Ryabinin
0 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-15 6:56 UTC (permalink / raw)
To: linux-arm-kernel
On 07/15/14 09:53, Joonsoo Kim wrote:
> On Wed, Jul 09, 2014 at 03:30:04PM +0400, Andrey Ryabinin wrote:
>> This patch shares virt_to_cache() between slab and slub and
>> it used in cache_from_obj() now.
>> Later virt_to_cache() will be kernel address sanitizer also.
>
> I think that this patch won't be needed.
> See comment in 15/21.
>
Ok, I'll drop it.
> Thanks.
>
>
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory
2014-07-15 6:04 ` Joonsoo Kim
@ 2014-07-15 7:37 ` Andrey Ryabinin
2014-07-15 8:18 ` Joonsoo Kim
0 siblings, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-15 7:37 UTC (permalink / raw)
To: linux-arm-kernel
On 07/15/14 10:04, Joonsoo Kim wrote:
> On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote:
>> Some code in slub could validly touch memory marked by kasan as unaccessible.
>> Even though slub.c itself is not instrumented, functions called from it are instrumented,
>> so to avoid false positive reports such places are protected by
>> kasan_disable_local()/kasan_enable_local() calls.
>>
>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> ---
>> mm/slub.c | 21 +++++++++++++++++++--
>> 1 file changed, 19 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 6ddedf9..c8dbea7 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -560,8 +560,10 @@ static void print_tracking(struct kmem_cache *s, void *object)
>> if (!(s->flags & SLAB_STORE_USER))
>> return;
>>
>> + kasan_disable_local();
>> print_track("Allocated", get_track(s, object, TRACK_ALLOC));
>> print_track("Freed", get_track(s, object, TRACK_FREE));
>> + kasan_enable_local();
>
> I don't think that this is needed since print_track() doesn't call an
> external function with the object pointer. print_track() calls pr_err(), but,
> before calling it, it retrieves t->addrs[i], so the memory access only occurs
> in slub.c.
>
Agree.
>> }
>>
>> static void print_page_info(struct page *page)
>> @@ -604,6 +606,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>> unsigned int off; /* Offset of last byte */
>> u8 *addr = page_address(page);
>>
>> + kasan_disable_local();
>> +
>> print_tracking(s, p);
>>
>> print_page_info(page);
>> @@ -632,6 +636,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>> /* Beginning of the filler is the free pointer */
>> print_section("Padding ", p + off, s->size - off);
>>
>> + kasan_enable_local();
>> +
>> dump_stack();
>> }
>
> And, I recommend that you put this hook in the right place.
> At a glance, the problematic function is print_section(), which has an
> external function call, print_hex_dump(), with the object pointer.
> If you disable kasan in print_section(), all the things below won't be
> needed, I guess.
>
Nope, at least the memchr_inv() call in slab_pad_check() will be a problem.
I think putting disable/enable only where we strictly need them might be a problem for future maintenance of slub.
If someone is going to add a new function call somewhere, he must ensure that this call won't be a problem
for kasan.
> Thanks.
>
>>
>> @@ -1012,6 +1018,8 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
>> struct page *page,
>> void *object, unsigned long addr)
>> {
>> +
>> + kasan_disable_local();
>> if (!check_slab(s, page))
>> goto bad;
>>
>> @@ -1028,6 +1036,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
>> set_track(s, object, TRACK_ALLOC, addr);
>> trace(s, page, object, 1);
>> init_object(s, object, SLUB_RED_ACTIVE);
>> + kasan_enable_local();
>> return 1;
>>
>> bad:
>> @@ -1041,6 +1050,7 @@ bad:
>> page->inuse = page->objects;
>> page->freelist = NULL;
>> }
>> + kasan_enable_local();
>> return 0;
>> }
>>
>> @@ -1052,6 +1062,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
>>
>> spin_lock_irqsave(&n->list_lock, *flags);
>> slab_lock(page);
>> + kasan_disable_local();
>>
>> if (!check_slab(s, page))
>> goto fail;
>> @@ -1088,6 +1099,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
>> trace(s, page, object, 0);
>> init_object(s, object, SLUB_RED_INACTIVE);
>> out:
>> + kasan_enable_local();
>> slab_unlock(page);
>> /*
>> * Keep node_lock to preserve integrity
>> @@ -1096,6 +1108,7 @@ out:
>> return n;
>>
>> fail:
>> + kasan_enable_local();
>> slab_unlock(page);
>> spin_unlock_irqrestore(&n->list_lock, *flags);
>> slab_fix(s, "Object at 0x%p not freed", object);
>> @@ -1371,8 +1384,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
>> void *object)
>> {
>> setup_object_debug(s, page, object);
>> - if (unlikely(s->ctor))
>> + if (unlikely(s->ctor)) {
>> + kasan_disable_local();
>> s->ctor(object);
>> + kasan_enable_local();
>> + }
>> }
>> static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>> @@ -1425,11 +1441,12 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>>
>> if (kmem_cache_debug(s)) {
>> void *p;
>> -
>> + kasan_disable_local();
>> slab_pad_check(s, page);
>> for_each_object(p, s, page_address(page),
>> page->objects)
>> check_object(s, page, p, SLUB_RED_INACTIVE);
>> + kasan_enable_local();
>> }
>>
>> kmemcheck_free_shadow(page, compound_order(page));
>> --
>> 1.8.5.5
>>
>> --
>> To unsubscribe, send a message with 'unsubscribe linux-mm' in
>> the body to majordomo at kvack.org. For more info on Linux MM,
>> see: http://www.linux-mm.org/ .
>> Don't email: <a href=mailto:"dont@kvack.org"> email at kvack.org </a>
>
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator
2014-07-15 6:09 ` Joonsoo Kim
@ 2014-07-15 7:45 ` Andrey Ryabinin
0 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-15 7:45 UTC (permalink / raw)
To: linux-arm-kernel
On 07/15/14 10:09, Joonsoo Kim wrote:
> On Wed, Jul 09, 2014 at 03:30:09PM +0400, Andrey Ryabinin wrote:
>> With this patch kasan will be able to catch bugs in memory allocated
>> by slub.
>> When a slab page is allocated, the whole page is marked as unaccessible
>> in the corresponding shadow memory.
>> On allocation of a slub object, the requested allocation size is marked as
>> accessible, and the rest of the object (including slub's metadata) is
>> marked as a redzone (unaccessible).
>>
>> We also mark an object as accessible if ksize was called for it.
>> There are some places in the kernel where ksize is called to inquire the
>> size of the really allocated area. Such callers may validly access the whole
>> allocated memory, so it should be marked as accessible by a kasan_krealloc call.
>>
>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> ---
>> include/linux/kasan.h | 22 ++++++++++
>> include/linux/slab.h | 19 +++++++--
>> lib/Kconfig.kasan | 2 +
>> mm/kasan/kasan.c | 110 ++++++++++++++++++++++++++++++++++++++++++++++++++
>> mm/kasan/kasan.h | 5 +++
>> mm/kasan/report.c | 23 +++++++++++
>> mm/slab.h | 2 +-
>> mm/slab_common.c | 9 +++--
>> mm/slub.c | 24 ++++++++++-
>> 9 files changed, 208 insertions(+), 8 deletions(-)
>>
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index 4adc0a1..583c011 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -20,6 +20,17 @@ void kasan_init_shadow(void);
>> void kasan_alloc_pages(struct page *page, unsigned int order);
>> void kasan_free_pages(struct page *page, unsigned int order);
>>
>> +void kasan_kmalloc_large(const void *ptr, size_t size);
>> +void kasan_kfree_large(const void *ptr);
>> +void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
>> +void kasan_krealloc(const void *object, size_t new_size);
>> +
>> +void kasan_slab_alloc(struct kmem_cache *s, void *object);
>> +void kasan_slab_free(struct kmem_cache *s, void *object);
>> +
>> +void kasan_alloc_slab_pages(struct page *page, int order);
>> +void kasan_free_slab_pages(struct page *page, int order);
>> +
>> #else /* CONFIG_KASAN */
>>
>> static inline void unpoison_shadow(const void *address, size_t size) {}
>> @@ -34,6 +45,17 @@ static inline void kasan_alloc_shadow(void) {}
>> static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
>> static inline void kasan_free_pages(struct page *page, unsigned int order) {}
>>
>> +static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
>> +static inline void kasan_kfree_large(const void *ptr) {}
>> +static inline void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size) {}
>> +static inline void kasan_krealloc(const void *object, size_t new_size) {}
>> +
>> +static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
>> +static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
>> +
>> +static inline void kasan_alloc_slab_pages(struct page *page, int order) {}
>> +static inline void kasan_free_slab_pages(struct page *page, int order) {}
>> +
>> #endif /* CONFIG_KASAN */
>>
>> #endif /* LINUX_KASAN_H */
>> diff --git a/include/linux/slab.h b/include/linux/slab.h
>> index 68b1feab..a9513e9 100644
>> --- a/include/linux/slab.h
>> +++ b/include/linux/slab.h
>> @@ -104,6 +104,7 @@
>> (unsigned long)ZERO_SIZE_PTR)
>>
>> #include <linux/kmemleak.h>
>> +#include <linux/kasan.h>
>>
>> struct mem_cgroup;
>> /*
>> @@ -444,6 +445,8 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
>> */
>> static __always_inline void *kmalloc(size_t size, gfp_t flags)
>> {
>> + void *ret;
>> +
>> if (__builtin_constant_p(size)) {
>> if (size > KMALLOC_MAX_CACHE_SIZE)
>> return kmalloc_large(size, flags);
>> @@ -454,8 +457,12 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
>> if (!index)
>> return ZERO_SIZE_PTR;
>>
>> - return kmem_cache_alloc_trace(kmalloc_caches[index],
>> + ret = kmem_cache_alloc_trace(kmalloc_caches[index],
>> flags, size);
>> +
>> + kasan_kmalloc(kmalloc_caches[index], ret, size);
>> +
>> + return ret;
>> }
>> #endif
>> }
>> @@ -485,6 +492,8 @@ static __always_inline int kmalloc_size(int n)
>> static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
>> {
>> #ifndef CONFIG_SLOB
>> + void *ret;
>> +
>> if (__builtin_constant_p(size) &&
>> size <= KMALLOC_MAX_CACHE_SIZE && !(flags & GFP_DMA)) {
>> int i = kmalloc_index(size);
>> @@ -492,8 +501,12 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
>> if (!i)
>> return ZERO_SIZE_PTR;
>>
>> - return kmem_cache_alloc_node_trace(kmalloc_caches[i],
>> - flags, node, size);
>> + ret = kmem_cache_alloc_node_trace(kmalloc_caches[i],
>> + flags, node, size);
>> +
>> + kasan_kmalloc(kmalloc_caches[i], ret, size);
>> +
>> + return ret;
>> }
>> #endif
>> return __kmalloc_node(size, flags, node);
>> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
>> index 2bfff78..289a624 100644
>> --- a/lib/Kconfig.kasan
>> +++ b/lib/Kconfig.kasan
>> @@ -5,6 +5,8 @@ if HAVE_ARCH_KASAN
>>
>> config KASAN
>> bool "AddressSanitizer: dynamic memory error detector"
>> + depends on SLUB
>> + select STACKTRACE
>> default n
>> help
>> Enables AddressSanitizer - dynamic memory error detector,
>> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
>> index 109478e..9b5182a 100644
>> --- a/mm/kasan/kasan.c
>> +++ b/mm/kasan/kasan.c
>> @@ -177,6 +177,116 @@ void __init kasan_init_shadow(void)
>> }
>> }
>>
>> +void kasan_alloc_slab_pages(struct page *page, int order)
>> +{
>> + if (unlikely(!kasan_initialized))
>> + return;
>> +
>> + poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_REDZONE);
>> +}
>> +
>> +void kasan_free_slab_pages(struct page *page, int order)
>> +{
>> + if (unlikely(!kasan_initialized))
>> + return;
>> +
>> + poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_FREE);
>> +}
>> +
>> +void kasan_slab_alloc(struct kmem_cache *cache, void *object)
>> +{
>> + if (unlikely(!kasan_initialized))
>> + return;
>> +
>> + if (unlikely(object == NULL))
>> + return;
>> +
>> + poison_shadow(object, cache->size, KASAN_KMALLOC_REDZONE);
>> + unpoison_shadow(object, cache->alloc_size);
>> +}
>> +
>> +void kasan_slab_free(struct kmem_cache *cache, void *object)
>> +{
>> + unsigned long size = cache->size;
>> + unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
>> +
>> + if (unlikely(!kasan_initialized))
>> + return;
>> +
>> + poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
>> +}
>> +
>> +void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
>> +{
>> + unsigned long redzone_start;
>> + unsigned long redzone_end;
>> +
>> + if (unlikely(!kasan_initialized))
>> + return;
>> +
>> + if (unlikely(object == NULL))
>> + return;
>> +
>> + redzone_start = round_up((unsigned long)(object + size),
>> + KASAN_SHADOW_SCALE_SIZE);
>> + redzone_end = (unsigned long)object + cache->size;
>> +
>> + unpoison_shadow(object, size);
>> + poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>> + KASAN_KMALLOC_REDZONE);
>> +
>> +}
>> +EXPORT_SYMBOL(kasan_kmalloc);
>> +
>> +void kasan_kmalloc_large(const void *ptr, size_t size)
>> +{
>> + struct page *page;
>> + unsigned long redzone_start;
>> + unsigned long redzone_end;
>> +
>> + if (unlikely(!kasan_initialized))
>> + return;
>> +
>> + if (unlikely(ptr == NULL))
>> + return;
>> +
>> + page = virt_to_page(ptr);
>> + redzone_start = round_up((unsigned long)(ptr + size),
>> + KASAN_SHADOW_SCALE_SIZE);
>> + redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
>> +
>> + unpoison_shadow(ptr, size);
>> + poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>> + KASAN_PAGE_REDZONE);
>> +}
>> +EXPORT_SYMBOL(kasan_kmalloc_large);
>> +
>> +void kasan_krealloc(const void *object, size_t size)
>> +{
>> + struct page *page;
>> +
>> + if (unlikely(object == ZERO_SIZE_PTR))
>> + return;
>> +
>> + page = virt_to_head_page(object);
>> +
>> + if (unlikely(!PageSlab(page)))
>> + kasan_kmalloc_large(object, size);
>> + else
>> + kasan_kmalloc(page->slab_cache, object, size);
>> +}
>> +
>> +void kasan_kfree_large(const void *ptr)
>> +{
>> + struct page *page;
>> +
>> + if (unlikely(!kasan_initialized))
>> + return;
>> +
>> + page = virt_to_page(ptr);
>> + poison_shadow(ptr, PAGE_SIZE << compound_order(page), KASAN_FREE_PAGE);
>> +}
>> +
>> void kasan_alloc_pages(struct page *page, unsigned int order)
>> {
>> if (unlikely(!kasan_initialized))
>> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
>> index be9597e..f925d03 100644
>> --- a/mm/kasan/kasan.h
>> +++ b/mm/kasan/kasan.h
>> @@ -6,6 +6,11 @@
>> #define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1)
>>
>> #define KASAN_FREE_PAGE 0xFF /* page was freed */
>> +#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
>> +#define KASAN_SLAB_REDZONE 0xFD /* Slab page redzone, does not belong to any slub object */
>> +#define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */
>> +#define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */
>> +#define KASAN_SLAB_FREE 0xFA /* free slab page */
>> #define KASAN_SHADOW_GAP 0xF9 /* address belongs to shadow memory */
>>
>> struct access_info {
>> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
>> index 6ef9e57..6d829af 100644
>> --- a/mm/kasan/report.c
>> +++ b/mm/kasan/report.c
>> @@ -43,10 +43,15 @@ static void print_error_description(struct access_info *info)
>> u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
>>
>> switch (shadow_val) {
>> + case KASAN_PAGE_REDZONE:
>> + case KASAN_SLAB_REDZONE:
>> + case KASAN_KMALLOC_REDZONE:
>> case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
>> bug_type = "buffer overflow";
>> break;
>> case KASAN_FREE_PAGE:
>> + case KASAN_SLAB_FREE:
>> + case KASAN_KMALLOC_FREE:
>> bug_type = "use after free";
>> break;
>> case KASAN_SHADOW_GAP:
>> @@ -70,7 +75,25 @@ static void print_address_description(struct access_info *info)
>> page = virt_to_page(info->access_addr);
>>
>> switch (shadow_val) {
>> + case KASAN_SLAB_REDZONE:
>> + cache = virt_to_cache((void *)info->access_addr);
>> + slab_err(cache, page, "access to slab redzone");
>
> We need the head page of the invalid access address for slab_err() since the head
> page has all the metadata of this slab. So, instead of virt_to_cache(),
> use virt_to_head_page() and page->slab_cache.
>
>> + dump_stack();
>> + break;
>> + case KASAN_KMALLOC_FREE:
>> + case KASAN_KMALLOC_REDZONE:
>> + case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
>> + if (PageSlab(page)) {
>> + cache = virt_to_cache((void *)info->access_addr);
>> + slab_start = page_address(virt_to_head_page((void *)info->access_addr));
>> + object = virt_to_obj(cache, slab_start,
>> + (void *)info->access_addr);
>> + object_err(cache, page, object, "kasan error");
>> + break;
>> + }
>
> Same here, page should be head page.
>
Correct, I'll fix it.
Thanks.
> Thanks.
>
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory
2014-07-15 7:37 ` Andrey Ryabinin
@ 2014-07-15 8:18 ` Joonsoo Kim
2014-07-15 9:51 ` Andrey Ryabinin
2014-07-15 14:26 ` Christoph Lameter
0 siblings, 2 replies; 80+ messages in thread
From: Joonsoo Kim @ 2014-07-15 8:18 UTC (permalink / raw)
To: linux-arm-kernel
On Tue, Jul 15, 2014 at 11:37:56AM +0400, Andrey Ryabinin wrote:
> On 07/15/14 10:04, Joonsoo Kim wrote:
> > On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote:
> >> Some code in slub could validly touch memory marked by kasan as unaccessible.
> >> Even though slub.c doesn't instrumented, functions called in it are instrumented,
> >> so to avoid false positive reports such places are protected by
> >> kasan_disable_local()/kasan_enable_local() calls.
> >>
> >> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> >> ---
> >> mm/slub.c | 21 +++++++++++++++++++--
> >> 1 file changed, 19 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/mm/slub.c b/mm/slub.c
> >> index 6ddedf9..c8dbea7 100644
> >> --- a/mm/slub.c
> >> +++ b/mm/slub.c
> >> @@ -560,8 +560,10 @@ static void print_tracking(struct kmem_cache *s, void *object)
> >> if (!(s->flags & SLAB_STORE_USER))
> >> return;
> >>
> >> + kasan_disable_local();
> >> print_track("Allocated", get_track(s, object, TRACK_ALLOC));
> >> print_track("Freed", get_track(s, object, TRACK_FREE));
> >> + kasan_enable_local();
> >
> > I don't think that this is needed since print_track() doesn't call
> > external function with object pointer. print_track() call pr_err(), but,
> > before calling, it retrieve t->addrs[i] so memory access only occurs
> > in slub.c.
> >
> Agree.
>
> >> }
> >>
> >> static void print_page_info(struct page *page)
> >> @@ -604,6 +606,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
> >> unsigned int off; /* Offset of last byte */
> >> u8 *addr = page_address(page);
> >>
> >> + kasan_disable_local();
> >> +
> >> print_tracking(s, p);
> >>
> >> print_page_info(page);
> >> @@ -632,6 +636,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
> >> /* Beginning of the filler is the free pointer */
> >> print_section("Padding ", p + off, s->size - off);
> >>
> >> + kasan_enable_local();
> >> +
> >> dump_stack();
> >> }
> >
> > And, I recommend that you put this hook on right place.
> > At a glance, the problematic function is print_section() which have
> > external function call, print_hex_dump(), with object pointer.
> > If you disable kasan in print_section, all the below thing won't be
> > needed, I guess.
> >
>
> Nope, at least memchr_inv() call in slab_pad_check will be a problem.
>
> I think putting disable/enable only where we strictly need them might be a problem for future maintenance of slub.
> If someone is going to add a new function call somewhere, he must ensure that it this call won't be a problem
> for kasan.
I don't agree with this.
If someone is going to add a slab_pad_check() in other places in
slub.c, we should disable/enable kasan there, too. That looks like the
same maintenance problem to me. Putting disable/enable only where we
strictly need it at least ensures that we don't need to care when using
slub internal functions.
And, if memchr_inv() is a problem, I think you also need to add a hook
into validate_slab_cache():
validate_slab_cache() -> validate_slab_slab() -> validate_slab() ->
check_object() -> check_bytes_and_report() -> memchr_inv()
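Something along these lines, I mean (just a sketch of the hook placement as it
would appear inside mm/slub.c; the real validate_slab() body is abbreviated):

static void validate_slab(struct kmem_cache *s, struct page *page,
			  unsigned long *map)
{
	void *p;
	void *addr = page_address(page);

	/* check_object() ends up in memchr_inv() reading poisoned redzones */
	kasan_disable_local();
	for_each_object(p, s, addr, page->objects)
		check_object(s, page, p, SLUB_RED_ACTIVE);
	kasan_enable_local();
}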
Thanks.
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
2014-07-15 6:12 ` Joonsoo Kim
2014-07-15 6:08 ` Dmitry Vyukov
@ 2014-07-15 9:34 ` Andrey Ryabinin
2014-07-15 9:45 ` Dmitry Vyukov
1 sibling, 1 reply; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-15 9:34 UTC (permalink / raw)
To: linux-arm-kernel
On 07/15/14 10:12, Joonsoo Kim wrote:
> On Wed, Jul 09, 2014 at 03:30:14PM +0400, Andrey Ryabinin wrote:
>> We need to manually unpoison the rounded-up allocation size for dname
>> to avoid kasan reports in __d_lookup_rcu.
>> __d_lookup_rcu may validly read a little beyond the allocated size.
>
> If it reads a little beyond the allocated size, IMHO, it is better to
> allocate the correct size.
>
> kmalloc(name->len + 1, GFP_KERNEL); -->
> kmalloc(roundup(name->len + 1, sizeof(unsigned long)), GFP_KERNEL);
>
> Isn't it?
>
It's not needed here because kmalloc always rounds up the allocation size.
This out-of-bounds access happens in dentry_string_cmp() if CONFIG_DCACHE_WORD_ACCESS=y.
dentry_string_cmp() relies on the fact that kmalloc always rounds up the allocation size;
in other words, it's by design.
That was discussed some time ago here - https://lkml.org/lkml/2013/10/3/493.
Since the filesystem maintainer doesn't want to add a needless round-up here, I'm not going to do it.
I think this patch only needs a more detailed description of why we don't simply allocate more.
Also I think it would be better to rename unpoison_shadow to something like kasan_mark_allocated().
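For reference, the kind of access in question looks roughly like this (a
simplified sketch, not the real dentry_string_cmp()):

#include <linux/string.h>

/* word-at-a-time compare: may read up to sizeof(unsigned long) - 1 bytes
 * past the string, but stays inside the kmalloc object because kmalloc
 * rounds the allocation up to a multiple of sizeof(unsigned long) */
static int word_at_a_time_cmp(const char *a, const char *b, unsigned int len)
{
	unsigned int i;

	for (i = 0; i < len; i += sizeof(unsigned long)) {
		unsigned long wa, wb;

		memcpy(&wa, a + i, sizeof(wa));
		memcpy(&wb, b + i, sizeof(wb));
		if (wa != wb)
			return 1;
	}
	return 0;
}

So only the shadow marking needs adjusting, not the allocation itself.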
> Thanks.
>
>>
>> Reported-by: Dmitry Vyukov <dvyukov@google.com>
>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> ---
>> fs/dcache.c | 3 +++
>> 1 file changed, 3 insertions(+)
>>
>> diff --git a/fs/dcache.c b/fs/dcache.c
>> index b7e8b20..dff64f2 100644
>> --- a/fs/dcache.c
>> +++ b/fs/dcache.c
>> @@ -38,6 +38,7 @@
>> #include <linux/prefetch.h>
>> #include <linux/ratelimit.h>
>> #include <linux/list_lru.h>
>> +#include <linux/kasan.h>
>> #include "internal.h"
>> #include "mount.h"
>>
>> @@ -1412,6 +1413,8 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
>> kmem_cache_free(dentry_cache, dentry);
>> return NULL;
>> }
>> + unpoison_shadow(dname,
>> + roundup(name->len + 1, sizeof(unsigned long)));
>> } else {
>> dname = dentry->d_iname;
>> }
>> --
>> 1.8.5.5
>>
>> --
>> To unsubscribe, send a message with 'unsubscribe linux-mm' in
>> the body to majordomo at kvack.org. For more info on Linux MM,
>> see: http://www.linux-mm.org/ .
>> Don't email: <a href=mailto:"dont@kvack.org"> email at kvack.org </a>
>
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
2014-07-15 9:34 ` Andrey Ryabinin
@ 2014-07-15 9:45 ` Dmitry Vyukov
0 siblings, 0 replies; 80+ messages in thread
From: Dmitry Vyukov @ 2014-07-15 9:45 UTC (permalink / raw)
To: linux-arm-kernel
On Tue, Jul 15, 2014 at 1:34 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> On 07/15/14 10:12, Joonsoo Kim wrote:
>> On Wed, Jul 09, 2014 at 03:30:14PM +0400, Andrey Ryabinin wrote:
>>> We need to manually unpoison rounded up allocation size for dname
>>> to avoid kasan's reports in __d_lookup_rcu.
>>> __d_lookup_rcu may validly read a little beyound allocated size.
>>
>> If it read a little beyond allocated size, IMHO, it is better to
>> allocate correct size.
>>
>> kmalloc(name->len + 1, GFP_KERNEL); -->
>> kmalloc(roundup(name->len + 1, sizeof(unsigned long)), GFP_KERNEL);
>>
>> Isn't it?
>>
>
> It's not needed here because kmalloc always roundup allocation size.
>
> This out of bound access happens in dentry_string_cmp() if CONFIG_DCACHE_WORD_ACCESS=y.
> dentry_string_cmp() relays on fact that kmalloc always round up allocation size,
> in other words it's by design.
>
> That was discussed some time ago here - https://lkml.org/lkml/2013/10/3/493.
> Since filesystem's maintainer don't want to add needless round up here, I'm not going to do it.
>
> I think this patch needs only more detailed description why we not simply allocate more.
> Also I think it would be better to rename unpoisoin_shadow to something like kasan_mark_allocated().
Note that this poison/unpoison functionality can be used in other
contexts. E.g. when you allocate a bunch of pages, then at some point
poison part of them to ensure that nobody touches them, and later unpoison
them again. Allocated/unallocated looks like a bad fit here, because it has
nothing to do with allocation state. Poison/unpoison is also what we
use in user space.
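A sketch of that kind of use (poison_shadow() and KASAN_FREE_PAGE are
internal to mm/kasan in this series, so take the names here as illustrative
only):

#include <linux/gfp.h>
#include <linux/kasan.h>

static void poison_unpoison_example(void)
{
	/* grab four physically contiguous pages */
	void *buf = (void *)__get_free_pages(GFP_KERNEL, 2);

	if (!buf)
		return;

	/* fence off the last page: any access to it will now be reported */
	poison_shadow(buf + 3 * PAGE_SIZE, PAGE_SIZE, KASAN_FREE_PAGE);

	/* ... run code that must not touch that page ... */

	/* make the page usable again, then release everything */
	unpoison_shadow(buf + 3 * PAGE_SIZE, PAGE_SIZE);
	free_pages((unsigned long)buf, 2);
}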
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory
2014-07-15 8:18 ` Joonsoo Kim
@ 2014-07-15 9:51 ` Andrey Ryabinin
2014-07-15 14:26 ` Christoph Lameter
1 sibling, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-15 9:51 UTC (permalink / raw)
To: linux-arm-kernel
On 07/15/14 12:18, Joonsoo Kim wrote:
> On Tue, Jul 15, 2014 at 11:37:56AM +0400, Andrey Ryabinin wrote:
>> On 07/15/14 10:04, Joonsoo Kim wrote:
>>> On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote:
>>>> Some code in slub could validly touch memory marked by kasan as unaccessible.
>>>> Even though slub.c doesn't instrumented, functions called in it are instrumented,
>>>> so to avoid false positive reports such places are protected by
>>>> kasan_disable_local()/kasan_enable_local() calls.
>>>>
>>>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>>>> ---
>>>> mm/slub.c | 21 +++++++++++++++++++--
>>>> 1 file changed, 19 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/mm/slub.c b/mm/slub.c
>>>> index 6ddedf9..c8dbea7 100644
>>>> --- a/mm/slub.c
>>>> +++ b/mm/slub.c
>>>> @@ -560,8 +560,10 @@ static void print_tracking(struct kmem_cache *s, void *object)
>>>> if (!(s->flags & SLAB_STORE_USER))
>>>> return;
>>>>
>>>> + kasan_disable_local();
>>>> print_track("Allocated", get_track(s, object, TRACK_ALLOC));
>>>> print_track("Freed", get_track(s, object, TRACK_FREE));
>>>> + kasan_enable_local();
>>>
>>> I don't think that this is needed since print_track() doesn't call
>>> external function with object pointer. print_track() call pr_err(), but,
>>> before calling, it retrieve t->addrs[i] so memory access only occurs
>>> in slub.c.
>>>
>> Agree.
>>
>>>> }
>>>>
>>>> static void print_page_info(struct page *page)
>>>> @@ -604,6 +606,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>>>> unsigned int off; /* Offset of last byte */
>>>> u8 *addr = page_address(page);
>>>>
>>>> + kasan_disable_local();
>>>> +
>>>> print_tracking(s, p);
>>>>
>>>> print_page_info(page);
>>>> @@ -632,6 +636,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>>>> /* Beginning of the filler is the free pointer */
>>>> print_section("Padding ", p + off, s->size - off);
>>>>
>>>> + kasan_enable_local();
>>>> +
>>>> dump_stack();
>>>> }
>>>
>>> And, I recommend that you put this hook on right place.
>>> At a glance, the problematic function is print_section() which have
>>> external function call, print_hex_dump(), with object pointer.
>>> If you disable kasan in print_section, all the below thing won't be
>>> needed, I guess.
>>>
>>
>> Nope, at least memchr_inv() call in slab_pad_check will be a problem.
>>
>> I think putting disable/enable only where we strictly need them might be a problem for future maintenance of slub.
>> If someone is going to add a new function call somewhere, he must ensure that it this call won't be a problem
>> for kasan.
>
> I don't agree with this.
>
> If someone is going to add a slab_pad_check() in other places in
> slub.c, we should disable/enable kasan there, too. This looks same
> maintenance problem to me. Putting disable/enable only where we
> strictly need at least ensures that we don't need to care when using
> slub internal functions.
>
> And, if memchr_inv() is problem, I think that you also need to add hook
> into validate_slab_cache().
>
> validate_slab_cache() -> validate_slab_slab() -> validate_slab() ->
> check_object() -> check_bytes_and_report() -> memchr_inv()
>
> Thanks.
>
Ok, you convinced me. I'll do it.
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory
2014-07-15 8:18 ` Joonsoo Kim
2014-07-15 9:51 ` Andrey Ryabinin
@ 2014-07-15 14:26 ` Christoph Lameter
2014-07-15 15:02 ` Andrey Ryabinin
1 sibling, 1 reply; 80+ messages in thread
From: Christoph Lameter @ 2014-07-15 14:26 UTC (permalink / raw)
To: linux-arm-kernel
On Tue, 15 Jul 2014, Joonsoo Kim wrote:
> > I think putting disable/enable only where we strictly need them might be a problem for future maintenance of slub.
> > If someone is going to add a new function call somewhere, he must ensure that it this call won't be a problem
> > for kasan.
>
> I don't agree with this.
>
> If someone is going to add a slab_pad_check() in other places in
> slub.c, we should disable/enable kasan there, too. This looks same
> maintenance problem to me. Putting disable/enable only where we
> strictly need at least ensures that we don't need to care when using
> slub internal functions.
>
> And, if memchr_inv() is problem, I think that you also need to add hook
> into validate_slab_cache().
>
> validate_slab_cache() -> validate_slab_slab() -> validate_slab() ->
> check_object() -> check_bytes_and_report() -> memchr_inv()
I think adding disable/enable is good because it separates the payload
access from metadata accesses. This may be useful for future checkers.
Maybe call it something different so that this is more generic.
metadata_access_enable()
metadata_access_disable()
?
Maybe someone else has a better idea?
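Something like this, perhaps (a sketch; for now the generic names would simply
wrap the existing kasan switches inside mm/slub.c):

#include <linux/kasan.h>

/* hide slub's metadata accesses from payload checkers such as kasan */
static inline void metadata_access_enable(void)
{
	kasan_disable_local();
}

static inline void metadata_access_disable(void)
{
	kasan_enable_local();
}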
^ permalink raw reply [flat|nested] 80+ messages in thread
* [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory
2014-07-15 14:26 ` Christoph Lameter
@ 2014-07-15 15:02 ` Andrey Ryabinin
0 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2014-07-15 15:02 UTC (permalink / raw)
To: linux-arm-kernel
On 07/15/14 18:26, Christoph Lameter wrote:
> On Tue, 15 Jul 2014, Joonsoo Kim wrote:
>
>>> I think putting disable/enable only where we strictly need them might be a problem for future maintenance of slub.
>>> If someone is going to add a new function call somewhere, he must ensure that it this call won't be a problem
>>> for kasan.
>>
>> I don't agree with this.
>>
>> If someone is going to add a slab_pad_check() in other places in
>> slub.c, we should disable/enable kasan there, too. This looks same
>> maintenance problem to me. Putting disable/enable only where we
>> strictly need at least ensures that we don't need to care when using
>> slub internal functions.
>>
>> And, if memchr_inv() is problem, I think that you also need to add hook
>> into validate_slab_cache().
>>
>> validate_slab_cache() -> validate_slab_slab() -> validate_slab() ->
>> check_object() -> check_bytes_and_report() -> memchr_inv()
>
> I think adding disable/enable is good because it separates the payload
> access from metadata accesses. This may be useful for future checkers.
> Maybe call it something different so that this is more generic.
>
> metadata_access_enable()
>
> metadata_access_disable()
>
> ?
>
It sounds like a good idea to me. However, in this patch, besides protecting metadata accesses,
these calls are also used in setup_object() to wrap the ctor call. They are used there because all pages in allocate_slab()
are poisoned, so at the time the ctors are called, all of the object's memory is marked as poisoned.
I think this could be solved by removing the kasan_alloc_slab_pages() hook from allocate_slab() and adding a
kasan_slab_free() hook after the ctor call.
But I guess in that case the padding at the end of the slab would be left unpoisoned.
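Roughly what I mean (a sketch of the idea as it would sit in mm/slub.c, not a
tested patch):

static void setup_object(struct kmem_cache *s, struct page *page,
			 void *object)
{
	setup_object_debug(s, page, object);
	if (unlikely(s->ctor)) {
		/* pages are no longer pre-poisoned in allocate_slab(),
		 * so the ctor can run without kasan_disable_local() */
		s->ctor(object);
	}
	/* poison the object until slab_alloc() actually hands it out */
	kasan_slab_free(s, object);
}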
> Maybe someone else has a better idea?
>
>
>
^ permalink raw reply [flat|nested] 80+ messages in thread
* [PATCH v9 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
[not found] ` <1421859105-25253-1-git-send-email-a.ryabinin@samsung.com>
@ 2015-01-21 16:51 ` Andrey Ryabinin
0 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
To: linux-arm-kernel
For instrumenting global variables KASan will need to shadow the memory
backing modules. So on module loading we will need to allocate
shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.
Now that we have the VM_NO_GUARD flag disabling the guard page, we need a
way to pass it into __vmalloc_node_range(). Add a new parameter 'vm_flags'
to the __vmalloc_node_range() function.
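(For context only, not part of this patch: the KASan code that allocates
shadow for a module would end up calling the new interface roughly like this;
kasan_shadow_alloc_example(), shadow_start and shadow_size are illustrative
names, not actual code from the series.)

#include <linux/vmalloc.h>
#include <linux/gfp.h>

static void *kasan_shadow_alloc_example(unsigned long shadow_start,
					unsigned long shadow_size)
{
	/* VM_NO_GUARD keeps the shadow mapping contiguous, no guard hole */
	return __vmalloc_node_range(shadow_size, 1, shadow_start,
				    shadow_start + shadow_size,
				    GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO,
				    PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE,
				    __builtin_return_address(0));
}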
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/kernel/module.c | 2 +-
arch/arm64/kernel/module.c | 2 +-
arch/mips/kernel/module.c | 2 +-
arch/parisc/kernel/module.c | 2 +-
arch/s390/kernel/module.c | 2 +-
arch/sparc/kernel/module.c | 2 +-
arch/unicore32/kernel/module.c | 2 +-
arch/x86/kernel/module.c | 2 +-
include/linux/vmalloc.h | 4 +++-
mm/vmalloc.c | 10 ++++++----
10 files changed, 17 insertions(+), 13 deletions(-)
diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
void *module_alloc(unsigned long size)
{
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..5958d6d 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,7 +35,7 @@
void *module_alloc(unsigned long size)
{
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
void *module_alloc(unsigned long size)
{
return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
- GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 50dfafc..0d498ef 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
* init_data correctly */
return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
GFP_KERNEL | __GFP_HIGHMEM,
- PAGE_KERNEL_RWX, NUMA_NO_NODE,
+ PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index b89b591..411a7ee 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
if (PAGE_ALIGN(size) > MODULES_LEN)
return NULL;
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
if (PAGE_ALIGN(size) > MODULES_LEN)
return NULL;
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
void *module_alloc(unsigned long size)
{
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
return __vmalloc_node_range(size, 1,
MODULES_VADDR + get_module_load_offset(),
MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
- PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+ PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
unsigned long start, unsigned long end, gfp_t gfp_mask,
- pgprot_t prot, int node, const void *caller);
+ pgprot_t prot, unsigned long vm_flags, int node,
+ const void *caller);
+
extern void vfree(const void *addr);
extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
* @end: vm area range end
* @gfp_mask: flags for the page level allocator
* @prot: protection mask for the allocated pages
+ * @vm_flags: additional vm area flags (e.g. %VM_NO_GUARD)
* @node: node to use for allocation or NUMA_NO_NODE
* @caller: caller's return address
*
@@ -1628,7 +1629,8 @@ fail:
*/
void *__vmalloc_node_range(unsigned long size, unsigned long align,
unsigned long start, unsigned long end, gfp_t gfp_mask,
- pgprot_t prot, int node, const void *caller)
+ pgprot_t prot, unsigned long vm_flags, int node,
+ const void *caller)
{
struct vm_struct *area;
void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
if (!size || (size >> PAGE_SHIFT) > totalram_pages)
goto fail;
- area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
- start, end, node, gfp_mask, caller);
+ area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+ vm_flags, start, end, node, gfp_mask, caller);
if (!area)
goto fail;
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
int node, const void *caller)
{
return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
- gfp_mask, prot, node, caller);
+ gfp_mask, prot, 0, node, caller);
}
void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
--
2.2.1
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [PATCH v10 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
[not found] ` <1422544321-24232-1-git-send-email-a.ryabinin@samsung.com>
@ 2015-01-29 15:11 ` Andrey Ryabinin
0 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
To: linux-arm-kernel
For instrumenting global variables KASan will need to shadow the memory
backing modules. So on module loading we will need to allocate
shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.
Now that we have the VM_NO_GUARD flag disabling the guard page, we need a
way to pass it into __vmalloc_node_range(). Add a new parameter 'vm_flags'
to the __vmalloc_node_range() function.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/kernel/module.c | 2 +-
arch/arm64/kernel/module.c | 4 ++--
arch/mips/kernel/module.c | 2 +-
arch/parisc/kernel/module.c | 2 +-
arch/s390/kernel/module.c | 2 +-
arch/sparc/kernel/module.c | 2 +-
arch/unicore32/kernel/module.c | 2 +-
arch/x86/kernel/module.c | 2 +-
include/linux/vmalloc.h | 4 +++-
mm/vmalloc.c | 10 ++++++----
10 files changed, 18 insertions(+), 14 deletions(-)
diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
void *module_alloc(unsigned long size)
{
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..67bf410 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,8 +35,8 @@
void *module_alloc(unsigned long size)
{
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
- __builtin_return_address(0));
+ GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
+ NUMA_NO_NODE, __builtin_return_address(0));
}
enum aarch64_reloc_op {
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
void *module_alloc(unsigned long size)
{
return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
- GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 5822e8e..3c63a82 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
* init_data correctly */
return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
GFP_KERNEL | __GFP_HIGHMEM,
- PAGE_KERNEL_RWX, NUMA_NO_NODE,
+ PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index 409d152..36154a2 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
if (PAGE_ALIGN(size) > MODULES_LEN)
return NULL;
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
if (PAGE_ALIGN(size) > MODULES_LEN)
return NULL;
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
void *module_alloc(unsigned long size)
{
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
return __vmalloc_node_range(size, 1,
MODULES_VADDR + get_module_load_offset(),
MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
- PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+ PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
unsigned long start, unsigned long end, gfp_t gfp_mask,
- pgprot_t prot, int node, const void *caller);
+ pgprot_t prot, unsigned long vm_flags, int node,
+ const void *caller);
+
extern void vfree(const void *addr);
extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
* @end: vm area range end
* @gfp_mask: flags for the page level allocator
* @prot: protection mask for the allocated pages
+ * @vm_flags: additional vm area flags (e.g. %VM_NO_GUARD)
* @node: node to use for allocation or NUMA_NO_NODE
* @caller: caller's return address
*
@@ -1628,7 +1629,8 @@ fail:
*/
void *__vmalloc_node_range(unsigned long size, unsigned long align,
unsigned long start, unsigned long end, gfp_t gfp_mask,
- pgprot_t prot, int node, const void *caller)
+ pgprot_t prot, unsigned long vm_flags, int node,
+ const void *caller)
{
struct vm_struct *area;
void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
if (!size || (size >> PAGE_SHIFT) > totalram_pages)
goto fail;
- area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
- start, end, node, gfp_mask, caller);
+ area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+ vm_flags, start, end, node, gfp_mask, caller);
if (!area)
goto fail;
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
int node, const void *caller)
{
return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
- gfp_mask, prot, node, caller);
+ gfp_mask, prot, 0, node, caller);
}
void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
--
2.2.2
^ permalink raw reply related [flat|nested] 80+ messages in thread
* [PATCH v11 16/19] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
[not found] ` <1422985392-28652-1-git-send-email-a.ryabinin@samsung.com>
@ 2015-02-03 17:43 ` Andrey Ryabinin
0 siblings, 0 replies; 80+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
To: linux-arm-kernel
For instrumenting global variables KASan will need to shadow the memory
backing modules. So on module loading we will need
to allocate memory for the shadow and map it at the address in the shadow
region that corresponds to the address allocated in module_alloc().
__vmalloc_node_range() could be used for this purpose,
except that it puts a guard hole after the allocated area. A guard hole
in the shadow memory would be a problem, because at some future
point we might need to have shadow memory at the address
occupied by the guard hole, and then we would fail to allocate shadow
for module_alloc().
Now that we have the VM_NO_GUARD flag disabling the guard page, we need a
way to pass it into __vmalloc_node_range(). Add a new parameter 'vm_flags'
to the __vmalloc_node_range() function.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/kernel/module.c | 2 +-
arch/arm64/kernel/module.c | 4 ++--
arch/mips/kernel/module.c | 2 +-
arch/parisc/kernel/module.c | 2 +-
arch/s390/kernel/module.c | 2 +-
arch/sparc/kernel/module.c | 2 +-
arch/unicore32/kernel/module.c | 2 +-
arch/x86/kernel/module.c | 2 +-
include/linux/vmalloc.h | 4 +++-
mm/vmalloc.c | 10 ++++++----
10 files changed, 18 insertions(+), 14 deletions(-)
diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
void *module_alloc(unsigned long size)
{
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..67bf410 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,8 +35,8 @@
void *module_alloc(unsigned long size)
{
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
- __builtin_return_address(0));
+ GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
+ NUMA_NO_NODE, __builtin_return_address(0));
}
enum aarch64_reloc_op {
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
void *module_alloc(unsigned long size)
{
return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
- GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 5822e8e..3c63a82 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
* init_data correctly */
return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
GFP_KERNEL | __GFP_HIGHMEM,
- PAGE_KERNEL_RWX, NUMA_NO_NODE,
+ PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index 409d152..36154a2 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
if (PAGE_ALIGN(size) > MODULES_LEN)
return NULL;
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
if (PAGE_ALIGN(size) > MODULES_LEN)
return NULL;
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
void *module_alloc(unsigned long size)
{
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+ GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
return __vmalloc_node_range(size, 1,
MODULES_VADDR + get_module_load_offset(),
MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
- PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+ PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
unsigned long start, unsigned long end, gfp_t gfp_mask,
- pgprot_t prot, int node, const void *caller);
+ pgprot_t prot, unsigned long vm_flags, int node,
+ const void *caller);
+
extern void vfree(const void *addr);
extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
* @end: vm area range end
* @gfp_mask: flags for the page level allocator
* @prot: protection mask for the allocated pages
+ * @vm_flags: additional vm area flags (e.g. %VM_NO_GUARD)
* @node: node to use for allocation or NUMA_NO_NODE
* @caller: caller's return address
*
@@ -1628,7 +1629,8 @@ fail:
*/
void *__vmalloc_node_range(unsigned long size, unsigned long align,
unsigned long start, unsigned long end, gfp_t gfp_mask,
- pgprot_t prot, int node, const void *caller)
+ pgprot_t prot, unsigned long vm_flags, int node,
+ const void *caller)
{
struct vm_struct *area;
void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
if (!size || (size >> PAGE_SHIFT) > totalram_pages)
goto fail;
- area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
- start, end, node, gfp_mask, caller);
+ area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+ vm_flags, start, end, node, gfp_mask, caller);
if (!area)
goto fail;
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
int node, const void *caller)
{
return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
- gfp_mask, prot, node, caller);
+ gfp_mask, prot, 0, node, caller);
}
void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
--
2.2.2
^ permalink raw reply related [flat|nested] 80+ messages in thread
end of thread
Thread overview: 80+ messages
-- links below jump to the message on this page --
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure Andrey Ryabinin
2014-07-09 14:26 ` Christoph Lameter
2014-07-10 7:31 ` Andrey Ryabinin
2014-07-09 19:29 ` Andi Kleen
2014-07-09 20:40 ` Yuri Gribov
2014-07-10 12:10 ` Andrey Ryabinin
2014-07-09 20:26 ` Dave Hansen
2014-07-10 12:12 ` Andrey Ryabinin
2014-07-10 15:55 ` Dave Hansen
2014-07-10 19:48 ` Andrey Ryabinin
2014-07-10 20:04 ` Dave Hansen
2014-07-09 20:37 ` Dave Hansen
2014-07-09 20:38 ` Dave Hansen
2014-07-10 11:55 ` Sasha Levin
2014-07-10 13:01 ` Andrey Ryabinin
2014-07-10 13:31 ` Sasha Levin
2014-07-10 13:39 ` Andrey Ryabinin
2014-07-10 14:02 ` Sasha Levin
2014-07-10 19:04 ` Andrey Ryabinin
2014-07-10 13:50 ` Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 02/21] init: main: initialize kasan's shadow area on boot Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 03/21] x86: add kasan hooks fort memcpy/memmove/memset functions Andrey Ryabinin
2014-07-09 19:31 ` Andi Kleen
2014-07-10 13:54 ` Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 04/21] x86: boot: vdso: disable instrumentation for code not linked with kernel Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 05/21] x86: cpu: don't sanitize early stages of a secondary CPU boot Andrey Ryabinin
2014-07-09 19:33 ` Andi Kleen
2014-07-10 13:15 ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 06/21] x86: mm: init: allocate shadow memory for kasan Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 07/21] x86: Kconfig: enable kernel address sanitizer Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 08/21] mm: page_alloc: add kasan hooks on alloc and free pathes Andrey Ryabinin
2014-07-15 5:52 ` Joonsoo Kim
2014-07-15 6:54 ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 09/21] mm: Makefile: kasan: don't instrument slub.c and slab_common.c files Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 10/21] mm: slab: share virt_to_cache() between slab and slub Andrey Ryabinin
2014-07-15 5:53 ` Joonsoo Kim
2014-07-15 6:56 ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions Andrey Ryabinin
2014-07-09 14:29 ` Christoph Lameter
2014-07-10 7:41 ` Andrey Ryabinin
2014-07-10 14:07 ` Christoph Lameter
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 12/21] mm: util: move krealloc/kzfree to slab_common.c Andrey Ryabinin
2014-07-09 14:32 ` Christoph Lameter
2014-07-10 7:43 ` Andrey Ryabinin
2014-07-10 14:08 ` Christoph Lameter
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 13/21] mm: slub: add allocation size field to struct kmem_cache Andrey Ryabinin
2014-07-09 14:33 ` Christoph Lameter
2014-07-10 8:44 ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory Andrey Ryabinin
2014-07-15 6:04 ` Joonsoo Kim
2014-07-15 7:37 ` Andrey Ryabinin
2014-07-15 8:18 ` Joonsoo Kim
2014-07-15 9:51 ` Andrey Ryabinin
2014-07-15 14:26 ` Christoph Lameter
2014-07-15 15:02 ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator Andrey Ryabinin
2014-07-09 14:48 ` Christoph Lameter
2014-07-10 9:24 ` Andrey Ryabinin
2014-07-15 6:09 ` Joonsoo Kim
2014-07-15 7:45 ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 16/21] arm: boot: compressed: disable kasan's instrumentation Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 17/21] arm: add kasan hooks fort memcpy/memmove/memset functions Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 18/21] arm: mm: reserve shadow memory for kasan Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 19/21] arm: Kconfig: enable kernel address sanitizer Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports Andrey Ryabinin
2014-07-15 6:12 ` Joonsoo Kim
2014-07-15 6:08 ` Dmitry Vyukov
2014-07-15 9:34 ` Andrey Ryabinin
2014-07-15 9:45 ` Dmitry Vyukov
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 21/21] lib: add kmalloc_bug_test module Andrey Ryabinin
2014-07-09 21:19 ` [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Dave Hansen
2014-07-09 21:44 ` Andi Kleen
2014-07-09 21:59 ` Vegard Nossum
2014-07-09 23:33 ` Dave Hansen
2014-07-10 0:03 ` Andi Kleen
2014-07-10 13:59 ` Andrey Ryabinin
[not found] ` <1421859105-25253-1-git-send-email-a.ryabinin@samsung.com>
2015-01-21 16:51 ` [PATCH v9 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range() Andrey Ryabinin
[not found] ` <1422544321-24232-1-git-send-email-a.ryabinin@samsung.com>
2015-01-29 15:11 ` [PATCH v10 " Andrey Ryabinin
[not found] ` <1422985392-28652-1-git-send-email-a.ryabinin@samsung.com>
2015-02-03 17:43 ` [PATCH v11 16/19] " Andrey Ryabinin