* Re: [PATCH v6 00/18] khwasan: kernel hardware assisted address sanitizer [not found] <cover.1535462971.git.andreyknvl@google.com> @ 2018-09-05 21:10 ` Andrew Morton 2018-09-05 21:55 ` Nick Desaulniers 2018-09-06 10:05 ` Will Deacon [not found] ` <db103bdc2109396af0c6007f1669ebbbb63b872b.1535462971.git.andreyknvl@google.com> ` (9 subsequent siblings) 10 siblings, 2 replies; 34+ messages in thread From: Andrew Morton @ 2018-09-05 21:10 UTC (permalink / raw) To: Andrey Konovalov Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart On Wed, 29 Aug 2018 13:35:04 +0200 Andrey Konovalov <andreyknvl@google.com> wrote: > This patchset adds a new mode to KASAN [1], which is called KHWASAN > (Kernel HardWare assisted Address SANitizer). We're at v6 and there are no reviewed-by's or acked-by's to be seen. Is that a fair commentary on what has been happening, or have people been remiss in sending and gathering such things? ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 00/18] khwasan: kernel hardware assisted address sanitizer 2018-09-05 21:10 ` [PATCH v6 00/18] khwasan: kernel hardware assisted address sanitizer Andrew Morton @ 2018-09-05 21:55 ` Nick Desaulniers 2018-09-06 10:05 ` Will Deacon 1 sibling, 0 replies; 34+ messages in thread From: Nick Desaulniers @ 2018-09-05 21:55 UTC (permalink / raw) To: Andrew Morton, Andrey Ryabinin, Catalin Marinas, Will Deacon Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov, Christoph Lameter, Mark Rutland, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg KH, Kate Stewart, Mike Rapoport, kasan-dev, linux-doc, LKML On Wed, Sep 5, 2018 at 2:10 PM Andrew Morton <akpm@linux-foundation.org> wrote: > > On Wed, 29 Aug 2018 13:35:04 +0200 Andrey Konovalov <andreyknvl@google.com> wrote: > > > This patchset adds a new mode to KASAN [1], which is called KHWASAN > > (Kernel HardWare assisted Address SANitizer). > > We're at v6 and there are no reviewed-by's or acked-by's to be seen. > Is that a fair commentary on what has been happening, or have people > been remiss in sending and gathering such things? > I'm anxious to use these for Pixel Android devices. Looks like the series has been aggregating changes from valuable feedback. Maybe if the ARM maintainers and KASAN maintainers could Ack or Nack these, we could decide to merge these or what needs more work? -- Thanks, ~Nick Desaulniers ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 00/18] khwasan: kernel hardware assisted address sanitizer 2018-09-05 21:10 ` [PATCH v6 00/18] khwasan: kernel hardware assisted address sanitizer Andrew Morton 2018-09-05 21:55 ` Nick Desaulniers @ 2018-09-06 10:05 ` Will Deacon 2018-09-06 11:06 ` Andrey Konovalov 1 sibling, 1 reply; 34+ messages in thread From: Will Deacon @ 2018-09-06 10:05 UTC (permalink / raw) To: Andrew Morton Cc: Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Sep 05, 2018 at 02:10:32PM -0700, Andrew Morton wrote: > On Wed, 29 Aug 2018 13:35:04 +0200 Andrey Konovalov <andreyknvl@google.com> wrote: > > > This patchset adds a new mode to KASAN [1], which is called KHWASAN > > (Kernel HardWare assisted Address SANitizer). > > We're at v6 and there are no reviewed-by's or acked-by's to be seen. > Is that a fair commentary on what has been happening, or have people > been remiss in sending and gathering such things? I still have concerns about the consequences of merging this as anything other than a debug option [1]. Unfortunately, merging it as a debug option defeats the whole point, so I think we need to spend more effort on developing tools that can help us to find and fix the subtle bugs which will arise from enabling tagged pointers in the kernel. Will [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2018-August/596077.html ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 00/18] khwasan: kernel hardware assisted address sanitizer 2018-09-06 10:05 ` Will Deacon @ 2018-09-06 11:06 ` Andrey Konovalov 2018-09-06 16:39 ` Nick Desaulniers 2018-09-14 15:28 ` Will Deacon 0 siblings, 2 replies; 34+ messages in thread From: Andrey Konovalov @ 2018-09-06 11:06 UTC (permalink / raw) To: Will Deacon Cc: Andrew Morton, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <ks> On Thu, Sep 6, 2018 at 12:05 PM, Will Deacon <will.deacon@arm.com> wrote: > On Wed, Sep 05, 2018 at 02:10:32PM -0700, Andrew Morton wrote: >> On Wed, 29 Aug 2018 13:35:04 +0200 Andrey Konovalov <andreyknvl@google.com> wrote: >> >> > This patchset adds a new mode to KASAN [1], which is called KHWASAN >> > (Kernel HardWare assisted Address SANitizer). >> >> We're at v6 and there are no reviewed-by's or acked-by's to be seen. >> Is that a fair commentary on what has been happening, or have people >> been remiss in sending and gathering such things? > > I still have concerns about the consequences of merging this as anything > other than a debug option [1]. Unfortunately, merging it as a debug option > defeats the whole point, so I think we need to spend more effort on developing > tools that can help us to find and fix the subtle bugs which will arise from > enabling tagged pointers in the kernel. I totally don't mind calling it a debug option. Do I need to somehow specify it somewhere? Why does it defeat the point? The point is to ease KASAN-like testing on devices with limited memory. ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 00/18] khwasan: kernel hardware assisted address sanitizer 2018-09-06 11:06 ` Andrey Konovalov @ 2018-09-06 16:39 ` Nick Desaulniers 2018-09-14 15:28 ` Will Deacon 1 sibling, 0 replies; 34+ messages in thread From: Nick Desaulniers @ 2018-09-06 16:39 UTC (permalink / raw) To: Andrey Konovalov Cc: Will Deacon, Andrew Morton, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Mark Rutland, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg KH, Kate Stewart On Thu, Sep 6, 2018 at 4:06 AM Andrey Konovalov <andreyknvl@google.com> wrote: > > On Thu, Sep 6, 2018 at 12:05 PM, Will Deacon <will.deacon@arm.com> wrote: > > On Wed, Sep 05, 2018 at 02:10:32PM -0700, Andrew Morton wrote: > >> On Wed, 29 Aug 2018 13:35:04 +0200 Andrey Konovalov <andreyknvl@google.com> wrote: > >> > >> > This patchset adds a new mode to KASAN [1], which is called KHWASAN > >> > (Kernel HardWare assisted Address SANitizer). > >> > >> We're at v6 and there are no reviewed-by's or acked-by's to be seen. > >> Is that a fair commentary on what has been happening, or have people > >> been remiss in sending and gathering such things? > > > > I still have concerns about the consequences of merging this as anything > > other than a debug option [1]. Unfortunately, merging it as a debug option > > defeats the whole point, so I think we need to spend more effort on developing > > tools that can help us to find and fix the subtle bugs which will arise from > > enabling tagged pointers in the kernel. > > I totally don't mind calling it a debug option. Do I need to somehow > specify it somewhere? > > Why does it defeat the point? The point is to ease KASAN-like testing > on devices with limited memory. I don't disagree with using it strictly for debug. 
When I say I want the series for Pixel phones, I should have been clearer that my intent is for a limited pool of internal testers to walk around with KHWASAN enabled devices; not general end users. It's hard enough today to get anyone to test KASAN/ASAN on their "daily driver" due to the memory usage and resulting performance. We don't ship KASAN or KUBSAN on by default to end users (nor plan to); it's used strictly for fuzzing through syzkaller (or by brave "dogfooders" on the internal kernel teams). KHWASAN would let these dogfooders go from being brave to fearless. -- Thanks, ~Nick Desaulniers ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 00/18] khwasan: kernel hardware assisted address sanitizer 2018-09-06 11:06 ` Andrey Konovalov 2018-09-06 16:39 ` Nick Desaulniers @ 2018-09-14 15:28 ` Will Deacon 2018-09-19 18:53 ` Andrey Konovalov 1 sibling, 1 reply; 34+ messages in thread From: Will Deacon @ 2018-09-14 15:28 UTC (permalink / raw) To: Andrey Konovalov Cc: Andrew Morton, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <ks> On Thu, Sep 06, 2018 at 01:06:23PM +0200, Andrey Konovalov wrote: > On Thu, Sep 6, 2018 at 12:05 PM, Will Deacon <will.deacon@arm.com> wrote: > > On Wed, Sep 05, 2018 at 02:10:32PM -0700, Andrew Morton wrote: > >> On Wed, 29 Aug 2018 13:35:04 +0200 Andrey Konovalov <andreyknvl@google.com> wrote: > >> > >> > This patchset adds a new mode to KASAN [1], which is called KHWASAN > >> > (Kernel HardWare assisted Address SANitizer). > >> > >> We're at v6 and there are no reviewed-by's or acked-by's to be seen. > >> Is that a fair commentary on what has been happening, or have people > >> been remiss in sending and gathering such things? > > > > I still have concerns about the consequences of merging this as anything > > other than a debug option [1]. Unfortunately, merging it as a debug option > > defeats the whole point, so I think we need to spend more effort on developing > > tools that can help us to find and fix the subtle bugs which will arise from > > enabling tagged pointers in the kernel. > > I totally don't mind calling it a debug option. Do I need to somehow > specify it somewhere? Ok, sorry, I completely misunderstood you earlier on then! For some reason I thought you wanted this on by default. In which case, I'm ok with the overall idea as long as we make the caveats clear in the Kconfig text. 
In particular, that enabling this option may introduce problems relating to pointer casting and comparison, but can offer better coverage and lower memory consumption than a fully software-based KASAN solution. Will ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 00/18] khwasan: kernel hardware assisted address sanitizer 2018-09-14 15:28 ` Will Deacon @ 2018-09-19 18:53 ` Andrey Konovalov 0 siblings, 0 replies; 34+ messages in thread From: Andrey Konovalov @ 2018-09-19 18:53 UTC (permalink / raw) To: Will Deacon Cc: Andrew Morton, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <ks> On Fri, Sep 14, 2018 at 5:28 PM, Will Deacon <will.deacon@arm.com> wrote: > On Thu, Sep 06, 2018 at 01:06:23PM +0200, Andrey Konovalov wrote: >> On Thu, Sep 6, 2018 at 12:05 PM, Will Deacon <will.deacon@arm.com> wrote: >> > On Wed, Sep 05, 2018 at 02:10:32PM -0700, Andrew Morton wrote: >> >> On Wed, 29 Aug 2018 13:35:04 +0200 Andrey Konovalov <andreyknvl@google.com> wrote: >> >> >> >> > This patchset adds a new mode to KASAN [1], which is called KHWASAN >> >> > (Kernel HardWare assisted Address SANitizer). >> >> >> >> We're at v6 and there are no reviewed-by's or acked-by's to be seen. >> >> Is that a fair commentary on what has been happening, or have people >> >> been remiss in sending and gathering such things? >> > >> > I still have concerns about the consequences of merging this as anything >> > other than a debug option [1]. Unfortunately, merging it as a debug option >> > defeats the whole point, so I think we need to spend more effort on developing >> > tools that can help us to find and fix the subtle bugs which will arise from >> > enabling tagged pointers in the kernel. >> >> I totally don't mind calling it a debug option. Do I need to somehow >> specify it somewhere? > > Ok, sorry, I completely misunderstood you earlier on then! For some reason > I thought you wanted this on by default. 
> > In which case, I'm ok with the overall idea as long as we make the caveats > clear in the Kconfig text. In particular, that enabling this option may > introduce problems relating to pointer casting and comparison, but can > offer better coverage and lower memory consumption than a fully > software-based KASAN solution. Great! I'll explicitly call it debug feature and mention the caveats in v7. Thanks! ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 16/18] khwasan, mm, arm64: tag non slab memory allocated via pagealloc [not found] ` <3f2dee71-1615-4a34-d611-3ccaf407551e@virtuozzo.com> @ 2018-09-11 16:10 ` Andrey Konovalov 0 siblings, 0 replies; 34+ messages in thread From: Andrey Konovalov @ 2018-09-11 16:10 UTC (permalink / raw) To: Andrey Ryabinin Cc: Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart On Fri, Sep 7, 2018 at 6:06 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote: > > > On 08/29/2018 02:35 PM, Andrey Konovalov wrote: > >> void kasan_poison_slab(struct page *page) >> { >> + unsigned long i; >> + >> + if (IS_ENABLED(CONFIG_SLAB)) >> + page->s_mem = reset_tag(page->s_mem); > > Why reinitialize here, instead of single initialization in alloc_slabmgmt()? Hm, don't see why I did it this way, looks odd to me as well. Will fix in v7, thanks! ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 03/18] khwasan: add CONFIG_KASAN_GENERIC and CONFIG_KASAN_HW [not found] ` <868c9168481ff5103034ac1e37b830d28ed5f4ee.1535462971.git.andreyknvl@google.com> @ 2018-09-12 14:47 ` Dmitry Vyukov 2018-09-12 14:51 ` Dmitry Vyukov 2018-09-17 18:42 ` Andrey Konovalov 0 siblings, 2 replies; 34+ messages in thread From: Dmitry Vyukov @ 2018-09-12 14:47 UTC (permalink / raw) To: Andrey Konovalov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: > This commit splits the current CONFIG_KASAN config option into two: > 1. CONFIG_KASAN_GENERIC, that enables the generic software-only KASAN > version (the one that exists now); > 2. CONFIG_KASAN_HW, that enables KHWASAN. > > With CONFIG_KASAN_HW enabled, compiler options are changed to instrument > kernel files wiht -fsantize=hwaddress (except the ones for which > KASAN_SANITIZE := n is set). > > Both CONFIG_KASAN_GENERIC and CONFIG_KASAN_HW support both > CONFIG_KASAN_INLINE and CONFIG_KASAN_OUTLINE instrumentation modes. > > This commit also adds empty placeholder (for now) implementation of > KHWASAN specific hooks inserted by the compiler and adjusts common hooks > implementation to compile correctly with each of the config options. 
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com> > --- > arch/arm64/Kconfig | 1 + > include/linux/compiler-clang.h | 3 +- > include/linux/compiler-gcc.h | 4 ++ > include/linux/compiler.h | 3 +- > include/linux/kasan.h | 16 +++++-- > lib/Kconfig.kasan | 77 ++++++++++++++++++++++++++-------- > mm/kasan/Makefile | 6 ++- > mm/kasan/kasan.h | 3 +- > mm/kasan/khwasan.c | 75 +++++++++++++++++++++++++++++++++ > mm/slub.c | 2 +- > scripts/Makefile.kasan | 27 +++++++++++- > 11 files changed, 188 insertions(+), 29 deletions(-) > create mode 100644 mm/kasan/khwasan.c > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig > index 29e75b47becd..991564148f54 100644 > --- a/arch/arm64/Kconfig > +++ b/arch/arm64/Kconfig > @@ -105,6 +105,7 @@ config ARM64 > select HAVE_ARCH_HUGE_VMAP > select HAVE_ARCH_JUMP_LABEL > select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48) > + select HAVE_ARCH_KASAN_HW if !(ARM64_16K_PAGES && ARM64_VA_BITS_48) > select HAVE_ARCH_KGDB > select HAVE_ARCH_MMAP_RND_BITS > select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT > diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h > index b1ce500fe8b3..2c258a9d4c67 100644 > --- a/include/linux/compiler-clang.h > +++ b/include/linux/compiler-clang.h > @@ -17,11 +17,12 @@ > #define KASAN_ABI_VERSION 5 > > /* emulate gcc's __SANITIZE_ADDRESS__ flag */ > -#if __has_feature(address_sanitizer) > +#if __has_feature(address_sanitizer) || __has_feature(hwaddress_sanitizer) > #define __SANITIZE_ADDRESS__ > #endif > > #define __no_sanitize_address __attribute__((no_sanitize("address"))) > +#define __no_sanitize_hwaddress __attribute__((no_sanitize("hwaddress"))) It seems that it would be better to have just 1 attribute for both types. Currently __no_sanitize_address is used just in a single place. But if it ever used more, people will need to always spell both which looks unnecessary, or, worse will only fix asan but forget about khwasan. 
If we do just: #define __no_sanitize_address __attribute__((no_sanitize("address", "hwaddress"))) Then we don't need any changes in compiler-gcc.h nor in compiler.h, and no chance or forgetting one of them. > /* > * Not all versions of clang implement the the type-generic versions > diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h > index 763bbad1e258..a186b55c8c4c 100644 > --- a/include/linux/compiler-gcc.h > +++ b/include/linux/compiler-gcc.h > @@ -227,6 +227,10 @@ > #define __no_sanitize_address > #endif > > +#if !defined(__no_sanitize_hwaddress) > +#define __no_sanitize_hwaddress /* gcc doesn't support KHWASAN */ > +#endif > + > /* > * Turn individual warnings and errors on and off locally, depending > * on version. > diff --git a/include/linux/compiler.h b/include/linux/compiler.h > index 681d866efb1e..3f2ba192d57d 100644 > --- a/include/linux/compiler.h > +++ b/include/linux/compiler.h > @@ -195,7 +195,8 @@ void __read_once_size(const volatile void *p, void *res, int size) > * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67368 > * '__maybe_unused' allows us to avoid defined-but-not-used warnings. 
> */ > -# define __no_kasan_or_inline __no_sanitize_address __maybe_unused > +# define __no_kasan_or_inline __no_sanitize_address __no_sanitize_hwaddress \ > + __maybe_unused > #else > # define __no_kasan_or_inline __always_inline > #endif > diff --git a/include/linux/kasan.h b/include/linux/kasan.h > index 54d577ad2181..beb56a26fe9b 100644 > --- a/include/linux/kasan.h > +++ b/include/linux/kasan.h > @@ -45,8 +45,6 @@ void kasan_free_pages(struct page *page, unsigned int order); > > void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, > slab_flags_t *flags); > -void kasan_cache_shrink(struct kmem_cache *cache); > -void kasan_cache_shutdown(struct kmem_cache *cache); > > void kasan_poison_slab(struct page *page); > void kasan_unpoison_object_data(struct kmem_cache *cache, void *object); > @@ -97,8 +95,6 @@ static inline void kasan_free_pages(struct page *page, unsigned int order) {} > static inline void kasan_cache_create(struct kmem_cache *cache, > unsigned int *size, > slab_flags_t *flags) {} > -static inline void kasan_cache_shrink(struct kmem_cache *cache) {} > -static inline void kasan_cache_shutdown(struct kmem_cache *cache) {} > > static inline void kasan_poison_slab(struct page *page) {} > static inline void kasan_unpoison_object_data(struct kmem_cache *cache, > @@ -152,4 +148,16 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; } > > #endif /* CONFIG_KASAN */ > > +#ifdef CONFIG_KASAN_GENERIC > + > +void kasan_cache_shrink(struct kmem_cache *cache); > +void kasan_cache_shutdown(struct kmem_cache *cache); > + > +#else /* CONFIG_KASAN_GENERIC */ > + > +static inline void kasan_cache_shrink(struct kmem_cache *cache) {} > +static inline void kasan_cache_shutdown(struct kmem_cache *cache) {} > + > +#endif /* CONFIG_KASAN_GENERIC */ > + > #endif /* LINUX_KASAN_H */ > diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan > index befb127507c0..5a22629f30e7 100644 > --- a/lib/Kconfig.kasan > +++ b/lib/Kconfig.kasan > @@ 
-1,34 +1,75 @@ > config HAVE_ARCH_KASAN > bool > > +config HAVE_ARCH_KASAN_HW > + bool > + > if HAVE_ARCH_KASAN > > config KASAN > - bool "KASan: runtime memory debugger" > + bool "KASAN: runtime memory debugger" > + help > + Enables KASAN (KernelAddressSANitizer) - runtime memory debugger, > + designed to find out-of-bounds accesses and use-after-free bugs. Perhaps also give link to Documentation/dev-tools/kasan.rst while we are here. > + > +choice > + prompt "KASAN mode" > + depends on KASAN > + default KASAN_GENERIC > + help > + KASAN has two modes: KASAN (a classic version, similar to userspace In these few sentences we call the old mode with 3 different terms: "generic", "classic" and "KASAN" :) This is somewhat confusing. Let's call it "generic" throughout (here and in the docs patch). "Generic" as in "supported on multiple arch and not-dependent on hardware features". "Classic" makes sense for people who knew KASAN before, but for future readers in won't make sense. > + ASan, enabled with CONFIG_KASAN_GENERIC) and KHWASAN (a version > + based on pointer tagging, only for arm64, similar to userspace > + HWASan, enabled with CONFIG_KASAN_HW). > + > +config KASAN_GENERIC > + bool "KASAN: the generic mode" > depends on (SLUB && SYSFS) || (SLAB && !DEBUG_SLAB) > select SLUB_DEBUG if SLUB > select CONSTRUCTORS > select STACKDEPOT > help > - Enables kernel address sanitizer - runtime memory debugger, > - designed to find out-of-bounds accesses and use-after-free bugs. > - This is strictly a debugging feature and it requires a gcc version > - of 4.9.2 or later. Detection of out of bounds accesses to stack or > - global variables requires gcc 5.0 or later. > - This feature consumes about 1/8 of available memory and brings about > - ~x3 performance slowdown. > + Enables the generic mode of KASAN. > + This is strictly a debugging feature and it requires a GCC version > + of 4.9.2 or later. 
Detection of out-of-bounds accesses to stack or > + global variables requires GCC 5.0 or later. > + This mode consumes about 1/8 of available memory at kernel start > + and introduces an overhead of ~x1.5 for the rest of the allocations. > + The performance slowdown is ~x3. > For better error detection enable CONFIG_STACKTRACE. > - Currently CONFIG_KASAN doesn't work with CONFIG_DEBUG_SLAB > + Currently CONFIG_KASAN_GENERIC doesn't work with CONFIG_DEBUG_SLAB > (the resulting kernel does not boot). > > +if HAVE_ARCH_KASAN_HW This choice looks somewhat weird on non-arm64. It's kinda a choice menu, but one can't really choose anything. Should we put the whole choice under HAVE_ARCH_KASAN_HW, and just select KASAN_GENERIC otherwise? I don't know what't the practice here. Andrey R? > +config KASAN_HW > + bool "KHWASAN: the hardware assisted mode" > + depends on (SLUB && SYSFS) || (SLAB && !DEBUG_SLAB) > + select SLUB_DEBUG if SLUB > + select CONSTRUCTORS > + select STACKDEPOT > + help > + Enabled KHWASAN (KASAN mode based on pointer tagging). > + This mode requires Top Byte Ignore support by the CPU and therefore > + only supported for arm64. > + This feature requires clang revision 330044 or later. > + This mode consumes about 1/16 of available memory at kernel start > + and introduces an overhead of ~20% for the rest of the allocations. > + For better error detection enable CONFIG_STACKTRACE. > + Currently CONFIG_KASAN_HW doesn't work with CONFIG_DEBUG_SLAB > + (the resulting kernel does not boot). > + > +endif > + > +endchoice > + > config KASAN_EXTRA > - bool "KAsan: extra checks" > - depends on KASAN && DEBUG_KERNEL && !COMPILE_TEST > + bool "KASAN: extra checks" > + depends on KASAN_GENERIC && DEBUG_KERNEL && !COMPILE_TEST > help > - This enables further checks in the kernel address sanitizer, for now > - it only includes the address-use-after-scope check that can lead > - to excessive kernel stack usage, frame size warnings and longer > - compile time. 
> + This enables further checks in KASAN, for now it only includes the > + address-use-after-scope check that can lead to excessive kernel > + stack usage, frame size warnings and longer compile time. > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 has more > > > @@ -53,16 +94,16 @@ config KASAN_INLINE > memory accesses. This is faster than outline (in some workloads > it gives about x2 boost over outline instrumentation), but > make kernel's .text size much bigger. > - This requires a gcc version of 5.0 or later. > + For CONFIG_KASAN_GENERIC this requires GCC 5.0 or later. > > endchoice > > config TEST_KASAN > - tristate "Module for testing kasan for bug detection" > + tristate "Module for testing KASAN for bug detection" > depends on m && KASAN > help > This is a test module doing various nasty things like > out of bounds accesses, use after free. It is useful for testing > - kernel debugging features like kernel address sanitizer. > + kernel debugging features like KASAN. > > endif > diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile > index a6df14bffb6b..14955add96d3 100644 > --- a/mm/kasan/Makefile > +++ b/mm/kasan/Makefile > @@ -2,6 +2,7 @@ > KASAN_SANITIZE := n > UBSAN_SANITIZE_common.o := n > UBSAN_SANITIZE_kasan.o := n > +UBSAN_SANITIZE_khwasan.o := n > KCOV_INSTRUMENT := n > > CFLAGS_REMOVE_kasan.o = -pg > @@ -10,5 +11,8 @@ CFLAGS_REMOVE_kasan.o = -pg > > CFLAGS_common.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) > CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) > +CFLAGS_khwasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) > > -obj-y := common.o kasan.o report.o kasan_init.o quarantine.o > +obj-$(CONFIG_KASAN) := common.o kasan_init.o report.o > +obj-$(CONFIG_KASAN_GENERIC) += kasan.o quarantine.o > +obj-$(CONFIG_KASAN_HW) += khwasan.o > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h > index 659463800f10..19b950eaccff 100644 > --- a/mm/kasan/kasan.h > +++ b/mm/kasan/kasan.h > @@ 
-114,7 +114,8 @@ void kasan_report(unsigned long addr, size_t size, > bool is_write, unsigned long ip); > void kasan_report_invalid_free(void *object, unsigned long ip); > > -#if defined(CONFIG_SLAB) || defined(CONFIG_SLUB) > +#if defined(CONFIG_KASAN_GENERIC) && \ > + (defined(CONFIG_SLAB) || defined(CONFIG_SLUB)) > void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache); > void quarantine_reduce(void); > void quarantine_remove_cache(struct kmem_cache *cache); > diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c > new file mode 100644 > index 000000000000..e2c3a7f7fd1f > --- /dev/null > +++ b/mm/kasan/khwasan.c > @@ -0,0 +1,75 @@ > +/* > + * This file contains core KHWASAN code. > + * > + * Copyright (c) 2018 Google, Inc. > + * Author: Andrey Konovalov <andreyknvl@google.com> > + * > + * This program is free software; you can redistribute it and/or modify > + * it under the terms of the GNU General Public License version 2 as > + * published by the Free Software Foundation. 
> + * > + */ > + > +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt > +#define DISABLE_BRANCH_PROFILING > + > +#include <linux/export.h> > +#include <linux/interrupt.h> > +#include <linux/init.h> > +#include <linux/kasan.h> > +#include <linux/kernel.h> > +#include <linux/kmemleak.h> > +#include <linux/linkage.h> > +#include <linux/memblock.h> > +#include <linux/memory.h> > +#include <linux/mm.h> > +#include <linux/module.h> > +#include <linux/printk.h> > +#include <linux/random.h> > +#include <linux/sched.h> > +#include <linux/sched/task_stack.h> > +#include <linux/slab.h> > +#include <linux/stacktrace.h> > +#include <linux/string.h> > +#include <linux/types.h> > +#include <linux/vmalloc.h> > +#include <linux/bug.h> > + > +#include "kasan.h" > +#include "../slab.h" > + > +void check_memory_region(unsigned long addr, size_t size, bool write, > + unsigned long ret_ip) > +{ > +} > + > +#define DEFINE_HWASAN_LOAD_STORE(size) \ > + void __hwasan_load##size##_noabort(unsigned long addr) \ > + { \ > + } \ > + EXPORT_SYMBOL(__hwasan_load##size##_noabort); \ > + void __hwasan_store##size##_noabort(unsigned long addr) \ > + { \ > + } \ > + EXPORT_SYMBOL(__hwasan_store##size##_noabort) > + > +DEFINE_HWASAN_LOAD_STORE(1); > +DEFINE_HWASAN_LOAD_STORE(2); > +DEFINE_HWASAN_LOAD_STORE(4); > +DEFINE_HWASAN_LOAD_STORE(8); > +DEFINE_HWASAN_LOAD_STORE(16); > + > +void __hwasan_loadN_noabort(unsigned long addr, unsigned long size) > +{ > +} > +EXPORT_SYMBOL(__hwasan_loadN_noabort); > + > +void __hwasan_storeN_noabort(unsigned long addr, unsigned long size) > +{ > +} > +EXPORT_SYMBOL(__hwasan_storeN_noabort); > + > +void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size) > +{ > +} > +EXPORT_SYMBOL(__hwasan_tag_memory); > diff --git a/mm/slub.c b/mm/slub.c > index 30b9bf777bab..4206e1b616e7 100644 > --- a/mm/slub.c > +++ b/mm/slub.c > @@ -2955,7 +2955,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page, > do_slab_free(s, page, head, tail, cnt, 
addr); > } > > -#ifdef CONFIG_KASAN > +#ifdef CONFIG_KASAN_GENERIC > void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr) > { > do_slab_free(cache, virt_to_head_page(x), x, NULL, 1, addr); > diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan > index 69552a39951d..49c6e056c697 100644 > --- a/scripts/Makefile.kasan > +++ b/scripts/Makefile.kasan > @@ -1,5 +1,5 @@ > # SPDX-License-Identifier: GPL-2.0 > -ifdef CONFIG_KASAN > +ifdef CONFIG_KASAN_GENERIC > ifdef CONFIG_KASAN_INLINE > call_threshold := 10000 > else > @@ -42,6 +42,29 @@ ifdef CONFIG_KASAN_EXTRA > CFLAGS_KASAN += $(call cc-option, -fsanitize-address-use-after-scope) > endif > > -CFLAGS_KASAN_NOSANITIZE := -fno-builtin > +endif > + > +ifdef CONFIG_KASAN_HW > + > +ifdef CONFIG_KASAN_INLINE > + instrumentation_flags := -mllvm -hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET) > +else > + instrumentation_flags := -mllvm -hwasan-instrument-with-calls=1 > +endif > > +CFLAGS_KASAN := -fsanitize=kernel-hwaddress \ > + -mllvm -hwasan-instrument-stack=0 \ > + $(instrumentation_flags) > + > +ifeq ($(call cc-option, $(CFLAGS_KASAN) -Werror),) > + ifneq ($(CONFIG_COMPILE_TEST),y) > + $(warning Cannot use CONFIG_KASAN_HW: \ > + -fsanitize=hwaddress is not supported by compiler) > + endif > +endif > + > +endif > + > +ifdef CONFIG_KASAN > +CFLAGS_KASAN_NOSANITIZE := -fno-builtin > endif > -- > 2.19.0.rc0.228.g281dcd1b4d0-goog > ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 03/18] khwasan: add CONFIG_KASAN_GENERIC and CONFIG_KASAN_HW 2018-09-12 14:47 ` [PATCH v6 03/18] khwasan: add CONFIG_KASAN_GENERIC and CONFIG_KASAN_HW Dmitry Vyukov @ 2018-09-12 14:51 ` Dmitry Vyukov 2018-09-17 18:42 ` Andrey Konovalov 1 sibling, 0 replies; 34+ messages in thread From: Dmitry Vyukov @ 2018-09-12 14:51 UTC (permalink / raw) To: Andrey Konovalov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Sep 12, 2018 at 4:47 PM, Dmitry Vyukov <dvyukov@google.com> wrote: > On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: >> This commit splits the current CONFIG_KASAN config option into two: >> 1. CONFIG_KASAN_GENERIC, that enables the generic software-only KASAN >> version (the one that exists now); >> 2. CONFIG_KASAN_HW, that enables KHWASAN. >> >> With CONFIG_KASAN_HW enabled, compiler options are changed to instrument >> kernel files with -fsanitize=hwaddress (except the ones for which >> KASAN_SANITIZE := n is set). >> >> Both CONFIG_KASAN_GENERIC and CONFIG_KASAN_HW support both >> CONFIG_KASAN_INLINE and CONFIG_KASAN_OUTLINE instrumentation modes. >> >> This commit also adds an empty placeholder (for now) implementation of >> KHWASAN-specific hooks inserted by the compiler and adjusts common hooks >> implementation to compile correctly with each of the config options.
>> >> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> >> --- >> arch/arm64/Kconfig | 1 + >> include/linux/compiler-clang.h | 3 +- >> include/linux/compiler-gcc.h | 4 ++ >> include/linux/compiler.h | 3 +- >> include/linux/kasan.h | 16 +++++-- >> lib/Kconfig.kasan | 77 ++++++++++++++++++++++++++-------- >> mm/kasan/Makefile | 6 ++- >> mm/kasan/kasan.h | 3 +- >> mm/kasan/khwasan.c | 75 +++++++++++++++++++++++++++++++++ >> mm/slub.c | 2 +- >> scripts/Makefile.kasan | 27 +++++++++++- >> 11 files changed, 188 insertions(+), 29 deletions(-) >> create mode 100644 mm/kasan/khwasan.c >> >> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig >> index 29e75b47becd..991564148f54 100644 >> --- a/arch/arm64/Kconfig >> +++ b/arch/arm64/Kconfig >> @@ -105,6 +105,7 @@ config ARM64 >> select HAVE_ARCH_HUGE_VMAP >> select HAVE_ARCH_JUMP_LABEL >> select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48) >> + select HAVE_ARCH_KASAN_HW if !(ARM64_16K_PAGES && ARM64_VA_BITS_48) >> select HAVE_ARCH_KGDB >> select HAVE_ARCH_MMAP_RND_BITS >> select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT >> diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h >> index b1ce500fe8b3..2c258a9d4c67 100644 >> --- a/include/linux/compiler-clang.h >> +++ b/include/linux/compiler-clang.h >> @@ -17,11 +17,12 @@ >> #define KASAN_ABI_VERSION 5 >> >> /* emulate gcc's __SANITIZE_ADDRESS__ flag */ >> -#if __has_feature(address_sanitizer) >> +#if __has_feature(address_sanitizer) || __has_feature(hwaddress_sanitizer) >> #define __SANITIZE_ADDRESS__ >> #endif >> >> #define __no_sanitize_address __attribute__((no_sanitize("address"))) >> +#define __no_sanitize_hwaddress __attribute__((no_sanitize("hwaddress"))) > > It seems that it would be better to have just 1 attribute for both types. > Currently __no_sanitize_address is used just in a single place. 
But if > it is ever used more, people will need to always spell both, which looks > unnecessary, or, worse, will only fix asan but forget about khwasan. > > If we do just: > > #define __no_sanitize_address __attribute__((no_sanitize("address", > "hwaddress"))) > > Then we don't need any changes in compiler-gcc.h nor in compiler.h, > and no chance of forgetting one of them. > >> /* >> * Not all versions of clang implement the the type-generic versions >> diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h >> index 763bbad1e258..a186b55c8c4c 100644 >> --- a/include/linux/compiler-gcc.h >> +++ b/include/linux/compiler-gcc.h >> @@ -227,6 +227,10 @@ >> #define __no_sanitize_address >> #endif >> >> +#if !defined(__no_sanitize_hwaddress) >> +#define __no_sanitize_hwaddress /* gcc doesn't support KHWASAN */ >> +#endif >> + >> /* >> * Turn individual warnings and errors on and off locally, depending >> * on version. >> diff --git a/include/linux/compiler.h b/include/linux/compiler.h >> index 681d866efb1e..3f2ba192d57d 100644 >> --- a/include/linux/compiler.h >> +++ b/include/linux/compiler.h >> @@ -195,7 +195,8 @@ void __read_once_size(const volatile void *p, void *res, int size) >> * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67368 >> * '__maybe_unused' allows us to avoid defined-but-not-used warnings.
>> */ >> -# define __no_kasan_or_inline __no_sanitize_address __maybe_unused >> +# define __no_kasan_or_inline __no_sanitize_address __no_sanitize_hwaddress \ >> + __maybe_unused >> #else >> # define __no_kasan_or_inline __always_inline >> #endif >> diff --git a/include/linux/kasan.h b/include/linux/kasan.h >> index 54d577ad2181..beb56a26fe9b 100644 >> --- a/include/linux/kasan.h >> +++ b/include/linux/kasan.h >> @@ -45,8 +45,6 @@ void kasan_free_pages(struct page *page, unsigned int order); >> >> void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, >> slab_flags_t *flags); >> -void kasan_cache_shrink(struct kmem_cache *cache); >> -void kasan_cache_shutdown(struct kmem_cache *cache); >> >> void kasan_poison_slab(struct page *page); >> void kasan_unpoison_object_data(struct kmem_cache *cache, void *object); >> @@ -97,8 +95,6 @@ static inline void kasan_free_pages(struct page *page, unsigned int order) {} >> static inline void kasan_cache_create(struct kmem_cache *cache, >> unsigned int *size, >> slab_flags_t *flags) {} >> -static inline void kasan_cache_shrink(struct kmem_cache *cache) {} >> -static inline void kasan_cache_shutdown(struct kmem_cache *cache) {} >> >> static inline void kasan_poison_slab(struct page *page) {} >> static inline void kasan_unpoison_object_data(struct kmem_cache *cache, >> @@ -152,4 +148,16 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; } >> >> #endif /* CONFIG_KASAN */ >> >> +#ifdef CONFIG_KASAN_GENERIC >> + >> +void kasan_cache_shrink(struct kmem_cache *cache); >> +void kasan_cache_shutdown(struct kmem_cache *cache); >> + >> +#else /* CONFIG_KASAN_GENERIC */ >> + >> +static inline void kasan_cache_shrink(struct kmem_cache *cache) {} >> +static inline void kasan_cache_shutdown(struct kmem_cache *cache) {} >> + >> +#endif /* CONFIG_KASAN_GENERIC */ >> + >> #endif /* LINUX_KASAN_H */ >> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan >> index befb127507c0..5a22629f30e7 100644 >> --- 
a/lib/Kconfig.kasan >> +++ b/lib/Kconfig.kasan >> @@ -1,34 +1,75 @@ >> config HAVE_ARCH_KASAN >> bool >> >> +config HAVE_ARCH_KASAN_HW >> + bool >> + >> if HAVE_ARCH_KASAN >> >> config KASAN >> - bool "KASan: runtime memory debugger" >> + bool "KASAN: runtime memory debugger" >> + help >> + Enables KASAN (KernelAddressSANitizer) - runtime memory debugger, >> + designed to find out-of-bounds accesses and use-after-free bugs. > > Perhaps also give a link to Documentation/dev-tools/kasan.rst while we are here. > >> + >> +choice >> + prompt "KASAN mode" >> + depends on KASAN >> + default KASAN_GENERIC >> + help >> + KASAN has two modes: KASAN (a classic version, similar to userspace > > In these few sentences we refer to the old mode by 3 different terms: > "generic", "classic" and "KASAN" :) > This is somewhat confusing. Let's call it "generic" throughout (here > and in the docs patch). "Generic" as in "supported on multiple arches > and not dependent on hardware features". "Classic" makes sense for > people who knew KASAN before, but for future readers it won't make > sense. > > >> + ASan, enabled with CONFIG_KASAN_GENERIC) and KHWASAN (a version >> + based on pointer tagging, only for arm64, similar to userspace >> + HWASan, enabled with CONFIG_KASAN_HW). >> + >> +config KASAN_GENERIC >> + bool "KASAN: the generic mode" >> depends on (SLUB && SYSFS) || (SLAB && !DEBUG_SLAB) >> select SLUB_DEBUG if SLUB >> select CONSTRUCTORS >> select STACKDEPOT >> help >> - Enables kernel address sanitizer - runtime memory debugger, >> - designed to find out-of-bounds accesses and use-after-free bugs. >> - This is strictly a debugging feature and it requires a gcc version >> - of 4.9.2 or later. Detection of out of bounds accesses to stack or >> - global variables requires gcc 5.0 or later. >> - This feature consumes about 1/8 of available memory and brings about >> - ~x3 performance slowdown. >> + Enables the generic mode of KASAN.
>> + This is strictly a debugging feature and it requires a GCC version >> + of 4.9.2 or later. Detection of out-of-bounds accesses to stack or >> + global variables requires GCC 5.0 or later. >> + This mode consumes about 1/8 of available memory at kernel start >> + and introduces an overhead of ~x1.5 for the rest of the allocations. >> + The performance slowdown is ~x3. >> For better error detection enable CONFIG_STACKTRACE. >> - Currently CONFIG_KASAN doesn't work with CONFIG_DEBUG_SLAB >> + Currently CONFIG_KASAN_GENERIC doesn't work with CONFIG_DEBUG_SLAB >> (the resulting kernel does not boot). >> >> +if HAVE_ARCH_KASAN_HW > > This choice looks somewhat weird on non-arm64. It's kinda a choice > menu, but one can't really choose anything. Should we put the whole > choice under HAVE_ARCH_KASAN_HW, and just select KASAN_GENERIC > otherwise? I don't know what's the practice here. Andrey R? > >> +config KASAN_HW >> + bool "KHWASAN: the hardware assisted mode" Do we need a hyphen here? hardware-assisted? >> + depends on (SLUB && SYSFS) || (SLAB && !DEBUG_SLAB) >> + select SLUB_DEBUG if SLUB >> + select CONSTRUCTORS >> + select STACKDEPOT >> + help >> + Enables KHWASAN (KASAN mode based on pointer tagging). >> + This mode requires Top Byte Ignore support by the CPU and is therefore >> + only supported on arm64. >> + This feature requires clang revision 330044 or later. >> + This mode consumes about 1/16 of available memory at kernel start >> + and introduces an overhead of ~20% for the rest of the allocations. >> + For better error detection enable CONFIG_STACKTRACE. >> + Currently CONFIG_KASAN_HW doesn't work with CONFIG_DEBUG_SLAB >> + (the resulting kernel does not boot).
>> + >> +endif >> + >> +endchoice >> + >> config KASAN_EXTRA >> - bool "KAsan: extra checks" >> - depends on KASAN && DEBUG_KERNEL && !COMPILE_TEST >> + bool "KASAN: extra checks" >> + depends on KASAN_GENERIC && DEBUG_KERNEL && !COMPILE_TEST >> help >> - This enables further checks in the kernel address sanitizer, for now >> - it only includes the address-use-after-scope check that can lead >> - to excessive kernel stack usage, frame size warnings and longer >> - compile time. >> + This enables further checks in KASAN, for now it only includes the >> + address-use-after-scope check that can lead to excessive kernel >> + stack usage, frame size warnings and longer compile time. >> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 has more >> >> >> @@ -53,16 +94,16 @@ config KASAN_INLINE >> memory accesses. This is faster than outline (in some workloads >> it gives about x2 boost over outline instrumentation), but >> make kernel's .text size much bigger. >> - This requires a gcc version of 5.0 or later. >> + For CONFIG_KASAN_GENERIC this requires GCC 5.0 or later. >> >> endchoice >> >> config TEST_KASAN >> - tristate "Module for testing kasan for bug detection" >> + tristate "Module for testing KASAN for bug detection" >> depends on m && KASAN >> help >> This is a test module doing various nasty things like >> out of bounds accesses, use after free. It is useful for testing >> - kernel debugging features like kernel address sanitizer. >> + kernel debugging features like KASAN. 
>> >> endif >> diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile >> index a6df14bffb6b..14955add96d3 100644 >> --- a/mm/kasan/Makefile >> +++ b/mm/kasan/Makefile >> @@ -2,6 +2,7 @@ >> KASAN_SANITIZE := n >> UBSAN_SANITIZE_common.o := n >> UBSAN_SANITIZE_kasan.o := n >> +UBSAN_SANITIZE_khwasan.o := n >> KCOV_INSTRUMENT := n >> >> CFLAGS_REMOVE_kasan.o = -pg >> @@ -10,5 +11,8 @@ CFLAGS_REMOVE_kasan.o = -pg >> >> CFLAGS_common.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) >> CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) >> +CFLAGS_khwasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) >> >> -obj-y := common.o kasan.o report.o kasan_init.o quarantine.o >> +obj-$(CONFIG_KASAN) := common.o kasan_init.o report.o >> +obj-$(CONFIG_KASAN_GENERIC) += kasan.o quarantine.o >> +obj-$(CONFIG_KASAN_HW) += khwasan.o >> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h >> index 659463800f10..19b950eaccff 100644 >> --- a/mm/kasan/kasan.h >> +++ b/mm/kasan/kasan.h >> @@ -114,7 +114,8 @@ void kasan_report(unsigned long addr, size_t size, >> bool is_write, unsigned long ip); >> void kasan_report_invalid_free(void *object, unsigned long ip); >> >> -#if defined(CONFIG_SLAB) || defined(CONFIG_SLUB) >> +#if defined(CONFIG_KASAN_GENERIC) && \ >> + (defined(CONFIG_SLAB) || defined(CONFIG_SLUB)) >> void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache); >> void quarantine_reduce(void); >> void quarantine_remove_cache(struct kmem_cache *cache); >> diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c >> new file mode 100644 >> index 000000000000..e2c3a7f7fd1f >> --- /dev/null >> +++ b/mm/kasan/khwasan.c >> @@ -0,0 +1,75 @@ >> +/* >> + * This file contains core KHWASAN code. >> + * >> + * Copyright (c) 2018 Google, Inc. 
>> + * Author: Andrey Konovalov <andreyknvl@google.com> >> + * >> + * This program is free software; you can redistribute it and/or modify >> + * it under the terms of the GNU General Public License version 2 as >> + * published by the Free Software Foundation. >> + * >> + */ >> + >> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt >> +#define DISABLE_BRANCH_PROFILING >> + >> +#include <linux/export.h> >> +#include <linux/interrupt.h> >> +#include <linux/init.h> >> +#include <linux/kasan.h> >> +#include <linux/kernel.h> >> +#include <linux/kmemleak.h> >> +#include <linux/linkage.h> >> +#include <linux/memblock.h> >> +#include <linux/memory.h> >> +#include <linux/mm.h> >> +#include <linux/module.h> >> +#include <linux/printk.h> >> +#include <linux/random.h> >> +#include <linux/sched.h> >> +#include <linux/sched/task_stack.h> >> +#include <linux/slab.h> >> +#include <linux/stacktrace.h> >> +#include <linux/string.h> >> +#include <linux/types.h> >> +#include <linux/vmalloc.h> >> +#include <linux/bug.h> >> + >> +#include "kasan.h" >> +#include "../slab.h" >> + >> +void check_memory_region(unsigned long addr, size_t size, bool write, >> + unsigned long ret_ip) >> +{ >> +} >> + >> +#define DEFINE_HWASAN_LOAD_STORE(size) \ >> + void __hwasan_load##size##_noabort(unsigned long addr) \ >> + { \ >> + } \ >> + EXPORT_SYMBOL(__hwasan_load##size##_noabort); \ >> + void __hwasan_store##size##_noabort(unsigned long addr) \ >> + { \ >> + } \ >> + EXPORT_SYMBOL(__hwasan_store##size##_noabort) >> + >> +DEFINE_HWASAN_LOAD_STORE(1); >> +DEFINE_HWASAN_LOAD_STORE(2); >> +DEFINE_HWASAN_LOAD_STORE(4); >> +DEFINE_HWASAN_LOAD_STORE(8); >> +DEFINE_HWASAN_LOAD_STORE(16); >> + >> +void __hwasan_loadN_noabort(unsigned long addr, unsigned long size) >> +{ >> +} >> +EXPORT_SYMBOL(__hwasan_loadN_noabort); >> + >> +void __hwasan_storeN_noabort(unsigned long addr, unsigned long size) >> +{ >> +} >> +EXPORT_SYMBOL(__hwasan_storeN_noabort); >> + >> +void __hwasan_tag_memory(unsigned long addr, u8 tag, 
unsigned long size) >> +{ >> +} >> +EXPORT_SYMBOL(__hwasan_tag_memory); >> diff --git a/mm/slub.c b/mm/slub.c >> index 30b9bf777bab..4206e1b616e7 100644 >> --- a/mm/slub.c >> +++ b/mm/slub.c >> @@ -2955,7 +2955,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page, >> do_slab_free(s, page, head, tail, cnt, addr); >> } >> >> -#ifdef CONFIG_KASAN >> +#ifdef CONFIG_KASAN_GENERIC >> void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr) >> { >> do_slab_free(cache, virt_to_head_page(x), x, NULL, 1, addr); >> diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan >> index 69552a39951d..49c6e056c697 100644 >> --- a/scripts/Makefile.kasan >> +++ b/scripts/Makefile.kasan >> @@ -1,5 +1,5 @@ >> # SPDX-License-Identifier: GPL-2.0 >> -ifdef CONFIG_KASAN >> +ifdef CONFIG_KASAN_GENERIC >> ifdef CONFIG_KASAN_INLINE >> call_threshold := 10000 >> else >> @@ -42,6 +42,29 @@ ifdef CONFIG_KASAN_EXTRA >> CFLAGS_KASAN += $(call cc-option, -fsanitize-address-use-after-scope) >> endif >> >> -CFLAGS_KASAN_NOSANITIZE := -fno-builtin >> +endif >> + >> +ifdef CONFIG_KASAN_HW >> + >> +ifdef CONFIG_KASAN_INLINE >> + instrumentation_flags := -mllvm -hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET) >> +else >> + instrumentation_flags := -mllvm -hwasan-instrument-with-calls=1 >> +endif >> >> +CFLAGS_KASAN := -fsanitize=kernel-hwaddress \ >> + -mllvm -hwasan-instrument-stack=0 \ >> + $(instrumentation_flags) >> + >> +ifeq ($(call cc-option, $(CFLAGS_KASAN) -Werror),) >> + ifneq ($(CONFIG_COMPILE_TEST),y) >> + $(warning Cannot use CONFIG_KASAN_HW: \ >> + -fsanitize=hwaddress is not supported by compiler) >> + endif >> +endif >> + >> +endif >> + >> +ifdef CONFIG_KASAN >> +CFLAGS_KASAN_NOSANITIZE := -fno-builtin >> endif >> -- >> 2.19.0.rc0.228.g281dcd1b4d0-goog >> ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 03/18] khwasan: add CONFIG_KASAN_GENERIC and CONFIG_KASAN_HW 2018-09-12 14:47 ` [PATCH v6 03/18] khwasan: add CONFIG_KASAN_GENERIC and CONFIG_KASAN_HW Dmitry Vyukov 2018-09-12 14:51 ` Dmitry Vyukov @ 2018-09-17 18:42 ` Andrey Konovalov 1 sibling, 0 replies; 34+ messages in thread From: Andrey Konovalov @ 2018-09-17 18:42 UTC (permalink / raw) To: Dmitry Vyukov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Sep 12, 2018 at 4:47 PM, Dmitry Vyukov <dvyukov@google.com> wrote: > On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: >> >> #define __no_sanitize_address __attribute__((no_sanitize("address"))) >> +#define __no_sanitize_hwaddress __attribute__((no_sanitize("hwaddress"))) > > It seems that it would be better to have just 1 attribute for both types. > Currently __no_sanitize_address is used just in a single place. But if > it is ever used more, people will need to always spell both, which looks > unnecessary, or, worse, will only fix asan but forget about khwasan. > > If we do just: > > #define __no_sanitize_address __attribute__((no_sanitize("address", > "hwaddress"))) > > Then we don't need any changes in compiler-gcc.h nor in compiler.h, > and no chance of forgetting one of them. Will do in v7. >> config KASAN >> - bool "KASan: runtime memory debugger" >> + bool "KASAN: runtime memory debugger" >> + help >> + Enables KASAN (KernelAddressSANitizer) - runtime memory debugger, >> + designed to find out-of-bounds accesses and use-after-free bugs. > > Perhaps also give a link to Documentation/dev-tools/kasan.rst while we are here. Will do in v7.
> >> + >> +choice >> + prompt "KASAN mode" >> + depends on KASAN >> + default KASAN_GENERIC >> + help >> + KASAN has two modes: KASAN (a classic version, similar to userspace > > In these few sentences we refer to the old mode by 3 different terms: > "generic", "classic" and "KASAN" :) > This is somewhat confusing. Let's call it "generic" throughout (here > and in the docs patch). "Generic" as in "supported on multiple arches > and not dependent on hardware features". "Classic" makes sense for > people who knew KASAN before, but for future readers it won't make > sense. Will use "generic" in v7. >> >> +if HAVE_ARCH_KASAN_HW > > This choice looks somewhat weird on non-arm64. It's kinda a choice > menu, but one can't really choose anything. Should we put the whole > choice under HAVE_ARCH_KASAN_HW, and just select KASAN_GENERIC > otherwise? I don't know what's the practice here. Andrey R? I think having one option that is auto-selected is fine. >> +config KASAN_HW >> + bool "KHWASAN: the hardware assisted mode" Do we need a hyphen here? hardware-assisted? Yes, will fix in v7. ^ permalink raw reply [flat|nested] 34+ messages in thread
[parent not found: <b4ba65afa55f2fdfd2856fb03c5aba99c7a8bdd7.1535462971.git.andreyknvl@google.com>]
* Re: [PATCH v6 04/18] khwasan, arm64: adjust shadow size for CONFIG_KASAN_HW [not found] ` <b4ba65afa55f2fdfd2856fb03c5aba99c7a8bdd7.1535462971.git.andreyknvl@google.com> @ 2018-09-12 14:54 ` Dmitry Vyukov 2018-09-19 17:27 ` Andrey Konovalov 0 siblings, 1 reply; 34+ messages in thread From: Dmitry Vyukov @ 2018-09-12 14:54 UTC (permalink / raw) To: Andrey Konovalov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: > KHWASAN uses 1 shadow byte for 16 bytes of kernel memory, so it requires > 1/16th of the kernel virtual address space for the shadow memory. > > This commit sets KASAN_SHADOW_SCALE_SHIFT to 4 when KHWASAN is enabled. > > Signed-off-by: Andrey Konovalov <andreyknvl@google.com> > --- > arch/arm64/Makefile | 2 +- > arch/arm64/include/asm/memory.h | 13 +++++++++---- > 2 files changed, 10 insertions(+), 5 deletions(-) > > diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile > index 106039d25e2f..17047b8ab984 100644 > --- a/arch/arm64/Makefile > +++ b/arch/arm64/Makefile > @@ -94,7 +94,7 @@ endif > # KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT)) > # - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT)) > # in 32-bit arithmetic > -KASAN_SHADOW_SCALE_SHIFT := 3 > +KASAN_SHADOW_SCALE_SHIFT := $(if $(CONFIG_KASAN_HW), 4, 3) > KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \ > (0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \ > + (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \ > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h > index b96442960aea..f5e262ee76c1 100644 > --- a/arch/arm64/include/asm/memory.h > +++
b/arch/arm64/include/asm/memory.h > @@ -74,12 +74,17 @@ > #define KERNEL_END _end > > /* > - * KASAN requires 1/8th of the kernel virtual address space for the shadow > - * region. KASAN can bloat the stack significantly, so double the (minimum) > - * stack size when KASAN is in use. > + * KASAN and KHWASAN require 1/8th and 1/16th of the kernel virtual address I am somewhat confused by the terminology. "KASAN" is not actually "CONFIG_KASAN" below, it is actually "CONFIG_KASAN_GENERIC". While "KHWASAN" translates to "KASAN_HW" a few lines later. I think we need some consistent terminology for comments and config names before it's too late. > + * space for the shadow region respectively. They can bloat the stack > + * significantly, so double the (minimum) stack size when they are in use. > */ > -#ifdef CONFIG_KASAN > +#ifdef CONFIG_KASAN_GENERIC > #define KASAN_SHADOW_SCALE_SHIFT 3 > +#endif > +#ifdef CONFIG_KASAN_HW > +#define KASAN_SHADOW_SCALE_SHIFT 4 > +#endif > +#ifdef CONFIG_KASAN > #define KASAN_SHADOW_SIZE (UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT)) > #define KASAN_THREAD_SHIFT 1 > #else > -- > 2.19.0.rc0.228.g281dcd1b4d0-goog > ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 04/18] khwasan, arm64: adjust shadow size for CONFIG_KASAN_HW 2018-09-12 14:54 ` [PATCH v6 04/18] khwasan, arm64: adjust shadow size for CONFIG_KASAN_HW Dmitry Vyukov @ 2018-09-19 17:27 ` Andrey Konovalov 0 siblings, 0 replies; 34+ messages in thread From: Andrey Konovalov @ 2018-09-19 17:27 UTC (permalink / raw) To: Dmitry Vyukov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Sep 12, 2018 at 4:54 PM, Dmitry Vyukov <dvyukov@google.com> wrote: > On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: >> /* >> - * KASAN requires 1/8th of the kernel virtual address space for the shadow >> - * region. KASAN can bloat the stack significantly, so double the (minimum) >> - * stack size when KASAN is in use. >> + * KASAN and KHWASAN require 1/8th and 1/16th of the kernel virtual address > > > I am somewhat confused by the terminology. > "KASAN" is not actually "CONFIG_KASAN" below, it is actually > "CONFIG_KASAN_GENERIC". While "KHWASAN" translates to "KASAN_HW" few > lines later. > I think we need some consistent terminology for comments and config > names until it's too late. > As per offline discussion will rename in v7. ^ permalink raw reply [flat|nested] 34+ messages in thread
[parent not found: <6cd298a90d02068969713f2fd440eae21227467b.1535462971.git.andreyknvl@google.com>]
* Re: [PATCH v6 07/18] khwasan: add tag related helper functions [not found] ` <6cd298a90d02068969713f2fd440eae21227467b.1535462971.git.andreyknvl@google.com> @ 2018-09-12 16:21 ` Dmitry Vyukov 2018-09-17 18:59 ` Andrey Konovalov 0 siblings, 1 reply; 34+ messages in thread From: Dmitry Vyukov @ 2018-09-12 16:21 UTC (permalink / raw) To: Andrey Konovalov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: > This commit adds a few helper functions, that are meant to be used to > work with tags embedded in the top byte of kernel pointers: to set, to > get or to reset (set to 0xff) the top byte. > > Signed-off-by: Andrey Konovalov <andreyknvl@google.com> > --- > arch/arm64/mm/kasan_init.c | 2 ++ > include/linux/kasan.h | 29 +++++++++++++++++ > mm/kasan/kasan.h | 55 ++++++++++++++++++++++++++++++++ > mm/kasan/khwasan.c | 65 ++++++++++++++++++++++++++++++++++++++ > 4 files changed, 151 insertions(+) > > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c > index 7a31e8ccbad2..e7f37c0b7e14 100644 > --- a/arch/arm64/mm/kasan_init.c > +++ b/arch/arm64/mm/kasan_init.c > @@ -250,6 +250,8 @@ void __init kasan_init(void) > memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE); > cpu_replace_ttbr1(lm_alias(swapper_pg_dir)); > > + khwasan_init(); > + > /* At this point kasan is fully initialized. 
Enable error messages */ > init_task.kasan_depth = 0; > pr_info("KernelAddressSanitizer initialized\n"); > diff --git a/include/linux/kasan.h b/include/linux/kasan.h > index 1c31bb089154..1f852244e739 100644 > --- a/include/linux/kasan.h > +++ b/include/linux/kasan.h > @@ -166,6 +166,35 @@ static inline void kasan_cache_shutdown(struct kmem_cache *cache) {} > > #define KASAN_SHADOW_INIT 0xFF > > +void khwasan_init(void); > + > +void *khwasan_reset_tag(const void *addr); > + > +void *khwasan_preset_slub_tag(struct kmem_cache *cache, const void *addr); > +void *khwasan_preset_slab_tag(struct kmem_cache *cache, unsigned int idx, > + const void *addr); > + > +#else /* CONFIG_KASAN_HW */ > + > +static inline void khwasan_init(void) { } > + > +static inline void *khwasan_reset_tag(const void *addr) > +{ > + return (void *)addr; > +} > + > +static inline void *khwasan_preset_slub_tag(struct kmem_cache *cache, > + const void *addr) > +{ > + return (void *)addr; > +} > + > +static inline void *khwasan_preset_slab_tag(struct kmem_cache *cache, > + unsigned int idx, const void *addr) > +{ > + return (void *)addr; > +} > + > #endif /* CONFIG_KASAN_HW */ > > #endif /* LINUX_KASAN_H */ > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h > index 19b950eaccff..a7cc27d96608 100644 > --- a/mm/kasan/kasan.h > +++ b/mm/kasan/kasan.h > @@ -8,6 +8,10 @@ > #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT) > #define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1) > > +#define KHWASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */ > +#define KHWASAN_TAG_INVALID 0xFE /* inaccessible memory tag */ > +#define KHWASAN_TAG_MAX 0xFD /* maximum value for random tags */ > + > #define KASAN_FREE_PAGE 0xFF /* page was freed */ > #define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */ > #define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */ > @@ -126,6 +130,57 @@ static inline void quarantine_reduce(void) { } > static inline void 
quarantine_remove_cache(struct kmem_cache *cache) { } > #endif > > +#ifdef CONFIG_KASAN_HW > + > +#define KHWASAN_TAG_SHIFT 56 > +#define KHWASAN_TAG_MASK (0xFFUL << KHWASAN_TAG_SHIFT) > + > +u8 random_tag(void); > + > +static inline void *set_tag(const void *addr, u8 tag) > +{ > + u64 a = (u64)addr; > + > + a &= ~KHWASAN_TAG_MASK; > + a |= ((u64)tag << KHWASAN_TAG_SHIFT); > + > + return (void *)a; > +} > + > +static inline u8 get_tag(const void *addr) > +{ > + return (u8)((u64)addr >> KHWASAN_TAG_SHIFT); > +} > + > +static inline void *reset_tag(const void *addr) > +{ > + return set_tag(addr, KHWASAN_TAG_KERNEL); > +} > + > +#else /* CONFIG_KASAN_HW */ > + > +static inline u8 random_tag(void) > +{ > + return 0; > +} > + > +static inline void *set_tag(const void *addr, u8 tag) > +{ > + return (void *)addr; > +} > + > +static inline u8 get_tag(const void *addr) > +{ > + return 0; > +} > + > +static inline void *reset_tag(const void *addr) > +{ > + return (void *)addr; > +} > + > +#endif /* CONFIG_KASAN_HW */ > + > /* > * Exported functions for interfaces called from assembly or from generated > * code. Declarations here to avoid warning about missing declarations. > diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c > index e2c3a7f7fd1f..9d91bf3c8246 100644 > --- a/mm/kasan/khwasan.c > +++ b/mm/kasan/khwasan.c > @@ -38,6 +38,71 @@ > #include "kasan.h" > #include "../slab.h" > > +static DEFINE_PER_CPU(u32, prng_state); > + > +void khwasan_init(void) > +{ > + int cpu; > + > + for_each_possible_cpu(cpu) > + per_cpu(prng_state, cpu) = get_random_u32(); > +} > + > +/* > + * If a preemption happens between this_cpu_read and this_cpu_write, the only > + * side effect is that we'll give a few objects allocated in different > + * contexts the same tag. Since KHWASAN is meant to be used as a probabilistic > + * bug-detection debug feature, this doesn't have significant negative impact.
> + * > + * Ideally the tags use strong randomness to prevent any attempts to predict > + * them during explicit exploit attempts. But strong randomness is expensive, > + * and we made an intentional trade-off to use a PRNG. This non-atomic RMW > + * sequence has in fact a positive effect, since interrupts that randomly skew > + * the PRNG at unpredictable points do only good. > + */ > +u8 random_tag(void) > +{ > + u32 state = this_cpu_read(prng_state); > + > + state = 1664525 * state + 1013904223; > + this_cpu_write(prng_state, state); > + > + return (u8)(state % (KHWASAN_TAG_MAX + 1)); > +} > + > +void *khwasan_reset_tag(const void *addr) > +{ > + return reset_tag(addr); > +} > + > +void *khwasan_preset_slub_tag(struct kmem_cache *cache, const void *addr) Can't we do this in the existing kasan_init_slab_obj() hook? It looks like it should do exactly this -- allow any one-time initialization for objects. We could extend it to accept an index and return a new pointer. If that does not work for some reason, I would try to at least unify the hook for slab/slub, e.g. pass idx=-1 from slub and then use random_tag(). It also seems that we preset the tag for slab multiple times (from slab_get_obj()). Using kasan_init_slab_obj() should resolve this too (hopefully we don't call it multiple times). > +{ > + /* > + * Since it's desirable to only call object constructors once during > + * slab allocation, we preassign tags to all such objects. > + * Also preassign tags for SLAB_TYPESAFE_BY_RCU slabs to avoid > + * use-after-free reports. > + */ > + if (cache->ctor || cache->flags & SLAB_TYPESAFE_BY_RCU) > + return set_tag(addr, random_tag()); > + return (void *)addr; > +} > + > +void *khwasan_preset_slab_tag(struct kmem_cache *cache, unsigned int idx, > + const void *addr) > +{ > + /* > + * See comment in khwasan_preset_slub_tag. > + * For the SLAB allocator we can't preassign tags randomly since the > + * freelist is stored as an array of indexes instead of a linked > + * list.
Assign tags based on object indexes, so that objects that > + * are next to each other get different tags. > + */ > + if (cache->ctor || cache->flags & SLAB_TYPESAFE_BY_RCU) > + return set_tag(addr, (u8)idx); > + return (void *)addr; > +} > + > void check_memory_region(unsigned long addr, size_t size, bool write, > unsigned long ret_ip) > { > -- > 2.19.0.rc0.228.g281dcd1b4d0-goog > ^ permalink raw reply [flat|nested] 34+ messages in thread
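For readers following the thread: the tag helpers and the per-CPU PRNG in the patch above reduce to top-byte arithmetic plus a linear congruential generator. A stand-alone user-space sketch, with plain uint64_t values in place of kernel pointers, a single global in place of the per-CPU state, and KHWASAN_TAG_KERNEL/KHWASAN_TAG_MAX assumed to be 0xFF/0xFD (the patch hunk does not show those constants):

```c
#include <stdint.h>

#define TAG_SHIFT  56
#define TAG_MASK   (0xFFULL << TAG_SHIFT)
#define TAG_KERNEL 0xFFu  /* assumed value of KHWASAN_TAG_KERNEL */
#define TAG_MAX    0xFDu  /* assumed value of KHWASAN_TAG_MAX */

static uint32_t prng_state;  /* per-CPU in the kernel; one global here */

/* Mirror of set_tag(): place an 8-bit tag in the top byte of the address. */
static uint64_t set_tag(uint64_t addr, uint8_t tag)
{
	return (addr & ~TAG_MASK) | ((uint64_t)tag << TAG_SHIFT);
}

/* Mirror of get_tag(): read the tag back out of the top byte. */
static uint8_t get_tag(uint64_t addr)
{
	return (uint8_t)(addr >> TAG_SHIFT);
}

/* Mirror of reset_tag(): restore the canonical kernel tag. */
static uint64_t reset_tag(uint64_t addr)
{
	return set_tag(addr, TAG_KERNEL);
}

/* Mirror of random_tag(): one LCG step with the constants from the patch. */
static uint8_t random_tag(void)
{
	prng_state = 1664525u * prng_state + 1013904223u;
	return (uint8_t)(prng_state % (TAG_MAX + 1));
}

/* Helper so the LCG step can be exercised deterministically. */
static uint8_t tag_from_seed(uint32_t seed)
{
	prng_state = seed;
	return random_tag();
}
```

The non-atomic read-modify-write on prng_state is the benign race the patch comment refers to: a preemption between the read and the write merely hands two allocations the same tag.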
* Re: [PATCH v6 07/18] khwasan: add tag related helper functions 2018-09-12 16:21 ` [PATCH v6 07/18] khwasan: add tag related helper functions Dmitry Vyukov @ 2018-09-17 18:59 ` Andrey Konovalov 2018-09-18 15:45 ` Dmitry Vyukov 0 siblings, 1 reply; 34+ messages in thread From: Andrey Konovalov @ 2018-09-17 18:59 UTC (permalink / raw) To: Dmitry Vyukov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Sep 12, 2018 at 6:21 PM, Dmitry Vyukov <dvyukov@google.com> wrote: > On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: >> +void *khwasan_preset_slub_tag(struct kmem_cache *cache, const void *addr) > > Can't we do this in the existing kasan_init_slab_obj() hook? It looks > like it should do exactly this -- allow any one-time initialization > for objects. We could extend it to accept index and return a new > pointer. > If that does not work for some reason, I would try to at least unify > the hook for slab/slub, e.g. pass idx=-1 from slub and then use > random_tag(). > It also seems that we do preset tag for slab multiple times (from > slab_get_obj()). Using kasan_init_slab_obj() should resolve this too > (hopefully we don't call it multiple times). The issue is that SLAB stores freelist as an array of indexes instead of using an actual linked list like SLUB. So you can't store the tag in the pointer while the object is in the freelist, since there's no pointer. And, technically, we don't preset tags for SLAB, we just use the id as the tag every time a pointer is used, so perhaps we should rename the callback. As to unifying the callbacks, sure, we can do that. ^ permalink raw reply [flat|nested] 34+ messages in thread
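The scheme Andrey describes — deriving the tag from the object's index rather than storing it anywhere — can be sketched in isolation (hypothetical stand-alone function; s_mem and size stand in for the struct page and kmem_cache fields):

```c
#include <stdint.h>

#define TAG_SHIFT 56
#define TAG_MASK  (0xFFULL << TAG_SHIFT)

/* Mirror of index_to_obj() + khwasan_preset_slab_tag(): the object address
 * is computed from the index, and the index itself doubles as the tag, so
 * neighbouring objects always carry different tags. */
static uint64_t index_to_tagged_obj(uint64_t s_mem, uint32_t size, uint32_t idx)
{
	uint64_t obj = s_mem + (uint64_t)size * idx;

	return (obj & ~TAG_MASK) | ((uint64_t)(uint8_t)idx << TAG_SHIFT);
}
```

Because the tag is recomputed on every index_to_obj() call, nothing has to be stored in SLAB's index-based freelist — which is exactly why the random-tag approach used for SLUB does not carry over.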
* Re: [PATCH v6 07/18] khwasan: add tag related helper functions 2018-09-17 18:59 ` Andrey Konovalov @ 2018-09-18 15:45 ` Dmitry Vyukov 0 siblings, 0 replies; 34+ messages in thread From: Dmitry Vyukov @ 2018-09-18 15:45 UTC (permalink / raw) To: Andrey Konovalov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Mon, Sep 17, 2018 at 8:59 PM, Andrey Konovalov <andreyknvl@google.com> wrote: > On Wed, Sep 12, 2018 at 6:21 PM, Dmitry Vyukov <dvyukov@google.com> wrote: >> On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: > >>> +void *khwasan_preset_slub_tag(struct kmem_cache *cache, const void *addr) >> >> Can't we do this in the existing kasan_init_slab_obj() hook? It looks >> like it should do exactly this -- allow any one-time initialization >> for objects. We could extend it to accept index and return a new >> pointer. >> If that does not work for some reason, I would try to at least unify >> the hook for slab/slub, e.g. pass idx=-1 from slub and then use >> random_tag(). >> It also seems that we do preset tag for slab multiple times (from >> slab_get_obj()). Using kasan_init_slab_obj() should resolve this too >> (hopefully we don't call it multiple times). > > The issue is that SLAB stores freelist as an array of indexes instead > of using an actual linked list like SLUB. So you can't store the tag > in the pointer while the object is in the freelist, since there's no > pointer. And, technically, we don't preset tags for SLAB, we just use > the id as the tag every time a pointer is used, so perhaps we should > rename the callback. As to unifying the callbacks, sure, we can do > that. 
As per offline discussion: potentially we can use kasan_init_slab_obj() if we add tag in kmalloc hook by using obj_to_idx(). ^ permalink raw reply [flat|nested] 34+ messages in thread
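The direction agreed on here — one kasan_init_slab_obj()-style hook serving both allocators — could look roughly like the following. This is an entirely hypothetical signature, not code from the series: idx < 0 requests a random tag (the SLUB path Dmitry suggested), a non-negative idx gives the index-derived SLAB tag.

```c
#include <stdint.h>

#define TAG_SHIFT 56
#define TAG_MASK  (0xFFULL << TAG_SHIFT)

/* Stand-in for random_tag(); fixed so the sketch is deterministic. */
static uint8_t stub_random_tag(void)
{
	return 0x2a;
}

/* Hypothetical unified hook: SLUB passes idx == -1 and gets a random tag,
 * SLAB passes the object index and gets an index-derived tag. Objects in
 * caches that need no preset (no constructor, no SLAB_TYPESAFE_BY_RCU)
 * are left untouched, matching the preset logic in the patch. */
static uint64_t init_slab_obj(int needs_preset, int idx, uint64_t addr)
{
	uint8_t tag;

	if (!needs_preset)
		return addr;
	tag = (idx < 0) ? stub_random_tag() : (uint8_t)idx;
	return (addr & ~TAG_MASK) | ((uint64_t)tag << TAG_SHIFT);
}
```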
[parent not found: <19d757c2cafc277f0143a8ac34e179061f3487f5.1535462971.git.andreyknvl@google.com>]
* Re: [PATCH v6 06/18] khwasan, arm64: untag virt address in __kimg_to_phys and _virt_addr_is_linear [not found] ` <19d757c2cafc277f0143a8ac34e179061f3487f5.1535462971.git.andreyknvl@google.com> @ 2018-09-12 16:33 ` Dmitry Vyukov 2018-09-18 17:09 ` Andrey Konovalov 0 siblings, 1 reply; 34+ messages in thread From: Dmitry Vyukov @ 2018-09-12 16:33 UTC (permalink / raw) To: Andrey Konovalov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: > __kimg_to_phys (which is used by virt_to_phys) and _virt_addr_is_linear > (which is used by virt_addr_valid) assume that the top byte of the address > is 0xff, which isn't always the case with KHWASAN enabled. > > The solution is to reset the tag in those macros. > > Signed-off-by: Andrey Konovalov <andreyknvl@google.com> > --- > arch/arm64/include/asm/memory.h | 18 ++++++++++++++++++ > 1 file changed, 18 insertions(+) > > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h > index f5e262ee76c1..f5e2953b7009 100644 > --- a/arch/arm64/include/asm/memory.h > +++ b/arch/arm64/include/asm/memory.h > @@ -92,6 +92,13 @@ > #define KASAN_THREAD_SHIFT 0 > #endif > > +#ifdef CONFIG_KASAN_HW > +#define KASAN_TAG_SHIFTED(tag) ((unsigned long)(tag) << 56) > +#define KASAN_SET_TAG(addr, tag) (((addr) & ~KASAN_TAG_SHIFTED(0xff)) | \ > + KASAN_TAG_SHIFTED(tag)) > +#define KASAN_RESET_TAG(addr) KASAN_SET_TAG(addr, 0xff) > +#endif > + Wouldn't it be better to #define KASAN_RESET_TAG(addr) addr when CONFIG_KASAN_HW is not enabled, and then not duplicate the macros below? That's what we do in kasan.h for all hooks. 
I see that a subsequent patch duplicates yet another macro in this file. While we could use: #define __kimg_to_phys(addr) (KASAN_RESET_TAG(addr) - kimage_voffset) with and without kasan. Duplicating them increases risk that somebody will change only the non-kasan version but forget kasan version. > #define MIN_THREAD_SHIFT (14 + KASAN_THREAD_SHIFT) > > /* > @@ -232,7 +239,12 @@ static inline unsigned long kaslr_offset(void) > #define __is_lm_address(addr) (!!((addr) & BIT(VA_BITS - 1))) > > #define __lm_to_phys(addr) (((addr) & ~PAGE_OFFSET) + PHYS_OFFSET) > + > +#ifdef CONFIG_KASAN_HW > +#define __kimg_to_phys(addr) (KASAN_RESET_TAG(addr) - kimage_voffset) > +#else > #define __kimg_to_phys(addr) ((addr) - kimage_voffset) > +#endif > > #define __virt_to_phys_nodebug(x) ({ \ > phys_addr_t __x = (phys_addr_t)(x); \ > @@ -308,7 +320,13 @@ static inline void *phys_to_virt(phys_addr_t x) > #endif > #endif > > +#ifdef CONFIG_KASAN_HW > +#define _virt_addr_is_linear(kaddr) (KASAN_RESET_TAG((u64)(kaddr)) >= \ > + PAGE_OFFSET) > +#else > #define _virt_addr_is_linear(kaddr) (((u64)(kaddr)) >= PAGE_OFFSET) > +#endif > + > #define virt_addr_valid(kaddr) (_virt_addr_is_linear(kaddr) && \ > _virt_addr_valid(kaddr)) > > -- > 2.19.0.rc0.228.g281dcd1b4d0-goog > ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 06/18] khwasan, arm64: untag virt address in __kimg_to_phys and _virt_addr_is_linear 2018-09-12 16:33 ` [PATCH v6 06/18] khwasan, arm64: untag virt address in __kimg_to_phys and _virt_addr_is_linear Dmitry Vyukov @ 2018-09-18 17:09 ` Andrey Konovalov 0 siblings, 0 replies; 34+ messages in thread From: Andrey Konovalov @ 2018-09-18 17:09 UTC (permalink / raw) To: Dmitry Vyukov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Sep 12, 2018 at 6:33 PM, Dmitry Vyukov <dvyukov@google.com> wrote: > On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: >> +#ifdef CONFIG_KASAN_HW >> +#define KASAN_TAG_SHIFTED(tag) ((unsigned long)(tag) << 56) >> +#define KASAN_SET_TAG(addr, tag) (((addr) & ~KASAN_TAG_SHIFTED(0xff)) | \ >> + KASAN_TAG_SHIFTED(tag)) >> +#define KASAN_RESET_TAG(addr) KASAN_SET_TAG(addr, 0xff) >> +#endif >> + > > > Wouldn't it be better to > #define KASAN_RESET_TAG(addr) addr > when CONFIG_KASAN_HW is not enabled, and then not duplicate the macros > below? That's what we do in kasan.h for all hooks. > I see that a subsequent patch duplicates yet another macro in this > file. While we could use: > > #define __kimg_to_phys(addr) (KASAN_RESET_TAG(addr) - kimage_voffset) > > with and without kasan. Duplicating them increases risk that somebody > will change only the non-kasan version but forget kasan version. Will do in v7. ^ permalink raw reply [flat|nested] 34+ messages in thread
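Dmitry's suggestion amounts to pushing the #ifdef into the macro definition so each call site is written only once. A minimal user-space sketch of the pattern, compiled here with CONFIG_KASAN_HW left undefined so KASAN_RESET_TAG collapses to the identity:

```c
#include <stdint.h>

#ifdef CONFIG_KASAN_HW
#define KASAN_TAG_SHIFTED(tag)   ((uint64_t)(tag) << 56)
#define KASAN_SET_TAG(addr, tag) (((addr) & ~KASAN_TAG_SHIFTED(0xff)) | \
				  KASAN_TAG_SHIFTED(tag))
#define KASAN_RESET_TAG(addr)    KASAN_SET_TAG(addr, 0xff)
#else
/* With the feature off the macro is a no-op, so call sites need no #ifdef. */
#define KASAN_RESET_TAG(addr)    (addr)
#endif

static uint64_t kimage_voffset = 0x1000;  /* placeholder value */

/* A single definition now serves both configurations. */
#define __kimg_to_phys(addr)     (KASAN_RESET_TAG(addr) - kimage_voffset)
```

The same shape works for _virt_addr_is_linear(), removing the duplicated kasan/non-kasan variants that the review worries could drift apart.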
[parent not found: <95b5beb7ec13b7e998efe84c9a7a5c1fa49a9fe3.1535462971.git.andreyknvl@google.com>]
* Re: [PATCH v6 08/18] khwasan: preassign tags to objects with ctors or SLAB_TYPESAFE_BY_RCU [not found] ` <95b5beb7ec13b7e998efe84c9a7a5c1fa49a9fe3.1535462971.git.andreyknvl@google.com> @ 2018-09-12 16:36 ` Dmitry Vyukov 2018-09-18 16:50 ` Andrey Konovalov 0 siblings, 1 reply; 34+ messages in thread From: Dmitry Vyukov @ 2018-09-12 16:36 UTC (permalink / raw) To: Andrey Konovalov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: > An object constructor can initialize pointers within these objects based on > the address of the object. Since the object address might be tagged, we > need to assign a tag before calling the constructor. > > The implemented approach is to assign tags to objects with constructors > when a slab is allocated and call constructors once as usual. The > downside is that such objects would always have the same tag when > reallocated, so we won't catch use-after-frees on them. > > Also preassign tags for objects from SLAB_TYPESAFE_BY_RCU caches, since > they can be validly accessed after having been freed.
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com> > --- > mm/slab.c | 6 +++++- > mm/slub.c | 4 ++++ > 2 files changed, 9 insertions(+), 1 deletion(-) > > diff --git a/mm/slab.c b/mm/slab.c > index 6fdca9ec2ea4..3b4227059f2e 100644 > --- a/mm/slab.c > +++ b/mm/slab.c > @@ -403,7 +403,11 @@ static inline struct kmem_cache *virt_to_cache(const void *obj) > static inline void *index_to_obj(struct kmem_cache *cache, struct page *page, > unsigned int idx) > { > - return page->s_mem + cache->size * idx; > + void *obj; > + > + obj = page->s_mem + cache->size * idx; > + obj = khwasan_preset_slab_tag(cache, idx, obj); > + return obj; > } > > /* > diff --git a/mm/slub.c b/mm/slub.c > index 4206e1b616e7..086d6558a6b6 100644 > --- a/mm/slub.c > +++ b/mm/slub.c > @@ -1531,12 +1531,14 @@ static bool shuffle_freelist(struct kmem_cache *s, struct page *page) > /* First entry is used as the base of the freelist */ > cur = next_freelist_entry(s, page, &pos, start, page_limit, > freelist_count); > + cur = khwasan_preset_slub_tag(s, cur); > page->freelist = cur; > > for (idx = 1; idx < page->objects; idx++) { > setup_object(s, page, cur); > next = next_freelist_entry(s, page, &pos, start, page_limit, > freelist_count); > + next = khwasan_preset_slub_tag(s, next); > set_freepointer(s, cur, next); > cur = next; > } > @@ -1613,8 +1615,10 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) > shuffle = shuffle_freelist(s, page); > > if (!shuffle) { > + start = khwasan_preset_slub_tag(s, start); > for_each_object_idx(p, idx, s, start, page->objects) { > setup_object(s, page, p); > + p = khwasan_preset_slub_tag(s, p); As I commented in the previous patch, can't we do this in kasan_init_slab_obj(), which should be called in all the right places already? > if (likely(idx < page->objects)) > set_freepointer(s, p, p + s->size); > else > -- > 2.19.0.rc0.228.g281dcd1b4d0-goog > ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 08/18] khwasan: preassign tags to objects with ctors or SLAB_TYPESAFE_BY_RCU 2018-09-12 16:36 ` [PATCH v6 08/18] khwasan: preassign tags to objects with ctors or SLAB_TYPESAFE_BY_RCU Dmitry Vyukov @ 2018-09-18 16:50 ` Andrey Konovalov 0 siblings, 0 replies; 34+ messages in thread From: Andrey Konovalov @ 2018-09-18 16:50 UTC (permalink / raw) To: Dmitry Vyukov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Sep 12, 2018 at 6:36 PM, Dmitry Vyukov <dvyukov@google.com> wrote: > On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: >> if (!shuffle) { >> + start = khwasan_preset_slub_tag(s, start); >> for_each_object_idx(p, idx, s, start, page->objects) { >> setup_object(s, page, p); >> + p = khwasan_preset_slub_tag(s, p); > > > As I commented in the previous patch, can't we do this in > kasan_init_slab_obj(), which should be called in all the right places > already? > As per offline discussion, will do in v7. ^ permalink raw reply [flat|nested] 34+ messages in thread
[parent not found: <f5e73b5ead3355932ad8b5fc96b141c3f5b8c16c.1535462971.git.andreyknvl@google.com>]
* Re: [PATCH v6 15/18] khwasan, arm64: add brk handler for inline instrumentation [not found] ` <f5e73b5ead3355932ad8b5fc96b141c3f5b8c16c.1535462971.git.andreyknvl@google.com> @ 2018-09-12 17:13 ` Dmitry Vyukov 2018-09-17 19:12 ` Andrey Konovalov 0 siblings, 1 reply; 34+ messages in thread From: Dmitry Vyukov @ 2018-09-12 17:13 UTC (permalink / raw) To: Andrey Konovalov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: > KHWASAN inline instrumentation mode (which embeds checks of shadow memory > into the generated code, instead of inserting a callback) generates a brk > instruction when a tag mismatch is detected. > > This commit adds a KHWASAN brk handler that decodes the immediate value > passed to the brk instruction (to extract information about the memory > access that triggered the mismatch), reads the register values (x0 contains > the guilty address) and reports the bug.
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com> > --- > arch/arm64/include/asm/brk-imm.h | 2 + > arch/arm64/kernel/traps.c | 69 +++++++++++++++++++++++++++++++- > 2 files changed, 69 insertions(+), 2 deletions(-) > > diff --git a/arch/arm64/include/asm/brk-imm.h b/arch/arm64/include/asm/brk-imm.h > index ed693c5bcec0..e4a7013321dc 100644 > --- a/arch/arm64/include/asm/brk-imm.h > +++ b/arch/arm64/include/asm/brk-imm.h > @@ -16,10 +16,12 @@ > * 0x400: for dynamic BRK instruction > * 0x401: for compile time BRK instruction > * 0x800: kernel-mode BUG() and WARN() traps > + * 0x9xx: KHWASAN trap (allowed values 0x900 - 0x9ff) > */ > #define FAULT_BRK_IMM 0x100 > #define KGDB_DYN_DBG_BRK_IMM 0x400 > #define KGDB_COMPILED_DBG_BRK_IMM 0x401 > #define BUG_BRK_IMM 0x800 > +#define KHWASAN_BRK_IMM 0x900 > > #endif > diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c > index 039e9ff379cc..fd70347d1ce7 100644 > --- a/arch/arm64/kernel/traps.c > +++ b/arch/arm64/kernel/traps.c > @@ -35,6 +35,7 @@ > #include <linux/sizes.h> > #include <linux/syscalls.h> > #include <linux/mm_types.h> > +#include <linux/kasan.h> > > #include <asm/atomic.h> > #include <asm/bug.h> > @@ -269,10 +270,14 @@ void arm64_notify_die(const char *str, struct pt_regs *regs, > } > } > > -void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size) > +void __arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size) > { > regs->pc += size; > +} > > +void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size) > +{ > + __arm64_skip_faulting_instruction(regs, size); > /* > * If we were single stepping, we want to get the step exception after > * we return from the trap. 
> @@ -775,7 +780,7 @@ static int bug_handler(struct pt_regs *regs, unsigned int esr) > } > > /* If thread survives, skip over the BUG instruction and continue: */ > - arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE); > + __arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE); > return DBG_HOOK_HANDLED; > } > > @@ -785,6 +790,59 @@ static struct break_hook bug_break_hook = { > .fn = bug_handler, > }; > > +#ifdef CONFIG_KASAN_HW > + > +#define KHWASAN_ESR_RECOVER 0x20 > +#define KHWASAN_ESR_WRITE 0x10 > +#define KHWASAN_ESR_SIZE_MASK 0x0f > +#define KHWASAN_ESR_SIZE(esr) (1 << ((esr) & KHWASAN_ESR_SIZE_MASK)) > + > +static int khwasan_handler(struct pt_regs *regs, unsigned int esr) > +{ > + bool recover = esr & KHWASAN_ESR_RECOVER; > + bool write = esr & KHWASAN_ESR_WRITE; > + size_t size = KHWASAN_ESR_SIZE(esr); > + u64 addr = regs->regs[0]; > + u64 pc = regs->pc; > + > + if (user_mode(regs)) > + return DBG_HOOK_ERROR; > + > + kasan_report(addr, size, write, pc); > + > + /* > + * The instrumentation allows to control whether we can proceed after > + * a crash was detected. This is done by passing the -recover flag to > + * the compiler. Disabling recovery allows to generate more compact > + * code. > + * > + * Unfortunately disabling recovery doesn't work for the kernel right > + * now. KHWASAN reporting is disabled in some contexts (for example when > + * the allocator accesses slab object metadata; same is true for KASAN; > + * this is controlled by current->kasan_depth). All these accesses are > + * detected by the tool, even though the reports for them are not > + * printed. I am not following this part. Slab accesses metadata. OK. This is detected as bad access. OK. Report is not printed. OK. We skip BRK and resume execution. What is the problem? > + * > + * This is something that might be fixed at some point in the future. 
> + */ > + if (!recover) > + die("Oops - KHWASAN", regs, 0); > + > + /* If thread survives, skip over the brk instruction and continue: */ > + __arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE); > + return DBG_HOOK_HANDLED; > +} > + > +#define KHWASAN_ESR_VAL (0xf2000000 | KHWASAN_BRK_IMM) > +#define KHWASAN_ESR_MASK 0xffffff00 > + > +static struct break_hook khwasan_break_hook = { > + .esr_val = KHWASAN_ESR_VAL, > + .esr_mask = KHWASAN_ESR_MASK, > + .fn = khwasan_handler, > +}; > +#endif > + > /* > * Initial handler for AArch64 BRK exceptions > * This handler only used until debug_traps_init(). > @@ -792,6 +850,10 @@ static struct break_hook bug_break_hook = { > int __init early_brk64(unsigned long addr, unsigned int esr, > struct pt_regs *regs) > { > +#ifdef CONFIG_KASAN_HW > + if ((esr & KHWASAN_ESR_MASK) == KHWASAN_ESR_VAL) > + return khwasan_handler(regs, esr) != DBG_HOOK_HANDLED; > +#endif > return bug_handler(regs, esr) != DBG_HOOK_HANDLED; > } > > @@ -799,4 +861,7 @@ int __init early_brk64(unsigned long addr, unsigned int esr, > void __init trap_init(void) > { > register_break_hook(&bug_break_hook); > +#ifdef CONFIG_KASAN_HW > + register_break_hook(&khwasan_break_hook); > +#endif > } > -- > 2.19.0.rc0.228.g281dcd1b4d0-goog > ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 15/18] khwasan, arm64: add brk handler for inline instrumentation 2018-09-12 17:13 ` [PATCH v6 15/18] khwasan, arm64: add brk handler for inline instrumentation Dmitry Vyukov @ 2018-09-17 19:12 ` Andrey Konovalov 0 siblings, 0 replies; 34+ messages in thread From: Andrey Konovalov @ 2018-09-17 19:12 UTC (permalink / raw) To: Dmitry Vyukov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Sep 12, 2018 at 7:13 PM, Dmitry Vyukov <dvyukov@google.com> wrote: > On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: >> +static int khwasan_handler(struct pt_regs *regs, unsigned int esr) >> +{ >> + bool recover = esr & KHWASAN_ESR_RECOVER; >> + bool write = esr & KHWASAN_ESR_WRITE; >> + size_t size = KHWASAN_ESR_SIZE(esr); >> + u64 addr = regs->regs[0]; >> + u64 pc = regs->pc; >> + >> + if (user_mode(regs)) >> + return DBG_HOOK_ERROR; >> + >> + kasan_report(addr, size, write, pc); >> + >> + /* >> + * The instrumentation allows to control whether we can proceed after >> + * a crash was detected. This is done by passing the -recover flag to >> + * the compiler. Disabling recovery allows to generate more compact >> + * code. >> + * >> + * Unfortunately disabling recovery doesn't work for the kernel right >> + * now. KHWASAN reporting is disabled in some contexts (for example when >> + * the allocator accesses slab object metadata; same is true for KASAN; >> + * this is controlled by current->kasan_depth). All these accesses are >> + * detected by the tool, even though the reports for them are not >> + * printed. > > > I am not following this part. > Slab accesses metadata. OK. > This is detected as bad access. OK. > Report is not printed. OK. 
> We skip BRK and resume execution. > What is the problem? When the kernel is compiled with -fsanitize=kernel-hwaddress without any additional flags (like it's done now with KASAN_HW) everything works as you described and there's no problem. However if one were to recompile the kernel with hwasan recovery disabled, KHWASAN wouldn't work due to the reasons described in the comment. Should I make it more clear? > > > >> + * >> + * This is something that might be fixed at some point in the future. >> + */ >> + if (!recover) >> + die("Oops - KHWASAN", regs, 0); ^ permalink raw reply [flat|nested] 34+ messages in thread
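The immediate decoding under discussion is compact enough to show in isolation. A user-space sketch of the bit layout, using the #defines from the patch (recover and write flags plus a log2 size field in the low nibble):

```c
#include <stdbool.h>
#include <stdint.h>

#define KHWASAN_ESR_RECOVER   0x20
#define KHWASAN_ESR_WRITE     0x10
#define KHWASAN_ESR_SIZE_MASK 0x0f

struct khwasan_fault {
	bool recover;   /* may execution continue past the brk? */
	bool write;     /* was the faulting access a write? */
	uint32_t size;  /* access size in bytes: 1 << low nibble */
};

/* Mirror of the decoding at the top of khwasan_handler(). */
static struct khwasan_fault decode_khwasan_esr(uint32_t esr)
{
	struct khwasan_fault f;

	f.recover = esr & KHWASAN_ESR_RECOVER;
	f.write = esr & KHWASAN_ESR_WRITE;
	f.size = 1u << (esr & KHWASAN_ESR_SIZE_MASK);
	return f;
}
```

In recoverable mode the compiler sets the 0x20 bit in the immediate, which is what lets the handler skip the brk and resume — the behaviour Andrey's reply depends on.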
* Re: [PATCH v6 15/18] khwasan, arm64: add brk handler for inline instrumentation [not found] ` <f5e73b5ead3355932ad8b5fc96b141c3f5b8c16c.1535462971.git.andreyknvl@google.com> 2018-09-12 17:13 ` [PATCH v6 15/18] khwasan, arm64: add brk handler for inline instrumentation Dmitry Vyukov @ 2018-09-12 17:15 ` Dmitry Vyukov 2018-09-12 17:39 ` Jann Horn 1 sibling, 1 reply; 34+ messages in thread From: Dmitry Vyukov @ 2018-09-12 17:15 UTC (permalink / raw) To: Andrey Konovalov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: > KHWASAN inline instrumentation mode (which embeds checks of shadow memory > into the generated code, instead of inserting a callback) generates a brk > instruction when a tag mismatch is detected. > > This commit adds a KHWASAN brk handler that decodes the immediate value > passed to the brk instruction (to extract information about the memory > access that triggered the mismatch), reads the register values (x0 contains > the guilty address) and reports the bug.
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com> > --- > arch/arm64/include/asm/brk-imm.h | 2 + > arch/arm64/kernel/traps.c | 69 +++++++++++++++++++++++++++++++- > 2 files changed, 69 insertions(+), 2 deletions(-) > > diff --git a/arch/arm64/include/asm/brk-imm.h b/arch/arm64/include/asm/brk-imm.h > index ed693c5bcec0..e4a7013321dc 100644 > --- a/arch/arm64/include/asm/brk-imm.h > +++ b/arch/arm64/include/asm/brk-imm.h > @@ -16,10 +16,12 @@ > * 0x400: for dynamic BRK instruction > * 0x401: for compile time BRK instruction > * 0x800: kernel-mode BUG() and WARN() traps > + * 0x9xx: KHWASAN trap (allowed values 0x900 - 0x9ff) > */ > #define FAULT_BRK_IMM 0x100 > #define KGDB_DYN_DBG_BRK_IMM 0x400 > #define KGDB_COMPILED_DBG_BRK_IMM 0x401 > #define BUG_BRK_IMM 0x800 > +#define KHWASAN_BRK_IMM 0x900 > > #endif > diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c > index 039e9ff379cc..fd70347d1ce7 100644 > --- a/arch/arm64/kernel/traps.c > +++ b/arch/arm64/kernel/traps.c > @@ -35,6 +35,7 @@ > #include <linux/sizes.h> > #include <linux/syscalls.h> > #include <linux/mm_types.h> > +#include <linux/kasan.h> > > #include <asm/atomic.h> > #include <asm/bug.h> > @@ -269,10 +270,14 @@ void arm64_notify_die(const char *str, struct pt_regs *regs, > } > } > > -void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size) > +void __arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size) > { > regs->pc += size; > +} > > +void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size) > +{ > + __arm64_skip_faulting_instruction(regs, size); > /* > * If we were single stepping, we want to get the step exception after > * we return from the trap. 
> @@ -775,7 +780,7 @@ static int bug_handler(struct pt_regs *regs, unsigned int esr) > } > > /* If thread survives, skip over the BUG instruction and continue: */ > - arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE); > + __arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE); > return DBG_HOOK_HANDLED; > } > > @@ -785,6 +790,59 @@ static struct break_hook bug_break_hook = { > .fn = bug_handler, > }; > > +#ifdef CONFIG_KASAN_HW > + > +#define KHWASAN_ESR_RECOVER 0x20 > +#define KHWASAN_ESR_WRITE 0x10 > +#define KHWASAN_ESR_SIZE_MASK 0x0f > +#define KHWASAN_ESR_SIZE(esr) (1 << ((esr) & KHWASAN_ESR_SIZE_MASK)) > + > +static int khwasan_handler(struct pt_regs *regs, unsigned int esr) > +{ > + bool recover = esr & KHWASAN_ESR_RECOVER; > + bool write = esr & KHWASAN_ESR_WRITE; > + size_t size = KHWASAN_ESR_SIZE(esr); > + u64 addr = regs->regs[0]; > + u64 pc = regs->pc; > + > + if (user_mode(regs)) > + return DBG_HOOK_ERROR; > + > + kasan_report(addr, size, write, pc); > + > + /* > + * The instrumentation allows to control whether we can proceed after > + * a crash was detected. This is done by passing the -recover flag to > + * the compiler. Disabling recovery allows to generate more compact > + * code. > + * > + * Unfortunately disabling recovery doesn't work for the kernel right > + * now. KHWASAN reporting is disabled in some contexts (for example when > + * the allocator accesses slab object metadata; same is true for KASAN; > + * this is controlled by current->kasan_depth). All these accesses are > + * detected by the tool, even though the reports for them are not > + * printed. > + * > + * This is something that might be fixed at some point in the future. > + */ > + if (!recover) > + die("Oops - KHWASAN", regs, 0); Why die and not panic? Die seems to be much less used function, and it calls panic anyway, and we call panic in kasan_report if panic_on_warn is set. 
> + /* If thread survives, skip over the brk instruction and continue: */ > + __arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE); > + return DBG_HOOK_HANDLED; > +} > + > +#define KHWASAN_ESR_VAL (0xf2000000 | KHWASAN_BRK_IMM) > +#define KHWASAN_ESR_MASK 0xffffff00 > + > +static struct break_hook khwasan_break_hook = { > + .esr_val = KHWASAN_ESR_VAL, > + .esr_mask = KHWASAN_ESR_MASK, > + .fn = khwasan_handler, > +}; > +#endif > + > /* > * Initial handler for AArch64 BRK exceptions > * This handler only used until debug_traps_init(). > @@ -792,6 +850,10 @@ static struct break_hook bug_break_hook = { > int __init early_brk64(unsigned long addr, unsigned int esr, > struct pt_regs *regs) > { > +#ifdef CONFIG_KASAN_HW > + if ((esr & KHWASAN_ESR_MASK) == KHWASAN_ESR_VAL) > + return khwasan_handler(regs, esr) != DBG_HOOK_HANDLED; > +#endif > return bug_handler(regs, esr) != DBG_HOOK_HANDLED; > } > > @@ -799,4 +861,7 @@ int __init early_brk64(unsigned long addr, unsigned int esr, > void __init trap_init(void) > { > register_break_hook(&bug_break_hook); > +#ifdef CONFIG_KASAN_HW > + register_break_hook(&khwasan_break_hook); > +#endif > } > -- > 2.19.0.rc0.228.g281dcd1b4d0-goog > ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 15/18] khwasan, arm64: add brk handler for inline instrumentation 2018-09-12 17:15 ` Dmitry Vyukov @ 2018-09-12 17:39 ` Jann Horn 2018-09-13 8:37 ` Dmitry Vyukov 0 siblings, 1 reply; 34+ messages in thread From: Jann Horn @ 2018-09-12 17:39 UTC (permalink / raw) To: Dmitry Vyukov Cc: Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, dave.martin, Ard Biesheuvel, Eric W. Biederman, Ingo Molnar, Paul Lawrence, geert, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, kstewart On Wed, Sep 12, 2018 at 7:16 PM Dmitry Vyukov <dvyukov@google.com> wrote: > On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: [...] > > +static int khwasan_handler(struct pt_regs *regs, unsigned int esr) > > +{ > > + bool recover = esr & KHWASAN_ESR_RECOVER; > > + bool write = esr & KHWASAN_ESR_WRITE; > > + size_t size = KHWASAN_ESR_SIZE(esr); > > + u64 addr = regs->regs[0]; > > + u64 pc = regs->pc; > > + > > + if (user_mode(regs)) > > + return DBG_HOOK_ERROR; > > + > > + kasan_report(addr, size, write, pc); > > + > > + /* > > + * The instrumentation allows to control whether we can proceed after > > + * a crash was detected. This is done by passing the -recover flag to > > + * the compiler. Disabling recovery allows to generate more compact > > + * code. > > + * > > + * Unfortunately disabling recovery doesn't work for the kernel right > > + * now. KHWASAN reporting is disabled in some contexts (for example when > > + * the allocator accesses slab object metadata; same is true for KASAN; > > + * this is controlled by current->kasan_depth). All these accesses are > > + * detected by the tool, even though the reports for them are not > > + * printed. > > + * > > + * This is something that might be fixed at some point in the future. > > + */ > > + if (!recover) > > + die("Oops - KHWASAN", regs, 0); > > Why die and not panic? 
Die seems to be much less used function, and it > calls panic anyway, and we call panic in kasan_report if panic_on_warn > is set. die() is vaguely equivalent to BUG(); die() and BUG() normally only terminate the current process, which may or may not leave the system somewhat usable, while panic() always brings down the whole system. AFAIK panic() shouldn't be used unless you're in some very low-level code where you know that trying to just kill the current process can't work and the entire system is broken beyond repair. If KASAN traps on some random memory access, there's a good chance that just killing the current process will allow at least parts of the system to continue. I'm not sure whether BUG() or die() is more appropriate here, but I think it definitely should not be a panic(). ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 15/18] khwasan, arm64: add brk handler for inline instrumentation 2018-09-12 17:39 ` Jann Horn @ 2018-09-13 8:37 ` Dmitry Vyukov 2018-09-13 18:09 ` Nick Desaulniers 0 siblings, 1 reply; 34+ messages in thread From: Dmitry Vyukov @ 2018-09-13 8:37 UTC (permalink / raw) To: Jann Horn Cc: Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W. Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman <gregk> On Wed, Sep 12, 2018 at 7:39 PM, Jann Horn <jannh@google.com> wrote: > On Wed, Sep 12, 2018 at 7:16 PM Dmitry Vyukov <dvyukov@google.com> wrote: >> On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: > [...] >> > +static int khwasan_handler(struct pt_regs *regs, unsigned int esr) >> > +{ >> > + bool recover = esr & KHWASAN_ESR_RECOVER; >> > + bool write = esr & KHWASAN_ESR_WRITE; >> > + size_t size = KHWASAN_ESR_SIZE(esr); >> > + u64 addr = regs->regs[0]; >> > + u64 pc = regs->pc; >> > + >> > + if (user_mode(regs)) >> > + return DBG_HOOK_ERROR; >> > + >> > + kasan_report(addr, size, write, pc); >> > + >> > + /* >> > + * The instrumentation allows to control whether we can proceed after >> > + * a crash was detected. This is done by passing the -recover flag to >> > + * the compiler. Disabling recovery allows to generate more compact >> > + * code. >> > + * >> > + * Unfortunately disabling recovery doesn't work for the kernel right >> > + * now. KHWASAN reporting is disabled in some contexts (for example when >> > + * the allocator accesses slab object metadata; same is true for KASAN; >> > + * this is controlled by current->kasan_depth). All these accesses are >> > + * detected by the tool, even though the reports for them are not >> > + * printed. 
>> > + * >> > + * This is something that might be fixed at some point in the future. >> > + */ >> > + if (!recover) >> > + die("Oops - KHWASAN", regs, 0); >> >> Why die and not panic? Die seems to be much less used function, and it >> calls panic anyway, and we call panic in kasan_report if panic_on_warn >> is set. > > die() is vaguely equivalent to BUG(); die() and BUG() normally only > terminate the current process, which may or may not leave the system > somewhat usable, while panic() always brings down the whole system. > AFAIK panic() shouldn't be used unless you're in some very low-level > code where you know that trying to just kill the current process can't > work and the entire system is broken beyond repair. > > If KASAN traps on some random memory access, there's a good chance > that just killing the current process will allow at least parts of the > system to continue. I'm not sure whether BUG() or die() is more > appropriate here, but I think it definitely should not be a panic(). Nick, do you know if die() will be enough to catch problems on Android phones? panic_on_warn would turn this into panic, but I guess one does not want panic_on_warn on a canary phone. ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 15/18] khwasan, arm64: add brk handler for inline instrumentation 2018-09-13 8:37 ` Dmitry Vyukov @ 2018-09-13 18:09 ` Nick Desaulniers 2018-09-13 18:23 ` Jann Horn 2018-09-14 5:11 ` Dmitry Vyukov 0 siblings, 2 replies; 34+ messages in thread From: Nick Desaulniers @ 2018-09-13 18:09 UTC (permalink / raw) To: Dmitry Vyukov Cc: Jann Horn, Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg KH On Thu, Sep 13, 2018 at 1:37 AM Dmitry Vyukov <dvyukov@google.com> wrote: > > On Wed, Sep 12, 2018 at 7:39 PM, Jann Horn <jannh@google.com> wrote: > > On Wed, Sep 12, 2018 at 7:16 PM Dmitry Vyukov <dvyukov@google.com> wrote: > >> On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: > > [...] > >> > +static int khwasan_handler(struct pt_regs *regs, unsigned int esr) > >> > +{ > >> > + bool recover = esr & KHWASAN_ESR_RECOVER; > >> > + bool write = esr & KHWASAN_ESR_WRITE; > >> > + size_t size = KHWASAN_ESR_SIZE(esr); > >> > + u64 addr = regs->regs[0]; > >> > + u64 pc = regs->pc; > >> > + > >> > + if (user_mode(regs)) > >> > + return DBG_HOOK_ERROR; > >> > + > >> > + kasan_report(addr, size, write, pc); > >> > + > >> > + /* > >> > + * The instrumentation allows to control whether we can proceed after > >> > + * a crash was detected. This is done by passing the -recover flag to > >> > + * the compiler. Disabling recovery allows to generate more compact > >> > + * code. > >> > + * > >> > + * Unfortunately disabling recovery doesn't work for the kernel right > >> > + * now. KHWASAN reporting is disabled in some contexts (for example when > >> > + * the allocator accesses slab object metadata; same is true for KASAN; > >> > + * this is controlled by current->kasan_depth). 
All these accesses are > >> > + * detected by the tool, even though the reports for them are not > >> > + * printed. > >> > + * > >> > + * This is something that might be fixed at some point in the future. > >> > + */ > >> > + if (!recover) > >> > + die("Oops - KHWASAN", regs, 0); > >> > >> Why die and not panic? Die seems to be much less used function, and it > >> calls panic anyway, and we call panic in kasan_report if panic_on_warn > >> is set. > > > > die() is vaguely equivalent to BUG(); die() and BUG() normally only > > terminate the current process, which may or may not leave the system > > somewhat usable, while panic() always brings down the whole system. > > AFAIK panic() shouldn't be used unless you're in some very low-level > > code where you know that trying to just kill the current process can't > > work and the entire system is broken beyond repair. > > > > If KASAN traps on some random memory access, there's a good chance > > that just killing the current process will allow at least parts of the > > system to continue. I'm not sure whether BUG() or die() is more > > appropriate here, but I think it definitely should not be a panic(). > > > Nick, do you know if die() will be enough to catch problems on Android > phones? panic_on_warn would turn this into panic, but I guess one does > not want panic_on_warn on a canary phone. die() has arch specific implementations, so looking at: arch/arm64/kernel/traps.c:196#die it looks like panic is invoked if in_interrupt() or panic_on_oops(), which is a configure option. So maybe the config for KHWASAN should also enable that? Otherwise seems easy to forget. But maybe that should remain configurable separately? Looking at the kernel configs for the Pixel 2, it does seem like CONFIG_PANIC_ON_OOPS=y is already enabled. 
https://android.googlesource.com/kernel/msm/+/android-msm-wahoo-4.4-pie/arch/arm64/configs/wahoo_defconfig#746 Specifically to catch problems on Android, our internal debug builds can report on panics, but not oops, IIUC. -- Thanks, ~Nick Desaulniers ^ permalink raw reply [flat|nested] 34+ messages in thread
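The escalation rule Nick describes (die() panics when in interrupt context or when CONFIG_PANIC_ON_OOPS is set, otherwise only the current task dies) reduces to a small decision table. This sketch paraphrases that description, not the actual arch/arm64/kernel/traps.c code:

```c
#include <assert.h>
#include <stdbool.h>

enum outcome { KILL_TASK, PANIC_SYSTEM };

/*
 * Paraphrased die() escalation: the whole system only goes down when
 * the fault happened in interrupt context (no task to kill) or when
 * panic_on_oops was configured; otherwise just the current task dies.
 */
static enum outcome die_outcome(bool in_interrupt, bool panic_on_oops)
{
	if (in_interrupt || panic_on_oops)
		return PANIC_SYSTEM;
	return KILL_TASK;
}
```

This is why enabling CONFIG_PANIC_ON_OOPS in a device config, as in the Pixel defconfig above, turns every KHWASAN die() into a device-visible panic without hardcoding that policy into the handler.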
* Re: [PATCH v6 15/18] khwasan, arm64: add brk handler for inline instrumentation 2018-09-13 18:09 ` Nick Desaulniers @ 2018-09-13 18:23 ` Jann Horn 2018-09-14 5:11 ` Dmitry Vyukov 1 sibling, 0 replies; 34+ messages in thread From: Jann Horn @ 2018-09-13 18:23 UTC (permalink / raw) To: Nick Desaulniers Cc: Dmitry Vyukov, Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Marc Zyngier, dave.martin, Ard Biesheuvel, Eric W. Biederman, Ingo Molnar, Paul Lawrence, geert, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart On Thu, Sep 13, 2018 at 8:09 PM Nick Desaulniers <ndesaulniers@google.com> wrote: > > On Thu, Sep 13, 2018 at 1:37 AM Dmitry Vyukov <dvyukov@google.com> wrote: > > > > On Wed, Sep 12, 2018 at 7:39 PM, Jann Horn <jannh@google.com> wrote: > > > On Wed, Sep 12, 2018 at 7:16 PM Dmitry Vyukov <dvyukov@google.com> wrote: > > >> On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: > > > [...] > > >> > +static int khwasan_handler(struct pt_regs *regs, unsigned int esr) > > >> > +{ > > >> > + bool recover = esr & KHWASAN_ESR_RECOVER; > > >> > + bool write = esr & KHWASAN_ESR_WRITE; > > >> > + size_t size = KHWASAN_ESR_SIZE(esr); > > >> > + u64 addr = regs->regs[0]; > > >> > + u64 pc = regs->pc; > > >> > + > > >> > + if (user_mode(regs)) > > >> > + return DBG_HOOK_ERROR; > > >> > + > > >> > + kasan_report(addr, size, write, pc); > > >> > + > > >> > + /* > > >> > + * The instrumentation allows to control whether we can proceed after > > >> > + * a crash was detected. This is done by passing the -recover flag to > > >> > + * the compiler. Disabling recovery allows to generate more compact > > >> > + * code. > > >> > + * > > >> > + * Unfortunately disabling recovery doesn't work for the kernel right > > >> > + * now. 
KHWASAN reporting is disabled in some contexts (for example when > > >> > + * the allocator accesses slab object metadata; same is true for KASAN; > > >> > + * this is controlled by current->kasan_depth). All these accesses are > > >> > + * detected by the tool, even though the reports for them are not > > >> > + * printed. > > >> > + * > > >> > + * This is something that might be fixed at some point in the future. > > >> > + */ > > >> > + if (!recover) > > >> > + die("Oops - KHWASAN", regs, 0); > > >> > > >> Why die and not panic? Die seems to be much less used function, and it > > >> calls panic anyway, and we call panic in kasan_report if panic_on_warn > > >> is set. > > > > > > die() is vaguely equivalent to BUG(); die() and BUG() normally only > > > terminate the current process, which may or may not leave the system > > > somewhat usable, while panic() always brings down the whole system. > > > AFAIK panic() shouldn't be used unless you're in some very low-level > > > code where you know that trying to just kill the current process can't > > > work and the entire system is broken beyond repair. > > > > > > If KASAN traps on some random memory access, there's a good chance > > > that just killing the current process will allow at least parts of the > > > system to continue. I'm not sure whether BUG() or die() is more > > > appropriate here, but I think it definitely should not be a panic(). > > > > > > Nick, do you know if die() will be enough to catch problems on Android > > phones? panic_on_warn would turn this into panic, but I guess one does > > not want panic_on_warn on a canary phone. > > die() has arch specific implementations, so looking at: > > arch/arm64/kernel/traps.c:196#die > > it looks like panic is invoked if in_interrupt() or panic_on_oops(), > which is a configure option. So maybe the config for KHWASAN should > also enable that? Otherwise seems easy to forget. But maybe that > should remain configurable separately? 
In the upstream kernel, it is desirable to be able to discover bugs and debug them from inside the running system. When you detect a bug that makes it impossible for the current task to continue safely, you're supposed to use something like BUG() to terminate the task; if you can continue safely, you're supposed to use WARN(). Either way, you should *not* just shoot down the whole kernel unless the system is corrupted so badly that trying to keep running would be pointless. You can configure the kernel such that it crashes the device when such an event occurs, and doing so can sometimes be beneficial; but please don't hardcode such policies into the upstream kernel. ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 15/18] khwasan, arm64: add brk handler for inline instrumentation 2018-09-13 18:09 ` Nick Desaulniers 2018-09-13 18:23 ` Jann Horn @ 2018-09-14 5:11 ` Dmitry Vyukov 1 sibling, 0 replies; 34+ messages in thread From: Dmitry Vyukov @ 2018-09-14 5:11 UTC (permalink / raw) To: Nick Desaulniers Cc: Jann Horn, Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg KH On Thu, Sep 13, 2018 at 8:09 PM, 'Nick Desaulniers' via kasan-dev <kasan-dev@googlegroups.com> wrote: > On Thu, Sep 13, 2018 at 1:37 AM Dmitry Vyukov <dvyukov@google.com> wrote: >> >> On Wed, Sep 12, 2018 at 7:39 PM, Jann Horn <jannh@google.com> wrote: >> > On Wed, Sep 12, 2018 at 7:16 PM Dmitry Vyukov <dvyukov@google.com> wrote: >> >> On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: >> > [...] >> >> > +static int khwasan_handler(struct pt_regs *regs, unsigned int esr) >> >> > +{ >> >> > + bool recover = esr & KHWASAN_ESR_RECOVER; >> >> > + bool write = esr & KHWASAN_ESR_WRITE; >> >> > + size_t size = KHWASAN_ESR_SIZE(esr); >> >> > + u64 addr = regs->regs[0]; >> >> > + u64 pc = regs->pc; >> >> > + >> >> > + if (user_mode(regs)) >> >> > + return DBG_HOOK_ERROR; >> >> > + >> >> > + kasan_report(addr, size, write, pc); >> >> > + >> >> > + /* >> >> > + * The instrumentation allows to control whether we can proceed after >> >> > + * a crash was detected. This is done by passing the -recover flag to >> >> > + * the compiler. Disabling recovery allows to generate more compact >> >> > + * code. >> >> > + * >> >> > + * Unfortunately disabling recovery doesn't work for the kernel right >> >> > + * now. 
KHWASAN reporting is disabled in some contexts (for example when >> >> > + * the allocator accesses slab object metadata; same is true for KASAN; >> >> > + * this is controlled by current->kasan_depth). All these accesses are >> >> > + * detected by the tool, even though the reports for them are not >> >> > + * printed. >> >> > + * >> >> > + * This is something that might be fixed at some point in the future. >> >> > + */ >> >> > + if (!recover) >> >> > + die("Oops - KHWASAN", regs, 0); >> >> >> >> Why die and not panic? Die seems to be much less used function, and it >> >> calls panic anyway, and we call panic in kasan_report if panic_on_warn >> >> is set. >> > >> > die() is vaguely equivalent to BUG(); die() and BUG() normally only >> > terminate the current process, which may or may not leave the system >> > somewhat usable, while panic() always brings down the whole system. >> > AFAIK panic() shouldn't be used unless you're in some very low-level >> > code where you know that trying to just kill the current process can't >> > work and the entire system is broken beyond repair. >> > >> > If KASAN traps on some random memory access, there's a good chance >> > that just killing the current process will allow at least parts of the >> > system to continue. I'm not sure whether BUG() or die() is more >> > appropriate here, but I think it definitely should not be a panic(). >> >> >> Nick, do you know if die() will be enough to catch problems on Android >> phones? panic_on_warn would turn this into panic, but I guess one does >> not want panic_on_warn on a canary phone. > > die() has arch specific implementations, so looking at: > > arch/arm64/kernel/traps.c:196#die > > it looks like panic is invoked if in_interrupt() or panic_on_oops(), > which is a configure option. So maybe the config for KHWASAN should > also enable that? Otherwise seems easy to forget. But maybe that > should remain configurable separately? 
> > Looking at the kernel configs for the Pixel 2, it does seem like > CONFIG_PANIC_ON_OOPS=y is already enabled. > https://android.googlesource.com/kernel/msm/+/android-msm-wahoo-4.4-pie/arch/arm64/configs/wahoo_defconfig#746 Then I think we are good here. > Specifically to catch problems on Android, our internal debug builds > can report on panics, but not oops, IIUC. ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 13/18] khwasan: add bug reporting routines [not found] ` <f39cdef4fde40b6d2ef356db3e0126bda0e1e8c7.1535462971.git.andreyknvl@google.com> @ 2018-09-12 17:50 ` Dmitry Vyukov 2018-09-18 17:36 ` Andrey Konovalov 0 siblings, 1 reply; 34+ messages in thread From: Dmitry Vyukov @ 2018-09-12 17:50 UTC (permalink / raw) To: Andrey Konovalov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: > This commit adds routines that print KHWASAN error reports. Those are > quite similar to KASAN, the difference is: > > 1. The way KHWASAN finds the first bad shadow cell (with a mismatching > tag). KHWASAN compares memory tags from the shadow memory to the pointer > tag. > > 2. KHWASAN reports all bugs with the "KASAN: invalid-access" header. This > is done so that various external tools that already parse the kernel logs > looking for KASAN reports wouldn't need to be changed. 
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com> > --- > include/linux/kasan.h | 3 +++ > mm/kasan/kasan.h | 7 +++++ > mm/kasan/kasan_report.c | 7 ++--- > mm/kasan/khwasan_report.c | 21 +++++++++++++++ > mm/kasan/report.c | 57 +++++++++++++++++++++------------------ > 5 files changed, 64 insertions(+), 31 deletions(-) > > diff --git a/include/linux/kasan.h b/include/linux/kasan.h > index 1f852244e739..4424359a9dfa 100644 > --- a/include/linux/kasan.h > +++ b/include/linux/kasan.h > @@ -174,6 +174,9 @@ void *khwasan_preset_slub_tag(struct kmem_cache *cache, const void *addr); > void *khwasan_preset_slab_tag(struct kmem_cache *cache, unsigned int idx, > const void *addr); > > +void kasan_report(unsigned long addr, size_t size, > + bool write, unsigned long ip); > + > #else /* CONFIG_KASAN_HW */ > > static inline void khwasan_init(void) { } > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h > index 82672473740c..d60859d26be7 100644 > --- a/mm/kasan/kasan.h > +++ b/mm/kasan/kasan.h > @@ -119,8 +119,15 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value); > void check_memory_region(unsigned long addr, size_t size, bool write, > unsigned long ret_ip); > > +void *find_first_bad_addr(void *addr, size_t size); > const char *get_bug_type(struct kasan_access_info *info); > > +#ifdef CONFIG_KASAN_HW We already have #ifdef CONFIG_KASAN_HW section below with additional functions for KASAN_HW and empty stubs otherwise. I would add this one there as well. 
> +void print_tags(u8 addr_tag, const void *addr); > +#else > +static inline void print_tags(u8 addr_tag, const void *addr) { } > +#endif > + > void kasan_report(unsigned long addr, size_t size, > bool is_write, unsigned long ip); > void kasan_report_invalid_free(void *object, unsigned long ip); > diff --git a/mm/kasan/kasan_report.c b/mm/kasan/kasan_report.c > index 2d8decbecbd5..fdf2d77e3125 100644 > --- a/mm/kasan/kasan_report.c > +++ b/mm/kasan/kasan_report.c > @@ -33,10 +33,10 @@ > #include "kasan.h" > #include "../slab.h" > > -static const void *find_first_bad_addr(const void *addr, size_t size) > +void *find_first_bad_addr(void *addr, size_t size) > { > u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr); > - const void *first_bad_addr = addr; > + void *first_bad_addr = addr; > > while (!shadow_val && first_bad_addr < addr + size) { > first_bad_addr += KASAN_SHADOW_SCALE_SIZE; > @@ -50,9 +50,6 @@ static const char *get_shadow_bug_type(struct kasan_access_info *info) > const char *bug_type = "unknown-crash"; > u8 *shadow_addr; > > - info->first_bad_addr = find_first_bad_addr(info->access_addr, > - info->access_size); > - > shadow_addr = (u8 *)kasan_mem_to_shadow(info->first_bad_addr); > > /* > diff --git a/mm/kasan/khwasan_report.c b/mm/kasan/khwasan_report.c > index 2edbc3c76be5..51238b404b08 100644 > --- a/mm/kasan/khwasan_report.c > +++ b/mm/kasan/khwasan_report.c > @@ -37,3 +37,24 @@ const char *get_bug_type(struct kasan_access_info *info) > { > return "invalid-access"; > } > + > +void *find_first_bad_addr(void *addr, size_t size) > +{ > + u8 tag = get_tag(addr); > + void *untagged_addr = reset_tag(addr); > + u8 *shadow = (u8 *)kasan_mem_to_shadow(untagged_addr); > + void *first_bad_addr = untagged_addr; > + > + while (*shadow == tag && first_bad_addr < untagged_addr + size) { I think it's better to check that are within bounds before accessing shadow. 
Otherwise it's kinda potential out-of-bounds access ;) I know that we _should_ not do an oob here, but still. Also feels that this function can be shortened to something like: u8 tag = get_tag(addr); void *p = reset_tag(addr); void *end = p + size; while (p < end && tag == *(u8 *)kasan_mem_to_shadow(p)) p += KASAN_SHADOW_SCALE_SIZE; return p; > + first_bad_addr += KASAN_SHADOW_SCALE_SIZE; > + shadow = (u8 *)kasan_mem_to_shadow(first_bad_addr); > + } > + return first_bad_addr; > +} > + > +void print_tags(u8 addr_tag, const void *addr) > +{ > + u8 *shadow = (u8 *)kasan_mem_to_shadow(addr); > + > + pr_err("Pointer tag: [%02x], memory tag: [%02x]\n", addr_tag, *shadow); > +} > diff --git a/mm/kasan/report.c b/mm/kasan/report.c > index 155247a6f8a8..e031c78f2e52 100644 > --- a/mm/kasan/report.c > +++ b/mm/kasan/report.c > @@ -64,11 +64,10 @@ static int __init kasan_set_multi_shot(char *str) > } > __setup("kasan_multi_shot", kasan_set_multi_shot); > > -static void print_error_description(struct kasan_access_info *info, > - const char *bug_type) > +static void print_error_description(struct kasan_access_info *info) > { > pr_err("BUG: KASAN: %s in %pS\n", > - bug_type, (void *)info->ip); > + get_bug_type(info), (void *)info->ip); > pr_err("%s of size %zu at addr %px by task %s/%d\n", > info->is_write ? 
"Write" : "Read", info->access_size, > info->access_addr, current->comm, task_pid_nr(current)); > @@ -272,6 +271,8 @@ void kasan_report_invalid_free(void *object, unsigned long ip) > > start_report(&flags); > pr_err("BUG: KASAN: double-free or invalid-free in %pS\n", (void *)ip); > + print_tags(get_tag(object), reset_tag(object)); > + object = reset_tag(object); > pr_err("\n"); > print_address_description(object); > pr_err("\n"); > @@ -279,41 +280,45 @@ void kasan_report_invalid_free(void *object, unsigned long ip) > end_report(&flags); > } > > -static void kasan_report_error(struct kasan_access_info *info) > -{ > - unsigned long flags; > - > - start_report(&flags); > - > - print_error_description(info, get_bug_type(info)); > - pr_err("\n"); > - > - if (!addr_has_shadow(info->access_addr)) { > - dump_stack(); > - } else { > - print_address_description((void *)info->access_addr); > - pr_err("\n"); > - print_shadow_for_address(info->first_bad_addr); > - } > - > - end_report(&flags); > -} > - > void kasan_report(unsigned long addr, size_t size, > bool is_write, unsigned long ip) > { > struct kasan_access_info info; > + void *tagged_addr; > + void *untagged_addr; > + unsigned long flags; > > if (likely(!report_enabled())) > return; > > disable_trace_on_warning(); > > - info.access_addr = (void *)addr; > - info.first_bad_addr = (void *)addr; > + tagged_addr = (void *)addr; > + untagged_addr = reset_tag(tagged_addr); > + > + info.access_addr = tagged_addr; > + if (addr_has_shadow(untagged_addr)) > + info.first_bad_addr = find_first_bad_addr(tagged_addr, size); > + else > + info.first_bad_addr = untagged_addr; > info.access_size = size; > info.is_write = is_write; > info.ip = ip; > > - kasan_report_error(&info); > + start_report(&flags); > + > + print_error_description(&info); > + if (addr_has_shadow(untagged_addr)) > + print_tags(get_tag(tagged_addr), info.first_bad_addr); > + pr_err("\n"); > + > + if (addr_has_shadow(untagged_addr)) { > + 
print_address_description(untagged_addr); > + pr_err("\n"); > + print_shadow_for_address(info.first_bad_addr); > + } else { > + dump_stack(); > + } > + > + end_report(&flags); > } > -- > 2.19.0.rc0.228.g281dcd1b4d0-goog > ^ permalink raw reply [flat|nested] 34+ messages in thread
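Dmitry's shortened find_first_bad_addr() loop can be exercised in isolation. The sketch below models the shadow with a plain array (one tag byte per 16-byte granule) and a toy mem_to_shadow() helper; the names and granule size follow the quoted patch, everything else is scaffolding for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define KASAN_SHADOW_SCALE_SHIFT 4
#define KASAN_SHADOW_SCALE_SIZE  (1UL << KASAN_SHADOW_SCALE_SHIFT)

/* Toy shadow for a 128-byte "memory": granule 2 carries a stale tag. */
static uint8_t shadow[8] = {
	0xab, 0xab, 0xcd, 0xab, 0xab, 0xab, 0xab, 0xab
};

static uint8_t *mem_to_shadow(uintptr_t addr)
{
	return &shadow[addr >> KASAN_SHADOW_SCALE_SHIFT];
}

/*
 * Walk granule by granule while the shadow tag still matches the
 * pointer tag; the first mismatch is the first bad address. Note the
 * bounds check happens before the shadow byte is read, addressing the
 * potential out-of-bounds read in the original version.
 */
static uintptr_t find_first_bad_addr(uint8_t tag, uintptr_t p, size_t size)
{
	uintptr_t end = p + size;

	while (p < end && tag == *mem_to_shadow(p))
		p += KASAN_SHADOW_SCALE_SIZE;
	return p;
}
```

If every granule in range matches (e.g. the report was triggered by something other than a tag mismatch), the loop runs off the end of the range and returns `end`, so callers should be prepared for that.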
* Re: [PATCH v6 13/18] khwasan: add bug reporting routines 2018-09-12 17:50 ` [PATCH v6 13/18] khwasan: add bug reporting routines Dmitry Vyukov @ 2018-09-18 17:36 ` Andrey Konovalov 0 siblings, 0 replies; 34+ messages in thread From: Andrey Konovalov @ 2018-09-18 17:36 UTC (permalink / raw) To: Dmitry Vyukov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Sep 12, 2018 at 7:50 PM, Dmitry Vyukov <dvyukov@google.com> wrote: > On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: >> +#ifdef CONFIG_KASAN_HW > > We already have #ifdef CONFIG_KASAN_HW section below with additional > functions for KASAN_HW and empty stubs otherwise. I would add this one > there as well. Will do in v7. > >> +void print_tags(u8 addr_tag, const void *addr); >> +#else >> +static inline void print_tags(u8 addr_tag, const void *addr) { } >> +#endif >> +void *find_first_bad_addr(void *addr, size_t size) >> +{ >> + u8 tag = get_tag(addr); >> + void *untagged_addr = reset_tag(addr); >> + u8 *shadow = (u8 *)kasan_mem_to_shadow(untagged_addr); >> + void *first_bad_addr = untagged_addr; >> + >> + while (*shadow == tag && first_bad_addr < untagged_addr + size) { > > I think it's better to check that are within bounds before accessing > shadow. Otherwise it's kinda potential out-of-bounds access ;) > I know that we _should_ not do an oob here, but still. > Also feels that this function can be shortened to something like: > > u8 tag = get_tag(addr); > void *p = reset_tag(addr); > void *end = p + size; > > while (p < end && tag == *(u8 *)kasan_mem_to_shadow(p)) > p += KASAN_SHADOW_SCALE_SIZE; > return p; Will do in v7. ^ permalink raw reply [flat|nested] 34+ messages in thread
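Both patches above lean on the get_tag()/set_tag()/reset_tag() helpers. A minimal userspace model of the top-byte tagging they implement, assuming the arm64 Top Byte Ignore convention (tag in bits 63:56, all-ones top byte for a native untagged kernel pointer) — the exact constants are illustrative, not copied from the series:

```c
#include <assert.h>
#include <stdint.h>

#define KHWASAN_TAG_SHIFT	56
#define KHWASAN_TAG_KERNEL	0xffUL	/* native kernel pointer top byte */

/* Extract the tag stored in the pointer's top byte. */
static inline uint8_t get_tag(uint64_t addr)
{
	return addr >> KHWASAN_TAG_SHIFT;
}

/* Replace the top byte with the given tag, keeping the low 56 bits. */
static inline uint64_t set_tag(uint64_t addr, uint8_t tag)
{
	return (addr & ~(0xffUL << KHWASAN_TAG_SHIFT)) |
	       ((uint64_t)tag << KHWASAN_TAG_SHIFT);
}

/* Restore the canonical untagged form, usable for real dereferences. */
static inline uint64_t reset_tag(uint64_t addr)
{
	return set_tag(addr, KHWASAN_TAG_KERNEL);
}
```

Because the tag lives in bits the hardware ignores on loads and stores, a tagged pointer dereferences fine, but any arithmetic or comparison against untagged addresses (shadow offset calculation, page_address() checks) must go through reset_tag() first — which is exactly what the review comments above are about.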
* Re: [PATCH v6 14/18] khwasan: add hooks implementation [not found] ` <4267d0903e0fdf9c261b91cf8a2bf0f71047a43c.1535462971.git.andreyknvl@google.com> @ 2018-09-12 18:30 ` Dmitry Vyukov 2018-09-19 11:54 ` Andrey Konovalov 0 siblings, 1 reply; 34+ messages in thread From: Dmitry Vyukov @ 2018-09-12 18:30 UTC (permalink / raw) To: Andrey Konovalov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: > This commit adds the KHWASAN-specific hooks implementation and adjusts > the common KASAN and KHWASAN ones. > > 1. When a new slab cache is created, KHWASAN rounds up the size of the > objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16). > > 2. On each kmalloc KHWASAN generates a random tag, sets the shadow memory > that corresponds to this object to this tag, and embeds this tag value > into the top byte of the returned pointer. > > 3. On each kfree KHWASAN poisons the shadow memory with a random tag to > allow detection of use-after-free bugs. > > The rest of the logic of the hook implementation is very much similar to > the one provided by KASAN. KHWASAN saves allocation and free stack metadata > to the slab object the same way KASAN does. 
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com> > --- > mm/kasan/common.c | 82 +++++++++++++++++++++++++++++++++++----------- > mm/kasan/kasan.h | 8 +++++ > mm/kasan/khwasan.c | 40 ++++++++++++++++++++++ > 3 files changed, 111 insertions(+), 19 deletions(-) > > diff --git a/mm/kasan/common.c b/mm/kasan/common.c > index bed8e13c6e1d..938229b26f3a 100644 > --- a/mm/kasan/common.c > +++ b/mm/kasan/common.c > @@ -140,6 +140,9 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value) > { > void *shadow_start, *shadow_end; > > + /* Perform shadow offset calculation based on untagged address */ > + address = reset_tag(address); > + > shadow_start = kasan_mem_to_shadow(address); > shadow_end = kasan_mem_to_shadow(address + size); > > @@ -148,11 +151,20 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value) > > void kasan_unpoison_shadow(const void *address, size_t size) > { > - kasan_poison_shadow(address, size, 0); > + u8 tag = get_tag(address); > + > + /* Perform shadow offset calculation based on untagged address */ The comment is not super-useful. It would be more useful to say why we need to do this. Most callers explicitly untag pointer passed to this function, for some it's unclear if the pointer contains tag or not. For example, __hwasan_tag_memory -- what does it accept? Tagged or untagged? > + address = reset_tag(address); > + > + kasan_poison_shadow(address, size, tag); > > if (size & KASAN_SHADOW_MASK) { > u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size); > - *shadow = size & KASAN_SHADOW_MASK; > + > + if (IS_ENABLED(CONFIG_KASAN_HW)) > + *shadow = tag; > + else > + *shadow = size & KASAN_SHADOW_MASK; > } > } It seems that this function is just different for kasan and khwasan. 
Currently for kasan we have: kasan_poison_shadow(address, size, tag); if (size & KASAN_SHADOW_MASK) { u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size); *shadow = size & KASAN_SHADOW_MASK; } But what we want to say for khwasan is: kasan_poison_shadow(address, round_up(size, KASAN_SHADOW_SCALE_SIZE), get_tag(address)); Not sure if we want to keep a common implementation or just have separate implementations... > > @@ -200,8 +212,9 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark) > > void kasan_alloc_pages(struct page *page, unsigned int order) > { > - if (likely(!PageHighMem(page))) > - kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order); > + if (unlikely(PageHighMem(page))) > + return; > + kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order); > } > > void kasan_free_pages(struct page *page, unsigned int order) > @@ -235,6 +248,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, > slab_flags_t *flags) > { > unsigned int orig_size = *size; > + unsigned int redzone_size = 0; This variable seems to be always initialized below. We don't general initialize local variables in this case. > int redzone_adjust; > > /* Add alloc meta. */ > @@ -242,20 +256,20 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, > *size += sizeof(struct kasan_alloc_meta); > > /* Add free meta. 
*/ > - if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor || > - cache->object_size < sizeof(struct kasan_free_meta)) { > + if (IS_ENABLED(CONFIG_KASAN_GENERIC) && > + (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor || > + cache->object_size < sizeof(struct kasan_free_meta))) { > cache->kasan_info.free_meta_offset = *size; > *size += sizeof(struct kasan_free_meta); > } > - redzone_adjust = optimal_redzone(cache->object_size) - > - (*size - cache->object_size); > > + redzone_size = optimal_redzone(cache->object_size); > + redzone_adjust = redzone_size - (*size - cache->object_size); > if (redzone_adjust > 0) > *size += redzone_adjust; > > *size = min_t(unsigned int, KMALLOC_MAX_SIZE, > - max(*size, cache->object_size + > - optimal_redzone(cache->object_size))); > + max(*size, cache->object_size + redzone_size)); > > /* > * If the metadata doesn't fit, don't enable KASAN at all. > @@ -268,6 +282,8 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, > return; > } > > + cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE); > + > *flags |= SLAB_KASAN; > } > > @@ -328,15 +344,30 @@ void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags) > return kasan_kmalloc(cache, object, cache->object_size, flags); > } > > +static inline bool shadow_invalid(u8 tag, s8 shadow_byte) > +{ > + if (IS_ENABLED(CONFIG_KASAN_GENERIC)) > + return shadow_byte < 0 || > + shadow_byte >= KASAN_SHADOW_SCALE_SIZE; > + else > + return tag != (u8)shadow_byte; > +} > + > static bool __kasan_slab_free(struct kmem_cache *cache, void *object, > unsigned long ip, bool quarantine) > { > s8 shadow_byte; > + u8 tag; > + void *tagged_object; > unsigned long rounded_up_size; > > + tag = get_tag(object); > + tagged_object = object; > + object = reset_tag(object); > + > if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) != > object)) { > - kasan_report_invalid_free(object, ip); > + kasan_report_invalid_free(tagged_object, ip); > return 
true; > } > > @@ -345,20 +376,22 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object, > return false; > > shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object)); > - if (shadow_byte < 0 || shadow_byte >= KASAN_SHADOW_SCALE_SIZE) { > - kasan_report_invalid_free(object, ip); > + if (shadow_invalid(tag, shadow_byte)) { > + kasan_report_invalid_free(tagged_object, ip); > return true; > } > > rounded_up_size = round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE); > kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE); > > - if (!quarantine || unlikely(!(cache->flags & SLAB_KASAN))) > + if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) || > + unlikely(!(cache->flags & SLAB_KASAN))) > return false; > > set_track(&get_alloc_info(cache, object)->free_track, GFP_NOWAIT); > quarantine_put(get_free_info(cache, object), cache); > - return true; > + > + return IS_ENABLED(CONFIG_KASAN_GENERIC); > } > > bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip) > @@ -371,6 +404,7 @@ void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size, > { > unsigned long redzone_start; > unsigned long redzone_end; > + u8 tag; > > if (gfpflags_allow_blocking(flags)) > quarantine_reduce(); > @@ -383,14 +417,24 @@ void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size, > redzone_end = round_up((unsigned long)object + cache->object_size, > KASAN_SHADOW_SCALE_SIZE); > > - kasan_unpoison_shadow(object, size); > + /* > + * Objects with contructors and objects from SLAB_TYPESAFE_BY_RCU slabs > + * have tags preassigned and are already tagged. 
> + */ > + if (IS_ENABLED(CONFIG_KASAN_HW) && > + (cache->ctor || cache->flags & SLAB_TYPESAFE_BY_RCU)) > + tag = get_tag(object); > + else > + tag = random_tag(); > + > + kasan_unpoison_shadow(set_tag(object, tag), size); > kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start, > KASAN_KMALLOC_REDZONE); > > if (cache->flags & SLAB_KASAN) > set_track(&get_alloc_info(cache, object)->alloc_track, flags); > > - return (void *)object; > + return set_tag(object, tag); > } > EXPORT_SYMBOL(kasan_kmalloc); > > @@ -440,7 +484,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip) > page = virt_to_head_page(ptr); > > if (unlikely(!PageSlab(page))) { > - if (ptr != page_address(page)) { > + if (reset_tag(ptr) != page_address(page)) { > kasan_report_invalid_free(ptr, ip); > return; > } > @@ -453,7 +497,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip) > > void kasan_kfree_large(void *ptr, unsigned long ip) > { > - if (ptr != page_address(virt_to_head_page(ptr))) > + if (reset_tag(ptr) != page_address(virt_to_head_page(ptr))) > kasan_report_invalid_free(ptr, ip); > /* The object will be poisoned by page_alloc. 
*/ > } > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h > index d60859d26be7..6f4f2ebf5f57 100644 > --- a/mm/kasan/kasan.h > +++ b/mm/kasan/kasan.h > @@ -12,10 +12,18 @@ > #define KHWASAN_TAG_INVALID 0xFE /* inaccessible memory tag */ > #define KHWASAN_TAG_MAX 0xFD /* maximum value for random tags */ > > +#ifdef CONFIG_KASAN_GENERIC > #define KASAN_FREE_PAGE 0xFF /* page was freed */ > #define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */ > #define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */ > #define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */ > +#else > +#define KASAN_FREE_PAGE KHWASAN_TAG_INVALID > +#define KASAN_PAGE_REDZONE KHWASAN_TAG_INVALID > +#define KASAN_KMALLOC_REDZONE KHWASAN_TAG_INVALID > +#define KASAN_KMALLOC_FREE KHWASAN_TAG_INVALID > +#endif > + > #define KASAN_GLOBAL_REDZONE 0xFA /* redzone for global variable */ > > /* > diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c > index 9d91bf3c8246..6b1309278e39 100644 > --- a/mm/kasan/khwasan.c > +++ b/mm/kasan/khwasan.c > @@ -106,15 +106,52 @@ void *khwasan_preset_slab_tag(struct kmem_cache *cache, unsigned int idx, > void check_memory_region(unsigned long addr, size_t size, bool write, > unsigned long ret_ip) > { > + u8 tag; > + u8 *shadow_first, *shadow_last, *shadow; > + void *untagged_addr; > + > + tag = get_tag((const void *)addr); > + > + /* Ignore accesses for pointers tagged with 0xff (native kernel /* on a separate line > + * pointer tag) to suppress false positives caused by kmap. > + * > + * Some kernel code was written to account for archs that don't keep > + * high memory mapped all the time, but rather map and unmap particular > + * pages when needed. Instead of storing a pointer to the kernel memory, > + * this code saves the address of the page structure and offset within > + * that page for later use. 
Those pages are then mapped and unmapped > + * with kmap/kunmap when necessary and virt_to_page is used to get the > + * virtual address of the page. For arm64 (that keeps the high memory > + * mapped all the time), kmap is turned into a page_address call. > + > + * The issue is that with use of the page_address + virt_to_page > + * sequence the top byte value of the original pointer gets lost (gets > + * set to KHWASAN_TAG_KERNEL (0xFF). Missed closing bracket. > + */ > + if (tag == KHWASAN_TAG_KERNEL) > + return; > + > + untagged_addr = reset_tag((const void *)addr); > + shadow_first = kasan_mem_to_shadow(untagged_addr); > + shadow_last = kasan_mem_to_shadow(untagged_addr + size - 1); > + > + for (shadow = shadow_first; shadow <= shadow_last; shadow++) { > + if (*shadow != tag) { > + kasan_report(addr, size, write, ret_ip); > + return; > + } > + } > } > > #define DEFINE_HWASAN_LOAD_STORE(size) \ > void __hwasan_load##size##_noabort(unsigned long addr) \ > { \ > + check_memory_region(addr, size, false, _RET_IP_); \ > } \ > EXPORT_SYMBOL(__hwasan_load##size##_noabort); \ > void __hwasan_store##size##_noabort(unsigned long addr) \ > { \ > + check_memory_region(addr, size, true, _RET_IP_); \ > } \ > EXPORT_SYMBOL(__hwasan_store##size##_noabort) > > @@ -126,15 +163,18 @@ DEFINE_HWASAN_LOAD_STORE(16); > > void __hwasan_loadN_noabort(unsigned long addr, unsigned long size) > { > + check_memory_region(addr, size, false, _RET_IP_); > } > EXPORT_SYMBOL(__hwasan_loadN_noabort); > > void __hwasan_storeN_noabort(unsigned long addr, unsigned long size) > { > + check_memory_region(addr, size, true, _RET_IP_); > } > EXPORT_SYMBOL(__hwasan_storeN_noabort); > > void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size) > { > + kasan_poison_shadow((void *)addr, size, tag); > } > EXPORT_SYMBOL(__hwasan_tag_memory); > -- > 2.19.0.rc0.228.g281dcd1b4d0-goog > ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 14/18] khwasan: add hooks implementation 2018-09-12 18:30 ` [PATCH v6 14/18] khwasan: add hooks implementation Dmitry Vyukov @ 2018-09-19 11:54 ` Andrey Konovalov 0 siblings, 0 replies; 34+ messages in thread From: Andrey Konovalov @ 2018-09-19 11:54 UTC (permalink / raw) To: Dmitry Vyukov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Sep 12, 2018 at 8:30 PM, Dmitry Vyukov <dvyukov@google.com> wrote: > On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: >> void kasan_unpoison_shadow(const void *address, size_t size) >> { >> - kasan_poison_shadow(address, size, 0); >> + u8 tag = get_tag(address); >> + >> + /* Perform shadow offset calculation based on untagged address */ > > The comment is not super-useful. It would be more useful to say why we > need to do this. > Most callers explicitly untag pointer passed to this function, for > some it's unclear if the pointer contains tag or not. > For example, __hwasan_tag_memory -- what does it accept? Tagged or untagged? There are some callers that pass tagged pointers to this functions, e.g. ksize or kasan_unpoison_object_data. I'll expand the comment. > > >> + address = reset_tag(address); >> + >> + kasan_poison_shadow(address, size, tag); >> >> if (size & KASAN_SHADOW_MASK) { >> u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size); >> - *shadow = size & KASAN_SHADOW_MASK; >> + >> + if (IS_ENABLED(CONFIG_KASAN_HW)) >> + *shadow = tag; >> + else >> + *shadow = size & KASAN_SHADOW_MASK; >> } >> } > > > It seems that this function is just different for kasan and khwasan. 
> Currently for kasan we have: > > kasan_poison_shadow(address, size, tag); > if (size & KASAN_SHADOW_MASK) { > u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size); > *shadow = size & KASAN_SHADOW_MASK; > } > > But what we want to say for khwasan is: > > kasan_poison_shadow(address, round_up(size, KASAN_SHADOW_SCALE_SIZE), > get_tag(address)); > > Not sure if we want to keep a common implementation or just have > separate implementations... As per offline discussion leaving as is. >> void kasan_free_pages(struct page *page, unsigned int order) >> @@ -235,6 +248,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, >> slab_flags_t *flags) >> { >> unsigned int orig_size = *size; >> + unsigned int redzone_size = 0; > > This variable seems to be always initialized below. We don't general > initialize local variables in this case. Will fix in v7. >> void check_memory_region(unsigned long addr, size_t size, bool write, >> unsigned long ret_ip) >> { >> + u8 tag; >> + u8 *shadow_first, *shadow_last, *shadow; >> + void *untagged_addr; >> + >> + tag = get_tag((const void *)addr); >> + >> + /* Ignore accesses for pointers tagged with 0xff (native kernel > > /* on a separate line Will fix in v7. > >> + * pointer tag) to suppress false positives caused by kmap. >> + * >> + * Some kernel code was written to account for archs that don't keep >> + * high memory mapped all the time, but rather map and unmap particular >> + * pages when needed. Instead of storing a pointer to the kernel memory, >> + * this code saves the address of the page structure and offset within >> + * that page for later use. Those pages are then mapped and unmapped >> + * with kmap/kunmap when necessary and virt_to_page is used to get the >> + * virtual address of the page. For arm64 (that keeps the high memory >> + * mapped all the time), kmap is turned into a page_address call. 
>> + >> + * The issue is that with use of the page_address + virt_to_page >> + * sequence the top byte value of the original pointer gets lost (gets >> + * set to KHWASAN_TAG_KERNEL (0xFF). > > Missed closing bracket. Will fix in v7. ^ permalink raw reply [flat|nested] 34+ messages in thread
[parent not found: <1a3b3030b6ee01931b397583b69f3af94e2a2308.1535462971.git.andreyknvl@google.com>]
* Re: [PATCH v6 17/18] khwasan: update kasan documentation [not found] ` <1a3b3030b6ee01931b397583b69f3af94e2a2308.1535462971.git.andreyknvl@google.com> @ 2018-09-12 18:39 ` Dmitry Vyukov 2018-09-18 18:42 ` Andrey Konovalov 0 siblings, 1 reply; 34+ messages in thread From: Dmitry Vyukov @ 2018-09-12 18:39 UTC (permalink / raw) To: Andrey Konovalov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: > This patch updates KASAN documentation to reflect the addition of KHWASAN. > > Signed-off-by: Andrey Konovalov <andreyknvl@google.com> > --- > Documentation/dev-tools/kasan.rst | 213 +++++++++++++++++------------- > 1 file changed, 123 insertions(+), 90 deletions(-) > > diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst > index aabc8738b3d8..842d95af74d3 100644 > --- a/Documentation/dev-tools/kasan.rst > +++ b/Documentation/dev-tools/kasan.rst > @@ -8,11 +8,19 @@ KernelAddressSANitizer (KASAN) is a dynamic memory error detector. It provides > a fast and comprehensive solution for finding use-after-free and out-of-bounds > bugs. > > -KASAN uses compile-time instrumentation for checking every memory access, > -therefore you will need a GCC version 4.9.2 or later. GCC 5.0 or later is > -required for detection of out-of-bounds accesses to stack or global variables. > +KASAN has two modes: classic KASAN (a classic version, similar to user space > +ASan) and KHWASAN (a version based on memory tagging, similar to user space > +HWASan). > > -Currently KASAN is supported only for the x86_64 and arm64 architectures. 
> +KASAN uses compile-time instrumentation to insert validity checks before every > +memory access, and therefore requires a compiler version that supports that. > +For classic KASAN you need GCC version 4.9.2 or later. GCC 5.0 or later is > +required for detection of out-of-bounds accesses on stack and global variables. > +KHWASAN in turns is only supported in clang and requires revision 330044 or in turn? > +later. > + > +Currently classic KASAN is supported for the x86_64, arm64 and xtensa > +architectures, and KHWASAN is supported only for arm64. > > Usage > ----- > @@ -21,12 +29,14 @@ To enable KASAN configure kernel with:: > > CONFIG_KASAN = y > > -and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and > -inline are compiler instrumentation types. The former produces smaller binary > -the latter is 1.1 - 2 times faster. Inline instrumentation requires a GCC > +and choose between CONFIG_KASAN_GENERIC (to enable classic KASAN) and > +CONFIG_KASAN_HW (to enabled KHWASAN). You also need to choose choose between to enable > +CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and inline are compiler > +instrumentation types. The former produces smaller binary while the latter is > +1.1 - 2 times faster. For classic KASAN inline instrumentation requires GCC > version 5.0 or later. > > -KASAN works with both SLUB and SLAB memory allocators. > +Both KASAN modes work with both SLUB and SLAB memory allocators. > For better bug detection and nicer reporting, enable CONFIG_STACKTRACE. 
> > To disable instrumentation for specific files or directories, add a line > @@ -43,85 +53,80 @@ similar to the following to the respective kernel Makefile: > Error reports > ~~~~~~~~~~~~~ > > -A typical out of bounds access report looks like this:: > +A typical out-of-bounds access classic KASAN report looks like this:: > > ================================================================== > - BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3 > - Write of size 1 by task modprobe/1689 > - ============================================================================= > - BUG kmalloc-128 (Not tainted): kasan error > - ----------------------------------------------------------------------------- > - > - Disabling lock debugging due to kernel taint > - INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689 > - __slab_alloc+0x4b4/0x4f0 > - kmem_cache_alloc_trace+0x10b/0x190 > - kmalloc_oob_right+0x3d/0x75 [test_kasan] > - init_module+0x9/0x47 [test_kasan] > - do_one_initcall+0x99/0x200 > - load_module+0x2cb3/0x3b20 > - SyS_finit_module+0x76/0x80 > - system_call_fastpath+0x12/0x17 > - INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080 > - INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720 > - > - Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ > - Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk > - Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk > - Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk > - Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk > - Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk > - Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 
kkkkkkkkkkkkkkkk > - Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk > - Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5 kkkkkkkkkkkkkkk. > - Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc ........ > - Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZ > - CPU: 0 PID: 1689 Comm: modprobe Tainted: G B 3.18.0-rc1-mm1+ #98 > - Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014 > - ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78 > - ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8 > - ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558 > + BUG: KASAN: slab-out-of-bounds in kmalloc_oob_right+0xa8/0xbc [test_kasan] > + Write of size 1 at addr ffff8800696f3d3b by task insmod/2734 > + > + CPU: 0 PID: 2734 Comm: insmod Not tainted 4.15.0+ #98 > + Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014 > Call Trace: > - [<ffffffff81cc68ae>] dump_stack+0x46/0x58 > - [<ffffffff811fd848>] print_trailer+0xf8/0x160 > - [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan] > - [<ffffffff811ff0f5>] object_err+0x35/0x40 > - [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan] > - [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0 > - [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40 > - [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40 > - [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40 > - [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan] > - [<ffffffff8120a995>] __asan_store1+0x75/0xb0 > - [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan] > - [<ffffffffa0002065>] ? 
kmalloc_oob_right+0x65/0x75 [test_kasan] > - [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan] > - [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan] > - [<ffffffff810002d9>] do_one_initcall+0x99/0x200 > - [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160 > - [<ffffffff81114f63>] load_module+0x2cb3/0x3b20 > - [<ffffffff8110fd70>] ? m_show+0x240/0x240 > - [<ffffffff81115f06>] SyS_finit_module+0x76/0x80 > - [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17 > + __dump_stack lib/dump_stack.c:17 > + dump_stack+0x83/0xbc lib/dump_stack.c:53 > + print_address_description+0x73/0x280 mm/kasan/report.c:254 KASAN does not print line numbers per se. I think we need to show unmodified output to not confuse readers (probably remove the useless ? lines). > + kasan_report_error mm/kasan/report.c:352 > + kasan_report+0x10e/0x220 mm/kasan/report.c:410 > + __asan_report_store1_noabort+0x17/0x20 mm/kasan/report.c:505 > + kmalloc_oob_right+0xa8/0xbc [test_kasan] lib/test_kasan.c:42 > + kmalloc_tests_init+0x16/0x769 [test_kasan] > + do_one_initcall+0x9e/0x240 init/main.c:832 > + do_init_module+0x1b6/0x542 kernel/module.c:3462 > + load_module+0x6042/0x9030 kernel/module.c:3786 > + SYSC_init_module+0x18f/0x1c0 kernel/module.c:3858 > + SyS_init_module+0x9/0x10 kernel/module.c:3841 > + do_syscall_64+0x198/0x480 arch/x86/entry/common.c:287 > + entry_SYSCALL_64_after_hwframe+0x21/0x86 arch/x86/entry/entry_64.S:251 > + RIP: 0033:0x7fdd79df99da > + RSP: 002b:00007fff2229bdf8 EFLAGS: 00000202 ORIG_RAX: 00000000000000af > + RAX: ffffffffffffffda RBX: 000055c408121190 RCX: 00007fdd79df99da > + RDX: 00007fdd7a0b8f88 RSI: 0000000000055670 RDI: 00007fdd7a47e000 > + RBP: 000055c4081200b0 R08: 0000000000000003 R09: 0000000000000000 > + R10: 00007fdd79df5d0a R11: 0000000000000202 R12: 00007fdd7a0b8f88 > + R13: 000055c408120090 R14: 0000000000000000 R15: 0000000000000000 > + > + Allocated by task 2734: > + save_stack+0x43/0xd0 mm/kasan/common.c:176 > + set_track+0x20/0x30 mm/kasan/common.c:188 
> + kasan_kmalloc+0x9a/0xc0 mm/kasan/kasan.c:372 > + kmem_cache_alloc_trace+0xcd/0x1a0 mm/slub.c:2761 > + kmalloc ./include/linux/slab.h:512 > + kmalloc_oob_right+0x56/0xbc [test_kasan] lib/test_kasan.c:36 > + kmalloc_tests_init+0x16/0x769 [test_kasan] > + do_one_initcall+0x9e/0x240 init/main.c:832 > + do_init_module+0x1b6/0x542 kernel/module.c:3462 > + load_module+0x6042/0x9030 kernel/module.c:3786 > + SYSC_init_module+0x18f/0x1c0 kernel/module.c:3858 > + SyS_init_module+0x9/0x10 kernel/module.c:3841 > + do_syscall_64+0x198/0x480 arch/x86/entry/common.c:287 > + entry_SYSCALL_64_after_hwframe+0x21/0x86 arch/x86/entry/entry_64.S:251 > + > + The buggy address belongs to the object at ffff8800696f3cc0 > + which belongs to the cache kmalloc-128 of size 128 > + The buggy address is located 123 bytes inside of > + 128-byte region [ffff8800696f3cc0, ffff8800696f3d40) > + The buggy address belongs to the page: > + page:ffffea0001a5bcc0 count:1 mapcount:0 mapping: (null) index:0x0 > + flags: 0x100000000000100(slab) > + raw: 0100000000000100 0000000000000000 0000000000000000 0000000180150015 > + raw: ffffea0001a8ce40 0000000300000003 ffff88006d001640 0000000000000000 > + page dumped because: kasan: bad access detected > + > Memory state around the buggy address: > - ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc > - ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc > - ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc > - ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc > - ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 > - >ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc > - ^ > - ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc > - ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc > - ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb > - ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb > - 
ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb > + ffff8800696f3c00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc > + ffff8800696f3c80: fc fc fc fc fc fc fc fc 00 00 00 00 00 00 00 00 > + >ffff8800696f3d00: 00 00 00 00 00 00 00 03 fc fc fc fc fc fc fc fc > + ^ > + ffff8800696f3d80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc fc > + ffff8800696f3e00: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb > ================================================================== > > -The header of the report discribe what kind of bug happened and what kind of > -access caused it. It's followed by the description of the accessed slub object > -(see 'SLUB Debug output' section in Documentation/vm/slub.rst for details) and > -the description of the accessed memory page. > +The header of the report provides a short summary of what kind of bug happened > +and what kind of access caused it. It's followed by a stack trace of the bad > +access, a stack trace of where the accessed memory was allocated (in case bad > +access happens on a slab object), and a stack trace of where the object was > +freed (in case of a use-after-free bug report). Next comes a description of > +the accessed slab object and information about the accessed memory page. > > In the last section the report shows memory state around the accessed address. > Reading this part requires some understanding of how KASAN works. > @@ -138,18 +143,24 @@ inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h). > In the report above the arrows point to the shadow byte 03, which means that > the accessed address is partially accessible. > > +For KHWASAN this last report section shows the memory tags around the accessed > +address (see Implementation details section). 
> Implementation details > ---------------------- > > +Classic KASAN > +~~~~~~~~~~~~~ > + > From a high level, our approach to memory error detection is similar to that > of kmemcheck: use shadow memory to record whether each byte of memory is safe > -to access, and use compile-time instrumentation to check shadow memory on each > -memory access. > +to access, and use compile-time instrumentation to insert checks of shadow > +memory on each memory access. > > -AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory > -(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and > -offset to translate a memory address to its corresponding shadow address. > +Classic KASAN dedicates 1/8th of kernel memory to its shadow memory (e.g. 16TB > +to cover 128TB on x86_64) and uses direct mapping with a scale and offset to > +translate a memory address to its corresponding shadow address. > > Here is the function which translates an address to its corresponding shadow > address:: > @@ -162,12 +173,34 @@ address:: > > where ``KASAN_SHADOW_SCALE_SHIFT = 3``. > > -Compile-time instrumentation used for checking memory accesses. Compiler inserts > -function calls (__asan_load*(addr), __asan_store*(addr)) before each memory > -access of size 1, 2, 4, 8 or 16. These functions check whether memory access is > -valid or not by checking corresponding shadow memory. > +Compile-time instrumentation is used to insert memory access checks. Compiler > +inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each > +memory access of size 1, 2, 4, 8 or 16. These functions check whether memory > +access is valid or not by checking corresponding shadow memory. > > GCC 5.0 has possibility to perform inline instrumentation. Instead of making > function calls GCC directly inserts the code to check the shadow memory. > This option significantly enlarges kernel but it gives x1.1-x2 performance > boost over outline instrumented kernel. 
> + > +KHWASAN > +~~~~~~~ > + > +KHWASAN uses the Top Byte Ignore (TBI) feature of modern arm64 CPUs to store > +a pointer tag in the top byte of kernel pointers. KHWASAN also uses shadow > +memory to store memory tags associated with each 16-byte memory cell (therefore > +it dedicates 1/16th of the kernel memory for shadow memory). > + > +On each memory allocation KHWASAN generates a random tag, tags allocated memory > +with this tag, and embeds this tag into the returned pointer. KHWASAN uses > +compile-time instrumentation to insert checks before each memory access. These > +checks make sure that tag of the memory that is being accessed is equal to tag > +of the pointer that is used to access this memory. In case of a tag mismatch > +KHWASAN prints a bug report. > + > +KHWASAN also has two instrumentation modes (outline, that emits callbacks to > +check memory accesses; and inline, that performs the shadow memory checks > +inline). With outline instrumentation mode, a bug report is simply printed > +from the function that performs the access check. With inline instrumentation > +a brk instruction is emitted by the compiler, and a dedicated brk handler is > +used to print KHWASAN reports. > -- > 2.19.0.rc0.228.g281dcd1b4d0-goog > ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 17/18] khwasan: update kasan documentation 2018-09-12 18:39 ` [PATCH v6 17/18] khwasan: update kasan documentation Dmitry Vyukov @ 2018-09-18 18:42 ` Andrey Konovalov 0 siblings, 0 replies; 34+ messages in thread From: Andrey Konovalov @ 2018-09-18 18:42 UTC (permalink / raw) To: Dmitry Vyukov Cc: Andrey Ryabinin, Alexander Potapenko, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart <kst> On Wed, Sep 12, 2018 at 8:39 PM, Dmitry Vyukov <dvyukov@google.com> wrote: > On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote: >> This patch updates KASAN documentation to reflect the addition of KHWASAN. >> -Currently KASAN is supported only for the x86_64 and arm64 architectures. >> +KASAN uses compile-time instrumentation to insert validity checks before every >> +memory access, and therefore requires a compiler version that supports that. >> +For classic KASAN you need GCC version 4.9.2 or later. GCC 5.0 or later is >> +required for detection of out-of-bounds accesses on stack and global variables. >> +KHWASAN in turns is only supported in clang and requires revision 330044 or > > in turn? > >> -and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and >> -inline are compiler instrumentation types. The former produces smaller binary >> -the latter is 1.1 - 2 times faster. Inline instrumentation requires a GCC >> +and choose between CONFIG_KASAN_GENERIC (to enable classic KASAN) and >> +CONFIG_KASAN_HW (to enabled KHWASAN). You also need to choose choose between > > to enable > >> + print_address_description+0x73/0x280 mm/kasan/report.c:254 > > > KASAN does not print line numbers per se. > I think we need to show unmodified output to not confuse readers > (probably remove the useless ? 
lines). Will fix all in v7. ^ permalink raw reply [flat|nested] 34+ messages in thread
end of thread, other threads:[~2018-09-19 18:53 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <cover.1535462971.git.andreyknvl@google.com>
2018-09-05 21:10 ` [PATCH v6 00/18] khwasan: kernel hardware assisted address sanitizer Andrew Morton
2018-09-05 21:55 ` Nick Desaulniers
2018-09-06 10:05 ` Will Deacon
2018-09-06 11:06 ` Andrey Konovalov
2018-09-06 16:39 ` Nick Desaulniers
2018-09-14 15:28 ` Will Deacon
2018-09-19 18:53 ` Andrey Konovalov
[not found] ` <db103bdc2109396af0c6007f1669ebbbb63b872b.1535462971.git.andreyknvl@google.com>
[not found] ` <3f2dee71-1615-4a34-d611-3ccaf407551e@virtuozzo.com>
2018-09-11 16:10 ` [PATCH v6 16/18] khwasan, mm, arm64: tag non slab memory allocated via pagealloc Andrey Konovalov
[not found] ` <868c9168481ff5103034ac1e37b830d28ed5f4ee.1535462971.git.andreyknvl@google.com>
2018-09-12 14:47 ` [PATCH v6 03/18] khwasan: add CONFIG_KASAN_GENERIC and CONFIG_KASAN_HW Dmitry Vyukov
2018-09-12 14:51 ` Dmitry Vyukov
2018-09-17 18:42 ` Andrey Konovalov
[not found] ` <b4ba65afa55f2fdfd2856fb03c5aba99c7a8bdd7.1535462971.git.andreyknvl@google.com>
2018-09-12 14:54 ` [PATCH v6 04/18] khwasan, arm64: adjust shadow size for CONFIG_KASAN_HW Dmitry Vyukov
2018-09-19 17:27 ` Andrey Konovalov
[not found] ` <6cd298a90d02068969713f2fd440eae21227467b.1535462971.git.andreyknvl@google.com>
2018-09-12 16:21 ` [PATCH v6 07/18] khwasan: add tag related helper functions Dmitry Vyukov
2018-09-17 18:59 ` Andrey Konovalov
2018-09-18 15:45 ` Dmitry Vyukov
[not found] ` <19d757c2cafc277f0143a8ac34e179061f3487f5.1535462971.git.andreyknvl@google.com>
2018-09-12 16:33 ` [PATCH v6 06/18] khwasan, arm64: untag virt address in __kimg_to_phys and _virt_addr_is_linear Dmitry Vyukov
2018-09-18 17:09 ` Andrey Konovalov
[not found] ` <95b5beb7ec13b7e998efe84c9a7a5c1fa49a9fe3.1535462971.git.andreyknvl@google.com>
2018-09-12 16:36 ` [PATCH v6 08/18] khwasan: preassign tags to objects with ctors or SLAB_TYPESAFE_BY_RCU Dmitry Vyukov
2018-09-18 16:50 ` Andrey Konovalov
[not found] ` <f5e73b5ead3355932ad8b5fc96b141c3f5b8c16c.1535462971.git.andreyknvl@google.com>
2018-09-12 17:13 ` [PATCH v6 15/18] khwasan, arm64: add brk handler for inline instrumentation Dmitry Vyukov
2018-09-17 19:12 ` Andrey Konovalov
2018-09-12 17:15 ` Dmitry Vyukov
2018-09-12 17:39 ` Jann Horn
2018-09-13 8:37 ` Dmitry Vyukov
2018-09-13 18:09 ` Nick Desaulniers
2018-09-13 18:23 ` Jann Horn
2018-09-14 5:11 ` Dmitry Vyukov
[not found] ` <f39cdef4fde40b6d2ef356db3e0126bda0e1e8c7.1535462971.git.andreyknvl@google.com>
2018-09-12 17:50 ` [PATCH v6 13/18] khwasan: add bug reporting routines Dmitry Vyukov
2018-09-18 17:36 ` Andrey Konovalov
[not found] ` <4267d0903e0fdf9c261b91cf8a2bf0f71047a43c.1535462971.git.andreyknvl@google.com>
2018-09-12 18:30 ` [PATCH v6 14/18] khwasan: add hooks implementation Dmitry Vyukov
2018-09-19 11:54 ` Andrey Konovalov
[not found] ` <1a3b3030b6ee01931b397583b69f3af94e2a2308.1535462971.git.andreyknvl@google.com>
2018-09-12 18:39 ` [PATCH v6 17/18] khwasan: update kasan documentation Dmitry Vyukov
2018-09-18 18:42 ` Andrey Konovalov