* Re: [RFC PATCH v2 11/15] khwasan, mm: perform untagged pointers comparison in krealloc
[not found] ` <6eb08c160ae23eb890bd937ddf8346ba211df09f.1521828274.git.andreyknvl@google.com>
@ 2018-03-24 8:29 ` Ingo Molnar
2018-03-27 12:20 ` Andrey Konovalov
0 siblings, 1 reply; 18+ messages in thread
From: Ingo Molnar @ 2018-03-24 8:29 UTC (permalink / raw)
To: Andrey Konovalov
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser, kvmarm,
Kees Cook, Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand,
kasan-dev, linux-sparse, Geert Uytterhoeven, linux-arm-kernel,
David Rientjes, Andrey Ryabinin, Ramana
* Andrey Konovalov <andreyknvl@google.com> wrote:
> The krealloc function checks whether the same buffer was reused or a new one
> was allocated by comparing kernel pointers. KHWASAN changes the memory tag on the
> krealloc'ed chunk of memory and therefore also changes the pointer tag of
> the returned pointer. We therefore need to perform the comparison on untagged
> (with tags reset) pointers to check whether it's the same memory region or
> not.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
> mm/slab_common.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index a33e61315ca6..5911f2194cf7 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1494,7 +1494,7 @@ void *krealloc(const void *p, size_t new_size, gfp_t flags)
> }
>
> ret = __do_krealloc(p, new_size, flags);
> - if (ret && p != ret)
> + if (ret && khwasan_reset_tag(p) != khwasan_reset_tag(ret))
> kfree(p);
Small nit:
If 'reset' here means an all zeroes tag (upper byte) then khwasan_clear_tag()
might be a slightly easier to read primitive?
Thanks,
Ingo
* Re: [RFC PATCH v2 03/15] khwasan: add CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS
[not found] ` <1fb0a050a84d49f5c3b2210337339412475d1688.1521828273.git.andreyknvl@google.com>
@ 2018-03-24 8:43 ` Ingo Molnar
2018-03-27 16:23 ` Andrey Konovalov
0 siblings, 1 reply; 18+ messages in thread
From: Ingo Molnar @ 2018-03-24 8:43 UTC (permalink / raw)
To: Andrey Konovalov
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser, kvmarm,
Kees Cook, Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand,
kasan-dev, linux-sparse, Geert Uytterhoeven, linux-arm-kernel,
David Rientjes, Andrey Ryabinin, Ramana
* Andrey Konovalov <andreyknvl@google.com> wrote:
> This commit splits the current CONFIG_KASAN config option into two:
> 1. CONFIG_KASAN_CLASSIC, that enables the classic KASAN version (the one
> that exists now);
> 2. CONFIG_KASAN_TAGS, that enables KHWASAN.
Sorry, but this is a pretty obscure naming scheme that doesn't explain the primary
difference between these KASAN models to users: that the first one is a pure
software implementation and the other is hardware-assisted.
Reminds me of the transparency of galactic bureaucracy in "The Hitchhiker's Guide
to the Galaxy":
“But look, you found the notice, didn’t you?”
“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing
cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the
Leopard.”
I'd suggest something more expressive, such as:
CONFIG_KASAN
CONFIG_KASAN_GENERIC
CONFIG_KASAN_HW_ASSIST
or so?
The 'generic' variant will basically run on any CPU. The 'hardware assisted' one
needs support from the CPU.
The following ones might also work:
CONFIG_KASAN_HWASSIST
CONFIG_KASAN_HW_TAGS
CONFIG_KASAN_HWTAGS
... or simply CONFIG_KASAN_SW/CONFIG_KASAN_HW.
If other types of KASAN hardware acceleration are implemented in the future then
the CONFIG_KASAN_HW namespace can be extended:
CONFIG_KASAN_HW_TAGS
CONFIG_KASAN_HW_KEYS
etc.
> Both CONFIG_KASAN_CLASSIC and CONFIG_KASAN_CLASSIC support both
> CONFIG_KASAN_INLINE and CONFIG_KASAN_OUTLINE instrumentation modes.
It would be very surprising if that wasn't so!
Or did you mean 'Both CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS'! ;-)
Thanks,
Ingo
* Re: [RFC PATCH v2 11/15] khwasan, mm: perform untagged pointers comparison in krealloc
2018-03-24 8:29 ` [RFC PATCH v2 11/15] khwasan, mm: perform untagged pointers comparison in krealloc Ingo Molnar
@ 2018-03-27 12:20 ` Andrey Konovalov
2018-03-27 20:01 ` Ingo Molnar
0 siblings, 1 reply; 18+ messages in thread
From: Andrey Konovalov @ 2018-03-27 12:20 UTC (permalink / raw)
To: Ingo Molnar
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser, kvmarm,
Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand, kasan-dev,
linux-sparse, Geert Uytterhoeven, Linux ARM, David Rientjes,
Andrey Ryabinin, Ramana Radhakrishnan <Ramana.
On Sat, Mar 24, 2018 at 9:29 AM, Ingo Molnar <mingo@kernel.org> wrote:
>
> * Andrey Konovalov <andreyknvl@google.com> wrote:
>
>> The krealloc function checks whether the same buffer was reused or a new one
>> was allocated by comparing kernel pointers. KHWASAN changes the memory tag on the
>> krealloc'ed chunk of memory and therefore also changes the pointer tag of
>> the returned pointer. We therefore need to perform the comparison on untagged
>> (with tags reset) pointers to check whether it's the same memory region or
>> not.
>>
>> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
>> ---
>> mm/slab_common.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/slab_common.c b/mm/slab_common.c
>> index a33e61315ca6..5911f2194cf7 100644
>> --- a/mm/slab_common.c
>> +++ b/mm/slab_common.c
>> @@ -1494,7 +1494,7 @@ void *krealloc(const void *p, size_t new_size, gfp_t flags)
>> }
>>
>> ret = __do_krealloc(p, new_size, flags);
>> - if (ret && p != ret)
>> + if (ret && khwasan_reset_tag(p) != khwasan_reset_tag(ret))
>> kfree(p);
>
> Small nit:
>
> If 'reset' here means an all zeroes tag (upper byte) then khwasan_clear_tag()
> might be a slightly easier to read primitive?
'Reset' means to set the upper byte to the value that is native for
kernel pointers, and that is 0xFF. So it sets the tag to all ones, not
all zeroes. I can still rename it to khwasan_clear_tag(), if you think
that makes sense in this case as well.
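For reference, a minimal sketch of what these helpers boil down to; the exact
definitions below are an assumption, only the KHWASAN_TAG_SHIFT/KHWASAN_TAG_KERNEL
names and the 0xFF native tag come from the series:

#define KHWASAN_TAG_SHIFT	56	/* the tag lives in the top byte */
#define KHWASAN_TAG_KERNEL	0xFF	/* native top byte of kernel pointers */

static inline void *set_tag(const void *addr, u8 tag)
{
	u64 a = (u64)addr & ~(0xFFUL << KHWASAN_TAG_SHIFT);

	return (void *)(a | ((u64)tag << KHWASAN_TAG_SHIFT));
}

/* "Reset" == put the all-ones native tag back, not an all-zeroes one. */
static inline void *khwasan_reset_tag(const void *addr)
{
	return set_tag(addr, KHWASAN_TAG_KERNEL);
}

With that, the krealloc check above compares the two pointers with their top
bytes forced back to 0xFF.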
* Re: [RFC PATCH v2 03/15] khwasan: add CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS
2018-03-24 8:43 ` [RFC PATCH v2 03/15] khwasan: add CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS Ingo Molnar
@ 2018-03-27 16:23 ` Andrey Konovalov
2018-03-27 20:02 ` Ingo Molnar
0 siblings, 1 reply; 18+ messages in thread
From: Andrey Konovalov @ 2018-03-27 16:23 UTC (permalink / raw)
To: Ingo Molnar
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser, kvmarm,
Kees Cook, Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand,
kasan-dev, linux-sparse, Geert Uytterhoeven, Linux ARM,
David Rientjes, Andrey Ryabinin
On Sat, Mar 24, 2018 at 9:43 AM, Ingo Molnar <mingo@kernel.org> wrote:
>
> * Andrey Konovalov <andreyknvl@google.com> wrote:
>
>> This commit splits the current CONFIG_KASAN config option into two:
>> 1. CONFIG_KASAN_CLASSIC, that enables the classic KASAN version (the one
>> that exists now);
>> 2. CONFIG_KASAN_TAGS, that enables KHWASAN.
>
> Sorry, but this is a pretty obscure naming scheme that doesn't explain the primary
> difference between these KASAN models to users: that the first one is a pure
> software implementation and the other is hardware-assisted.
>
> Reminds me of the transparency of galactic bureaucracy in "The Hitchhiker's Guide
> to the Galaxy":
>
> “But look, you found the notice, didn’t you?”
> “Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing
> cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the
> Leopard.”
>
> I'd suggest something more expressive, such as:
>
> CONFIG_KASAN
> CONFIG_KASAN_GENERIC
> CONFIG_KASAN_HW_ASSIST
>
> or so?
>
> The 'generic' variant will basically run on any CPU. The 'hardware assisted' one
> needs support from the CPU.
>
> The following ones might also work:
>
> CONFIG_KASAN_HWASSIST
> CONFIG_KASAN_HW_TAGS
> CONFIG_KASAN_HWTAGS
>
> ... or simply CONFIG_KASAN_SW/CONFIG_KASAN_HW.
>
> If other types of KASAN hardware acceleration are implemented in the future then
> the CONFIG_KASAN_HW namespace can be extended:
>
> CONFIG_KASAN_HW_TAGS
> CONFIG_KASAN_HW_KEYS
> etc.
How about these two:
CONFIG_KASAN_GENERIC
CONFIG_KASAN_HW
?
A shorter config name looks better to me, and I think it makes sense to
name the new config just HW, as there's only one HW implementation
right now. When (and if) there are more, we can expand the config name
as you suggested (CONFIG_KASAN_HW_TAGS, CONFIG_KASAN_HW_KEYS, etc).
>
>> Both CONFIG_KASAN_CLASSIC and CONFIG_KASAN_CLASSIC support both
>> CONFIG_KASAN_INLINE and CONFIG_KASAN_OUTLINE instrumentation modes.
>
> It would be very surprising if that wasn't so!
>
> Or did you mean 'Both CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS'! ;-)
>
> Thanks,
>
> Ingo
* Re: [RFC PATCH v2 11/15] khwasan, mm: perform untagged pointers comparison in krealloc
2018-03-27 12:20 ` Andrey Konovalov
@ 2018-03-27 20:01 ` Ingo Molnar
0 siblings, 0 replies; 18+ messages in thread
From: Ingo Molnar @ 2018-03-27 20:01 UTC (permalink / raw)
To: Andrey Konovalov
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser, kvmarm,
Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand, kasan-dev,
linux-sparse, Geert Uytterhoeven, Linux ARM, David Rientjes,
Andrey Ryabinin, Ramana Radhakrishnan <Ramana.
* Andrey Konovalov <andreyknvl@google.com> wrote:
> On Sat, Mar 24, 2018 at 9:29 AM, Ingo Molnar <mingo@kernel.org> wrote:
> >
> > * Andrey Konovalov <andreyknvl@google.com> wrote:
> >
> >> The krealloc function checks whether the same buffer was reused or a new one
> >> was allocated by comparing kernel pointers. KHWASAN changes the memory tag on the
> >> krealloc'ed chunk of memory and therefore also changes the pointer tag of
> >> the returned pointer. We therefore need to perform the comparison on untagged
> >> (with tags reset) pointers to check whether it's the same memory region or
> >> not.
> >>
> >> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> >> ---
> >> mm/slab_common.c | 2 +-
> >> 1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/mm/slab_common.c b/mm/slab_common.c
> >> index a33e61315ca6..5911f2194cf7 100644
> >> --- a/mm/slab_common.c
> >> +++ b/mm/slab_common.c
> >> @@ -1494,7 +1494,7 @@ void *krealloc(const void *p, size_t new_size, gfp_t flags)
> >> }
> >>
> >> ret = __do_krealloc(p, new_size, flags);
> >> - if (ret && p != ret)
> >> + if (ret && khwasan_reset_tag(p) != khwasan_reset_tag(ret))
> >> kfree(p);
> >
> > Small nit:
> >
> > If 'reset' here means an all zeroes tag (upper byte) then khwasan_clear_tag()
> > might be a slightly easier to read primitive?
>
> 'Reset' means to set the upper byte to the value that is native for
> kernel pointers, and that is 0xFF. So it sets the tag to all ones, not
> all zeroes. I can still rename it to khwasan_clear_tag(), if you think
> that makes sense in this case as well.
Ok, if it's not 0 then I agree that 'reset' is the better name. 'clear' would in
fact be actively confusing.
Thanks,
Ingo
* Re: [RFC PATCH v2 03/15] khwasan: add CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS
2018-03-27 16:23 ` Andrey Konovalov
@ 2018-03-27 20:02 ` Ingo Molnar
0 siblings, 0 replies; 18+ messages in thread
From: Ingo Molnar @ 2018-03-27 20:02 UTC (permalink / raw)
To: Andrey Konovalov
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser, kvmarm,
Kees Cook, Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand,
kasan-dev, linux-sparse, Geert Uytterhoeven, Linux ARM,
David Rientjes, Andrey Ryabinin
* Andrey Konovalov <andreyknvl@google.com> wrote:
> On Sat, Mar 24, 2018 at 9:43 AM, Ingo Molnar <mingo@kernel.org> wrote:
> >
> > * Andrey Konovalov <andreyknvl@google.com> wrote:
> >
> >> This commit splits the current CONFIG_KASAN config option into two:
> >> 1. CONFIG_KASAN_CLASSIC, that enables the classic KASAN version (the one
> >> that exists now);
> >> 2. CONFIG_KASAN_TAGS, that enables KHWASAN.
> >
> > Sorry, but this is a pretty obscure naming scheme that doesn't explain the primary
> > difference between these KASAN models to users: that the first one is a pure
> > software implementation and the other is hardware-assisted.
> >
> > Reminds me of the transparency of galactic bureaucracy in "The Hitchhiker's Guide
> > to the Galaxy":
> >
> > “But look, you found the notice, didn’t you?”
> > “Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing
> > cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the
> > Leopard.”
> >
> > I'd suggest something more expressive, such as:
> >
> > CONFIG_KASAN
> > CONFIG_KASAN_GENERIC
> > CONFIG_KASAN_HW_ASSIST
> >
> > or so?
> >
> > The 'generic' variant will basically run on any CPU. The 'hardware assisted' one
> > needs support from the CPU.
> >
> > The following ones might also work:
> >
> > CONFIG_KASAN_HWASSIST
> > CONFIG_KASAN_HW_TAGS
> > CONFIG_KASAN_HWTAGS
> >
> > ... or simply CONFIG_KASAN_SW/CONFIG_KASAN_HW.
> >
> > If other types of KASAN hardware acceleration are implemented in the future then
> > the CONFIG_KASAN_HW namespace can be extended:
> >
> > CONFIG_KASAN_HW_TAGS
> > CONFIG_KASAN_HW_KEYS
> > etc.
>
> How about these two:
>
> CONFIG_KASAN_GENERIC
> CONFIG_KASAN_HW
>
> ?
>
> A shorter config name looks better to me, and I think it makes sense to
> name the new config just HW, as there's only one HW implementation
> right now. When (and if) there are more, we can expand the config name
> as you suggested (CONFIG_KASAN_HW_TAGS, CONFIG_KASAN_HW_KEYS, etc).
Sure, sounds good to me!
Thanks,
Ingo
* Re: [RFC PATCH v2 05/15] khwasan: initialize shadow to 0xff
[not found] ` <2ab69c9c-6e80-ea79-e72e-012753ed3db0@virtuozzo.com>
@ 2018-04-03 14:43 ` Andrey Konovalov
0 siblings, 0 replies; 18+ messages in thread
From: Andrey Konovalov @ 2018-04-03 14:43 UTC (permalink / raw)
To: Andrey Ryabinin
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser,
Ingo Molnar, Kees Cook, Herbert Xu, Jonathan Corbet, linux-doc,
Mark Brand, kasan-dev, kvmarm, linux-sparse, Geert Uytterhoeven,
Linux ARM, David Rientjes, Ramana
On Fri, Mar 30, 2018 at 6:07 PM, Andrey Ryabinin
<aryabinin@virtuozzo.com> wrote:
> On 03/23/2018 09:05 PM, Andrey Konovalov wrote:
>> A KHWASAN shadow memory cell contains a memory tag that corresponds to
>> the tag in the top byte of the pointer that points to that memory. The
>> native top byte value of kernel pointers is 0xff, so with KHWASAN we
>> need to initialize the shadow memory to 0xff. This commit does that.
>>
>> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
>> ---
>> arch/arm64/mm/kasan_init.c | 11 ++++++++++-
>> include/linux/kasan.h | 8 ++++++++
>> mm/kasan/common.c | 7 +++++++
>> 3 files changed, 25 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
>> index dabfc1ecda3d..d4bceba60010 100644
>> --- a/arch/arm64/mm/kasan_init.c
>> +++ b/arch/arm64/mm/kasan_init.c
>> @@ -90,6 +90,10 @@ static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
>> do {
>> phys_addr_t page_phys = early ? __pa_symbol(kasan_zero_page)
>> : kasan_alloc_zeroed_page(node);
>> +#if KASAN_SHADOW_INIT != 0
>> + if (!early)
>> + memset(__va(page_phys), KASAN_SHADOW_INIT, PAGE_SIZE);
>> +#endif
>
> Less ugly way to do the same:
> if (KASAN_SHADOW_INIT != 0 && !early)
> memset(__va(page_phys), KASAN_SHADOW_INIT, PAGE_SIZE);
>
>
> But the right approach here would be to allocate uninitialized memory (see memblock_virt_alloc_try_nid_raw())
> and do "if (!early) memset(.., KASAN_SHADOW_INIT, ..)" afterwards.
Will do!
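A rough sketch of that rework (not the final patch; it assumes the allocation
parameters of the existing kasan_alloc_zeroed_page() helper in
arch/arm64/mm/kasan_init.c):

static phys_addr_t __init kasan_alloc_raw_page(int node)
{
	void *p = memblock_virt_alloc_try_nid_raw(PAGE_SIZE, PAGE_SIZE,
						  __pa(MAX_DMA_ADDRESS),
						  MEMBLOCK_ALLOC_ACCESSIBLE,
						  node);
	return __pa(p);
}

/* ... and in kasan_pte_populate(): */
	phys_addr_t page_phys = early ? __pa_symbol(kasan_zero_page)
				      : kasan_alloc_raw_page(node);
	if (!early)
		memset(__va(page_phys), KASAN_SHADOW_INIT, PAGE_SIZE);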
>
>
>> next = addr + PAGE_SIZE;
>> set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
>> } while (ptep++, addr = next, addr != end && pte_none(READ_ONCE(*ptep)));
>> @@ -139,6 +143,11 @@ asmlinkage void __init kasan_early_init(void)
>> KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
>> BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
>> BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
>> +
>> +#if KASAN_SHADOW_INIT != 0
>> + memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
>> +#endif
>> +
>
> if (KASAN_SHADOW_INIT)
> memset(...)
>
> Note that if poisoning of stack variables works in the same fashion as in classic
> KASAN (compiler-generated code writes to the shadow in the function prologue), then the content
> of this page will be ruined very fast, which makes this initialization questionable.
I think I agree with you on this. Since this page immediately gets
dirty and we ignore all reports until proper shadow is set up anyway,
there's no need to initialize it.
>
>
>
>> kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
>> true);
>> }
>> @@ -235,7 +244,7 @@ void __init kasan_init(void)
>> set_pte(&kasan_zero_pte[i],
>> pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));
>>
>> - memset(kasan_zero_page, 0, PAGE_SIZE);
>> + memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
>> cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
>>
>> /* At this point kasan is fully initialized. Enable error messages */
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index 3c45e273a936..700734dff218 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -139,6 +139,8 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
>>
>> #ifdef CONFIG_KASAN_CLASSIC
>>
>> +#define KASAN_SHADOW_INIT 0
>> +
>> void kasan_cache_shrink(struct kmem_cache *cache);
>> void kasan_cache_shutdown(struct kmem_cache *cache);
>>
>> @@ -149,4 +151,10 @@ static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
>>
>> #endif /* CONFIG_KASAN_CLASSIC */
>>
>> +#ifdef CONFIG_KASAN_TAGS
>> +
>> +#define KASAN_SHADOW_INIT 0xFF
>> +
>> +#endif /* CONFIG_KASAN_TAGS */
>> +
>> #endif /* LINUX_KASAN_H */
>> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
>> index 08f6c8cb9f84..f4ccb9425655 100644
>> --- a/mm/kasan/common.c
>> +++ b/mm/kasan/common.c
>> @@ -253,6 +253,9 @@ int kasan_module_alloc(void *addr, size_t size)
>> __builtin_return_address(0));
>>
>> if (ret) {
>> +#if KASAN_SHADOW_INIT != 0
>> + __memset(ret, KASAN_SHADOW_INIT, shadow_size);
>> +#endif
>
> Drop __GFP_ZERO from above and remove this #if/#endif.
Will do!
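A sketch of that change in kasan_module_alloc(), assuming the surrounding code
quoted above (not the final patch):

	ret = __vmalloc_node_range(shadow_size, 1, shadow_start,
			shadow_start + shadow_size,
			GFP_KERNEL,	/* __GFP_ZERO dropped */
			PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE,
			__builtin_return_address(0));

	if (ret) {
		/* initialize the shadow unconditionally, no #if needed */
		__memset(ret, KASAN_SHADOW_INIT, shadow_size);
		find_vm_area(addr)->flags |= VM_KASAN;
		kmemleak_ignore(ret);
		return 0;
	}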
>
>
>> find_vm_area(addr)->flags |= VM_KASAN;
>> kmemleak_ignore(ret);
>> return 0;
>> @@ -297,6 +300,10 @@ static int __meminit kasan_mem_notifier(struct notifier_block *nb,
>> if (!ret)
>> return NOTIFY_BAD;
>>
>> +#if KASAN_SHADOW_INIT != 0
>> + __memset(ret, KASAN_SHADOW_INIT, shadow_end - shadow_start);
>> +#endif
>> +
>
> No need to initialize anything here, kasan_free_pages() will do this later.
OK, will fix this!
>
>
>> kmemleak_ignore(ret);
>> return NOTIFY_OK;
>> }
>>
Thanks!
* Re: [RFC PATCH v2 08/15] khwasan: add tag related helper functions
[not found] ` <a724eee6-7aff-df8b-de2b-8e9446e94623@virtuozzo.com>
@ 2018-04-03 14:45 ` Andrey Konovalov
0 siblings, 0 replies; 18+ messages in thread
From: Andrey Konovalov @ 2018-04-03 14:45 UTC (permalink / raw)
To: Andrey Ryabinin
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser,
Ingo Molnar, Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand,
kasan-dev, kvmarm, linux-sparse, Geert Uytterhoeven, Linux ARM,
David Rientjes, Ramana Radhakrishnan <Ramana.Radhakrishna>
On Fri, Mar 30, 2018 at 6:13 PM, Andrey Ryabinin
<aryabinin@virtuozzo.com> wrote:
>
>
> On 03/23/2018 09:05 PM, Andrey Konovalov wrote:
>
>> diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
>> index 24d75245e9d0..da4b17997c71 100644
>> --- a/mm/kasan/khwasan.c
>> +++ b/mm/kasan/khwasan.c
>> @@ -39,6 +39,57 @@
>> #include "kasan.h"
>> #include "../slab.h"
>>
>> +int khwasan_enabled;
>
> This is not used (set, but never used).
It's used in the "khwasan: add hooks implementation" patch. I'll move
its declaration there as well.
Thanks!
>
>> +
>> +static DEFINE_PER_CPU(u32, prng_state);
>> +
>> +void khwasan_init(void)
>> +{
>> + int cpu;
>> +
>> + for_each_possible_cpu(cpu) {
>> + per_cpu(prng_state, cpu) = get_random_u32();
>> + }
>> + WRITE_ONCE(khwasan_enabled, 1);
>> +}
>> +
>
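For reference, one plausible way the per-cpu prng_state could be turned into
tags; the LCG constants and the KHWASAN_TAG_MAX name are assumptions, not the
actual patch:

#define KHWASAN_TAG_MAX		0xFE	/* 0xFF stays reserved as the native tag */

static u8 khwasan_random_tag(void)
{
	u32 state = this_cpu_read(prng_state);

	/* cheap LCG step; statistical quality barely matters for tags */
	state = 1664525 * state + 1013904223;
	this_cpu_write(prng_state, state);

	/* never hand out the native 0xFF tag, it means "ignore" */
	return (u8)(state % (KHWASAN_TAG_MAX + 1));
}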
* Re: [RFC PATCH v2 13/15] khwasan: add hooks implementation
[not found] ` <805d1e85-2d3c-2327-6e6c-f14a56dc0b67@virtuozzo.com>
@ 2018-04-03 14:59 ` Andrey Konovalov
2018-04-04 12:39 ` Andrey Ryabinin
0 siblings, 1 reply; 18+ messages in thread
From: Andrey Konovalov @ 2018-04-03 14:59 UTC (permalink / raw)
To: Andrey Ryabinin
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser,
Ingo Molnar, Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand,
kasan-dev, kvmarm, linux-sparse, Geert Uytterhoeven, Linux ARM,
David Rientjes, Ramana Radhakrishnan <Ramana.Radhakrishna>
On Fri, Mar 30, 2018 at 7:47 PM, Andrey Ryabinin
<aryabinin@virtuozzo.com> wrote:
> On 03/23/2018 09:05 PM, Andrey Konovalov wrote:
>> This commit adds KHWASAN hooks implementation.
>>
>> 1. When a new slab cache is created, KHWASAN rounds up the size of the
>> objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16).
>>
>> 2. On each kmalloc KHWASAN generates a random tag, sets the shadow memory,
>> that corresponds to this object to this tag, and embeds this tag value
>> into the top byte of the returned pointer.
>>
>> 3. On each kfree KHWASAN poisons the shadow memory with a random tag to
>> allow detection of use-after-free bugs.
>>
>> The rest of the logic of the hook implementation is very much similar to
>> the one provided by KASAN. KHWASAN saves allocation and free stack metadata
>> to the slab object the same way KASAN does.
>>
>> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
>> ---
>> mm/kasan/khwasan.c | 200 ++++++++++++++++++++++++++++++++++++++++++++-
>> 1 file changed, 197 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
>> index da4b17997c71..e8bed5a078c7 100644
>> --- a/mm/kasan/khwasan.c
>> +++ b/mm/kasan/khwasan.c
>> @@ -90,69 +90,260 @@ void *khwasan_reset_tag(const void *addr)
>> return reset_tag(addr);
>> }
>>
>> +void kasan_poison_shadow(const void *address, size_t size, u8 value)
>> +{
>> + void *shadow_start, *shadow_end;
>> +
>> + /* Perform shadow offset calculation based on untagged address */
>> + address = reset_tag(address);
>> +
>> + shadow_start = kasan_mem_to_shadow(address);
>> + shadow_end = kasan_mem_to_shadow(address + size);
>> +
>> + memset(shadow_start, value, shadow_end - shadow_start);
>> +}
>> +
>> void kasan_unpoison_shadow(const void *address, size_t size)
>> {
>> + /* KHWASAN only allows 16-byte granularity */
>> + size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
>> + kasan_poison_shadow(address, size, get_tag(address));
>> }
>>
>
>
> This is way too much copy-paste/code duplication. Ideally, you should have only
> check_memory_region() stuff separated, the rest (poisoning/unpoisoning, slabs management) should be
> in common.c code.
>
> So it should be something like this:
>
> in kasan.h
> ...
> #ifdef CONFIG_KASAN_CLASSIC
> #define KASAN_FREE_PAGE 0xFF /* page was freed */
> #define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
> #define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */
> #define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */
> #else
> #define KASAN_FREE_PAGE 0xFE
> #define KASAN_PAGE_REDZONE 0xFE
> #define KASAN_KMALLOC_REDZONE 0xFE
> #define KASAN_KMALLOC_FREE 0xFE
> #endif
>
> ...
>
> #ifdef CONFIG_KASAN_CLASSIC
> static inline void *reset_tag(const void *addr)
> {
> return (void *)addr;
> }
> static inline u8 get_tag(const void *addr)
> {
> return 0;
> }
> #else
> static inline u8 get_tag(const void *addr)
> {
> return (u8)((u64)addr >> KHWASAN_TAG_SHIFT);
> }
>
> static inline void *reset_tag(const void *addr)
> {
> return set_tag(addr, KHWASAN_TAG_KERNEL);
> }
> #endif
>
>
> in kasan/common.c:
>
>
> void kasan_poison_shadow(const void *address, size_t size, u8 value)
> {
> void *shadow_start, *shadow_end;
>
> address = reset_tag(address);
>
> shadow_start = kasan_mem_to_shadow(address);
> shadow_end = kasan_mem_to_shadow(address + size);
>
> memset(shadow_start, value, shadow_end - shadow_start);
> }
>
> void kasan_unpoison_shadow(const void *address, size_t size)
> {
>
> kasan_poison_shadow(address, size, get_tag(address));
>
> if (size & KASAN_SHADOW_MASK) {
> u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
>
> if (IS_ENABLED(CONFIG_KASAN_TAGS)
> *shadow = get_tag(address);
> else
> *shadow = size & KASAN_SHADOW_MASK;
> }
> }
>
> void kasan_free_pages(struct page *page, unsigned int order)
> {
> if (likely(!PageHighMem(page)))
> kasan_poison_shadow(page_address(page),
> PAGE_SIZE << order,
> KASAN_FREE_PAGE);
> }
>
> etc.
OK, I'll rework this part.
>
>
>
>> void check_memory_region(unsigned long addr, size_t size, bool write,
>> unsigned long ret_ip)
>> {
>> + u8 tag;
>> + u8 *shadow_first, *shadow_last, *shadow;
>> + void *untagged_addr;
>> +
>> + tag = get_tag((const void *)addr);
>> +
>> + /* Ignore accesses for pointers tagged with 0xff (native kernel
>> + * pointer tag) to suppress false positives caused by kmap.
>> + *
>> + * Some kernel code was written to account for archs that don't keep
>> + * high memory mapped all the time, but rather map and unmap particular
>> + * pages when needed. Instead of storing a pointer to the kernel memory,
>> + * this code saves the address of the page structure and offset within
>> + * that page for later use. Those pages are then mapped and unmapped
>> + * with kmap/kunmap when necessary and virt_to_page is used to get the
>> + * virtual address of the page. For arm64 (that keeps the high memory
>> + * mapped all the time), kmap is turned into a page_address call.
>> +
>> + * The issue is that with use of the page_address + virt_to_page
>> + * sequence the top byte value of the original pointer gets lost (gets
>> + * set to 0xff.
>> + */
>> + if (tag == 0xff)
>> + return;
>
> You can save tag somewhere in page struct and make page_address() return tagged address.
>
> I'm not sure it might be even possible to squeeze the tag into page->flags on some configurations,
> see include/linux/page-flags-layout.h
One page can contain multiple objects with different tags, so we would
need to save the tag for each of them.
>
>
>> void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
>> {
>> + if (!READ_ONCE(khwasan_enabled))
>> + return object;
>
> ...
>
>> void *kasan_kmalloc(struct kmem_cache *cache, const void *object,
>> size_t size, gfp_t flags)
>> {
>
>> + if (!READ_ONCE(khwasan_enabled))
>> + return (void *)object;
>> +
>
> ...
>
>> void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
>> {
>
> ...
>
>> +
>> + if (!READ_ONCE(khwasan_enabled))
>> + return (void *)ptr;
>> +
>
> I don't see any possible way of khwasan_enabled being 0 here.
Can't kmem_cache_alloc be called for the temporary caches that are
used before the slab allocator and kasan are initialized?
>
* Re: [RFC PATCH v2 13/15] khwasan: add hooks implementation
2018-04-03 14:59 ` [RFC PATCH v2 13/15] khwasan: add hooks implementation Andrey Konovalov
@ 2018-04-04 12:39 ` Andrey Ryabinin
2018-04-04 17:00 ` Andrey Konovalov
0 siblings, 1 reply; 18+ messages in thread
From: Andrey Ryabinin @ 2018-04-04 12:39 UTC (permalink / raw)
To: Andrey Konovalov
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser,
Ingo Molnar, Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand,
kasan-dev, kvmarm, linux-sparse, Geert Uytterhoeven, Linux ARM,
David Rientjes, Ramana Radhakrishnan <Ramana.Radhakrishna>
On 04/03/2018 05:59 PM, Andrey Konovalov wrote:
>>
>>
>>> void check_memory_region(unsigned long addr, size_t size, bool write,
>>> unsigned long ret_ip)
>>> {
>>> + u8 tag;
>>> + u8 *shadow_first, *shadow_last, *shadow;
>>> + void *untagged_addr;
>>> +
>>> + tag = get_tag((const void *)addr);
>>> +
>>> + /* Ignore accesses for pointers tagged with 0xff (native kernel
>>> + * pointer tag) to suppress false positives caused by kmap.
>>> + *
>>> + * Some kernel code was written to account for archs that don't keep
>>> + * high memory mapped all the time, but rather map and unmap particular
>>> + * pages when needed. Instead of storing a pointer to the kernel memory,
>>> + * this code saves the address of the page structure and offset within
>>> + * that page for later use. Those pages are then mapped and unmapped
>>> + * with kmap/kunmap when necessary and virt_to_page is used to get the
>>> + * virtual address of the page. For arm64 (that keeps the high memory
>>> + * mapped all the time), kmap is turned into a page_address call.
>>> +
>>> + * The issue is that with use of the page_address + virt_to_page
>>> + * sequence the top byte value of the original pointer gets lost (gets
>>> + * set to 0xff.
>>> + */
>>> + if (tag == 0xff)
>>> + return;
>>
>> You can save tag somewhere in page struct and make page_address() return tagged address.
>>
>> I'm not sure it might be even possible to squeeze the tag into page->flags on some configurations,
>> see include/linux/page-flags-layout.h
>
> One page can contain multiple objects with different tags, so we would
> need to save the tag for each of them.
What do you mean? Slab page? The per-page tag is needed only for !PageSlab pages.
For slab pages we have kmalloc/kmem_cache_alloc(), which already return a properly tagged address.
But the page allocator returns a pointer to struct page. One has to call page_address(page)
to use that page. Returning an 'ignore-me'-tagged address from page_address() makes the whole
class of bugs invisible to KHWASAN. This is a serious downside compared to classic KASAN, which can
detect misuses of the page allocator API.
>>
>>
>>> void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
>>> {
>>> + if (!READ_ONCE(khwasan_enabled))
>>> + return object;
>>
>> ...
>>
>>> void *kasan_kmalloc(struct kmem_cache *cache, const void *object,
>>> size_t size, gfp_t flags)
>>> {
>>
>>> + if (!READ_ONCE(khwasan_enabled))
>>> + return (void *)object;
>>> +
>>
>> ...
>>
>>> void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
>>> {
>>
>> ...
>>
>>> +
>>> + if (!READ_ONCE(khwasan_enabled))
>>> + return (void *)ptr;
>>> +
>>
>> I don't see any possible way of khwasan_enabled being 0 here.
>
> Can't kmem_cache_alloc be called for the temporary caches that are
> used before the slab allocator and kasan are initialized?
kasan_init() runs before allocators are initialized.
The slab allocator obviously has to be initialized before it can be used.
* Re: [RFC PATCH v2 13/15] khwasan: add hooks implementation
2018-04-04 12:39 ` Andrey Ryabinin
@ 2018-04-04 17:00 ` Andrey Konovalov
2018-04-05 13:02 ` Andrey Ryabinin
0 siblings, 1 reply; 18+ messages in thread
From: Andrey Konovalov @ 2018-04-04 17:00 UTC (permalink / raw)
To: Andrey Ryabinin
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser,
Ingo Molnar, Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand,
kasan-dev, kvmarm, linux-sparse, Geert Uytterhoeven, Linux ARM,
David Rientjes, Ramana Radhakrishnan <Ramana.Radhakrishna>
On Wed, Apr 4, 2018 at 2:39 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>>>
>>> You can save tag somewhere in page struct and make page_address() return tagged address.
>>>
>>> I'm not sure it might be even possible to squeeze the tag into page->flags on some configurations,
>>> see include/linux/page-flags-layout.h
>>
>> One page can contain multiple objects with different tags, so we would
>> need to save the tag for each of them.
>
> What do you mean? Slab page? The per-page tag is needed only for !PageSlab pages.
> For slab pages we have kmalloc/kmem_cache_alloc(), which already return a properly tagged address.
>
> But the page allocator returns a pointer to struct page. One has to call page_address(page)
> to use that page. Returning an 'ignore-me'-tagged address from page_address() makes the whole
> class of bugs invisible to KHWASAN. This is a serious downside compared to classic KASAN, which can
> detect misuses of the page allocator API.
Yes, slab page. Here's an example:
1. do_get_write_access() allocates frozen_buffer with jbd2_alloc,
which calls kmem_cache_alloc, and then saves the result to
jh->b_frozen_data.
2. jbd2_journal_write_metadata_buffer() takes the value of
jh_in->b_frozen_data and calls virt_to_page() (and offset_in_page())
on it.
3. jbd2_journal_write_metadata_buffer() then calls kmap_atomic(),
which calls page_address(), on the resulting page address.
The tag gets erased. The page belongs to slab and can contain multiple
objects with different tags.
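In code, the round trip that drops the tag looks roughly like this (illustration
only, hypothetical local variables):

	void *obj = kmem_cache_alloc(cache, GFP_KERNEL);  /* pointer carries a random tag */
	struct page *page = virt_to_page(obj);            /* struct page carries no tag */
	unsigned long off = offset_in_page(obj);          /* low bits, unaffected by the tag */

	/* later, kmap_atomic() -> page_address() on arm64: */
	void *addr = page_address(page) + off;            /* top byte is back to 0xff */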
>>> I don't see any possible way of khwasan_enabled being 0 here.
>>
>> Can't kmem_cache_alloc be called for the temporary caches that are
>> used before the slab allocator and kasan are initialized?
>
> kasan_init() runs before allocators are initialized.
> The slab allocator obviously has to be initialized before it can be used.
Checked the code, it seems you are right. Boot caches are created
after kasan_init() is called. I will remove khwasan_enabled.
Thanks!
* Re: [RFC PATCH v2 13/15] khwasan: add hooks implementation
2018-04-04 17:00 ` Andrey Konovalov
@ 2018-04-05 13:02 ` Andrey Ryabinin
2018-04-06 12:14 ` Andrey Konovalov
0 siblings, 1 reply; 18+ messages in thread
From: Andrey Ryabinin @ 2018-04-05 13:02 UTC (permalink / raw)
To: Andrey Konovalov
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser,
Ingo Molnar, Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand,
kasan-dev, kvmarm, linux-sparse, Geert Uytterhoeven, Linux ARM,
David Rientjes, Ramana Radhakrishnan <Ramana.Radhakrishna>
On 04/04/2018 08:00 PM, Andrey Konovalov wrote:
> On Wed, Apr 4, 2018 at 2:39 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>>>>
>>>> You can save tag somewhere in page struct and make page_address() return tagged address.
>>>>
>>>> I'm not sure it might be even possible to squeeze the tag into page->flags on some configurations,
>>>> see include/linux/page-flags-layout.h
>>>
>>> One page can contain multiple objects with different tags, so we would
>>> need to save the tag for each of them.
>>
>> What do you mean? Slab page? The per-page tag is needed only for !PageSlab pages.
>> For slab pages we have kmalloc/kmem_cache_alloc(), which already return a properly tagged address.
>>
>> But the page allocator returns a pointer to struct page. One has to call page_address(page)
>> to use that page. Returning an 'ignore-me'-tagged address from page_address() makes the whole
>> class of bugs invisible to KHWASAN. This is a serious downside compared to classic KASAN, which can
>> detect misuses of the page allocator API.
>
> Yes, slab page. Here's an example:
>
> 1. do_get_write_access() allocates frozen_buffer with jbd2_alloc,
> which calls kmem_cache_alloc, and then saves the result to
> jh->b_frozen_data.
>
> 2. jbd2_journal_write_metadata_buffer() takes the value of
> jh_in->b_frozen_data and calls virt_to_page() (and offset_in_page())
> on it.
>
> 3. jbd2_journal_write_metadata_buffer() then calls kmap_atomic(),
> which calls page_address(), on the resulting page address.
>
> The tag gets erased. The page belongs to slab and can contain multiple
> objects with different tags.
>
I see. Ideally that kind of problem should be fixed by reworking/redesigning such code,
however jbd2_journal_write_metadata_buffer() is far from the only place which
does that trick. Fixing all of them would be a huge task probably, so ignoring such
accesses seems to be the only choice we have.
Nevertheless, this doesn't mean that we should ignore *all* accesses to !slab memory.
* Re: [RFC PATCH v2 13/15] khwasan: add hooks implementation
2018-04-05 13:02 ` Andrey Ryabinin
@ 2018-04-06 12:14 ` Andrey Konovalov
2018-04-06 12:27 ` Andrey Ryabinin
0 siblings, 1 reply; 18+ messages in thread
From: Andrey Konovalov @ 2018-04-06 12:14 UTC (permalink / raw)
To: Andrey Ryabinin
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser,
Ingo Molnar, Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand,
kasan-dev, kvmarm, linux-sparse, Geert Uytterhoeven, Linux ARM,
David Rientjes, Ramana Radhakrishnan <Ramana.Radhakrishna>
On Thu, Apr 5, 2018 at 3:02 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
> On 04/04/2018 08:00 PM, Andrey Konovalov wrote:
>> On Wed, Apr 4, 2018 at 2:39 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>>>>>
>>>>> You can save tag somewhere in page struct and make page_address() return tagged address.
>>>>>
>>>>> I'm not sure it might be even possible to squeeze the tag into page->flags on some configurations,
>>>>> see include/linux/page-flags-layout.h
>>>>
>>>> One page can contain multiple objects with different tags, so we would
>>>> need to save the tag for each of them.
>>>
>>> What do you mean? Slab page? The per-page tag is needed only for !PageSlab pages.
>>> For slab pages we have kmalloc/kmem_cache_alloc(), which already return a properly tagged address.
>>>
>>> But the page allocator returns a pointer to struct page. One has to call page_address(page)
>>> to use that page. Returning an 'ignore-me'-tagged address from page_address() makes the whole
>>> class of bugs invisible to KHWASAN. This is a serious downside compared to classic KASAN, which can
>>> detect misuses of the page allocator API.
>>
>> Yes, slab page. Here's an example:
>>
>> 1. do_get_write_access() allocates frozen_buffer with jbd2_alloc,
>> which calls kmem_cache_alloc, and then saves the result to
>> jh->b_frozen_data.
>>
>> 2. jbd2_journal_write_metadata_buffer() takes the value of
>> jh_in->b_frozen_data and calls virt_to_page() (and offset_in_page())
>> on it.
>>
>> 3. jbd2_journal_write_metadata_buffer() then calls kmap_atomic(),
>> which calls page_address(), on the resulting page address.
>>
>> The tag gets erased. The page belongs to slab and can contain multiple
>> objects with different tags.
>>
>
> I see. Ideally that kind of problem should be fixed by reworking/redesigning such code,
> however jbd2_journal_write_metadata_buffer() is far from the only place which
> does that trick. Fixing all of them would be a huge task probably, so ignoring such
> accesses seems to be the only choice we have.
>
> Nevertheless, this doesn't mean that we should ignore *all* accesses to !slab memory.
So you mean we need to find a way to ignore accesses via pointers
returned by page_address(), but still check accesses through all other
pointers tagged with 0xFF? I don't see an obvious way to do this. I'm
open to suggestions though.
* Re: [RFC PATCH v2 13/15] khwasan: add hooks implementation
2018-04-06 12:14 ` Andrey Konovalov
@ 2018-04-06 12:27 ` Andrey Ryabinin
2018-04-10 16:07 ` Andrey Konovalov
0 siblings, 1 reply; 18+ messages in thread
From: Andrey Ryabinin @ 2018-04-06 12:27 UTC (permalink / raw)
To: Andrey Konovalov
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser,
Ingo Molnar, Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand,
kasan-dev, kvmarm, linux-sparse, Geert Uytterhoeven, Linux ARM,
David Rientjes, Ramana Radhakrishnan <Ramana.Radhakrishna>
On 04/06/2018 03:14 PM, Andrey Konovalov wrote:
> On Thu, Apr 5, 2018 at 3:02 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>> On 04/04/2018 08:00 PM, Andrey Konovalov wrote:
>>> On Wed, Apr 4, 2018 at 2:39 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>>>>>>
>>>>>> You can save tag somewhere in page struct and make page_address() return tagged address.
>>>>>>
>>>>>> I'm not sure it might be even possible to squeeze the tag into page->flags on some configurations,
>>>>>> see include/linux/page-flags-layout.h
>>>>>
>>>>> One page can contain multiple objects with different tags, so we would
>>>>> need to save the tag for each of them.
>>>>
>>>> What do you mean? Slab page? The per-page tag is needed only for !PageSlab pages.
>>>> For slab pages we have kmalloc/kmem_cache_alloc(), which already return a properly tagged address.
>>>>
>>>> But the page allocator returns a pointer to struct page. One has to call page_address(page)
>>>> to use that page. Returning an 'ignore-me'-tagged address from page_address() makes the whole
>>>> class of bugs invisible to KHWASAN. This is a serious downside compared to classic KASAN, which can
>>>> detect misuses of the page allocator API.
>>>
>>> Yes, slab page. Here's an example:
>>>
>>> 1. do_get_write_access() allocates frozen_buffer with jbd2_alloc,
>>> which calls kmem_cache_alloc, and then saves the result to
>>> jh->b_frozen_data.
>>>
>>> 2. jbd2_journal_write_metadata_buffer() takes the value of
>>> jh_in->b_frozen_data and calls virt_to_page() (and offset_in_page())
>>> on it.
>>>
>>> 3. jbd2_journal_write_metadata_buffer() then calls kmap_atomic(),
>>> which calls page_address(), on the resulting page address.
>>>
>>> The tag gets erased. The page belongs to slab and can contain multiple
>>> objects with different tags.
>>>
>>
>> I see. Ideally that kind of problem should be fixed by reworking/redesigning such code,
>> however jbd2_journal_write_metadata_buffer() is far from the only place which
>> does that trick. Fixing all of them would be a huge task probably, so ignoring such
>> accesses seems to be the only choice we have.
>>
>> Nevertheless, this doesn't mean that we should ignore *all* accesses to !slab memory.
>
> So you mean we need to find a way to ignore accesses via pointers
> returned by page_address(), but still check accesses through all other
> pointers tagged with 0xFF? I don't see an obvious way to do this. I'm
> open to suggestions though.
>
I'm saying that we need to ignore accesses to slab objects if the pointer
to the slab object was obtained via the page_address() + offset_in_page() trick, but don't ignore
anything else.
So, save the tag somewhere in the page struct and poison the shadow with that tag. Make page_address()
return a tagged address for all !PageSlab() pages. For PageSlab() pages page_address() should return
a 0xff-tagged address, so we could ignore such accesses.
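A rough sketch of that scheme (not the final patch; page_kasan_tag()/
page_kasan_tag_set() are hypothetical helpers that would stash one tag per page,
e.g. in page->flags):

void kasan_alloc_pages(struct page *page, unsigned int order)
{
	u8 tag = khwasan_random_tag();

	page_kasan_tag_set(page, tag);
	kasan_poison_shadow(page_address(page), PAGE_SIZE << order, tag);
}

/* and on the page_address() side: */
static inline void *khwasan_tag_page_address(struct page *page, void *addr)
{
	/* slab pages keep the 0xff "ignore" tag, everything else gets checked */
	return set_tag(addr, PageSlab(page) ? KHWASAN_TAG_KERNEL
					    : page_kasan_tag(page));
}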
* Re: [RFC PATCH v2 13/15] khwasan: add hooks implementation
2018-04-06 12:27 ` Andrey Ryabinin
@ 2018-04-10 16:07 ` Andrey Konovalov
[not found] ` <bfc3da50-66df-c6ed-ad6a-a285efe617ec@virtuozzo.com>
0 siblings, 1 reply; 18+ messages in thread
From: Andrey Konovalov @ 2018-04-10 16:07 UTC (permalink / raw)
To: Andrey Ryabinin
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser,
Ingo Molnar, Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand,
kasan-dev, kvmarm, linux-sparse, Geert Uytterhoeven, Linux ARM,
David Rientjes, Ramana Radhakrishnan <Ramana.Radhakrishna>
On Fri, Apr 6, 2018 at 2:27 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
> On 04/06/2018 03:14 PM, Andrey Konovalov wrote:
>> On Thu, Apr 5, 2018 at 3:02 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>>> Nevertheless, this doesn't mean that we should ignore *all* accesses to !slab memory.
>>
>> So you mean we need to find a way to ignore accesses via pointers
>> returned by page_address(), but still check accesses through all other
>> pointers tagged with 0xFF? I don't see an obvious way to do this. I'm
>> open to suggestions though.
>>
>
> I'm saying that we need to ignore accesses to slab objects if the pointer
> to the slab object was obtained via the page_address() + offset_in_page() trick, but don't ignore
> anything else.
>
> So, save the tag somewhere in the page struct and poison the shadow with that tag. Make page_address()
> return a tagged address for all !PageSlab() pages. For PageSlab() pages page_address() should return
> a 0xff-tagged address, so we could ignore such accesses.
Which pages do you mean by !PageSlab()? The ones that are allocated
and freed by pagealloc, but not managed by the slab allocator? Perhaps
we should then add tagging to the pagealloc hook instead?
* Re: [RFC PATCH v2 13/15] khwasan: add hooks implementation
[not found] ` <bfc3da50-66df-c6ed-ad6a-a285efe617ec@virtuozzo.com>
@ 2018-04-12 16:45 ` Andrey Konovalov
2018-04-12 17:20 ` Andrey Ryabinin
0 siblings, 1 reply; 18+ messages in thread
From: Andrey Konovalov @ 2018-04-12 16:45 UTC (permalink / raw)
To: Andrey Ryabinin
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser,
Ingo Molnar, Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand,
kasan-dev, kvmarm, linux-sparse, Geert Uytterhoeven, Linux ARM,
David Rientjes, Ramana Radhakrishnan <Ramana.Radhakrishna>
On Tue, Apr 10, 2018 at 6:31 PM, Andrey Ryabinin
<aryabinin@virtuozzo.com> wrote:
>
>
> On 04/10/2018 07:07 PM, Andrey Konovalov wrote:
>> On Fri, Apr 6, 2018 at 2:27 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>>> On 04/06/2018 03:14 PM, Andrey Konovalov wrote:
>>>> On Thu, Apr 5, 2018 at 3:02 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>>>>> Nevertheless, this doesn't mean that we should ignore *all* accesses to !slab memory.
>>>>
>>>> So you mean we need to find a way to ignore accesses via pointers
>>>> returned by page_address(), but still check accesses through all other
>>>> pointers tagged with 0xFF? I don't see an obvious way to do this. I'm
>>>> open to suggestions though.
>>>>
>>>
>>> I'm saying that we need to ignore accesses to slab objects if the pointer
>>> to the slab object was obtained via the page_address() + offset_in_page() trick, but don't ignore
>>> anything else.
>>>
>>> So, save the tag somewhere in the page struct and poison the shadow with that tag. Make page_address()
>>> return a tagged address for all !PageSlab() pages. For PageSlab() pages page_address() should return
>>> a 0xff-tagged address, so we could ignore such accesses.
>>
>> Which pages do you mean by !PageSlab()?
>
> Literally the "PageSlab(page) == false" pages.
>
>> The ones that are allocated and freed by pagealloc, but not managed by the slab allocator?
>
> Yes.
>
>> Perhaps we should then add tagging to the pagealloc hook instead?
>>
>
> Of course the tagging would be in kasan_alloc_pages(), where else could that be? And instead of what?
I think I misunderstood your suggestion twice already :)
To make it clear, you're suggesting:
1. Tag memory with a random tag in kasan_alloc_pages() and return a
tagged pointer from pagealloc.
2. Restore the tag for the pointers returned from page_address for
!PageSlab() pages.
3. Set the tag to 0xff for the pointers returned from page_address for
PageSlab() pages.
Is this correct?
In 2 instead of storing the tag in page_struct, we can just recover it
from the shadow memory that corresponds to that page. What do you
think about this?
* Re: [RFC PATCH v2 13/15] khwasan: add hooks implementation
2018-04-12 16:45 ` Andrey Konovalov
@ 2018-04-12 17:20 ` Andrey Ryabinin
2018-04-12 17:37 ` Andrey Konovalov
0 siblings, 1 reply; 18+ messages in thread
From: Andrey Ryabinin @ 2018-04-12 17:20 UTC (permalink / raw)
To: Andrey Konovalov
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser,
Ingo Molnar, Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand,
kasan-dev, kvmarm, linux-sparse, Geert Uytterhoeven, Linux ARM,
David Rientjes, Ramana Radhakrishnan <Ramana.Radhakrishna>
On 04/12/2018 07:45 PM, Andrey Konovalov wrote:
> On Tue, Apr 10, 2018 at 6:31 PM, Andrey Ryabinin
> <aryabinin@virtuozzo.com> wrote:
>>
>>
>> On 04/10/2018 07:07 PM, Andrey Konovalov wrote:
>>> On Fri, Apr 6, 2018 at 2:27 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>>>> On 04/06/2018 03:14 PM, Andrey Konovalov wrote:
>>>>> On Thu, Apr 5, 2018 at 3:02 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>>>>>> Nevertheless, this doesn't mean that we should ignore *all* accesses to !slab memory.
>>>>>
>>>>> So you mean we need to find a way to ignore accesses via pointers
>>>>> returned by page_address(), but still check accesses through all other
>>>>> pointers tagged with 0xFF? I don't see an obvious way to do this. I'm
>>>>> open to suggestions though.
>>>>>
>>>>
>>>> I'm saying that we need to ignore accesses to slab objects if the pointer
>>>> to the slab object was obtained via the page_address() + offset_in_page() trick, but don't ignore
>>>> anything else.
>>>>
>>>> So, save the tag somewhere in the page struct and poison the shadow with that tag. Make page_address()
>>>> return a tagged address for all !PageSlab() pages. For PageSlab() pages page_address() should return
>>>> a 0xff-tagged address, so we could ignore such accesses.
>>>
>>> Which pages do you mean by !PageSlab()?
>>
>> Literally the "PageSlab(page) == false" pages.
>>
>>> The ones that are allocated and freed by pagealloc, but not managed by the slab allocator?
>>
>> Yes.
>>
>>> Perhaps we should then add tagging to the pagealloc hook instead?
>>>
>>
>> Of course the tagging would be in kasan_alloc_pages(), where else could that be? And instead of what?
>
> I think I misunderstood your suggestion twice already :)
>
> To make it clear, you're suggesting:
>
> 1. Tag memory with a random tag in kasan_alloc_pages() and return a
> tagged pointer from pagealloc.
Tag memory with a random tag in kasan_alloc_pages() and store that tag in the page struct (that part is also in kasan_alloc_pages()).
page_address(page) will retrieve that tag from struct page to return a tagged address.
I've no idea what you mean by "returning a tagged pointer from pagealloc".
Once again, the page allocator (__alloc_pages_nodemask()) returns a pointer to *struct page*,
not the address in the linear mapping where that page is mapped (or not mapped at all if this is highmem).
One has to call page_address()/kmap() to use that page.
> 2. Restore the tag for the pointers returned from page_address for
> !PageSlab() pages.
>
Right.
> 3. Set the tag to 0xff for the pointers returned from page_address for
> PageSlab() pages.
>
Right.
> Is this correct?
>
> In 2 instead of storing the tag in page_struct, we can just recover it
> from the shadow memory that corresponds to that page. What do you
> think about this?
Sounds ok. Don't see any problem with that.
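A sketch of that variant, recovering the tag from the page's shadow byte instead
of storing it in struct page (hypothetical helper, assuming the series'
kasan_mem_to_shadow() and set_tag()):

static inline void *khwasan_tag_page_address(struct page *page, void *addr)
{
	u8 tag = KHWASAN_TAG_KERNEL;	/* 0xff: slab pages stay ignored */

	if (!PageSlab(page))
		/* the shadow byte of the page's first granule holds its tag */
		tag = *(u8 *)kasan_mem_to_shadow(addr);

	return set_tag(addr, tag);
}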
* Re: [RFC PATCH v2 13/15] khwasan: add hooks implementation
2018-04-12 17:20 ` Andrey Ryabinin
@ 2018-04-12 17:37 ` Andrey Konovalov
0 siblings, 0 replies; 18+ messages in thread
From: Andrey Konovalov @ 2018-04-12 17:37 UTC (permalink / raw)
To: Andrey Ryabinin
Cc: Julien Thierry, Catalin Marinas, Christopher Li, Tyler Baicar,
Will Deacon, Paul Lawrence, Masahiro Yamada, Yury Norov,
Alexander Potapenko, Christoph Lameter, Michael Weiser,
Ingo Molnar, Herbert Xu, Jonathan Corbet, linux-doc, Mark Brand,
kasan-dev, kvmarm, linux-sparse, Geert Uytterhoeven, Linux ARM,
David Rientjes, Ramana Radhakrishnan <Ramana.Radhakrishna>
On Thu, Apr 12, 2018 at 7:20 PM, Andrey Ryabinin
<aryabinin@virtuozzo.com> wrote:
>> 1. Tag memory with a random tag in kasan_alloc_pages() and return a
>> tagged pointer from pagealloc.
>
> Tag memory with a random tag in kasan_alloc_pages() and store that tag in the page struct (that part is also in kasan_alloc_pages()).
> page_address(page) will retrieve that tag from struct page to return a tagged address.
>
> I've no idea what you mean by "returning a tagged pointer from pagealloc".
> Once again, the page allocator (__alloc_pages_nodemask()) returns a pointer to *struct page*,
> not the address in the linear mapping where that page is mapped (or not mapped at all if this is highmem).
> One has to call page_address()/kmap() to use that page.
Ah, that's what I've been missing.
OK, I'll do that.
Thanks!
>
>
>> 2. Restore the tag for the pointers returned from page_address for
>> !PageSlab() pages.
>>
>
> Right.
>
>> 3. Set the tag to 0xff for the pointers returned from page_address for
>> PageSlab() pages.
>>
>
> Right.
>
>> Is this correct?
>>
>> In 2 instead of storing the tag in page_struct, we can just recover it
>> from the shadow memory that corresponds to that page. What do you
>> think about this?
>
> Sounds ok. Don't see any problem with that.
>
>