public inbox for linux-kernel@vger.kernel.org
* KASLR vs. KASAN on x86
@ 2023-03-03 22:35 Dave Hansen
  2023-03-08 17:24 ` Andrey Ryabinin
  0 siblings, 1 reply; 5+ messages in thread
From: Dave Hansen @ 2023-03-03 22:35 UTC (permalink / raw)
  To: the arch/x86 maintainers, LKML
  Cc: Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, kasan-dev, Kees Cook,
	Thomas Garnier

Hi KASAN folks,

Currently, x86 disables (most) KASLR when KASAN is enabled:

> /*
>  * Apply no randomization if KASLR was disabled at boot or if KASAN
>  * is enabled. KASAN shadow mappings rely on regions being PGD aligned.
>  */
> static inline bool kaslr_memory_enabled(void)
> {
>         return kaslr_enabled() && !IS_ENABLED(CONFIG_KASAN);
> }

I'm a bit confused by this, though.  This code predates 5-level paging,
so a PGD should be assumed to cover 512GB.  The kernel_randomize_memory()
granularity seems to be 1TB, which *is* PGD-aligned.
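[Editorial aside, not part of the original mail: the alignment claim above
can be sanity-checked with trivial arithmetic; the 512GB PGD span and 1TB
granularity are taken from the text, assuming 4-level paging.]

```python
# Sanity check of the alignment claim (4-level paging figures from the text).
GB = 1 << 30
TB = 1 << 40

PGD_SPAN = 512 * GB          # one PGD entry covers 512GB with 4-level paging
KASLR_GRANULARITY = 1 * TB   # kernel_randomize_memory() step, per the text

# 1TB is a whole multiple of 512GB, so any 1TB-aligned base is PGD-aligned.
assert KASLR_GRANULARITY % PGD_SPAN == 0
```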

Are KASAN and kernel_randomize_memory()/KASLR (module and
cpu_entry_area randomization are separate) really incompatible?  Does
anyone have a more thorough explanation than that comment?

This isn't a big deal since KASAN is a debugging option after all.  But,
I'm trying to unravel why this:

>         if (kaslr_enabled()) {
>                 pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n",
>                          kaslr_offset(),
>                          __START_KERNEL,
>                          __START_KERNEL_map,
>                          MODULES_VADDR-1);

for instance uses kaslr_enabled() which includes just randomizing
module_load_offset, but *not* __START_KERNEL.  I think this case should
be using kaslr_memory_enabled() to match up with the check in
kernel_randomize_memory().  But this really boils down to what the
difference is between kaslr_memory_enabled() and kaslr_enabled().


* Re: KASLR vs. KASAN on x86
  2023-03-03 22:35 KASLR vs. KASAN on x86 Dave Hansen
@ 2023-03-08 17:24 ` Andrey Ryabinin
  2023-03-13  9:41   ` Michal Koutný
  0 siblings, 1 reply; 5+ messages in thread
From: Andrey Ryabinin @ 2023-03-08 17:24 UTC (permalink / raw)
  To: Dave Hansen
  Cc: the arch/x86 maintainers, LKML, Alexander Potapenko,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, kasan-dev,
	Kees Cook, Thomas Garnier

On Fri, Mar 3, 2023 at 11:35 PM Dave Hansen <dave.hansen@intel.com> wrote:
>
> Hi KASAN folks,
>
> Currently, x86 disables (most) KASLR when KASAN is enabled:
>
> > /*
> >  * Apply no randomization if KASLR was disabled at boot or if KASAN
> >  * is enabled. KASAN shadow mappings rely on regions being PGD aligned.
> >  */
> > static inline bool kaslr_memory_enabled(void)
> > {
> >         return kaslr_enabled() && !IS_ENABLED(CONFIG_KASAN);
> > }
>
> I'm a bit confused by this, though.  This code predates 5-level paging
> so a PGD should be assumed to be 512G.  The kernel_randomize_memory()
> granularity seems to be 1 TB, which *is* PGD-aligned.
>
> Are KASAN and kernel_randomize_memory()/KASLR (modules and
> cpu_entry_area randomization is separate) really incompatible?  Does
> anyone have a more thorough explanation than that comment?
>

Yeah, I agree with you here; the comment doesn't make sense to me either.
However, I see one problem with KASAN and kernel_randomize_memory()
compatibility:
vaddr_start - vaddr_end includes KASAN shadow memory
(Documentation/x86/x86_64/mm.rst):
   ffffea0000000000 |  -22    TB | ffffeaffffffffff |    1 TB | virtual memory map (vmemmap_base)
   ffffeb0000000000 |  -21    TB | ffffebffffffffff |    1 TB | ... unused hole
   ffffec0000000000 |  -20    TB | fffffbffffffffff |   16 TB | KASAN shadow memory
   fffffc0000000000 |   -4    TB | fffffdffffffffff |    2 TB | ... unused hole
                    |            |                  |         | vaddr_end for KASLR

So the vmemmap_base and probably some part of vmalloc could easily end
up in KASAN shadow.
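[Editorial aside, not part of the original mail: the overlap risk can be
sketched numerically. The addresses below are taken from the mm.rst excerpt
above (4-level paging); the specific example bases are hypothetical.]

```python
# Sketch: can a 1TB-granular randomized region land in the KASAN shadow?
# Addresses from the Documentation/x86/x86_64/mm.rst excerpt (4-level paging).
TB = 1 << 40

KASAN_SHADOW_START = 0xffffec0000000000  # -20 TB
KASAN_SHADOW_END   = 0xfffffc0000000000  # -4 TB (exclusive)

def overlaps_kasan_shadow(base, size):
    """True if [base, base + size) intersects the KASAN shadow region."""
    return base < KASAN_SHADOW_END and base + size > KASAN_SHADOW_START

# A vmemmap_base randomized up to -19 TB lands inside the shadow...
assert overlaps_kasan_shadow(0xffffed0000000000, 1 * TB)
# ...while the default vmemmap_base (-22 TB) does not.
assert not overlaps_kasan_shadow(0xffffea0000000000, 1 * TB)
```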

> This isn't a big deal since KASAN is a debugging option after all.  But,
> I'm trying to unravel why this:
>
> >         if (kaslr_enabled()) {
> >                 pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n",
> >                          kaslr_offset(),
> >                          __START_KERNEL,
> >                          __START_KERNEL_map,
> >                          MODULES_VADDR-1);
>
> for instance uses kaslr_enabled() which includes just randomizing
> module_load_offset, but *not* __START_KERNEL.  I think this case should
> be using kaslr_memory_enabled() to match up with the check in
> kernel_randomize_memory().  But this really boils down to what the
> difference is between kaslr_memory_enabled() and kaslr_enabled().

This code looks correct to me. __START_KERNEL is just a constant; it's
never randomized.
The location of the kernel image (.text, .data, ...) however is
randomized, and kaslr_offset() is the random number here.
So:
kaslr_enabled() - randomization of the kernel image and modules.
kaslr_memory_enabled() - randomization of the linear mapping
(__PAGE_OFFSET), vmalloc (VMALLOC_START) and vmemmap (VMEMMAP_START).


* Re: KASLR vs. KASAN on x86
  2023-03-08 17:24 ` Andrey Ryabinin
@ 2023-03-13  9:41   ` Michal Koutný
  2023-03-13 13:40     ` Andrey Ryabinin
  0 siblings, 1 reply; 5+ messages in thread
From: Michal Koutný @ 2023-03-13  9:41 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Dave Hansen, the arch/x86 maintainers, LKML, Alexander Potapenko,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, kasan-dev,
	Kees Cook, Thomas Garnier

On Wed, Mar 08, 2023 at 06:24:05PM +0100, Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
> So the vmemmap_base and probably some part of vmalloc could easily end
> up in KASAN shadow.

Would it help to (conditionally) reduce vaddr_end to the beginning of
KASAN shadow memory?
(I'm not that familiar with KASAN, so IOW, would KASAN handle
randomized: linear mapping (__PAGE_OFFSET), vmalloc (VMALLOC_START) and
vmemmap (VMEMMAP_START) in that smaller range.)

Thanks,
Michal



* Re: KASLR vs. KASAN on x86
  2023-03-13  9:41   ` Michal Koutný
@ 2023-03-13 13:40     ` Andrey Ryabinin
  2023-05-31 15:05       ` Michal Koutný
  0 siblings, 1 reply; 5+ messages in thread
From: Andrey Ryabinin @ 2023-03-13 13:40 UTC (permalink / raw)
  To: Michal Koutný
  Cc: Dave Hansen, the arch/x86 maintainers, LKML, Alexander Potapenko,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, kasan-dev,
	Kees Cook, Thomas Garnier

On Mon, Mar 13, 2023 at 10:41 AM Michal Koutný <mkoutny@suse.com> wrote:
>
> On Wed, Mar 08, 2023 at 06:24:05PM +0100, Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
> > So the vmemmap_base and probably some part of vmalloc could easily end
> > up in KASAN shadow.
>
> Would it help to (conditionally) reduce vaddr_end to the beginning of
> KASAN shadow memory?
> (I'm not that familiar with KASAN, so IOW, would KASAN handle
> randomized: linear mapping (__PAGE_OFFSET), vmalloc (VMALLOC_START) and
> vmemmap (VMEMMAP_START) in that smaller range.)
>

Yes, with vaddr_end = KASAN_SHADOW_START it should work, and
kaslr_memory_enabled() could be removed in favor of just kaslr_enabled().
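[Editorial aside, not part of the original mail: a quick sketch of why the
clamp works, using the KASAN_SHADOW_START value from the mm.rst excerpt
earlier in the thread (4-level paging); the helper below is hypothetical.]

```python
# If KASLR's vaddr_end is clamped to KASAN_SHADOW_START, even the highest
# 1TB-aligned base that still fits stays clear of the shadow region.
TB = 1 << 40
KASAN_SHADOW_START = 0xffffec0000000000  # from the mm.rst excerpt above

def max_randomized_base(vaddr_end, region_size, step=TB):
    """Highest step-aligned base whose region still fits below vaddr_end."""
    return ((vaddr_end - region_size) // step) * step

top = max_randomized_base(KASAN_SHADOW_START, 1 * TB)
assert top + 1 * TB <= KASAN_SHADOW_START  # no overlap is possible
```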

> Thanks,
> Michal


* Re: KASLR vs. KASAN on x86
  2023-03-13 13:40     ` Andrey Ryabinin
@ 2023-05-31 15:05       ` Michal Koutný
  0 siblings, 0 replies; 5+ messages in thread
From: Michal Koutný @ 2023-05-31 15:05 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Dave Hansen, the arch/x86 maintainers, LKML, Alexander Potapenko,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, kasan-dev,
	Kees Cook, Thomas Garnier

On Mon, Mar 13, 2023 at 02:40:33PM +0100, Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
> Yes, with the vaddr_end = KASAN_SHADOW_START  it should work,
>  kaslr_memory_enabled() can be removed in favor of just the kaslr_enabled()

Thanks. FWIW, I've found the cautionary comment at vaddr_end from
commit 1dddd2512511 ("x86/kaslr: Fix the vaddr_end mess"), so I'm not
removing kaslr_memory_enabled() now.

Michal


