linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 0/3] arm64: more granular KASLR
@ 2016-03-02 17:11 Ard Biesheuvel
  2016-03-02 17:11 ` [PATCH 1/3] arm64: don't map TEXT_OFFSET bytes below the kernel if we can avoid it Ard Biesheuvel
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Ard Biesheuvel @ 2016-03-02 17:11 UTC (permalink / raw)
  To: linux-arm-kernel

It turns out we can squeeze out 5 to 7 bits of additional KASLR entropy in
the new arm64 implementation. This is based on the observation that the
minimal 2 MB alignment of the kernel image is only required for kernels
that are non-relocatable, and since KASLR already implies a relocatable
kernel anyway, we get this additional wiggle room almost [1] for free.

The idea is that, since we need to fix up all absolute symbol references
anyway, the hardcoded virtual start address of the kernel does not need to
be 2 MB aligned (+ TEXT_OFFSET), and the only thing we need to ensure is
that the physical misalignment and the virtual misalignment are equal modulo
the swapper block size.
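
As a back-of-the-envelope illustration (a sketch only, assuming the usual
arm64 constants, i.e. MIN_KIMG_ALIGN of 2 MB and THREAD_SIZE of 16 KB), the
extra bits come from relaxing the effective alignment from 2 MB down to
max(PAGE_SIZE, THREAD_SIZE):

  /* illustrative C, not part of the series */
  #define SZ_2M           0x200000UL
  #define THREAD_SIZE     0x4000UL        /* 16 KB on arm64 */

  static unsigned int extra_kaslr_bits(unsigned long page_size)
  {
          unsigned long align = (page_size > THREAD_SIZE) ? page_size
                                                          : THREAD_SIZE;

          /* log2(2 MB) - log2(effective alignment) */
          return __builtin_ctzl(SZ_2M) - __builtin_ctzl(align);
  }

  /*  4 KB pages -> 16 KB alignment -> 7 extra bits
   * 16 KB pages -> 16 KB alignment -> 7 extra bits
   * 64 KB pages -> 64 KB alignment -> 5 extra bits
   */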

Patch #1 removes the explicit mapping of the TEXT_OFFSET region below the
kernel, and only maps it if rounding the kernel start address down to the
swapper block size ends up covering it.

Patch #2 updates the early boot code to treat the physical misalignment as
the initial KASLR displacement. Note that this only affects code that is
compiled conditionally when CONFIG_RANDOMIZE_BASE=y.

Patch #3 updates the stub allocation strategy to allow a more granular mapping.
Note that the allocation itself is still rounded to 2 MB as before, to prevent
the early mapping from inadvertently covering adjacent regions. As with
patch #2, this only affects the new code under CONFIG_RANDOMIZE_BASE=y.

Sample output from a 4k/4 levels kernel, where we have 33 bits of entropy
in the kernel addresses:

Virtual kernel memory layout:
    modules : 0xffff000000000000 - 0xffff000008000000   (   128 MB)
    vmalloc : 0xffff000008000000 - 0xffff7dffbfff0000   (129022 GB)
      .init : 0xffff0bbbe14a6000 - 0xffff0bbbe17d5000   (  3260 KB)
      .text : 0xffff0bbbe0c24000 - 0xffff0bbbe120a000   (  6040 KB)
    .rodata : 0xffff0bbbe120a000 - 0xffff0bbbe14a6000   (  2672 KB)
      .data : 0xffff0bbbe17d5000 - 0xffff0bbbe1866e00   (   584 KB)
      fixed : 0xffff7dfffe7fd000 - 0xffff7dfffec00000   (  4108 KB)
    PCI I/O : 0xffff7dfffee00000 - 0xffff7dffffe00000   (    16 MB)
    vmemmap : 0xffff7e0000000000 - 0xffff800000000000   (  2048 GB maximum)
              0xffff7e1333000000 - 0xffff7e1337000000   (    64 MB actual)
     memory : 0xffff84ccc0000000 - 0xffff84cdc0000000   (  4096 MB)

Ard Biesheuvel (3):
  arm64: don't map TEXT_OFFSET bytes below the kernel if we can avoid it
  arm64: kaslr: deal with physically misaligned kernel images
  arm64: kaslr: increase randomization granularity

 arch/arm64/kernel/head.S                  | 22 +++++++++++++-------
 arch/arm64/kernel/kaslr.c                 |  6 +++---
 drivers/firmware/efi/libstub/arm64-stub.c | 14 ++++++++++---
 3 files changed, 29 insertions(+), 13 deletions(-)

-- 
2.5.0

* [PATCH 1/3] arm64: don't map TEXT_OFFSET bytes below the kernel if we can avoid it
  2016-03-02 17:11 [PATCH 0/3] arm64: more granular KASLR Ard Biesheuvel
@ 2016-03-02 17:11 ` Ard Biesheuvel
  2016-03-02 17:11 ` [PATCH 2/3] arm64: kaslr: deal with physically misaligned kernel images Ard Biesheuvel
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Ard Biesheuvel @ 2016-03-02 17:11 UTC (permalink / raw)
  To: linux-arm-kernel

For historical reasons, there is a 512 KB hole called TEXT_OFFSET below
the kernel image in memory. Since this hole is part of the kernel footprint
in the early mapping when running with 4 KB pages, we cannot avoid mapping
it, but in other cases, e.g., when running with larger page sizes, or in
the future, with more granular KASLR, there is no reason to map it explicitly
like we do currently.

So update the logic so that the hole is mapped only if it ends up being
covered by rounding the kernel start address down to the swapper block size,
and leave it unmapped otherwise.
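
To make the condition explicit, here is an illustrative sketch (not part of
the patch; hole_is_mapped() is a made-up helper): the hole only ends up in
the early mapping when rounding the physical address of _text down to the
swapper block size reaches below it.

  #include <stdbool.h>
  #include <stdint.h>

  #define TEXT_OFFSET   0x80000UL       /* the 512 KB hole */

  /* true when the rounding covers the entire TEXT_OFFSET hole */
  static bool hole_is_mapped(uint64_t text_phys, uint64_t swapper_block_size)
  {
          uint64_t map_start = text_phys & ~(swapper_block_size - 1);

          /* with 4 KB pages the block size is 2 MB, so an image loaded at
           * a 2 MB base + TEXT_OFFSET drags the hole in; 16 KB/64 KB pages
           * map at page granularity and can leave it out entirely */
          return map_start <= text_phys - TEXT_OFFSET;
  }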

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/head.S | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 50c2134a4aaf..1d4ae36db0bb 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -393,12 +393,12 @@ __create_page_tables:
 	 * Map the kernel image (starting with PHYS_OFFSET).
 	 */
 	mov	x0, x26				// swapper_pg_dir
-	ldr	x5, =KIMAGE_VADDR
+	ldr	x5, =KIMAGE_VADDR + TEXT_OFFSET	// compile time virt addr of _text
 	add	x5, x5, x23			// add KASLR displacement
 	create_pgd_entry x0, x5, x3, x6
 	ldr	w6, kernel_img_size
 	add	x6, x6, x5
-	mov	x3, x24				// phys offset
+	adrp	x3, KERNEL_START		// runtime phys addr of _text
 	create_block_map x0, x7, x3, x5, x6
 
 	/*
@@ -415,7 +415,7 @@ __create_page_tables:
 ENDPROC(__create_page_tables)
 
 kernel_img_size:
-	.long	_end - (_head - TEXT_OFFSET)
+	.long	_end - _head
 	.ltorg
 
 /*
-- 
2.5.0

* [PATCH 2/3] arm64: kaslr: deal with physically misaligned kernel images
  2016-03-02 17:11 [PATCH 0/3] arm64: more granular KASLR Ard Biesheuvel
  2016-03-02 17:11 ` [PATCH 1/3] arm64: don't map TEXT_OFFSET bytes below the kernel if we can avoid it Ard Biesheuvel
@ 2016-03-02 17:11 ` Ard Biesheuvel
  2016-03-02 18:11   ` Ard Biesheuvel
  2016-03-02 17:11 ` [PATCH 3/3] arm64: kaslr: increase randomization granularity Ard Biesheuvel
  2016-03-02 17:16 ` [PATCH 0/3] arm64: more granular KASLR Ard Biesheuvel
  3 siblings, 1 reply; 6+ messages in thread
From: Ard Biesheuvel @ 2016-03-02 17:11 UTC (permalink / raw)
  To: linux-arm-kernel

Since KASLR requires a relocatable kernel image anyway, there is no
practical reason to refuse an image whose load address is not exactly
TEXT_OFFSET bytes above a 2 MB aligned base address, as long as the
physical and virtual misalignments with respect to the swapper block
size are equal. So treat the misalignment of the physical load address
as the initial KASLR offset, and fix up the remaining code to deal with
that.
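
The net effect on x23 can be sketched in C (illustrative only; the helper is
made up, but the register usage follows the patch):

  #define MIN_KIMG_ALIGN        0x200000UL      /* 2 MB */

  /* what stext/__mmap_switched end up keeping in x23 */
  static unsigned long kaslr_displacement(unsigned long phys_load_addr,
                                          unsigned long random_offset)
  {
          /* low bits: distance of the load address from a 2 MB boundary */
          unsigned long modulo_offset = phys_load_addr & (MIN_KIMG_ALIGN - 1);

          /* high bits: the offset picked by kaslr_early_init(), itself a
           * multiple of 2 MB, so the two parts never overlap and the
           * 'orr' in __mmap_switched behaves like an add */
          return modulo_offset | random_offset;
  }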

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/head.S  | 16 ++++++++++++----
 arch/arm64/kernel/kaslr.c |  6 +++---
 2 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 1d4ae36db0bb..934d6dcd7a57 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -25,6 +25,7 @@
 #include <linux/irqchip/arm-gic-v3.h>
 
 #include <asm/assembler.h>
+#include <asm/boot.h>
 #include <asm/ptrace.h>
 #include <asm/asm-offsets.h>
 #include <asm/cache.h>
@@ -211,8 +212,12 @@ section_table:
 ENTRY(stext)
 	bl	preserve_boot_args
 	bl	el2_setup			// Drop to EL1, w20=cpu_boot_mode
-	mov	x23, xzr			// KASLR offset, defaults to 0
 	adrp	x24, __PHYS_OFFSET
+#ifdef CONFIG_RANDOMIZE_BASE
+	mov	x23, xzr			// KASLR offset, defaults to 0
+#else
+	and	x23, x24, MIN_KIMG_ALIGN - 1	// unless loaded phys misaligned
+#endif
 	bl	set_cpu_boot_mode_flag
 	bl	__create_page_tables		// x25=TTBR0, x26=TTBR1
 	/*
@@ -489,11 +494,13 @@ __mmap_switched:
 	bl	kasan_early_init
 #endif
 #ifdef CONFIG_RANDOMIZE_BASE
-	cbnz	x23, 0f				// already running randomized?
+	tst	x23, ~(MIN_KIMG_ALIGN - 1)	// already running randomized?
+	b.ne	0f
 	mov	x0, x21				// pass FDT address in x0
+	mov	x1, x23				// pass modulo offset in x1
 	bl	kaslr_early_init		// parse FDT for KASLR options
 	cbz	x0, 0f				// KASLR disabled? just proceed
-	mov	x23, x0				// record KASLR offset
+	orr	x23, x23, x0			// record KASLR offset
 	ret	x28				// we must enable KASLR, return
 						// to __enable_mmu()
 0:
@@ -753,7 +760,8 @@ __enable_mmu:
 	isb
 #ifdef CONFIG_RANDOMIZE_BASE
 	mov	x19, x0				// preserve new SCTLR_EL1 value
-	blr	x27
+	add	x30, x27, x23
+	blr	x30
 
 	/*
 	 * If we return here, we have a KASLR displacement in x23 which we need
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 582983920054..b05469173ba5 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -74,7 +74,7 @@ extern void *__init __fixmap_remap_fdt(phys_addr_t dt_phys, int *size,
  * containing function pointers) to be reinitialized, and zero-initialized
  * .bss variables will be reset to 0.
  */
-u64 __init kaslr_early_init(u64 dt_phys)
+u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset)
 {
 	void *fdt;
 	u64 seed, offset, mask, module_range;
@@ -132,8 +132,8 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	 * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
 	 * happens, increase the KASLR offset by the size of the kernel image.
 	 */
-	if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) !=
-	    (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT))
+	if ((((u64)_text + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) !=
+	    (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT))
 		offset = (offset + (u64)(_end - _text)) & mask;
 
 	if (IS_ENABLED(CONFIG_KASAN))
-- 
2.5.0

* [PATCH 3/3] arm64: kaslr: increase randomization granularity
  2016-03-02 17:11 [PATCH 0/3] arm64: more granular KASLR Ard Biesheuvel
  2016-03-02 17:11 ` [PATCH 1/3] arm64: don't map TEXT_OFFSET bytes below the kernel if we can avoid it Ard Biesheuvel
  2016-03-02 17:11 ` [PATCH 2/3] arm64: kaslr: deal with physically misaligned kernel images Ard Biesheuvel
@ 2016-03-02 17:11 ` Ard Biesheuvel
  2016-03-02 17:16 ` [PATCH 0/3] arm64: more granular KASLR Ard Biesheuvel
  3 siblings, 0 replies; 6+ messages in thread
From: Ard Biesheuvel @ 2016-03-02 17:11 UTC (permalink / raw)
  To: linux-arm-kernel

Currently, our KASLR implementation randomizes the placement of the core
kernel at 2 MB granularity. This is based on the arm64 kernel boot
protocol, which mandates that the kernel is loaded TEXT_OFFSET bytes above
a 2 MB aligned base address. This requirement is a result of the fact that
the block size used by the early mapping code may be as large as 2 MB (for
a 4 KB granule kernel).

But we can do better than that: since a KASLR kernel needs to be relocated
in any case, we can tolerate a physical misalignment as long as the virtual
misalignment is equal in size, and code to deal with this is already in
place.

The actual minimal alignment of the core kernel is either PAGE_SIZE or
THREAD_SIZE, whichever is greater. The former is obvious, but the latter
is due to the fact that the init stack is expected to live at an offset
which is a round multiple of its size.

The higher granularity adds between 5 and 7 bits of entropy, depending on
page size.
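
As a worked example with a made-up seed value (assuming a 4 KB granule,
where the minimal alignment is THREAD_SIZE, i.e. 16 KB):

  /* illustrative only: how the stub derives the sub-2 MB displacement */
  unsigned int seed_hi = 0x00123456;               /* phys_seed >> 32, made up */
  unsigned int offset  = seed_hi & (0x200000 - 1)  /* keep it below 2 MB       */
                                 & ~(0x4000 - 1);  /* round down to 16 KB      */
  /* offset == 0x120000: the image lands 0x120000 bytes above the 2 MB
   * aligned allocation, one of 2 MB / 16 KB == 128 (i.e. 7 bits) slots */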

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 drivers/firmware/efi/libstub/arm64-stub.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
index e0e6b74fef8f..84584e7847df 100644
--- a/drivers/firmware/efi/libstub/arm64-stub.c
+++ b/drivers/firmware/efi/libstub/arm64-stub.c
@@ -61,15 +61,23 @@ efi_status_t __init handle_kernel_image(efi_system_table_t *sys_table_arg,
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && phys_seed != 0) {
 		/*
+		 * Produce a displacement in the interval [0, MIN_KIMG_ALIGN)
+		 * that is a multiple of the actual minimal kernel alignment
+		 * (either PAGE_SIZE or THREAD_SIZE, whichever is greater)
+		 */
+		const u32 offset = (phys_seed >> 32) & (MIN_KIMG_ALIGN - 1) &
+				   ~(max_t(u32, PAGE_SIZE, THREAD_SIZE) - 1);
+
+		/*
 		 * If KASLR is enabled, and we have some randomness available,
 		 * locate the kernel at a randomized offset in physical memory.
 		 */
-		*reserve_size = kernel_memsize + TEXT_OFFSET;
+		*reserve_size = kernel_memsize + offset;
 		status = efi_random_alloc(sys_table_arg, *reserve_size,
 					  MIN_KIMG_ALIGN, reserve_addr,
-					  phys_seed);
+					  (u32)phys_seed);
 
-		*image_addr = *reserve_addr + TEXT_OFFSET;
+		*image_addr = *reserve_addr + offset;
 	} else {
 		/*
 		 * Else, try a straight allocation at the preferred offset.
-- 
2.5.0

* [PATCH 0/3] arm64: more granular KASLR
  2016-03-02 17:11 [PATCH 0/3] arm64: more granular KASLR Ard Biesheuvel
                   ` (2 preceding siblings ...)
  2016-03-02 17:11 ` [PATCH 3/3] arm64: kaslr: increase randomization granularity Ard Biesheuvel
@ 2016-03-02 17:16 ` Ard Biesheuvel
  3 siblings, 0 replies; 6+ messages in thread
From: Ard Biesheuvel @ 2016-03-02 17:16 UTC (permalink / raw)
  To: linux-arm-kernel

On 2 March 2016 at 18:11, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> It turns out we can squeeze out 5 to 7 bits of additional KASLR entropy in
> the new arm64 implementation. This is based on the observation that the
> minimal 2 MB alignment of the kernel image is only required for kernels
> that are non-relocatable, and since KASLR already implies a relocatable
> kernel anyway, we get this additional wiggle room almost [1] for free.
>
> The idea is that, since we need to fix up all absolute symbol references
> anyway, the hardcoded virtual start address of the kernel does not need to
> be 2 MB aligned (+ TEXT_OFFSET), and the only thing we need to ensure is
> that the physical misalignment and the virtual misalignment are equal modulo
> the swapper block size.
>
> Patch #1 removes the explicit mapping of the TEXT_OFFSET region below the
> kernel, and only maps it if rounding the kernel start address down to the
> swapper block size ends up covering it.
>
> Patch #2 updates the early boot code to treat the physical misalignment as
> the initial KASLR displacement. Note that this only affects code that is
> compiled conditionally when CONFIG_RANDOMIZE_BASE=y.
>
> Patch #3 updates the stub allocation strategy to allow a more granular mapping.
> Note that the allocation itself is still rounded to 2 MB as before, to prevent
> the early mapping from inadvertently covering adjacent regions. As with
> patch #2, this only affects the new code under CONFIG_RANDOMIZE_BASE=y.
>
> Sample output from a 4k/4 levels kernel, where we have 33 bits of entropy
> in the kernel addresses:
>
> Virtual kernel memory layout:
>     modules : 0xffff000000000000 - 0xffff000008000000   (   128 MB)
>     vmalloc : 0xffff000008000000 - 0xffff7dffbfff0000   (129022 GB)
>       .init : 0xffff0bbbe14a6000 - 0xffff0bbbe17d5000   (  3260 KB)
>       .text : 0xffff0bbbe0c24000 - 0xffff0bbbe120a000   (  6040 KB)
>     .rodata : 0xffff0bbbe120a000 - 0xffff0bbbe14a6000   (  2672 KB)
>       .data : 0xffff0bbbe17d5000 - 0xffff0bbbe1866e00   (   584 KB)
>       fixed : 0xffff7dfffe7fd000 - 0xffff7dfffec00000   (  4108 KB)
>     PCI I/O : 0xffff7dfffee00000 - 0xffff7dffffe00000   (    16 MB)
>     vmemmap : 0xffff7e0000000000 - 0xffff800000000000   (  2048 GB maximum)
>               0xffff7e1333000000 - 0xffff7e1337000000   (    64 MB actual)
>      memory : 0xffff84ccc0000000 - 0xffff84cdc0000000   (  4096 MB)
>

[1] This does defeat the performance benefit resulting from the
increased alignment implemented by CONFIG_DEBUG_ALIGN_RODATA=y.

* [PATCH 2/3] arm64: kaslr: deal with physically misaligned kernel images
  2016-03-02 17:11 ` [PATCH 2/3] arm64: kaslr: deal with physically misaligned kernel images Ard Biesheuvel
@ 2016-03-02 18:11   ` Ard Biesheuvel
  0 siblings, 0 replies; 6+ messages in thread
From: Ard Biesheuvel @ 2016-03-02 18:11 UTC (permalink / raw)
  To: linux-arm-kernel

On 2 March 2016 at 18:11, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> Since KASLR requires a relocatable kernel image anyway, there is no
> practical reason to refuse an image whose load address is not exactly
> TEXT_OFFSET bytes above a 2 MB aligned base address, as long as the
> physical and virtual misalignments with respect to the swapper block
> size are equal. So treat the misalignment of the physical load address
> as the initial KASLR offset, and fix up the remaining code to deal with
> that.
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/kernel/head.S  | 16 ++++++++++++----
>  arch/arm64/kernel/kaslr.c |  6 +++---
>  2 files changed, 15 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 1d4ae36db0bb..934d6dcd7a57 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -25,6 +25,7 @@
>  #include <linux/irqchip/arm-gic-v3.h>
>
>  #include <asm/assembler.h>
> +#include <asm/boot.h>
>  #include <asm/ptrace.h>
>  #include <asm/asm-offsets.h>
>  #include <asm/cache.h>
> @@ -211,8 +212,12 @@ section_table:
>  ENTRY(stext)
>         bl      preserve_boot_args
>         bl      el2_setup                       // Drop to EL1, w20=cpu_boot_mode
> -       mov     x23, xzr                        // KASLR offset, defaults to 0
>         adrp    x24, __PHYS_OFFSET
> +#ifdef CONFIG_RANDOMIZE_BASE

This should be ifndef. (I moved this around right before posting, but
failed to invert the condition.)

> +       mov     x23, xzr                        // KASLR offset, defaults to 0
> +#else
> +       and     x23, x24, MIN_KIMG_ALIGN - 1    // unless loaded phys misaligned
> +#endif
>         bl      set_cpu_boot_mode_flag
>         bl      __create_page_tables            // x25=TTBR0, x26=TTBR1
>         /*
> @@ -489,11 +494,13 @@ __mmap_switched:
>         bl      kasan_early_init
>  #endif
>  #ifdef CONFIG_RANDOMIZE_BASE
> -       cbnz    x23, 0f                         // already running randomized?
> +       tst     x23, ~(MIN_KIMG_ALIGN - 1)      // already running randomized?
> +       b.ne    0f
>         mov     x0, x21                         // pass FDT address in x0
> +       mov     x1, x23                         // pass modulo offset in x1
>         bl      kaslr_early_init                // parse FDT for KASLR options
>         cbz     x0, 0f                          // KASLR disabled? just proceed
> -       mov     x23, x0                         // record KASLR offset
> +       orr     x23, x23, x0                    // record KASLR offset
>         ret     x28                             // we must enable KASLR, return
>                                                 // to __enable_mmu()
>  0:
> @@ -753,7 +760,8 @@ __enable_mmu:
>         isb
>  #ifdef CONFIG_RANDOMIZE_BASE
>         mov     x19, x0                         // preserve new SCTLR_EL1 value
> -       blr     x27
> +       add     x30, x27, x23
> +       blr     x30
>
>         /*
>          * If we return here, we have a KASLR displacement in x23 which we need
> diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> index 582983920054..b05469173ba5 100644
> --- a/arch/arm64/kernel/kaslr.c
> +++ b/arch/arm64/kernel/kaslr.c
> @@ -74,7 +74,7 @@ extern void *__init __fixmap_remap_fdt(phys_addr_t dt_phys, int *size,
>   * containing function pointers) to be reinitialized, and zero-initialized
>   * .bss variables will be reset to 0.
>   */
> -u64 __init kaslr_early_init(u64 dt_phys)
> +u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset)
>  {
>         void *fdt;
>         u64 seed, offset, mask, module_range;
> @@ -132,8 +132,8 @@ u64 __init kaslr_early_init(u64 dt_phys)
>          * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
>          * happens, increase the KASLR offset by the size of the kernel image.
>          */
> -       if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) !=
> -           (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT))
> +       if ((((u64)_text + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) !=
> +           (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT))
>                 offset = (offset + (u64)(_end - _text)) & mask;
>
>         if (IS_ENABLED(CONFIG_KASAN))
> --
> 2.5.0
>
