* [PATCH 0/6] randomize kernel physical address and virtual address separately
@ 2015-01-21 3:37 Baoquan He
2015-01-21 3:37 ` [PATCH 1/6] remove an unused function parameter Baoquan He
` (8 more replies)
0 siblings, 9 replies; 16+ messages in thread
From: Baoquan He @ 2015-01-21 3:37 UTC (permalink / raw)
To: linux-kernel; +Cc: hpa, tglx, mingo, x86, keescook, vgoyal, whissi, Baoquan He
Currently kaslr only randomizes the physical address at which the kernel is loaded,
then adds that delta to the virtual address of the kernel text mapping. Because the
kernel virtual address can only range from __START_KERNEL_map to
__START_KERNEL_map + CONFIG_RANDOMIZE_BASE_MAX_OFFSET, namely
[0xffffffff80000000, 0xffffffffc0000000], the physical address can only be randomized
within the region [LOAD_PHYSICAL_ADDR, CONFIG_RANDOMIZE_BASE_MAX_OFFSET], namely [16M, 1G].
So hpa and Vivek suggested that the randomization be done separately for the physical
and virtual addresses, which this patchset tries. After randomization, relocation
handling depends only on the virtual address change: I check whether the virtual
address was randomized to another position; if so, the relocations need to be handled,
and if not, relocation handling is skipped even though the physical address was
randomized to a different place. Now the physical address can be randomized from 16M
to 4G, and the virtual address offset from 16M to 1G.
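To put rough numbers on the gain, here is a back-of-the-envelope sketch. The 16M minimum, 2M alignment, and both caps below are illustrative defaults, not read from any particular .config:

```c
#include <assert.h>

/* Illustrative constants: 16M minimum load address, 2M slot alignment,
 * old 1G cap (CONFIG_RANDOMIZE_BASE_MAX_OFFSET) vs. new 4G cap. */
#define MIN_ADDR   0x1000000UL    /* LOAD_PHYSICAL_ADDR, 16M */
#define ALIGN_STEP 0x200000UL     /* CONFIG_PHYSICAL_ALIGN, 2M */
#define OLD_MAX    0x40000000UL   /* 1G */
#define NEW_MAX    0x100000000UL  /* 4G identity-mapped limit */

/* Number of aligned candidate load positions below a given cap. */
static unsigned long slot_count(unsigned long max)
{
    return (max - MIN_ADDR) / ALIGN_STEP;
}
```

With these assumed values the candidate physical positions grow from 504 to 2040, roughly two extra bits of entropy.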
Leftover problem:
hpa wants the physical randomization to cover the whole of physical memory. I checked
the code and found this hard to do, because in arch/x86/boot/compressed/head_64.S an
identity mapping of 4G is built before kaslr and decompression run. The #PF handler
solution he suggested is only available after the jump into the decompressed kernel,
namely in arch/x86/kernel/head_64.S. I haven't thought of a way to cover the whole of
memory for physical address randomization; any suggestion or idea?
Baoquan He (6):
remove an unused function parameter
a bug that relocation can not be handled when kernel is loaded above
2G
Introduce a function to randomize the kernel text mapping address
adapt choose_kernel_location to add the kernel virtual address
randomization
change the relocations behavior for kaslr on x86_64
extend the upper limit of kernel physical address randomization to 4G
arch/x86/boot/compressed/aslr.c | 69 ++++++++++++++++++++++++++++-------------
arch/x86/boot/compressed/misc.c | 34 +++++++++++++-------
arch/x86/boot/compressed/misc.h | 20 ++++++------
3 files changed, 82 insertions(+), 41 deletions(-)
--
1.9.3
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH 1/6] remove an unused function parameter
2015-01-21 3:37 [PATCH 0/6] randomize kernel physical address and virtual address separately Baoquan He
@ 2015-01-21 3:37 ` Baoquan He
2015-01-21 3:37 ` [PATCH 2/6] a bug that relocation can not be handled when kernel is loaded above 2G Baoquan He
` (7 subsequent siblings)
8 siblings, 0 replies; 16+ messages in thread
From: Baoquan He @ 2015-01-21 3:37 UTC (permalink / raw)
To: linux-kernel; +Cc: hpa, tglx, mingo, x86, keescook, vgoyal, whissi, Baoquan He
Make a clean-up to simplify the later changes.
Signed-off-by: Baoquan He <bhe@redhat.com>
---
arch/x86/boot/compressed/aslr.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
index bb13763..9a7210c 100644
--- a/arch/x86/boot/compressed/aslr.c
+++ b/arch/x86/boot/compressed/aslr.c
@@ -135,7 +135,7 @@ static bool mem_overlaps(struct mem_vector *one, struct mem_vector *two)
}
static void mem_avoid_init(unsigned long input, unsigned long input_size,
- unsigned long output, unsigned long output_size)
+ unsigned long output_size)
{
u64 initrd_start, initrd_size;
u64 cmd_line, cmd_line_size;
@@ -317,7 +317,7 @@ unsigned char *choose_kernel_location(unsigned char *input,
/* Record the various known unsafe memory ranges. */
mem_avoid_init((unsigned long)input, input_size,
- (unsigned long)output, output_size);
+ output_size);
/* Walk e820 and find a random address. */
random = find_random_addr(choice, output_size);
--
1.9.3
* [PATCH 2/6] a bug that relocation can not be handled when kernel is loaded above 2G
2015-01-21 3:37 [PATCH 0/6] randomize kernel physical address and virtual address separately Baoquan He
2015-01-21 3:37 ` [PATCH 1/6] remove an unused function parameter Baoquan He
@ 2015-01-21 3:37 ` Baoquan He
2015-01-21 3:37 ` [PATCH 3/6] Introduce a function to randomize the kernel text mapping address Baoquan He
` (6 subsequent siblings)
8 siblings, 0 replies; 16+ messages in thread
From: Baoquan He @ 2015-01-21 3:37 UTC (permalink / raw)
To: linux-kernel; +Cc: hpa, tglx, mingo, x86, keescook, vgoyal, whissi, Baoquan He
When processing 32-bit relocations, a local variable 'extended' is
defined to calculate the physical address of each relocs entry. However,
its data type is int, which is sufficient for i386 but not for x86_64.
That is why relocations can only be handled when the kernel is loaded
below 2G; otherwise an overflow happens and hangs the system.
Change it to long, as the 32-bit inverse relocation processing already
does; this change is also safe for i386 relocation handling.
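The truncation can be sketched in isolation. This is a hypothetical standalone model of the two accumulator widths, not the exact kernel code path; the wrap-around of the int version is implementation-defined in C, but is what gcc/clang on x86_64 do:

```c
#include <assert.h>
#include <stdint.h>

/* Buggy shape: a 32-bit accumulator. Adding the map offset is computed in
 * unsigned long, but converting the sum back to int drops the high bits
 * (implementation-defined; gcc wraps), and the final cast sign-extends
 * the truncated value into garbage once the target exceeds 2G. */
static unsigned long relocate_int(int32_t reloc, unsigned long map)
{
    int extended = reloc;
    extended += map;
    return (unsigned long)extended;
}

/* Fixed shape: a 64-bit accumulator keeps the full address. */
static unsigned long relocate_long(int32_t reloc, unsigned long map)
{
    long extended = reloc;
    extended += map;
    return (unsigned long)extended;
}
```

Below 2G both versions agree, which is why the bug only shows up for high load addresses.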
Signed-off-by: Baoquan He <bhe@redhat.com>
---
arch/x86/boot/compressed/misc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index dcc1c53..324ccb5 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -278,7 +278,7 @@ static void handle_relocations(void *output, unsigned long output_len)
* So we work backwards from the end of the decompressed image.
*/
for (reloc = output + output_len - sizeof(*reloc); *reloc; reloc--) {
- int extended = *reloc;
+ long extended = *reloc;
extended += map;
ptr = (unsigned long)extended;
--
1.9.3
* [PATCH 3/6] Introduce a function to randomize the kernel text mapping address
2015-01-21 3:37 [PATCH 0/6] randomize kernel physical address and virtual address separately Baoquan He
2015-01-21 3:37 ` [PATCH 1/6] remove an unused function parameter Baoquan He
2015-01-21 3:37 ` [PATCH 2/6] a bug that relocation can not be handled when kernel is loaded above 2G Baoquan He
@ 2015-01-21 3:37 ` Baoquan He
2015-01-21 3:37 ` [PATCH 4/6] adapt choose_kernel_location to add the kernel virtual address randomization Baoquan He
` (5 subsequent siblings)
8 siblings, 0 replies; 16+ messages in thread
From: Baoquan He @ 2015-01-21 3:37 UTC (permalink / raw)
To: linux-kernel; +Cc: hpa, tglx, mingo, x86, keescook, vgoyal, whissi, Baoquan He
Kaslr extended the kernel text mapping region size from 512M to 1G,
namely CONFIG_RANDOMIZE_BASE_MAX_OFFSET. This means the kernel text can
be mapped into the region:
[__START_KERNEL_map + LOAD_PHYSICAL_ADDR, __START_KERNEL_map + 1G]
Introduce a function find_random_virt_offset() to get a random value
between LOAD_PHYSICAL_ADDR and CONFIG_RANDOMIZE_BASE_MAX_OFFSET. This
random value will be added to __START_KERNEL_map to get the starting
address at which the kernel text is mapped.
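The slot walk the new function performs can be sketched like this (an illustrative standalone model; the real code also rejects slots that overlap avoided regions, appends survivors to a slot array, and then picks one at random):

```c
#include <assert.h>

/* Count the aligned offsets in [min, max) at which an image of `size`
 * bytes still fits entirely inside the window -- the same containment
 * test mem_contains(&region, &img) performs in find_random_virt_offset(). */
static unsigned long count_virt_slots(unsigned long min, unsigned long max,
                                      unsigned long align, unsigned long size)
{
    unsigned long start, n = 0;

    for (start = min; start + size <= max; start += align)
        n++;    /* real code: slots_append(start) unless it overlaps */
    return n;
}
```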
Signed-off-by: Baoquan He <bhe@redhat.com>
---
arch/x86/boot/compressed/aslr.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
index 9a7210c..091b118 100644
--- a/arch/x86/boot/compressed/aslr.c
+++ b/arch/x86/boot/compressed/aslr.c
@@ -1,4 +1,5 @@
#include "misc.h"
+#include "../string.h"
#include <asm/msr.h>
#include <asm/archrandom.h>
@@ -295,6 +296,27 @@ static unsigned long find_random_addr(unsigned long minimum,
return slots_fetch_random();
}
+static unsigned long find_random_virt_offset(unsigned long size)
+{
+ struct mem_vector region, img;
+
+ memset(slots, 0, sizeof(slots));
+ slot_max = 0;
+
+ region.start = LOAD_PHYSICAL_ADDR;
+ region.size = CONFIG_RANDOMIZE_BASE_MAX_OFFSET - region.start;
+
+ for (img.start = region.start, img.size = size;
+ mem_contains(&region, &img);
+ img.start += CONFIG_PHYSICAL_ALIGN) {
+ if (mem_avoid_overlap(&img))
+ continue;
+ slots_append(img.start);
+ }
+
+ return slots_fetch_random();
+}
+
unsigned char *choose_kernel_location(unsigned char *input,
unsigned long input_size,
unsigned char *output,
--
1.9.3
* [PATCH 4/6] adapt choose_kernel_location to add the kernel virtual address randomization
2015-01-21 3:37 [PATCH 0/6] randomize kernel physical address and virtual address separately Baoquan He
` (2 preceding siblings ...)
2015-01-21 3:37 ` [PATCH 3/6] Introduce a function to randomize the kernel text mapping address Baoquan He
@ 2015-01-21 3:37 ` Baoquan He
2015-01-21 3:37 ` [PATCH 5/6] change the relocations behavior for kaslr on x86_64 Baoquan He
` (4 subsequent siblings)
8 siblings, 0 replies; 16+ messages in thread
From: Baoquan He @ 2015-01-21 3:37 UTC (permalink / raw)
To: linux-kernel; +Cc: hpa, tglx, mingo, x86, keescook, vgoyal, whissi, Baoquan He
For the kernel virtual address, we need to set LOAD_PHYSICAL_ADDR as the
default offset if randomization fails, because the offset will later be
used to check whether relocation handling needs to be done for x86_64
kaslr.
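The fallback rule can be sketched as follows (a hypothetical standalone model; the 16M LOAD_PHYSICAL_ADDR is an illustrative default):

```c
#include <assert.h>

#define LOAD_PHYSICAL_ADDR 0x1000000UL   /* illustrative 16M default */

/* If randomization failed (returned 0) or did not beat the minimum, fall
 * back to LOAD_PHYSICAL_ADDR, which later code reads as "not randomized"
 * and therefore as "no relocation pass needed". */
static unsigned long virt_offset_or_default(unsigned long random)
{
    if (random > LOAD_PHYSICAL_ADDR)
        return random;
    return LOAD_PHYSICAL_ADDR;
}
```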
Signed-off-by: Baoquan He <bhe@redhat.com>
---
arch/x86/boot/compressed/aslr.c | 32 ++++++++++++++++++--------------
arch/x86/boot/compressed/misc.c | 6 ++++--
arch/x86/boot/compressed/misc.h | 20 +++++++++++---------
3 files changed, 33 insertions(+), 25 deletions(-)
diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
index 091b118..20a5f23 100644
--- a/arch/x86/boot/compressed/aslr.c
+++ b/arch/x86/boot/compressed/aslr.c
@@ -317,12 +317,12 @@ static unsigned long find_random_virt_offset(unsigned long size)
return slots_fetch_random();
}
-unsigned char *choose_kernel_location(unsigned char *input,
- unsigned long input_size,
- unsigned char *output,
- unsigned long output_size)
+void choose_kernel_location(unsigned char *input,
+ unsigned long input_size,
+ unsigned char **output,
+ unsigned long output_size,
+ unsigned char **virt_rand_offset)
{
- unsigned long choice = (unsigned long)output;
unsigned long random;
#ifdef CONFIG_HIBERNATION
@@ -342,17 +342,21 @@ unsigned char *choose_kernel_location(unsigned char *input,
output_size);
/* Walk e820 and find a random address. */
- random = find_random_addr(choice, output_size);
- if (!random) {
+ random = find_random_addr((unsigned long)*output, output_size);
+ if (!random)
debug_putstr("KASLR could not find suitable E820 region...\n");
- goto out;
- }
-
/* Always enforce the minimum. */
- if (random < choice)
- goto out;
+ else if (random > (unsigned long)*output)
+ *output = (unsigned char*)random;
+
+
+ random = find_random_virt_offset(output_size);
+ if (!random)
+ debug_putstr("KASLR could not find suitable kernel mapping region...\n");
+ else if (random > LOAD_PHYSICAL_ADDR)
+ *virt_rand_offset = (unsigned char*)random;
- choice = random;
out:
- return (unsigned char *)choice;
+ if (!random)
+ *virt_rand_offset = (unsigned char*)LOAD_PHYSICAL_ADDR;
}
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index 324ccb5..acd4db1 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -373,6 +373,7 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
unsigned long output_len,
unsigned long run_size)
{
+ unsigned char *virt_rand_offset;
real_mode = rmode;
sanitize_boot_params(real_mode);
@@ -399,9 +400,10 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
* the entire decompressed kernel plus relocation table, or the
* entire decompressed kernel plus .bss and .brk sections.
*/
- output = choose_kernel_location(input_data, input_len, output,
+ choose_kernel_location(input_data, input_len, &output,
output_len > run_size ? output_len
- : run_size);
+ : run_size,
+ &virt_rand_offset);
/* Validate memory location choices. */
if ((unsigned long)output & (MIN_KERNEL_ALIGN - 1))
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 24e3e56..7278bff 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -56,20 +56,22 @@ int cmdline_find_option_bool(const char *option);
#if CONFIG_RANDOMIZE_BASE
/* aslr.c */
-unsigned char *choose_kernel_location(unsigned char *input,
- unsigned long input_size,
- unsigned char *output,
- unsigned long output_size);
+void choose_kernel_location(unsigned char *input,
+ unsigned long input_size,
+ unsigned char **output,
+ unsigned long output_size,
+ unsigned char **virt_rand_offset);
/* cpuflags.c */
bool has_cpuflag(int flag);
#else
static inline
-unsigned char *choose_kernel_location(unsigned char *input,
- unsigned long input_size,
- unsigned char *output,
- unsigned long output_size)
+void choose_kernel_location(unsigned char *input,
+ unsigned long input_size,
+ unsigned char **output,
+ unsigned long output_size,
+ unsigned char **virt_rand_offset)
{
- return output;
+ return;
}
#endif
--
1.9.3
* [PATCH 5/6] change the relocations behavior for kaslr on x86_64
2015-01-21 3:37 [PATCH 0/6] randomize kernel physical address and virtual address separately Baoquan He
` (3 preceding siblings ...)
2015-01-21 3:37 ` [PATCH 4/6] adapt choose_kernel_location to add the kernel virtual address randomization Baoquan He
@ 2015-01-21 3:37 ` Baoquan He
2015-01-21 3:37 ` [PATCH 6/6] extend the upper limit of kernel physical address randomization to 4G Baoquan He
` (3 subsequent siblings)
8 siblings, 0 replies; 16+ messages in thread
From: Baoquan He @ 2015-01-21 3:37 UTC (permalink / raw)
To: linux-kernel; +Cc: hpa, tglx, mingo, x86, keescook, vgoyal, whissi, Baoquan He
On x86_64, the old kaslr implementation randomizes only the physical
address at which the kernel is loaded, then calculates the delta between
the physical address vmlinux was linked to load at and the address where
it is finally loaded. If the delta is not zero, namely there is a new
physical address where the kernel is actually decompressed, relocation
handling needs to be done: the delta is added to the offset of each
kernel symbol relocation, which moves the kernel text mapping by delta.
Here the behavior is changed. We randomize both the physical address at
which the kernel is decompressed and the virtual address at which the
kernel text is mapped, and relocation handling depends only on the
virtual address randomization: if and only if the virtual address is
randomized to a different value do we add the delta to the offset of the
kernel relocs.
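The changed decision can be modeled as a small sketch (not the kernel code itself; LOAD_PHYSICAL_ADDR is an illustrative 16M here):

```c
#include <assert.h>
#include <stdbool.h>

#define LOAD_PHYSICAL_ADDR 0x1000000UL   /* illustrative 16M default */

/* x86_64 after this patch: the delta fed to the relocs comes from the
 * virtual offset alone; the physical load address is deliberately
 * ignored, so a physically-moved but virtually-unmoved kernel needs no
 * relocation pass. */
static bool need_relocation(unsigned long phys_load_addr,
                            unsigned long virt_rand_offset)
{
    unsigned long delta = virt_rand_offset - LOAD_PHYSICAL_ADDR;

    (void)phys_load_addr;
    return delta != 0;
}
```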
Signed-off-by: Baoquan He <bhe@redhat.com>
---
arch/x86/boot/compressed/misc.c | 26 ++++++++++++++++++--------
1 file changed, 18 insertions(+), 8 deletions(-)
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index acd4db1..ca9f28f 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -231,7 +231,8 @@ static void error(char *x)
}
#if CONFIG_X86_NEED_RELOCS
-static void handle_relocations(void *output, unsigned long output_len)
+static void handle_relocations(void *output, unsigned long output_len,
+ unsigned char *virt_rand_offset)
{
int *reloc;
unsigned long delta, map, ptr;
@@ -243,11 +244,6 @@ static void handle_relocations(void *output, unsigned long output_len)
* and where it was actually loaded.
*/
delta = min_addr - LOAD_PHYSICAL_ADDR;
- if (!delta) {
- debug_putstr("No relocation needed... ");
- return;
- }
- debug_putstr("Performing relocations... ");
/*
* The kernel contains a table of relocation addresses. Those
@@ -259,6 +255,19 @@ static void handle_relocations(void *output, unsigned long output_len)
map = delta - __START_KERNEL_map;
/*
+ * For x86_64 calculate the delta between where kernel was linked
+ * and where it was finally mapped.
+ */
+ if (IS_ENABLED(CONFIG_X86_64))
+ delta = (unsigned long)virt_rand_offset - LOAD_PHYSICAL_ADDR;
+
+ if (!delta) {
+ debug_putstr("No relocation needed... ");
+ return;
+ }
+ debug_putstr("Performing relocations... ");
+
+ /*
* Process relocations: 32 bit relocations first then 64 bit after.
* Three sets of binary relocations are added to the end of the kernel
* before compression. Each relocation table entry is the kernel
@@ -311,7 +320,8 @@ static void handle_relocations(void *output, unsigned long output_len)
#endif
}
#else
-static inline void handle_relocations(void *output, unsigned long output_len)
+static inline void handle_relocations(void *output, unsigned long output_len,
+ unsigned char *virt_rand_offset)
{ }
#endif
@@ -423,7 +433,7 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
debug_putstr("\nDecompressing Linux... ");
decompress(input_data, input_len, NULL, NULL, output, NULL, error);
parse_elf(output);
- handle_relocations(output, output_len);
+ handle_relocations(output, output_len, virt_rand_offset);
debug_putstr("done.\nBooting the kernel.\n");
return output;
}
--
1.9.3
* [PATCH 6/6] extend the upper limit of kernel physical address randomization to 4G
2015-01-21 3:37 [PATCH 0/6] randomize kernel physical address and virtual address separately Baoquan He
` (4 preceding siblings ...)
2015-01-21 3:37 ` [PATCH 5/6] change the relocations behavior for kaslr on x86_64 Baoquan He
@ 2015-01-21 3:37 ` Baoquan He
2015-01-21 4:19 ` [PATCH 0/6] randomize kernel physical address and virtual address separately Andy Lutomirski
` (2 subsequent siblings)
8 siblings, 0 replies; 16+ messages in thread
From: Baoquan He @ 2015-01-21 3:37 UTC (permalink / raw)
To: linux-kernel; +Cc: hpa, tglx, mingo, x86, keescook, vgoyal, whissi, Baoquan He
Now that kaslr can randomize the physical and virtual addresses
separately, the physical address no longer has to stay below
CONFIG_RANDOMIZE_BASE_MAX_OFFSET. At this time the identity mapping only
covers [0, 4G], so extend the upper limit of kernel physical address
randomization to 4G.
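The clamping that keeps candidates under the new limit can be sketched as a standalone model of the process_e820_entry() checks shown in the diff:

```c
#include <assert.h>

#define PHYS_RANDOM_UPPER_LIMIT 0x100000000UL  /* 4G, as in the patch */

/* Trim an e820 candidate region so the decompressed image stays inside
 * the 4G identity-mapped range: entries entirely above the limit yield
 * no slots, and regions straddling the limit lose the overhang. */
static unsigned long clamp_region_size(unsigned long start,
                                       unsigned long size)
{
    if (start >= PHYS_RANDOM_UPPER_LIMIT)
        return 0;                                /* entirely above */
    if (start + size > PHYS_RANDOM_UPPER_LIMIT)
        return PHYS_RANDOM_UPPER_LIMIT - start;  /* trim the overhang */
    return size;
}
```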
Signed-off-by: Baoquan He <bhe@redhat.com>
---
arch/x86/boot/compressed/aslr.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
index 20a5f23..b112f90 100644
--- a/arch/x86/boot/compressed/aslr.c
+++ b/arch/x86/boot/compressed/aslr.c
@@ -207,14 +207,15 @@ static bool mem_avoid_overlap(struct mem_vector *img)
return false;
}
-static unsigned long slots[CONFIG_RANDOMIZE_BASE_MAX_OFFSET /
+#define PHYS_RANDOM_UPPER_LIMIT 0x100000000UL
+static unsigned long slots[PHYS_RANDOM_UPPER_LIMIT /
CONFIG_PHYSICAL_ALIGN];
static unsigned long slot_max;
static void slots_append(unsigned long addr)
{
/* Overflowing the slots list should be impossible. */
- if (slot_max >= CONFIG_RANDOMIZE_BASE_MAX_OFFSET /
+ if (slot_max >= PHYS_RANDOM_UPPER_LIMIT /
CONFIG_PHYSICAL_ALIGN)
return;
@@ -241,7 +242,7 @@ static void process_e820_entry(struct e820entry *entry,
return;
/* Ignore entries entirely above our maximum. */
- if (entry->addr >= CONFIG_RANDOMIZE_BASE_MAX_OFFSET)
+ if (entry->addr >= PHYS_RANDOM_UPPER_LIMIT)
return;
/* Ignore entries entirely below our minimum. */
@@ -266,8 +267,8 @@ static void process_e820_entry(struct e820entry *entry,
region.size -= region.start - entry->addr;
/* Reduce maximum size to fit end of image within maximum limit. */
- if (region.start + region.size > CONFIG_RANDOMIZE_BASE_MAX_OFFSET)
- region.size = CONFIG_RANDOMIZE_BASE_MAX_OFFSET - region.start;
+ if (region.start + region.size > PHYS_RANDOM_UPPER_LIMIT)
+ region.size = PHYS_RANDOM_UPPER_LIMIT - region.start;
/* Walk each aligned slot and check for avoided areas. */
for (img.start = region.start, img.size = image_size ;
--
1.9.3
* Re: [PATCH 0/6] randomize kernel physical address and virtual address separately
2015-01-21 3:37 [PATCH 0/6] randomize kernel physical address and virtual address separately Baoquan He
` (5 preceding siblings ...)
2015-01-21 3:37 ` [PATCH 6/6] extend the upper limit of kernel physical address randomization to 4G Baoquan He
@ 2015-01-21 4:19 ` Andy Lutomirski
2015-01-21 4:46 ` Baoquan He
2015-02-01 8:10 ` Baoquan He
2015-01-21 6:18 ` Kees Cook
2015-02-02 16:42 ` H. Peter Anvin
8 siblings, 2 replies; 16+ messages in thread
From: Andy Lutomirski @ 2015-01-21 4:19 UTC (permalink / raw)
To: Baoquan He, linux-kernel; +Cc: hpa, tglx, mingo, x86, keescook, vgoyal, whissi
On 01/20/2015 07:37 PM, Baoquan He wrote:
> Currently kaslr only randomize physical address of kernel loading, then add the delta
> to virtual address of kernel text mapping. Because kernel virtual address can only be
> from __START_KERNEL_map to LOAD_PHYSICAL_ADDR+CONFIG_RANDOMIZE_BASE_MAX_OFFSET, namely
> [0xffffffff80000000, 0xffffffffc0000000], so physical address can only be randomized
> in region [LOAD_PHYSICAL_ADDR, CONFIG_RANDOMIZE_BASE_MAX_OFFSET], namely [16M, 1G].
>
> So hpa and Vivek suggested the randomization should be done separately for both physical
> and virtual address. In this patchset I tried it. And after randomization, relocation
> handling only depends on virtual address changing, means I only check whether virtual
> address is randomized to other position, if yes relocation need be handled, if no just
> skip the relocation handling though physical address is randomized to different place.
> Now physical address can be randomized from 16M to 4G, virtual address offset can be
> from 16M to 1G.
>
> Leftover problem:
> hpa want to see the physical randomization can cover the whole physical memory. I
> checked code and found it's hard to do. Because in arch/x86/boot/compressed/head_64.S
> an identity mapping of 4G is built and then kaslr and decompressing are done. The #PF
> handler solution which he suggested is only available after jump into decompressed
> kernel, namely in arch/x86/kernel/head_64.S. I didn't think of a way to do the whole
> memory covering for physical address randomization, any suggestion or idea?
>
I have no idea what the #PF thing you're referring to is, but I have
code to implement a #PF handler in boot/compressed if it would be
helpful. It's two patches:
https://git.kernel.org/cgit/linux/kernel/git/luto/linux.git/commit/?h=sync_rand_seed&id=89476ea6a2becbaee4f45c3b6689ff31b6aa959a
https://git.kernel.org/cgit/linux/kernel/git/luto/linux.git/commit/?h=sync_rand_seed&id=142d86921e6f271261584016fc8cfa5cdbf455ba
You can't recover from a page fault in my version of this code, but that
would be straightforward to add.
--Andy
* Re: [PATCH 0/6] randomize kernel physical address and virtual address separately
2015-01-21 4:19 ` [PATCH 0/6] randomize kernel physical address and virtual address separately Andy Lutomirski
@ 2015-01-21 4:46 ` Baoquan He
2015-02-01 8:10 ` Baoquan He
1 sibling, 0 replies; 16+ messages in thread
From: Baoquan He @ 2015-01-21 4:46 UTC (permalink / raw)
To: Andy Lutomirski
Cc: linux-kernel, hpa, tglx, mingo, x86, keescook, vgoyal, whissi
On 01/20/15 at 08:19pm, Andy Lutomirski wrote:
> On 01/20/2015 07:37 PM, Baoquan He wrote:
> > Currently kaslr only randomize physical address of kernel loading, then add the delta
> > to virtual address of kernel text mapping. Because kernel virtual address can only be
> > from __START_KERNEL_map to LOAD_PHYSICAL_ADDR+CONFIG_RANDOMIZE_BASE_MAX_OFFSET, namely
> > [0xffffffff80000000, 0xffffffffc0000000], so physical address can only be randomized
> > in region [LOAD_PHYSICAL_ADDR, CONFIG_RANDOMIZE_BASE_MAX_OFFSET], namely [16M, 1G].
> >
> > So hpa and Vivek suggested the randomization should be done separately for both physical
> > and virtual address. In this patchset I tried it. And after randomization, relocation
> > handling only depends on virtual address changing, means I only check whether virtual
> > address is randomized to other position, if yes relocation need be handled, if no just
> > skip the relocation handling though physical address is randomized to different place.
> > Now physical address can be randomized from 16M to 4G, virtual address offset can be
> > from 16M to 1G.
> >
> > Leftover problem:
> > hpa want to see the physical randomization can cover the whole physical memory. I
> > checked code and found it's hard to do. Because in arch/x86/boot/compressed/head_64.S
> > an identity mapping of 4G is built and then kaslr and decompressing are done. The #PF
> > handler solution which he suggested is only available after jump into decompressed
> > kernel, namely in arch/x86/kernel/head_64.S. I didn't think of a way to do the whole
> > memory covering for physical address randomization, any suggestion or idea?
> >
>
> I have no idea what the #PF thing you're referring to is, but I have
> code to implement a #PF handler in boot/compressed if it would be
> helpful. It's two patches:
>
> https://git.kernel.org/cgit/linux/kernel/git/luto/linux.git/commit/?h=sync_rand_seed&id=89476ea6a2becbaee4f45c3b6689ff31b6aa959a
It's awesome, I am gonna try it.
Thanks a lot!
Thanks
Baoquan
>
> https://git.kernel.org/cgit/linux/kernel/git/luto/linux.git/commit/?h=sync_rand_seed&id=142d86921e6f271261584016fc8cfa5cdbf455ba
>
> You can't recover from a page fault in my version of this code, but that
> would be straightforward to add.
>
> --Andy
* Re: [PATCH 0/6] randomize kernel physical address and virtual address separately
2015-01-21 3:37 [PATCH 0/6] randomize kernel physical address and virtual address separately Baoquan He
` (6 preceding siblings ...)
2015-01-21 4:19 ` [PATCH 0/6] randomize kernel physical address and virtual address separately Andy Lutomirski
@ 2015-01-21 6:18 ` Kees Cook
2015-02-02 16:42 ` H. Peter Anvin
8 siblings, 0 replies; 16+ messages in thread
From: Kees Cook @ 2015-01-21 6:18 UTC (permalink / raw)
To: Baoquan He
Cc: LKML, H. Peter Anvin, Thomas Gleixner, Ingo Molnar,
x86@kernel.org, Vivek Goyal, Thomas Deutschmann
On Tue, Jan 20, 2015 at 7:37 PM, Baoquan He <bhe@redhat.com> wrote:
> Currently kaslr only randomize physical address of kernel loading, then add the delta
> to virtual address of kernel text mapping. Because kernel virtual address can only be
> from __START_KERNEL_map to LOAD_PHYSICAL_ADDR+CONFIG_RANDOMIZE_BASE_MAX_OFFSET, namely
> [0xffffffff80000000, 0xffffffffc0000000], so physical address can only be randomized
> in region [LOAD_PHYSICAL_ADDR, CONFIG_RANDOMIZE_BASE_MAX_OFFSET], namely [16M, 1G].
>
> So hpa and Vivek suggested the randomization should be done separately for both physical
> and virtual address. In this patchset I tried it. And after randomization, relocation
> handling only depends on virtual address changing, means I only check whether virtual
> address is randomized to other position, if yes relocation need be handled, if no just
> skip the relocation handling though physical address is randomized to different place.
> Now physical address can be randomized from 16M to 4G, virtual address offset can be
> from 16M to 1G.
This looks really great! Thanks for working on it! I'll wait to see
what you find out about the #PF handler, and then I can start doing
some testing too.
-Kees
>
> Leftover problem:
> hpa want to see the physical randomization can cover the whole physical memory. I
> checked code and found it's hard to do. Because in arch/x86/boot/compressed/head_64.S
> an identity mapping of 4G is built and then kaslr and decompressing are done. The #PF
> handler solution which he suggested is only available after jump into decompressed
> kernel, namely in arch/x86/kernel/head_64.S. I didn't think of a way to do the whole
> memory covering for physical address randomization, any suggestion or idea?
>
> Baoquan He (6):
> remove a unused function parameter
> a bug that relocation can not be handled when kernel is loaded above
> 2G
> Introduce a function to randomize the kernel text mapping address
> adapt choose_kernel_location to add the kernel virtual address
> randomzation
> change the relocations behavior for kaslr on x86_64
> extend the upper limit of kernel physical address randomization to 4G
>
> arch/x86/boot/compressed/aslr.c | 69 ++++++++++++++++++++++++++++-------------
> arch/x86/boot/compressed/misc.c | 34 +++++++++++++-------
> arch/x86/boot/compressed/misc.h | 20 ++++++------
> 3 files changed, 82 insertions(+), 41 deletions(-)
>
> --
> 1.9.3
>
--
Kees Cook
Chrome OS Security
* Re: [PATCH 0/6] randomize kernel physical address and virtual address separately
2015-01-21 4:19 ` [PATCH 0/6] randomize kernel physical address and virtual address separately Andy Lutomirski
2015-01-21 4:46 ` Baoquan He
@ 2015-02-01 8:10 ` Baoquan He
2015-02-01 13:13 ` Andy Lutomirski
1 sibling, 1 reply; 16+ messages in thread
From: Baoquan He @ 2015-02-01 8:10 UTC (permalink / raw)
To: Andy Lutomirski
Cc: linux-kernel, hpa, tglx, mingo, x86, keescook, vgoyal, whissi
On 01/20/15 at 08:19pm, Andy Lutomirski wrote:
> On 01/20/2015 07:37 PM, Baoquan He wrote:
>
> I have no idea what the #PF thing you're referring to is, but I have
> code to implement a #PF handler in boot/compressed if it would be
> helpful. It's two patches:
>
> https://git.kernel.org/cgit/linux/kernel/git/luto/linux.git/commit/?h=sync_rand_seed&id=89476ea6a2becbaee4f45c3b6689ff31b6aa959a
>
> https://git.kernel.org/cgit/linux/kernel/git/luto/linux.git/commit/?h=sync_rand_seed&id=142d86921e6f271261584016fc8cfa5cdbf455ba
>
> You can't recover from a page fault in my version of this code, but that
> would be straightforward to add.
>
Hi all,
I used Andy's patches to set up the IDT and implement a #PF handler before
kernel decompression, and they work. Then I hit a problem: when a position
above 4G is chosen and the kernel is decompressed there, the system
reboots to BIOS after decompression. I used the hlt instruction to narrow
down where the asm code causes that reboot, and found it happens at the
jump taken after the adjusted page table is loaded in
arch/x86/kernel/head_64.S:
/* Setup early boot stage 4 level pagetables. */
addq phys_base(%rip), %rax
movq %rax, %cr3
/* Ensure I am executing from virtual addresses */
movq $1f, %rax
jmp *%rax
1:
/* Check if nx is implemented */
movl $0x80000001, %eax
cpuid
movl %edx,%edi
Now I suspect the GDT is not appropriate once the identity mapping is
extended above 4G in arch/x86/boot/compressed/head_64.S. As far as I
understand, that GDT describes the 64-bit segments with 32-bit
descriptors, which still carry segment base address and limit attributes.
I wrote a simple patch to debug this, but still don't know how to make it
work; can anyone help or point out what I should do to make it work?
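For reference, the page-table sizing behind the debug patch below works out as: head_64.S uses one level-4 page, one level-3 page, and one 4K level-2 page per GiB mapped (512 x 2M entries), which is why the pgtable area grows from 6 to 10 pages when the identity mapping goes from 4G to 8G. A tiny sketch of that arithmetic:

```c
#include <assert.h>

/* Pages needed by the boot-time identity mapping in
 * arch/x86/boot/compressed/head_64.S: one level-4 table, one level-3
 * table, and one level-2 table (512 x 2M entries = 1 GiB) per GiB. */
static unsigned long pgtable_pages(unsigned long gib)
{
    return 1 /* level 4 */ + 1 /* level 3 */ + gib /* level-2 pages */;
}
```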
From 40a550ad94ca5927586fb85d3419200dbea9ebd8 Mon Sep 17 00:00:00 2001
From: Baoquan He <bhe@redhat.com>
Date: Sun, 1 Feb 2015 07:42:09 +0800
Subject: [PATCH] extend the identity mapping to 8G
This patch add 4 more pages as pmd directory tables to extend the
identity mapping to cover 8G. And hardcode the position to 5G where
kernel will be relocated and decompressed. Meanwhile commented out
the relocation handling calling.
Signed-off-by: Baoquan He <bhe@redhat.com>
---
arch/x86/boot/compressed/head_64.S | 8 ++++----
arch/x86/boot/compressed/misc.c | 3 +++
2 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 6b1766c..74da678 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -123,7 +123,7 @@ ENTRY(startup_32)
/* Initialize Page tables to 0 */
leal pgtable(%ebx), %edi
xorl %eax, %eax
- movl $((4096*6)/4), %ecx
+ movl $((4096*10)/4), %ecx
rep stosl
/* Build Level 4 */
@@ -134,7 +134,7 @@ ENTRY(startup_32)
/* Build Level 3 */
leal pgtable + 0x1000(%ebx), %edi
leal 0x1007(%edi), %eax
- movl $4, %ecx
+ movl $8, %ecx
1: movl %eax, 0x00(%edi)
addl $0x00001000, %eax
addl $8, %edi
@@ -144,7 +144,7 @@ ENTRY(startup_32)
/* Build Level 2 */
leal pgtable + 0x2000(%ebx), %edi
movl $0x00000183, %eax
- movl $2048, %ecx
+ movl $4096, %ecx
1: movl %eax, 0(%edi)
addl $0x00200000, %eax
addl $8, %edi
@@ -476,4 +476,4 @@ boot_stack_end:
.section ".pgtable","a",@nobits
.balign 4096
pgtable:
- .fill 6*4096, 1, 0
+ .fill 10*4096, 1, 0
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index a950864..47c8c80 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -404,6 +404,7 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
output = choose_kernel_location(input_data, input_len, output,
output_len > run_size ? output_len
: run_size);
+ output = 0x140000000;
/* Validate memory location choices. */
if ((unsigned long)output & (MIN_KERNEL_ALIGN - 1))
@@ -427,8 +428,10 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
* 32-bit always performs relocations. 64-bit relocations are only
* needed if kASLR has chosen a different load address.
*/
+#if 0
if (!IS_ENABLED(CONFIG_X86_64) || output != output_orig)
handle_relocations(output, output_len);
+#endif
debug_putstr("done.\nBooting the kernel.\n");
return output;
}
--
1.9.3
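The constants in the patch above follow from simple page-table arithmetic: entries are 8 bytes, so one 4K page holds 512 entries, and each PMD entry maps a 2M page. A quick Python sanity check of the numbers the patch uses:

```python
ENTRY_SIZE = 8
PAGE_SIZE = 4096
ENTRIES_PER_PAGE = PAGE_SIZE // ENTRY_SIZE      # 512 entries per table page
PMD_COVERAGE = ENTRIES_PER_PAGE * 0x200000      # one PMD page maps 1G via 2M pages

target = 8 << 30                                # 8G identity mapping
pmd_pages = target // PMD_COVERAGE              # hence "movl $8, %ecx" at level 3
pmd_entries = pmd_pages * ENTRIES_PER_PAGE      # hence "movl $4096, %ecx" at level 2
total_pages = 1 + 1 + pmd_pages                 # PGD + PUD + PMDs, hence 10*4096

# The level-2 entry template 0x183 sets Present | RW | PSE (2M page) | Global
assert 0x183 == 0x1 | 0x2 | 0x80 | 0x100
print(pmd_pages, pmd_entries, total_pages)      # 8 4096 10
```

This is why the patch grows the pgtable reservation from 6 pages to 10, the PUD loop count from 4 to 8, and the PMD entry count from 2048 to 4096.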
^ permalink raw reply related [flat|nested] 16+ messages in thread
* Re: [PATCH 0/6] randomize kernel physical address and virtual address separately
2015-02-01 8:10 ` Baoquan He
@ 2015-02-01 13:13 ` Andy Lutomirski
2015-02-02 9:34 ` Baoquan He
2015-02-02 12:10 ` Baoquan He
0 siblings, 2 replies; 16+ messages in thread
From: Andy Lutomirski @ 2015-02-01 13:13 UTC (permalink / raw)
To: Baoquan He
Cc: linux-kernel@vger.kernel.org, H. Peter Anvin, Thomas Gleixner,
Ingo Molnar, X86 ML, Kees Cook, Vivek Goyal, Thomas Deutschmann
On Sun, Feb 1, 2015 at 12:10 AM, Baoquan He <bhe@redhat.com> wrote:
> On 01/20/15 at 08:19pm, Andy Lutomirski wrote:
>> On 01/20/2015 07:37 PM, Baoquan He wrote:
>>
>> I have no idea what the #PF thing you're referring to is, but I have
>> code to implement a #PF handler in boot/compressed if it would be
>> helpful. It's two patches:
>>
>> https://git.kernel.org/cgit/linux/kernel/git/luto/linux.git/commit/?h=sync_rand_seed&id=89476ea6a2becbaee4f45c3b6689ff31b6aa959a
>>
>> https://git.kernel.org/cgit/linux/kernel/git/luto/linux.git/commit/?h=sync_rand_seed&id=142d86921e6f271261584016fc8cfa5cdbf455ba
>>
>> You can't recover from a page fault in my version of this code, but that
>> would be straightforward to add.
>>
> Hi all,
>
> I used Andy's patch to set up the IDT and implement a #PF handler before
> kernel decompression, and it works. Then I hit a problem: when a position
> above 4G is chosen and the kernel is decompressed there, the system
> reboots to the BIOS after decompression. I used the hlt instruction to
> pinpoint which asm code causes the reboot, and found it happens after the
> jump once the adjusted page tables are loaded in arch/x86/kernel/head_64.S
I applied this to Linus' tree today, and I get:
early console in decompress_kernel
KASLR disabled by default...
Decompressing Linux...
XZ-compressed data is corrupt
-- System halted
If I comment out the output = 0x140000000 line, then it boots.
With gzip instead of XZ, it just gets stuck at Decompressing Linux...
Presumably this is because 0x140000000 is an invalid address in my VM.
I added more RAM, and I get a nice reboot loop. QEMU thinks that it's
a page fault causing a triple fault.
If I add in my IDT code and #PF handler, nothing changes. If I
re-enable relocations, I get:
32-bit relocation outside of kernel!
Can you post the whole set of patches you're using or a link to a git tree?
>
> /* Setup early boot stage 4 level pagetables. */
> addq phys_base(%rip), %rax
> movq %rax, %cr3
>
> /* Ensure I am executing from virtual addresses */
> movq $1f, %rax
> jmp *%rax
> 1:
>
> /* Check if nx is implemented */
> movl $0x80000001, %eax
> cpuid
> movl %edx,%edi
>
> Now I suspect the GDT is not appropriate when extending the identity
> mapping above 4G in arch/x86/boot/compressed/head_64.S. As far as I
> understand, that GDT defines the 64-bit segments using 32-bit style
> descriptors, which still carry segment base address and limit
> attributes. I wrote a simple patch to debug this, but still don't know
> how to make it work; can anyone help or point out what I should do?
>
>
> From 40a550ad94ca5927586fb85d3419200dbea9ebd8 Mon Sep 17 00:00:00 2001
> From: Baoquan He <bhe@redhat.com>
> Date: Sun, 1 Feb 2015 07:42:09 +0800
> Subject: [PATCH] extend the identity mapping to 8G
> This patch adds 4 more pages as PMD directory tables to extend the
> identity mapping to cover 8G, and hardcodes to 5G the position where
> the kernel will be relocated and decompressed. It also comments out
> the relocation handling call.
>
> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
> arch/x86/boot/compressed/head_64.S | 8 ++++----
> arch/x86/boot/compressed/misc.c | 3 +++
> 2 files changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
> index 6b1766c..74da678 100644
> --- a/arch/x86/boot/compressed/head_64.S
> +++ b/arch/x86/boot/compressed/head_64.S
> @@ -123,7 +123,7 @@ ENTRY(startup_32)
> /* Initialize Page tables to 0 */
> leal pgtable(%ebx), %edi
> xorl %eax, %eax
> - movl $((4096*6)/4), %ecx
> + movl $((4096*10)/4), %ecx
> rep stosl
>
> /* Build Level 4 */
> @@ -134,7 +134,7 @@ ENTRY(startup_32)
> /* Build Level 3 */
> leal pgtable + 0x1000(%ebx), %edi
> leal 0x1007(%edi), %eax
> - movl $4, %ecx
> + movl $8, %ecx
> 1: movl %eax, 0x00(%edi)
> addl $0x00001000, %eax
> addl $8, %edi
> @@ -144,7 +144,7 @@ ENTRY(startup_32)
> /* Build Level 2 */
> leal pgtable + 0x2000(%ebx), %edi
> movl $0x00000183, %eax
> - movl $2048, %ecx
> + movl $4096, %ecx
> 1: movl %eax, 0(%edi)
> addl $0x00200000, %eax
> addl $8, %edi
> @@ -476,4 +476,4 @@ boot_stack_end:
> .section ".pgtable","a",@nobits
> .balign 4096
> pgtable:
> - .fill 6*4096, 1, 0
> + .fill 10*4096, 1, 0
> diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
> index a950864..47c8c80 100644
> --- a/arch/x86/boot/compressed/misc.c
> +++ b/arch/x86/boot/compressed/misc.c
> @@ -404,6 +404,7 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
> output = choose_kernel_location(input_data, input_len, output,
> output_len > run_size ? output_len
> : run_size);
> + output = 0x140000000;
>
> /* Validate memory location choices. */
> if ((unsigned long)output & (MIN_KERNEL_ALIGN - 1))
> @@ -427,8 +428,10 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
> * 32-bit always performs relocations. 64-bit relocations are only
> * needed if kASLR has chosen a different load address.
> */
> +#if 0
> if (!IS_ENABLED(CONFIG_X86_64) || output != output_orig)
> handle_relocations(output, output_len);
> +#endif
> debug_putstr("done.\nBooting the kernel.\n");
> return output;
> }
> --
> 1.9.3
>
>
>
--
Andy Lutomirski
AMA Capital Management, LLC
* Re: [PATCH 0/6] randomize kernel physical address and virtual address separately
2015-02-01 13:13 ` Andy Lutomirski
@ 2015-02-02 9:34 ` Baoquan He
2015-02-02 12:10 ` Baoquan He
1 sibling, 0 replies; 16+ messages in thread
From: Baoquan He @ 2015-02-02 9:34 UTC (permalink / raw)
To: Andy Lutomirski
Cc: linux-kernel@vger.kernel.org, H. Peter Anvin, Thomas Gleixner,
Ingo Molnar, X86 ML, Kees Cook, Vivek Goyal, Thomas Deutschmann
On 02/01/15 at 05:13am, Andy Lutomirski wrote:
> On Sun, Feb 1, 2015 at 12:10 AM, Baoquan He <bhe@redhat.com> wrote:
> > On 01/20/15 at 08:19pm, Andy Lutomirski wrote:
> >> On 01/20/2015 07:37 PM, Baoquan He wrote:
> >>
> >> I have no idea what the #PF thing you're referring to is, but I have
> >> code to implement a #PF handler in boot/compressed if it would be
> >> helpful. It's two patches:
> >>
> >> https://git.kernel.org/cgit/linux/kernel/git/luto/linux.git/commit/?h=sync_rand_seed&id=89476ea6a2becbaee4f45c3b6689ff31b6aa959a
> >>
> >> https://git.kernel.org/cgit/linux/kernel/git/luto/linux.git/commit/?h=sync_rand_seed&id=142d86921e6f271261584016fc8cfa5cdbf455ba
> >>
> >> You can't recover from a page fault in my version of this code, but that
> >> would be straightforward to add.
> >>
> > Hi all,
> >
> > I used Andy's patch to set up the IDT and implement a #PF handler before
> > kernel decompression, and it works. Then I hit a problem: when a position
> > above 4G is chosen and the kernel is decompressed there, the system
> > reboots to the BIOS after decompression. I used the hlt instruction to
> > pinpoint which asm code causes the reboot, and found it happens after the
> > jump once the adjusted page tables are loaded in arch/x86/kernel/head_64.S
>
> I applied this to Linus' tree today, and I get:
>
> early console in decompress_kernel
> KASLR disabled by default...
>
> Decompressing Linux...
>
> XZ-compressed data is corrupt
>
> -- System halted
>
> If I comment out the output = 0x140000000 line, then it boots.
>
> With gzip instead of XZ, it just gets stuck at Decompressing Linux...
>
> Presumably this is because 0x140000000 is an invalid address in my VM.
> I added more RAM, and I get a nice reboot loop. QEMU thinks that it's
> a page fault causing a triple fault.
Thanks a lot for the help, Andy.
Currently in boot/compressed/head_64.S, 6 pages are used to build the
identity mapping of the 0~4G area: 1 page for the PGD table, 1 page for
the PUD table, and the remaining 4 pages for PMD tables. So I added 4
more pages for PMD tables, and the mapping then covers 0~8G. During
kernel decompression, if we try to reload the kernel above 4G, physical
memory has to be larger than 4G. You can set output=0x100000000 instead;
then you only need 4.5G of memory.
Then if you add the hlt lines below, you will see the message
"Booting the kernel." on the screen:
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a468c0a..1c039a5 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -181,6 +181,10 @@ ENTRY(secondary_startup_64)
movl $(X86_CR4_PAE | X86_CR4_PGE), %ecx
movq %rcx, %cr4
+/*This is used to position where the kernel will reboot to bios*/
+1: hlt
+ jmp 1b
+
/* Setup early boot stage 4 level pagetables. */
addq phys_base(%rip), %rax
movq %rax, %cr3
>
> If I add in my IDT code and #PF handler, nothing changes. If I
> re-enable relocations, I get:
>
> 32-bit relocation outside of kernel!
This is a code bug which is fixed by patch 2/6 of my posted series:
"a bug that relocation can not be handled when kernel is loaded above 2G".
For now I use the debug patch from my last mail to work out how to move
the kernel above 4G and decompress it there, so I just commented out the
handle_relocations() call. The debug patch filters out unnecessary
interference.
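The "32-bit relocation outside of kernel!" symptom is consistent with the range limits of the 32-bit relocation types: a 32-bit absolute (zero-extended) relocation can only encode targets below 4G, and a 32-bit sign-extended one can only reach the low or top 2G of the 64-bit address space. A Python illustration of those bounds (this is only a sketch of the encoding limits, not the kernel's actual relocation code):

```python
def fits_u32(addr):
    """Can a 32-bit absolute (zero-extended) relocation encode this target?"""
    return 0 <= addr < 1 << 32

def fits_s32(addr):
    """Can a 32-bit sign-extended relocation reach this 64-bit target?
    Reachable addresses are the low 2G and the top 2G of the address space."""
    return addr < 1 << 31 or addr >= (1 << 64) - (1 << 31)

load_2g = 0x80000000                  # physical load at 2G: still encodable
load_5g = 0x140000000                 # the hardcoded 5G test address: not encodable
print(fits_u32(load_2g), fits_u32(load_5g))          # True False
print(fits_s32(0xFFFFFFFF81000000))                  # kernel text virtual: True
```

So once the physical load address crosses 4G, even 32-bit absolute relocations can no longer represent it, and the relocation code has to either reject them or be taught to widen them.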
>
> Can you post the whole set of patches you're using or a link to a git tree?
I am creating a repo on GitHub and pushing my patchset so it can be
fetched publicly. I will send the link as soon as it's finished.
Btw, I didn't try xz, just the default bzImage.
Thanks
Baoquan
* Re: [PATCH 0/6] randomize kernel physical address and virtual address separately
2015-02-01 13:13 ` Andy Lutomirski
2015-02-02 9:34 ` Baoquan He
@ 2015-02-02 12:10 ` Baoquan He
1 sibling, 0 replies; 16+ messages in thread
From: Baoquan He @ 2015-02-02 12:10 UTC (permalink / raw)
To: Andy Lutomirski
Cc: linux-kernel@vger.kernel.org, H. Peter Anvin, Thomas Gleixner,
Ingo Molnar, X86 ML, Kees Cook, Vivek Goyal, Thomas Deutschmann
On 02/01/15 at 05:13am, Andy Lutomirski wrote:
> I applied this to Linus' tree today, and I get:
>
> early console in decompress_kernel
> KASLR disabled by default...
>
> Decompressing Linux...
>
> XZ-compressed data is corrupt
>
> -- System halted
>
> If I comment out the output = 0x140000000 line, then it boots.
>
> With gzip instead of XZ, it just gets stuck at Decompressing Linux...
>
> Presumably this is because 0x140000000 is an invalid address in my VM.
> I added more RAM, and I get a nice reboot loop. QEMU thinks that it's
> a page fault causing a triple fault.
>
> If I add in my IDT code and #PF handler, nothing changes. If I
> re-enable relocations, I get:
>
> 32-bit relocation outside of kernel!
>
> Can you post the whole set of patches you're using or a link to a git tree?
Hi Andy,
Please check the related code here:
https://github.com/baoquan-he/linux/commits/kaslr-separate-random
Thanks,
Baoquan
>
* Re: [PATCH 0/6] randomize kernel physical address and virtual address separately
2015-01-21 3:37 [PATCH 0/6] randomize kernel physical address and virtual address separately Baoquan He
` (7 preceding siblings ...)
2015-01-21 6:18 ` Kees Cook
@ 2015-02-02 16:42 ` H. Peter Anvin
2015-02-03 15:30 ` Baoquan He
8 siblings, 1 reply; 16+ messages in thread
From: H. Peter Anvin @ 2015-02-02 16:42 UTC (permalink / raw)
To: Baoquan He, linux-kernel; +Cc: tglx, mingo, x86, keescook, vgoyal, whissi
On 01/20/2015 07:37 PM, Baoquan He wrote:
>
> Leftover problem:
> hpa want to see the physical randomization can cover the whole physical memory. I
> checked code and found it's hard to do. Because in arch/x86/boot/compressed/head_64.S
> an identity mapping of 4G is built and then kaslr and decompressing are done. The #PF
> handler solution which he suggested is only available after jump into decompressed
> kernel, namely in arch/x86/kernel/head_64.S. I didn't think of a way to do the whole
> memory covering for physical address randomization, any suggestion or idea?
>
Basically, it means adding an IDT and #PF handler to the decompression
code. Not really all that complex.
-hpa
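The suggestion above can be sketched in user space: on each page fault, the handler fills in the naturally aligned 1G region (one PMD page's worth of 2M entries) containing the faulting address, so the initial identity mapping only needs to cover what the boot path actually touches. This Python simulation is only an illustration of the idea, not the eventual kernel code:

```python
PMD_SPAN = 1 << 30   # one PMD page maps 1G with 2M entries

class LazyIdentityMap:
    """Simulate demand-building an identity mapping from a #PF handler."""
    def __init__(self):
        self.mapped = set()                      # start of each mapped 1G region

    def handle_page_fault(self, fault_addr):
        region = fault_addr & ~(PMD_SPAN - 1)    # align down to the 1G boundary
        self.mapped.add(region)                  # fill in one PMD page's entries

    def access(self, addr):
        region = addr & ~(PMD_SPAN - 1)
        if region not in self.mapped:
            self.handle_page_fault(addr)         # fault -> map -> retry
        return region in self.mapped

m = LazyIdentityMap()
m.access(0x140000000)        # decompress target at 5G: faults once, then mapped
print(sorted(hex(r) for r in m.mapped))
```

With this scheme the randomization range no longer depends on how many page-table pages are pre-reserved in head_64.S; any physical address the decompressor touches gets mapped on demand.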
* Re: [PATCH 0/6] randomize kernel physical address and virtual address separately
2015-02-02 16:42 ` H. Peter Anvin
@ 2015-02-03 15:30 ` Baoquan He
0 siblings, 0 replies; 16+ messages in thread
From: Baoquan He @ 2015-02-03 15:30 UTC (permalink / raw)
To: H. Peter Anvin; +Cc: linux-kernel, tglx, mingo, x86, keescook, vgoyal, whissi
On 02/02/15 at 08:42am, H. Peter Anvin wrote:
> On 01/20/2015 07:37 PM, Baoquan He wrote:
> >
> >Leftover problem:
> > hpa want to see the physical randomization can cover the whole physical memory. I
> >checked code and found it's hard to do. Because in arch/x86/boot/compressed/head_64.S
> >an identity mapping of 4G is built and then kaslr and decompressing are done. The #PF
> >handler solution which he suggested is only available after jump into decompressed
> >kernel, namely in arch/x86/kernel/head_64.S. I didn't think of a way to do the whole
> >memory covering for physical address randomization, any suggestion or idea?
> >
>
> Basically, it means adding an IDT and #PF handler to the
> decompression code. Not really all that complex.
Hi hpa,
Thanks for the suggestion.
I am now working in this direction. Andy provided a patch which adds an
IDT in the boot/compressed stage and only prints the page fault address.
I applied this patch and implemented the #PF handler function, and it
works: the identity mapping is built when a page fault happens above 4G.
However, the system always reboots to the BIOS because of a general
protection fault after the kernel is reloaded and decompressed above 4G.
I can tell this because Andy added 2 IDT entries, X86_TRAP_GP and
X86_TRAP_PF.
---------------------------
1:
hlt
jmp 1b
---------------------------
Then I inserted the hlt loop above between asm code blocks in
arch/x86/kernel/head_64.S, and found that kernel decompression completes
and boots into arch/x86/kernel/head_64.S, but then reboots after
executing the jmp instruction in the code below when I hardcode the
kernel decompression location to 5G.
/* Ensure I am executing from virtual addresses */
movq $1f, %rax
jmp *%rax
1:
/* Check if nx is implemented */
movl $0x80000001, %eax
cpuid
movl %edx,%edi
Now I am blocked here, since I am not familiar with the x86 arch
register setup.
Btw, to simplify the debugging, I made a patch to experiment with making
it work when the kernel is relocated above 4G and decompressed there.
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 6b1766c..74da678 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -123,7 +123,7 @@ ENTRY(startup_32)
/* Initialize Page tables to 0 */
leal pgtable(%ebx), %edi
xorl %eax, %eax
- movl $((4096*6)/4), %ecx
+ movl $((4096*10)/4), %ecx
rep stosl
/* Build Level 4 */
@@ -134,7 +134,7 @@ ENTRY(startup_32)
/* Build Level 3 */
leal pgtable + 0x1000(%ebx), %edi
leal 0x1007(%edi), %eax
- movl $4, %ecx
+ movl $8, %ecx
1: movl %eax, 0x00(%edi)
addl $0x00001000, %eax
addl $8, %edi
@@ -144,7 +144,7 @@ ENTRY(startup_32)
/* Build Level 2 */
leal pgtable + 0x2000(%ebx), %edi
movl $0x00000183, %eax
- movl $2048, %ecx
+ movl $4096, %ecx
1: movl %eax, 0(%edi)
addl $0x00200000, %eax
addl $8, %edi
@@ -476,4 +476,4 @@ boot_stack_end:
.section ".pgtable","a",@nobits
.balign 4096
pgtable:
- .fill 6*4096, 1, 0
+ .fill 10*4096, 1, 0
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index a950864..47c8c80 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -404,6 +404,7 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
output = choose_kernel_location(input_data, input_len, output,
output_len > run_size ? output_len
: run_size);
+ output = 0x140000000;
/* Validate memory location choices. */
if ((unsigned long)output & (MIN_KERNEL_ALIGN - 1))
@@ -427,8 +428,10 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
* 32-bit always performs relocations. 64-bit relocations are only
* needed if kASLR has chosen a different load address.
*/
+#if 0
if (!IS_ENABLED(CONFIG_X86_64) || output != output_orig)
handle_relocations(output, output_len);
+#endif
debug_putstr("done.\nBooting the kernel.\n");
return output;
}
>
>
end of thread, other threads:[~2015-02-03 15:30 UTC | newest]
Thread overview: 16+ messages (download: mbox.gz, follow: Atom feed)
-- links below jump to the message on this page --
2015-01-21 3:37 [PATCH 0/6] randomize kernel physical address and virtual address separately Baoquan He
2015-01-21 3:37 ` [PATCH 1/6] remove a unused function parameter Baoquan He
2015-01-21 3:37 ` [PATCH 2/6] a bug that relocation can not be handled when kernel is loaded above 2G Baoquan He
2015-01-21 3:37 ` [PATCH 3/6] Introduce a function to randomize the kernel text mapping address Baoquan He
2015-01-21 3:37 ` [PATCH 4/6] adapt choose_kernel_location to add the kernel virtual address randomzation Baoquan He
2015-01-21 3:37 ` [PATCH 5/6] change the relocations behavior for kaslr on x86_64 Baoquan He
2015-01-21 3:37 ` [PATCH 6/6] extend the upper limit of kernel physical address randomization to 4G Baoquan He
2015-01-21 4:19 ` [PATCH 0/6] randomize kernel physical address and virtual address separately Andy Lutomirski
2015-01-21 4:46 ` Baoquan He
2015-02-01 8:10 ` Baoquan He
2015-02-01 13:13 ` Andy Lutomirski
2015-02-02 9:34 ` Baoquan He
2015-02-02 12:10 ` Baoquan He
2015-01-21 6:18 ` Kees Cook
2015-02-02 16:42 ` H. Peter Anvin
2015-02-03 15:30 ` Baoquan He