From: Baoquan He <bhe@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: yinghai@kernel.org, keescook@chromium.org, hpa@zytor.com,
	vgoyal@redhat.com, mingo@redhat.com, bp@alien8.de,
	luto@kernel.org, lasse.collin@tukaani.org,
	akpm@linux-foundation.org, dyoung@redhat.com
Subject: [PATCH v3 08/19] x86, kaslr: Consolidate mem_avoid array filling
Date: Sat,  5 Mar 2016 00:25:06 +0800
Message-ID: <1457108717-12191-9-git-send-email-bhe@redhat.com>
In-Reply-To: <1457108717-12191-1-git-send-email-bhe@redhat.com>

From: Yinghai Lu <yinghai@kernel.org>

We are going to support kaslr on 64-bit with output above 4G, so the new
random output address could be anywhere. The mem_avoid array is used by
kaslr when searching for the new output address. The current code only
tracks the range after output+output_size, so we need to track all the
ranges that must be avoided instead of just that one.

In the current code, the first entry covers the extra bytes before
input+input_size, and its length is derived from output_size. The other
entries are for the initrd, the command line, and the heap/stack used
while ZO is running.

First, let's work out what the first entry in the mem_avoid array should be.
Since ZO now always sits at the end of the buffer, we can determine where
its text, data/bss, etc. are located.

Since init_size >= run_size and input+input_len >= output+output_len, make
the following assumptions so the graph below is easier to read:
 - init_size > run_size
 - input+input_len > output+output_len
 - run_size > output_len

0   output                       input             input+input_len          output+init_size
|     |                            |                       |                       |
|-----|-------------------|--------|------------------|----|------------|----------|
                          |                           |                 |
             output+init_size-ZO_INIT_SIZE    output+output_len    output+run_size

[output, output+init_size) is the buffer used for decompressing the
compressed kernel.

[output, output+run_size) is the memory VO (vmlinux after objcopy) needs
at run time.
[output, output+output_len) is the decompressed VO plus relocs.

[output+init_size-ZO_INIT_SIZE, output+init_size) is the copied ZO.
[input, input+input_len) is the copied compressed payload (VO plus relocs),
not the ZO.

[input+input_len, output+init_size) is [_text, _end) of ZO, and that can be
the first range in mem_avoid. This new first entry already covers the heap
and stack ZO uses at run time (both are allocated inside ZO's image, i.e.
within [_text, _end)), so there is no need to put them into the mem_avoid
array as separate entries.

[input, input+input_size) also needs to be put into the mem_avoid array.
Since it is adjacent to the new first entry, simply merge the two.
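With ZO at the end of the buffer, the merged first entry then reduces to
(as the diff below implements):

    mem_avoid[0].start = input;
    mem_avoid[0].size = (output + init_size) - input;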

Finally, boot_params needs to be added to mem_avoid as well, since on 64-bit
the bootloader could have put it anywhere.

After these changes, the mem_avoid array covers all the ranges that need to
be avoided.
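For reference, the four resulting entries are (matching the diff below):

    mem_avoid[0]: [input, output+init_size)  - ZO image incl. its heap/stack
    mem_avoid[1]: initrd
    mem_avoid[2]: command line
    mem_avoid[3]: boot_params (real_mode)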

Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
---
v2->v3:
    Adjust the patch log.

 arch/x86/boot/compressed/aslr.c | 29 +++++++++++++----------------
 1 file changed, 13 insertions(+), 16 deletions(-)

diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
index 622aa88..b93be03 100644
--- a/arch/x86/boot/compressed/aslr.c
+++ b/arch/x86/boot/compressed/aslr.c
@@ -109,7 +109,7 @@ struct mem_vector {
 	unsigned long size;
 };
 
-#define MEM_AVOID_MAX 5
+#define MEM_AVOID_MAX 4
 static struct mem_vector mem_avoid[MEM_AVOID_MAX];
 
 static bool mem_contains(struct mem_vector *region, struct mem_vector *item)
@@ -135,21 +135,22 @@ static bool mem_overlaps(struct mem_vector *one, struct mem_vector *two)
 }
 
 static void mem_avoid_init(unsigned long input, unsigned long input_size,
-			   unsigned long output, unsigned long output_size)
+			   unsigned long output)
 {
+	unsigned long init_size = real_mode->hdr.init_size;
 	u64 initrd_start, initrd_size;
 	u64 cmd_line, cmd_line_size;
-	unsigned long unsafe, unsafe_len;
 	char *ptr;
 
 	/*
 	 * Avoid the region that is unsafe to overlap during
-	 * decompression (see calculations at top of misc.c).
+	 * decompression.
+	 * As we already move ZO (arch/x86/boot/compressed/vmlinux)
+	 * to the end of buffer, [input+input_size, output+init_size)
+	 * has [_text, _end) for ZO.
 	 */
-	unsafe_len = (output_size >> 12) + 32768 + 18;
-	unsafe = (unsigned long)input + input_size - unsafe_len;
-	mem_avoid[0].start = unsafe;
-	mem_avoid[0].size = unsafe_len;
+	mem_avoid[0].start = input;
+	mem_avoid[0].size = (output + init_size) - input;
 
 	/* Avoid initrd. */
 	initrd_start  = (u64)real_mode->ext_ramdisk_image << 32;
@@ -169,13 +170,9 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 	mem_avoid[2].start = cmd_line;
 	mem_avoid[2].size = cmd_line_size;
 
-	/* Avoid heap memory. */
-	mem_avoid[3].start = (unsigned long)free_mem_ptr;
-	mem_avoid[3].size = BOOT_HEAP_SIZE;
-
-	/* Avoid stack memory. */
-	mem_avoid[4].start = (unsigned long)free_mem_end_ptr;
-	mem_avoid[4].size = BOOT_STACK_SIZE;
+	/* Avoid params */
+	mem_avoid[3].start = (unsigned long)real_mode;
+	mem_avoid[3].size = sizeof(*real_mode);
 }
 
 /* Does this memory vector overlap a known avoided area? */
@@ -319,7 +316,7 @@ unsigned char *choose_kernel_location(unsigned char *input,
 
 	/* Record the various known unsafe memory ranges. */
 	mem_avoid_init((unsigned long)input, input_size,
-		       (unsigned long)output, output_size);
+		       (unsigned long)output);
 
 	/* Walk e820 and find a random address. */
 	random = find_random_addr(choice, output_size);
-- 
2.5.0
