linux-kernel.vger.kernel.org archive mirror
From: Baoquan He <bhe@redhat.com>
To: Kees Cook <keescook@chromium.org>
Cc: LKML <linux-kernel@vger.kernel.org>,
	Yinghai Lu <yinghai@kernel.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Vivek Goyal <vgoyal@redhat.com>, Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
	lasse.collin@tukaani.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Dave Young <dyoung@redhat.com>
Subject: Re: [PATCH v3 08/19] x86, kaslr: Consolidate mem_avoid array filling
Date: Tue, 8 Mar 2016 13:21:44 +0800	[thread overview]
Message-ID: <20160308052144.GE2481@x1.redhat.com> (raw)
In-Reply-To: <CAGXu5jLd7wVd7t8HNmYAKaCGgYJp8C_aHNs6F3n3yj=4AF82yQ@mail.gmail.com>

On 03/07/16 at 03:28pm, Kees Cook wrote:
> On Fri, Mar 4, 2016 at 8:25 AM, Baoquan He <bhe@redhat.com> wrote:
> > From: Yinghai Lu <yinghai@kernel.org>
> >
> > We are going to support KASLR for 64-bit above 4G, so the new random
> > output address could be anywhere. The mem_avoid array is used by KASLR
> > when searching for a new output address. The current code only tracks
> > the range after output+output_size, so we need to track all the ranges
> > instead of just the one after output+output_size.
> >
> > In the current code, the first entry covers the extra bytes before
> > input+input_size, sized according to output_size. The other entries are
> > for the initrd, the command line, and the heap/stack for ZO to run in.
> >
> > First, let's check what the first entry in the mem_avoid array should
> > be. Since ZO now always sits at the end of the buffer, we can find out
> > where ZO's text, data/bss, etc. are.
> >
> > Since init_size >= run_size and input+input_len >= output+output_len,
> > make several assumptions here so the diagram is easier to draw:
> >  - init_size > run_size
> >  - input+input_len > output+output_len
> >  - run_size > output_len
> 
> I would like to see each of these assumptions justified. Why is
> init_size > run_size, etc?
> choose_kernel_location's "output_size" is calculated as max(run_size,
> output_len), so run_size may not be > output_len...

Sure. I will add this case in the next post. Thanks a lot.
> 
> >
> > 0   output                       input             input+input_len          output+init_size
> > |     |                            |                       |                       |
> > |-----|-------------------|--------|------------------|----|------------|----------|
> >                           |                           |                 |
> >              output+init_size-ZO_INIT_SIZE    output+output_len    output+run_size
> >
> > [output, output+init_size) is the decompression buffer for the
> > compressed kernel.
> >
> > [output, output+run_size) is the VO run size.
> > [output, output+output_len) is VO (vmlinux after objcopy) plus relocs.
> >
> > [output+init_size-ZO_INIT_SIZE, output+init_size) is the copied ZO.
> > [input, input+input_len) is the copied compressed data (VO (vmlinux
> > after objcopy) plus relocs), not the ZO.
> >
> > [input+input_len, output+init_size) is [_text, _end) for ZO, and it can
> > be the first range in mem_avoid. This new first entry already includes
> > the heap and stack that ZO runs with, so there is no need to put them
> > into the mem_avoid array separately.
> >
> > [input, input+input_size) also needs to go into the mem_avoid array. It
> > is adjacent to the new first entry, so merge the two.
> 
> I wonder if this diagram and description should live in a comment with the code.

I think it would be very helpful for people interested in this process.
Do you think it's OK to put it where init_size is calculated, in
boot/header.S? Or is there a more suitable place?
> 
> 
> >
> > Finally, we need to put boot_params into mem_avoid too, since with
> > 64-bit a bootloader could put it anywhere.
> >
> > After these changes, the mem_avoid array holds all the ranges that need
> > to be avoided.
> >
> > Cc: Kees Cook <keescook@chromium.org>
> > Signed-off-by: Yinghai Lu <yinghai@kernel.org>
> > ---
> > v2->v3:
> >     Adjust the patch log.
> >
> >  arch/x86/boot/compressed/aslr.c | 29 +++++++++++++----------------
> >  1 file changed, 13 insertions(+), 16 deletions(-)
> >
> > diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
> > index 622aa88..b93be03 100644
> > --- a/arch/x86/boot/compressed/aslr.c
> > +++ b/arch/x86/boot/compressed/aslr.c
> > @@ -109,7 +109,7 @@ struct mem_vector {
> >         unsigned long size;
> >  };
> >
> > -#define MEM_AVOID_MAX 5
> > +#define MEM_AVOID_MAX 4
> >  static struct mem_vector mem_avoid[MEM_AVOID_MAX];
> >
> >  static bool mem_contains(struct mem_vector *region, struct mem_vector *item)
> > @@ -135,21 +135,22 @@ static bool mem_overlaps(struct mem_vector *one, struct mem_vector *two)
> >  }
> >
> >  static void mem_avoid_init(unsigned long input, unsigned long input_size,
> > -                          unsigned long output, unsigned long output_size)
> > +                          unsigned long output)
> >  {
> > +       unsigned long init_size = real_mode->hdr.init_size;
> >         u64 initrd_start, initrd_size;
> >         u64 cmd_line, cmd_line_size;
> > -       unsigned long unsafe, unsafe_len;
> >         char *ptr;
> >
> >         /*
> >          * Avoid the region that is unsafe to overlap during
> > -        * decompression (see calculations at top of misc.c).
> > +        * decompression.
> > +        * As we already move ZO (arch/x86/boot/compressed/vmlinux)
> > +        * to the end of buffer, [input+input_size, output+init_size)
> > +        * has [_text, _end) for ZO.
> >          */
> > -       unsafe_len = (output_size >> 12) + 32768 + 18;
> > -       unsafe = (unsigned long)input + input_size - unsafe_len;
> > -       mem_avoid[0].start = unsafe;
> > -       mem_avoid[0].size = unsafe_len;
> > +       mem_avoid[0].start = input;
> > +       mem_avoid[0].size = (output + init_size) - input;
> >
> >         /* Avoid initrd. */
> >         initrd_start  = (u64)real_mode->ext_ramdisk_image << 32;
> > @@ -169,13 +170,9 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
> >         mem_avoid[2].start = cmd_line;
> >         mem_avoid[2].size = cmd_line_size;
> >
> > -       /* Avoid heap memory. */
> > -       mem_avoid[3].start = (unsigned long)free_mem_ptr;
> > -       mem_avoid[3].size = BOOT_HEAP_SIZE;
> > -
> > -       /* Avoid stack memory. */
> > -       mem_avoid[4].start = (unsigned long)free_mem_end_ptr;
> > -       mem_avoid[4].size = BOOT_STACK_SIZE;
> > +       /* Avoid params */
> > +       mem_avoid[3].start = (unsigned long)real_mode;
> > +       mem_avoid[3].size = sizeof(*real_mode);
> >  }
> >
> >  /* Does this memory vector overlap a known avoided area? */
> > @@ -319,7 +316,7 @@ unsigned char *choose_kernel_location(unsigned char *input,
> >
> >         /* Record the various known unsafe memory ranges. */
> >         mem_avoid_init((unsigned long)input, input_size,
> > -                      (unsigned long)output, output_size);
> > +                      (unsigned long)output);
> >
> >         /* Walk e820 and find a random address. */
> >         random = find_random_addr(choice, output_size);
> > --
> > 2.5.0
> >
> 
> 
> 
> -- 
> Kees Cook
> Chrome OS & Brillo Security
