From: Ingo Molnar <mingo@kernel.org>
To: Baoquan He <bhe@redhat.com>, Andy Lutomirski <luto@kernel.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
	linux-doc@vger.kernel.org, tglx@linutronix.de,
	thgarnie@google.com, corbet@lwn.net,
	Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH 4/3 v2] x86/mm/doc: Enhance the x86-64 virtual memory layout descriptions
Date: Sat, 6 Oct 2018 19:03:17 +0200
Message-ID: <20181006170317.GA21297@gmail.com>
In-Reply-To: <20181006143821.GA72401@gmail.com>


There's one PTI-related layout asymmetry I noticed between 4-level and 5-level kernels:

  47-bit:
> +                                                            |
> +                                                            | Kernel-space virtual memory, shared between all processes:
> +____________________________________________________________|___________________________________________________________
> +                  |            |                  |         |
> + ffff800000000000 | -128    TB | ffff87ffffffffff |    8 TB | ... guard hole, also reserved for hypervisor
> + ffff880000000000 | -120    TB | ffffc7ffffffffff |   64 TB | direct mapping of all physical memory (page_offset_base)
> + ffffc80000000000 |  -56    TB | ffffc8ffffffffff |    1 TB | ... unused hole
> + ffffc90000000000 |  -55    TB | ffffe8ffffffffff |   32 TB | vmalloc/ioremap space (vmalloc_base)
> + ffffe90000000000 |  -23    TB | ffffe9ffffffffff |    1 TB | ... unused hole
> + ffffea0000000000 |  -22    TB | ffffeaffffffffff |    1 TB | virtual memory map (vmemmap_base)
> + ffffeb0000000000 |  -21    TB | ffffebffffffffff |    1 TB | ... unused hole
> + ffffec0000000000 |  -20    TB | fffffbffffffffff |   16 TB | KASAN shadow memory
> + fffffc0000000000 |   -4    TB | fffffdffffffffff |    2 TB | ... unused hole
> +                  |            |                  |         | vaddr_end for KASLR
> + fffffe0000000000 |   -2    TB | fffffe7fffffffff |  0.5 TB | cpu_entry_area mapping
> + fffffe8000000000 |   -1.5  TB | fffffeffffffffff |  0.5 TB | LDT remap for PTI
> + ffffff0000000000 |   -1    TB | ffffff7fffffffff |  0.5 TB | %esp fixup stacks
> +__________________|____________|__________________|_________|____________________________________________________________
> +                                                            |

  56-bit:
> +                                                            |
> +                                                            | Kernel-space virtual memory, shared between all processes:
> +____________________________________________________________|___________________________________________________________
> +                  |            |                  |         |
> + ff00000000000000 |  -64    PB | ff0fffffffffffff |    4 PB | ... guard hole, also reserved for hypervisor
> + ff10000000000000 |  -60    PB | ff8fffffffffffff |   32 PB | direct mapping of all physical memory (page_offset_base)
> + ff90000000000000 |  -28    PB | ff9fffffffffffff |    4 PB | LDT remap for PTI
> + ffa0000000000000 |  -24    PB | ffd1ffffffffffff | 12.5 PB | vmalloc/ioremap space (vmalloc_base)
> + ffd2000000000000 |  -11.5  PB | ffd3ffffffffffff |  0.5 PB | ... unused hole
> + ffd4000000000000 |  -11    PB | ffd5ffffffffffff |  0.5 PB | virtual memory map (vmemmap_base)
> + ffd6000000000000 |  -10.5  PB | ffdeffffffffffff | 2.25 PB | ... unused hole
> + ffdf000000000000 |   -8.25 PB | fffffdffffffffff |   ~8 PB | KASAN shadow memory
> + fffffc0000000000 |   -4    TB | fffffdffffffffff |    2 TB | ... unused hole
> +                  |            |                  |         | vaddr_end for KASLR
> + fffffe0000000000 |   -2    TB | fffffe7fffffffff |  0.5 TB | cpu_entry_area mapping
> + fffffe8000000000 |   -1.5  TB | fffffeffffffffff |  0.5 TB | ... unused hole
> + ffffff0000000000 |   -1    TB | ffffff7fffffffff |  0.5 TB | %esp fixup stacks

The two layouts are very similar beyond the shift in the offsets and the region sizes, except 
for one big asymmetry: the placement of the LDT remap for PTI.

Is there any fundamental reason why the LDT area is mapped into a 4 petabyte (!) area on 56-bit 
kernels, instead of being at the -1.5 TB offset like on 47-bit kernels?

The only reason I can see is that it's currently coded at the PGD level only:

static void map_ldt_struct_to_user(struct mm_struct *mm)
{
        pgd_t *pgd = pgd_offset(mm, LDT_BASE_ADDR);

        /* On the first LDT install only: copy the kernel PGD entry covering
           LDT_BASE_ADDR into the user half of the PTI page tables. */
        if (static_cpu_has(X86_FEATURE_PTI) && !mm->context.ldt)
                set_pgd(kernel_to_user_pgdp(pgd), *pgd);
}

( BTW, the 4 petabyte size of the area is misleading: a 5-level PGD entry covers 256 TB of 
  virtual memory, i.e. 0.25 PB, not 4 PB. So in reality we have a 0.25 PB area there, used up
  by the LDT mapping in a single PGD entry, plus a 3.75 PB hole after that. )
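
( To double-check that arithmetic, here's a minimal standalone user-space sketch - not kernel
  code, just recomputing the numbers, assuming the usual 5-level PGDIR_SHIFT of 48: )

#include <stdio.h>

int main(void)
{
        /* One 5-level PGD entry covers 2^48 bytes: */
        unsigned long long pgd_span = 1ULL << 48;

        /* The 56-bit layout reserves ff90000000000000..ff9fffffffffffff for the LDT remap: */
        unsigned long long ldt_slot = 0xffa0000000000000ULL - 0xff90000000000000ULL;

        printf("PGD entry span: %llu TB\n", pgd_span >> 40);              /*  256 TB = 0.25 PB */
        printf("reserved slot:  %llu TB\n", ldt_slot >> 40);              /* 4096 TB = 4    PB */
        printf("hole after LDT: %llu TB\n", (ldt_slot - pgd_span) >> 40); /* 3840 TB = 3.75 PB */

        return 0;
}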

... but unless I'm missing something it's not really fundamental for it to be at the PGD level 
- it could be two levels lower as well, and it could move back to the same place as on the 
47-bit kernel.
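
( For reference, the per-entry coverage of the paging levels - again just standalone arithmetic,
  assuming the usual x86-64 5-level shifts PGDIR_SHIFT=48 and P4D_SHIFT=39: a single P4D entry
  already covers exactly the 0.5 TB slot the 47-bit layout uses at -1.5 TB: )

#include <stdio.h>

int main(void)
{
        /* One P4D entry with 5-level paging covers 2^39 bytes: */
        unsigned long long p4d_span = 1ULL << 39;

        /* The 47-bit LDT slot: fffffe8000000000..fffffeffffffffff, i.e. -1.5 TB to -1 TB: */
        unsigned long long ldt_slot = 0xffffff0000000000ULL - 0xfffffe8000000000ULL;

        printf("P4D entry span:  %llu GB\n", p4d_span >> 30);   /* 512 GB */
        printf("47-bit LDT slot: %llu GB\n", ldt_slot >> 30);   /* 512 GB */

        return 0;
}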

The LDT mapping operation is pretty heavy already, and the actual use of the LDT is not 
impacted by where it's mapped, as the LDT is per mm so no remapping is required on context 
switch.

I.e. could we move the LDT over to the same place? This would make an even larger area of the 
address space identical between 47-bit and 56-bit kernels:

                                                            |
                                                            | Identical layout to the 47-bit one from here on:
____________________________________________________________|____________________________________________________________
                  |            |                  |         |
 fffffc0000000000 |   -4    TB | fffffdffffffffff |    2 TB | ... unused hole
                  |            |                  |         | vaddr_end for KASLR
 fffffe0000000000 |   -2    TB | fffffe7fffffffff |  0.5 TB | cpu_entry_area mapping
 fffffe8000000000 |   -1.5  TB | fffffeffffffffff |  0.5 TB | LDT remap for PTI
 ffffff0000000000 |   -1    TB | ffffff7fffffffff |  0.5 TB | %esp fixup stacks
 ffffff8000000000 | -512    GB | ffffffeeffffffff |  444 GB | ... unused hole
 ffffffef00000000 |  -68    GB | fffffffeffffffff |   64 GB | EFI region mapping space
 ffffffff00000000 |   -4    GB | ffffffff7fffffff |    2 GB | ... unused hole
 ffffffff80000000 |   -2    GB | ffffffff9fffffff |  512 MB | kernel text mapping, mapped to physical address 0
 ffffffff80000000 |-2048    MB |                  |         |
 ffffffffa0000000 |-1536    MB | fffffffffeffffff | 1520 MB | module mapping space
 ffffffffff000000 |  -16    MB |                  |         |
    FIXADDR_START | ~-11    MB | ffffffffff5fffff | ~0.5 MB | kernel-internal fixmap range, variable size and offset
 ffffffffff600000 |  -10    MB | ffffffffff600fff |    4 kB | legacy vsyscall ABI
 ffffffffffe00000 |   -2    MB | ffffffffffffffff |    2 MB | ... unused hole
__________________|____________|__________________|_________|___________________________________________________________

And the rest would basically just be 4 areas: the direct-mapping, vmalloc, vmemmap and KASAN 
areas - which are scaled according to whether it's a 47-bit or 56-bit kernel.

Thoughts?

Thanks,

	Ingo

