From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [PATCH] x86/xpti: Hide almost all of .text and all .data/.rodata/.bss mappings
Date: Wed, 14 Feb 2018 12:36:29 +0000
Message-ID: <d0ddadec-4fcf-72fa-5ba9-8109f0eb8c1b@citrix.com>
In-Reply-To: <1ed6a169-06ed-d4ed-41ed-7e1f41602f8a@suse.com>

On 14/02/18 12:27, Juergen Gross wrote:
> On 14/02/18 13:19, Andrew Cooper wrote:
>> On 14/02/18 12:15, Juergen Gross wrote:
>>> On 14/02/18 13:03, Juergen Gross wrote:
>>>> On 14/02/18 12:48, Andrew Cooper wrote:
>>>>> On 14/02/18 07:54, Juergen Gross wrote:
>>>>>> On 13/02/18 20:45, Andrew Cooper wrote:
>>>>>>> The current XPTI implementation isolates the directmap (and therefore a lot of
>>>>>>> guest data), but a large quantity of CPU0's state (including its stack)
>>>>>>> remains visible.
>>>>>>>
>>>>>>> Furthermore, an attacker able to read .text is in a vastly superior position
>>>>>>> to normal when it comes to fingerprinting Xen for known vulnerabilities, or
>>>>>>> scanning for ROP/Spectre gadgets.
>>>>>>>
>>>>>>> Collect together the entrypoints in .text.entry (currently 3x4k frames, but
>>>>>>> can almost certainly be slimmed down), and create a common mapping which is
>>>>>>> inserted into each per-cpu shadow.  The stubs are also inserted into this
>>>>>>> mapping by pointing at the in-use L2.  This allows stubs allocated later (SMP
>>>>>>> boot, or CPU hotplug) to work without further changes to the common mappings.
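
For illustration, a minimal sketch of what "insert a common mapping into
each per-cpu shadow" could look like, assuming a shared L3 covering
.text.entry and the stubs, plus a hypothetical helper name (neither is
taken from the patch below):

    /*
     * Hypothetical sketch, not the patch itself: publish a shared L3
     * (covering .text.entry and the stubs) into one CPU's XPTI shadow
     * root, so every shadow sees the same entry-point mappings.
     */
    static void publish_entry_text(root_pgentry_t *shadow_l4,
                                   l3_pgentry_t *common_entry_l3)
    {
        unsigned int slot = l4_table_offset(XEN_VIRT_START);

        /* One slot write per shadow root; the L3 itself is shared. */
        shadow_l4[slot] = l4e_from_paddr(__pa(common_entry_l3),
                                         __PAGE_HYPERVISOR);
    }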
>>>>>>>
>>>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>>>> ---
>>>>>>> CC: Jan Beulich <JBeulich@suse.com>
>>>>>>> CC: Wei Liu <wei.liu2@citrix.com>
>>>>>>> CC: Juergen Gross <jgross@suse.com>
>>>>>>>
>>>>>>> RFC, because I don't think the stubs handling is particularly sensible.
>>>>>>>
>>>>>>> We allocate 4k of virtual address space per CPU, but squash loads of CPUs
>>>>>>> together onto a single MFN.  The stubs ought to be isolated as well (as they
>>>>>>> leak the virtual addresses of each stack), which can be done by allocating an
>>>>>>> MFN per CPU (and simplifies cpu_smpboot_alloc() somewhat).  At this point, we
>>>>>>> can't use a common set of mappings, and will have to clone the single stub and
>>>>>>> .entry.text into each PCPUs copy of the pagetables.
>>>>>>>
>>>>>>> Also, my plan to cause .text.entry to straddle a 512TB boundary (and therefore
>>>>>>> avoid any further pagetable allocations) has come a little unstuck because of
>>>>>>> CONFIG_BIGMEM.  I'm still working out whether there is a sensible way to
>>>>>>> rearrange the virtual layout for this plan to work.
>>>>>>> ---
>>>>>>>  xen/arch/x86/smpboot.c             | 37 ++++++++++++++++++++++++++++++++-----
>>>>>>>  xen/arch/x86/x86_64/compat/entry.S |  2 ++
>>>>>>>  xen/arch/x86/x86_64/entry.S        |  4 +++-
>>>>>>>  xen/arch/x86/xen.lds.S             |  7 +++++++
>>>>>>>  4 files changed, 44 insertions(+), 6 deletions(-)
>>>>>>>
>>>>>>> diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
>>>>>>> index 2ebef03..2519141 100644
>>>>>>> --- a/xen/arch/x86/smpboot.c
>>>>>>> +++ b/xen/arch/x86/smpboot.c
>>>>>>> @@ -622,6 +622,9 @@ unsigned long alloc_stub_page(unsigned int cpu, unsigned long *mfn)
>>>>>>>          unmap_domain_page(memset(__map_domain_page(pg), 0xcc, PAGE_SIZE));
>>>>>>>      }
>>>>>>>  
>>>>>>> +    /* Confirm that all stubs fit in a single L2 pagetable. */
>>>>>>> +    BUILD_BUG_ON(NR_CPUS * PAGE_SIZE > (1u << L2_PAGETABLE_SHIFT));
>>>>>> So we limit NR_CPUS to a maximum of 512 now?
>>>>> Not intentionally.  The PAGE_SIZE should be dropped.  (One full L2
>>>>> pagetable allows us to map 512*512 pages).
>>>> L2_PAGETABLE_SHIFT is 21. So I still don't get why dropping PAGE_SIZE
>>>> will correct it. OTOH I'm quite sure the BUILD_BUG_ON() won't trigger
>>>> any more with PAGE_SIZE being dropped. :-)
>>>>
>>>>>> Maybe you should use STUB_BUF_SIZE instead of PAGE_SIZE?
>>>>> No - that would be incorrect because of the physical packing of stubs
>>>>> which occurs.
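
To spell out the arithmetic under discussion (illustrative only; the
alternative check at the end is an assumption, not something taken from
a later revision):

    /*
     * As posted, NR_CPUS * PAGE_SIZE > (1u << L2_PAGETABLE_SHIFT)
     * compares against a single 2MiB superpage, so it fires once
     * NR_CPUS > 512 (2MiB / 4KiB).
     *
     * A full L2 pagetable has L2_PAGETABLE_ENTRIES (512) entries of
     * 2MiB each, i.e. it maps 1GiB = 512 * 512 pages.  A check
     * expressing "all stub pages fit under a single L2 pagetable"
     * could instead be:
     */
    BUILD_BUG_ON(NR_CPUS * PAGE_SIZE >
                 L2_PAGETABLE_ENTRIES * (1ul << L2_PAGETABLE_SHIFT));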
>>>>>
>>>>>> BTW: Is there any reason we don't use a common stub page mapped to each
>>>>>> per-cpu stack area? The stack address can then easily be obtained via
>>>>>> %rip-relative addressing (see my patch for the per-vcpu stacks:
>>>>>> https://lists.xen.org/archives/html/xen-devel/2018-02/msg00773.html )
>>>>> I don't understand what you are asking here.  We cannot access the
>>>>> per-cpu area with plain rip-relative addressing without using gs base
>>>>> (and we really don't want to go down that route), or without per-cpu
>>>>> pagetables (which would have to be a compile-time choice).
>>>> The stub page of a CPU is currently mapped as the 3rd page of the
>>>> stack area. So the distance to the primary stack would be the same
>>>> for all CPUs (a little bit less than 20kB).
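
As a sketch of the layout assumption behind that suggestion (the page
index and helper below are illustrative, not taken from any patch):

    /*
     * If the stub page always sits at the same page index inside every
     * CPU's STACK_SIZE-aligned stack area, the primary stack is a
     * compile-time-constant distance away from any instruction in the
     * stub, so an entry stub could locate it with a single
     * %rip-relative lea.
     */
    #define STUB_PAGE_INDEX   3   /* assumed position within the area */
    #define STUB_TO_STACK_TOP (STACK_SIZE - STUB_PAGE_INDEX * PAGE_SIZE)

    static inline unsigned long stack_top_from_stub(unsigned long stub_va)
    {
        return stub_va + STUB_TO_STACK_TOP;
    }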
>>>>
>>>>> As for why the per-cpu areas aren't mapped, that's because they aren't
>>>>> needed at the moment.  Any decision to change this needs to weigh the
>>>>> utility of mapping the areas vs the additional data leakage, which is
>>>>> substantial.
>>>> The stack area is mapped. And that's where the stub is living.
>>> Oh, did I mix up something? I followed the comments in current.h. The
>>> code suggests the syscall trampoline page isn't used at the moment for
>>> the stubs...
>> That comment will be stale after the work Jan did to make the stack
>> fully NX.  The syscall stubs used to be on the stack, but are no longer.
> Changing this to make the syscall stub RO and executable again shouldn't
> be impossible, I guess. Especially if it makes life easier in other
> places, we should at least consider that option.

The stubs are RO/NX in the XEN_VIRT_* range, and it is these mappings
that I clone.

The emulation stubs are written by using map_domain_page() to make a
writeable alias.
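
Roughly, the pattern being referred to is the following (a sketch only;
stub_bytes/stub_len are placeholders, and the per-cpu stubs.addr/.mfn
fields are assumed):

    /*
     * Write through a temporary writeable alias of the stub MFN, while
     * the XEN_VIRT_* mapping of the same page stays read-only.
     */
    static void emit_stub(const void *stub_bytes, size_t stub_len)
    {
        unsigned long addr = this_cpu(stubs.addr);
        void *alias = map_domain_page(_mfn(this_cpu(stubs.mfn)));

        memcpy(alias + (addr & ~PAGE_MASK), stub_bytes, stub_len);
        unmap_domain_page(alias);
    }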

~Andrew
