From: David Vrabel <david.vrabel@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [RFC] x86: PV SMAP for 64-bit guests
Date: Wed, 29 Jan 2014 18:00:31 +0000
Message-ID: <52E941BF.3070308@citrix.com>
In-Reply-To: <52E92D580200007800117FC1@nat28.tlf.novell.com>

On 29/01/14 15:33, Jan Beulich wrote:
> Considering that SMAP (and SMEP) aren't usable for 64-bit PV guests
> (due to them running in ring 3), I drafted a mostly equivalent PV
> solution, at this point mainly to gauge how useful people think this
> would be.
> 
> It is based on switching page tables: alongside the two page tables
> we have right now - one containing user mappings only, the other
> containing both kernel and user mappings - a third category gets
> added containing kernel mappings only. Linux would have such a table
> readily available, and hence would presumably require not too
> intrusive changes. This of course makes clear that the approach would
> come with quite a bit of a performance cost. Furthermore, the state
> management obviously requires a couple of extra instructions to be
> added to reasonably hot hypervisor code paths.
> 
> Hence, before going further with this approach (for now I have only
> got it to the point that an unpatched Linux is unaffected, i.e. I
> haven't coded up the Linux side yet), I would be interested to hear
> people's opinions on whether the performance cost is worth it, or
> whether we should instead consider PVH the one and only route towards
> gaining that extra level of security.
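
To restate the page-table arrangement, purely for illustration (no such
enum exists in either tree -- the names here are made up):

  /* The three per-guest page table flavours under discussion.  64-bit
   * PV guests already have the first two; the proposal adds the third
   * and makes it the default while the guest kernel runs. */
  enum pv_smap_pagetable {
          PT_USER_ONLY,           /* user mappings only (exists today) */
          PT_KERNEL_USER,         /* kernel + user mappings (exists today) */
          PT_KERNEL_ONLY,         /* kernel mappings only (new); switched
                                   * away from only for the duration of
                                   * explicit user accesses */
  };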

If I'm understanding this correctly, upstream Linux would require two
new pv-ops, for clac and stac?  This might make upstreaming the Linux
support tricky, but I wouldn't suggest blocking a Xen feature for that
reason.
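
Presumably something along these lines (entirely hypothetical -- the op
names are invented; native/HVM would patch in the real STAC/CLAC
instructions while the Xen PV backend issued the new hypercall):

  /* Hypothetical paravirt hooks; nothing like this exists upstream. */
  static inline void pv_stac(void)
  {
          PVOP_VCALL0(pv_cpu_ops.stac);   /* Xen PV: hypercall */
  }

  static inline void pv_clac(void)
  {
          PVOP_VCALL0(pv_cpu_ops.clac);   /* Xen PV: hypercall */
  }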

Each copy_from_user(), copy_to_user(), and get_user()/put_user() call
would thus require two hypercalls, at least one of which would do a
TLB flush?
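
I.e. every uaccess helper would end up bracketed roughly like this
(sketch only; xen_pv_stac()/xen_pv_clac() are made-up names for the
hypercall wrappers):

  /* Two transitions per user access, each a hypercall under the
   * proposed scheme; the switch to the kernel+user table implies a
   * page table change and hence a TLB flush. */
  unsigned long copy_from_user(void *to, const void __user *from,
                               unsigned long n)
  {
          unsigned long ret;

          xen_pv_stac();          /* hypercall: make user mappings visible */
          ret = __copy_from_user(to, from, n);
          xen_pv_clac();          /* hypercall: hide them again */
          return ret;
  }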

This does sound rather expensive and thus not something we (XenServer)
would be especially interested in using.

Do you have any figures for the performance impact on guests not using
this feature?

David

Thread overview: 7+ messages
2014-01-29 15:33 [RFC] x86: PV SMAP for 64-bit guests Jan Beulich
2014-01-29 18:00 ` David Vrabel [this message]
2014-01-30  7:37   ` Jan Beulich
2014-01-29 18:04 ` Andrew Cooper
2014-01-30  7:43   ` Jan Beulich
2014-01-31 16:56 ` Konrad Rzeszutek Wilk
2014-02-03  7:50   ` Jan Beulich
