xen-devel.lists.xenproject.org archive mirror
From: Juergen Gross <jgross@suse.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [PATCH] x86/hvm: Allow the guest to permit the use of userspace hypercalls
Date: Wed, 13 Jan 2016 12:14:05 +0100	[thread overview]
Message-ID: <5696317D.2030209@suse.com> (raw)
In-Reply-To: <alpine.DEB.2.02.1601131034290.13564@kaball.uk.xensource.com>

On 13/01/16 11:41, Stefano Stabellini wrote:
> On Wed, 13 Jan 2016, Juergen Gross wrote:
>> On 12/01/16 18:23, Stefano Stabellini wrote:
>>> On Tue, 12 Jan 2016, Juergen Gross wrote:
>>>> On 12/01/16 18:05, Stefano Stabellini wrote:
>>>>> On Tue, 12 Jan 2016, Jan Beulich wrote:
>>>>>>>>> On 12.01.16 at 13:07, <stefano.stabellini@eu.citrix.com> wrote:
>>>>>>> On Mon, 11 Jan 2016, David Vrabel wrote:
>>>>>>>> On 11/01/16 17:17, Andrew Cooper wrote:
>>>>>>>>> So from one point of view, sufficient justification for this change is
>>>>>>>>> "because the Linux way isn't the only valid way to do this".
>>>>>>>>
>>>>>>>> "Because we can" isn't a good justification for adding something new.
>>>>>>>> Particularly something that is trivially easy to (accidentally) misuse
>>>>>>>> and open a big security hole between userspace and kernel.
>>>>>>>>
>>>>>>>> The vague idea for a userspace netfront that's floating around
>>>>>>>> internally is also not a good reason for pushing this feature at this time.
>>>>>>>
>>>>>>> I agree with David, but I might have another good use case for this.
>>>>>>>
>>>>>>> Consider the following scenario: we have a Xen HVM guest, with Xen
>>>>>>> installed inside of it (nested virtualization). I'll refer to Xen
>>>>>>> running on the host as L0 Xen and Xen running inside the VM as L1 Xen.
>>>>>>> Similarly we have two dom0s running: the one with access to the physical
>>>>>>> hardware, L0 Dom0, and the one running inside the VM, L1 Dom0.
>>>>>>>
>>>>>>> Let's suppose that we want to lay the groundwork for L1 Dom0 to use PV
>>>>>>> frontend drivers, netfront and blkfront, to speed up execution. In order
>>>>>>> to do that, the first thing it needs to do is make a hypercall to L0
>>>>>>> Xen. That's because netfront and blkfront need to communicate with
>>>>>>> netback and blkback in L0 Dom0: event channels and grant tables are the
>>>>>>> ones provided by L0 Xen.
>>>>>>
>>>>>> That's again a layering violation (bypassing the L1 hypervisor).
>>>>>
>>>>> True, but in this scenario it might be necessary for performance
>>>>> reasons: otherwise every hypercall would need to bounce off L1 Xen,
>>>>> possibly cancelling the benefits of running netfront and blkfront in the
>>>>> first place. I don't have numbers though.
>>>>
>>>> How is this supposed to work? How can dom0 make hypercalls to the L1 _or_
>>>> the L0 hypervisor? How can it select the hypervisor it is talking to?
>>>
>>> From L0 Xen's point of view, the guest is just a normal PV on HVM guest;
>>> it doesn't matter what's inside, so L1 Dom0 is going to make hypercalls
>>> to L0 Xen like any other PV on HVM guest: mapping the hypercall page by
>>> writing to the right MSR, retrieved via cpuid, then calling into the
>>
>> But how would one specify that the cpuid/MSR should target the L0
>> hypervisor instead of L1?
> 
> Keeping in mind that L1 Dom0 is a PV guest from L1 Xen's point of view,
> but a PV on HVM guest from L0 Xen's point of view, it is true that the
> cpuid could be an issue, because the cpuid would be generated by L0 Xen
> but then get filtered by L1 Xen. However, the MSR should be OK,
> assuming that L1 Xen allows access to it: from inside the VM it would
> look like a regular machine MSR; it couldn't get confused with anything
> that causes hypercalls to L1 Xen.

L1 Xen wouldn't allow access to it. Otherwise it couldn't ever set up
a hypercall page for one of its guests.
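
For reference, the PV-on-HVM setup being discussed works roughly as in
the sketch below: probe the Xen CPUID leaves, read the hypercall-page
MSR index from leaf 0x40000002, and write the page's guest-physical
address to that MSR so Xen fills the page with stubs. This is only an
illustrative sketch of that protocol, assuming the standard base leaf
0x40000000 (no Viridian offset); it has to run at CPL0 inside the guest
kernel, and xen_hypercall_page / virt_to_phys are stand-in names for
whatever the kernel actually provides.

#include <stdint.h>

extern char xen_hypercall_page[4096];     /* page-aligned, reserved by the kernel */
extern uint64_t virt_to_phys(void *va);   /* stand-in for the kernel's VA->PA translation */

static void cpuid(uint32_t leaf, uint32_t *a, uint32_t *b,
                  uint32_t *c, uint32_t *d)
{
    asm volatile("cpuid"
                 : "=a" (*a), "=b" (*b), "=c" (*c), "=d" (*d)
                 : "0" (leaf));
}

static void wrmsr(uint32_t msr, uint64_t val)
{
    asm volatile("wrmsr" :: "c" (msr),
                 "a" ((uint32_t)val), "d" ((uint32_t)(val >> 32)));
}

static int xen_map_hypercall_page(void)
{
    uint32_t eax, ebx, ecx, edx, msr;

    /* Leaf 0x40000000: hypervisor signature, "XenVMMXenVMM" for Xen. */
    cpuid(0x40000000, &eax, &ebx, &ecx, &edx);
    if (ebx != 0x566e6558 || ecx != 0x65584d4d || edx != 0x4d4d566e)
        return -1;                        /* not running on Xen */

    /* Leaf 0x40000002: EAX = number of hypercall pages,
     * EBX = index of the MSR used to establish them. */
    cpuid(0x40000002, &eax, &ebx, &ecx, &edx);
    msr = ebx;

    /* Writing the page's guest-physical address to that MSR makes Xen
     * fill the page with hypercall stubs. */
    wrmsr(msr, virt_to_phys(xen_hypercall_page));

    return 0;
}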

>> And even if this were working, just mapping
>> the correct page wouldn't help: the instructions doing the transition
>> to the hypervisor would still result in entering the L1 hypervisor, as
>> those instructions must be handled by L1 first in order to make
>> nested virtualization work.
> 
> This is wrong. The hypercall page populated by L0 Xen would contain
> vmcall instructions. When L1 Dom0 calls into the hypercall page, it
> would end up making a vmcall, which brings it directly to L0 Xen,
> skipping L1 Xen.
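
(For concreteness: when a guest writes to the hypercall-page MSR, Xen
fills the page with one 32-byte stub per hypercall number, along these
lines. This is a sketch of the layout only; Intel encoding shown, AMD
gets vmmcall instead of vmcall.)

/* Slot for hypercall number 0, at offset 0 * 32 in the page;
 * slot N lives at offset N * 32 with N as the mov immediate. */
static const unsigned char stub0[] = {
    0xb8, 0x00, 0x00, 0x00, 0x00,   /* mov  $0, %eax  (hypercall number)  */
    0x0f, 0x01, 0xc1,               /* vmcall         (vmmcall = 0f 01 d9) */
    0xc3,                           /* ret                                 */
};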

Sure. And L0 Xen will see that this guest is subject to nested
virtualization and will reflect the vmcall to L1 Xen (see e.g.
xen/arch/x86/hvm/svm/nestedsvm.c, nestedsvm_check_intercepts()).
How else would L1 Xen ever get a vmcall from one of its guests?
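
In outline the decision is the following check on the L0 exit path. This
is an illustrative sketch only, not the actual nestedsvm.c code;
vcpu_is_nested(), l1_intercepts_vmmcall(), inject_nested_vmexit() and
do_l0_hypercall() are hypothetical names for what Xen does in
nestedsvm_check_intercepts() and its VMX counterpart.

#include <stdbool.h>

struct vcpu;                                       /* opaque here */
bool vcpu_is_nested(const struct vcpu *v);         /* hypothetical */
bool l1_intercepts_vmmcall(const struct vcpu *v);  /* hypothetical */
void inject_nested_vmexit(struct vcpu *v);         /* hypothetical */
void do_l0_hypercall(struct vcpu *v);              /* hypothetical */

/* Called by L0 Xen when it takes a VMEXIT for a vmcall/vmmcall. */
static void handle_vmcall_exit(struct vcpu *v)
{
    if (vcpu_is_nested(v) && l1_intercepts_vmmcall(v)) {
        /* The guest is running under a nested hypervisor which asked
         * to intercept this event: reflect the exit, so L1 Xen sees
         * the vmcall as coming from its own guest. */
        inject_nested_vmexit(v);
        return;
    }

    /* Only otherwise does L0 treat the vmcall as a hypercall aimed
     * at itself. */
    do_l0_hypercall(v);
}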


Juergen


Thread overview: 24+ messages
2016-01-11 13:59 [PATCH] x86/hvm: Allow the guest to permit the use of userspace hypercalls Andrew Cooper
2016-01-11 14:32 ` Paul Durrant
2016-01-11 14:44 ` Jan Beulich
2016-01-11 17:17   ` Andrew Cooper
2016-01-11 18:26     ` David Vrabel
2016-01-11 18:32       ` Andrew Cooper
2016-01-11 18:40         ` David Vrabel
2016-01-11 18:50           ` Andrew Cooper
2016-01-12 12:07       ` Stefano Stabellini
2016-01-12 15:06         ` Jan Beulich
2016-01-12 17:05           ` Stefano Stabellini
2016-01-12 17:10             ` Juergen Gross
2016-01-12 17:23               ` Stefano Stabellini
2016-01-13  5:12                 ` Juergen Gross
2016-01-13 10:41                   ` Stefano Stabellini
2016-01-13 11:14                     ` Juergen Gross [this message]
2016-01-13 11:26                       ` Stefano Stabellini
2016-01-13 11:32                         ` Juergen Gross
2016-01-13 11:42         ` David Vrabel
2016-01-13 12:51           ` Stefano Stabellini
2016-01-12  7:33     ` Jan Beulich
2016-01-12 10:57       ` Andrew Cooper
2016-01-12 11:03         ` George Dunlap
2016-01-14 10:50 ` Ian Campbell
