qemu-devel.nongnu.org archive mirror
 help / color / mirror / Atom feed
* SEV guest attestation
@ 2021-11-24 16:34 Tyler Fanelli
  2021-11-24 17:27 ` Tyler Fanelli
                   ` (2 more replies)
  0 siblings, 3 replies; 26+ messages in thread
From: Tyler Fanelli @ 2021-11-24 16:34 UTC (permalink / raw)
  To: qemu-devel; +Cc: John Ferlan, Daniel P. Berrange, Dr. David Alan Gilbert

Hi,

We recently discussed a way for remote SEV guest attestation through 
QEMU. My initial approach was to get data needed for attestation through 
different QMP commands (all of which are already available, so no 
changes required there), deriving hashes and certificate data; and 
collecting all of this into a new QMP struct (SevLaunchStart, which 
would include the VM's policy, secret, and GPA) which would need to be 
upstreamed into QEMU. Once this is provided, QEMU would then need to 
have support for attestation before a VM is started. Upon speaking to 
Dave about this proposal, he mentioned that this may not be the best 
approach, as some situations would render the attestation unavailable, 
such as the instance where a VM is running in a cloud, and a guest owner 
would like to perform attestation via QMP (a likely scenario), yet a 
cloud provider cannot simply let anyone pass arbitrary QMP commands, as 
this could be an issue.

So I ask, does anyone involved in QEMU's SEV implementation have any 
input on a quality way to perform guest attestation? If so, I'd be 
interested. Thanks.


Tyler.

-- 
Tyler Fanelli (tfanelli)



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-24 16:34 SEV guest attestation Tyler Fanelli
@ 2021-11-24 17:27 ` Tyler Fanelli
  2021-11-24 17:49 ` Dr. David Alan Gilbert
  2021-11-24 17:57 ` Daniel P. Berrangé
  2 siblings, 0 replies; 26+ messages in thread
From: Tyler Fanelli @ 2021-11-24 17:27 UTC (permalink / raw)
  To: qemu-devel; +Cc: John Ferlan, Daniel P. Berrange, Dr. David Alan Gilbert

On 11/24/21 11:34 AM, Tyler Fanelli wrote:
> We recently discussed a way for remote SEV guest attestation through QEMU.

For those interested, here is where some of the discussion took place 
before.

[1] https://listman.redhat.com/archives/libvir-list/2021-May/msg00196.html

[2] 
https://listman.redhat.com/archives/libvir-list/2021-October/msg01052.html


Tyler.

-- 
Tyler Fanelli (tfanelli)



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-24 16:34 SEV guest attestation Tyler Fanelli
  2021-11-24 17:27 ` Tyler Fanelli
@ 2021-11-24 17:49 ` Dr. David Alan Gilbert
  2021-11-24 18:29   ` Tyler Fanelli
  2021-11-24 17:57 ` Daniel P. Berrangé
  2 siblings, 1 reply; 26+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-24 17:49 UTC (permalink / raw)
  To: Tyler Fanelli, dovmurik; +Cc: John Ferlan, Daniel P. Berrange, qemu-devel

* Tyler Fanelli (tfanelli@redhat.com) wrote:
> Hi,
> 
> We recently discussed a way for remote SEV guest attestation through QEMU.
> My initial approach was to get data needed for attestation through different
> QMP commands (all of which are already available, so no changes required
> there), deriving hashes and certificate data; and collecting all of this
> into a new QMP struct (SevLaunchStart, which would include the VM's policy,
> secret, and GPA) which would need to be upstreamed into QEMU. Once this is
> provided, QEMU would then need to have support for attestation before a VM
> is started. Upon speaking to Dave about this proposal, he mentioned that
> this may not be the best approach, as some situations would render the
> attestation unavailable, such as the instance where a VM is running in a
> cloud, and a guest owner would like to perform attestation via QMP (a likely
> scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
> commands, as this could be an issue.
> 
> So I ask, does anyone involved in QEMU's SEV implementation have any input
> on a quality way to perform guest attestation? If so, I'd be interested.
> Thanks.

QMP is the right way to talk to QEMU; the question is whether something
sits between qemu and the attestation program - e.g. libvirt or possibly
subsequently something even higher level.

Can we start by you putting down what your interfaces look like at the
moment?

Dave

> 
> Tyler.
> 
> -- 
> Tyler Fanelli (tfanelli)
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-24 16:34 SEV guest attestation Tyler Fanelli
  2021-11-24 17:27 ` Tyler Fanelli
  2021-11-24 17:49 ` Dr. David Alan Gilbert
@ 2021-11-24 17:57 ` Daniel P. Berrangé
  2021-11-24 18:29   ` Dr. David Alan Gilbert
  2 siblings, 1 reply; 26+ messages in thread
From: Daniel P. Berrangé @ 2021-11-24 17:57 UTC (permalink / raw)
  To: Tyler Fanelli; +Cc: John Ferlan, qemu-devel, Dr. David Alan Gilbert

On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
> Hi,
> 
> We recently discussed a way for remote SEV guest attestation through QEMU.
> My initial approach was to get data needed for attestation through different
> QMP commands (all of which are already available, so no changes required
> there), deriving hashes and certificate data; and collecting all of this
> into a new QMP struct (SevLaunchStart, which would include the VM's policy,
> secret, and GPA) which would need to be upstreamed into QEMU. Once this is
> provided, QEMU would then need to have support for attestation before a VM
> is started. Upon speaking to Dave about this proposal, he mentioned that
> this may not be the best approach, as some situations would render the
> attestation unavailable, such as the instance where a VM is running in a
> cloud, and a guest owner would like to perform attestation via QMP (a likely
> scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
> commands, as this could be an issue.

As a general point, QMP is a low level QEMU implementation detail,
which is generally expected to be consumed exclusively on the host
by a privileged mgmt layer, which will in turn expose its own higher
level APIs to users or other apps. I would not expect to see QMP
exposed to anything outside of the privileged host layer.

We also use the QAPI protocol for QEMU guest agent commmunication,
however, that is a distinct service from QMP on the host. It shares
most infra with QMP but has a completely diffent command set. On the
host it is not consumed inside QEMU, but instead consumed by a
mgmt app like libvirt. 

> So I ask, does anyone involved in QEMU's SEV implementation have any input
> on a quality way to perform guest attestation? If so, I'd be interested.

I think what's missing is some clearer illustrations of how this
feature is expected to be consumed in some real world application
and the use cases we're trying to solve.

I'd like to understand how it should fit in with common libvirt
applications across the different virtualization management
scenarios - eg virsh (command line),  virt-manger (local desktop
GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
And of course any non-traditional virt use cases that might be
relevant such as Kata.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-24 17:57 ` Daniel P. Berrangé
@ 2021-11-24 18:29   ` Dr. David Alan Gilbert
  2021-11-25  7:14     ` Sergio Lopez
  2021-11-25 13:27     ` Daniel P. Berrangé
  0 siblings, 2 replies; 26+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-24 18:29 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: slp, afrosi, qemu-devel, dovmurik, Tyler Fanelli, dinechin,
	John Ferlan

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
> > Hi,
> > 
> > We recently discussed a way for remote SEV guest attestation through QEMU.
> > My initial approach was to get data needed for attestation through different
> > QMP commands (all of which are already available, so no changes required
> > there), deriving hashes and certificate data; and collecting all of this
> > into a new QMP struct (SevLaunchStart, which would include the VM's policy,
> > secret, and GPA) which would need to be upstreamed into QEMU. Once this is
> > provided, QEMU would then need to have support for attestation before a VM
> > is started. Upon speaking to Dave about this proposal, he mentioned that
> > this may not be the best approach, as some situations would render the
> > attestation unavailable, such as the instance where a VM is running in a
> > cloud, and a guest owner would like to perform attestation via QMP (a likely
> > scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
> > commands, as this could be an issue.
> 
> As a general point, QMP is a low level QEMU implementation detail,
> which is generally expected to be consumed exclusively on the host
> by a privileged mgmt layer, which will in turn expose its own higher
> level APIs to users or other apps. I would not expect to see QMP
> exposed to anything outside of the privileged host layer.
> 
> We also use the QAPI protocol for QEMU guest agent commmunication,
> however, that is a distinct service from QMP on the host. It shares
> most infra with QMP but has a completely diffent command set. On the
> host it is not consumed inside QEMU, but instead consumed by a
> mgmt app like libvirt. 
> 
> > So I ask, does anyone involved in QEMU's SEV implementation have any input
> > on a quality way to perform guest attestation? If so, I'd be interested.
> 
> I think what's missing is some clearer illustrations of how this
> feature is expected to be consumed in some real world application
> and the use cases we're trying to solve.
> 
> I'd like to understand how it should fit in with common libvirt
> applications across the different virtualization management
> scenarios - eg virsh (command line),  virt-manger (local desktop
> GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
> And of course any non-traditional virt use cases that might be
> relevant such as Kata.

That's still not that clear; I know Alice and Sergio have some ideas
(cc'd).
There's also some standardisation efforts (e.g. https://www.potaroo.net/ietf/html/ids-wg-rats.html 
and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html
) - that I can't claim to fully understand.
However, there are some themes that are emerging:

  a) One use is to only allow a VM to access some private data once we
prove it's the VM we expect running in a secure/confidential system
  b) (a) normally involves requesting some proof from the VM and then
providing it some confidential data/a key if it's OK
  c) RATs splits the problem up:
    https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html#name-architectural-overview
    I don't fully understand the split yet, but in principal there are
at least a few different things:

  d) The comms layer
  e) Something that validates the attestation message (i.e. the
signatures are valid, the hashes all add up etc)
  f) Something that knows what hashes to expect (i.e. oh that's a RHEL
8.4 kernel, or that's a valid kernel command line)
  g) Something that holds some secrets that can be handed out if e & f
are happy.

  There have also been proposals (e.g. Intel HTTPA) for an attestable
connection after a VM is running; that's probably quite different from
(g) but still involves (e) & (f).

In the simpler setups d,e,f,g probably live in one place; but it's not
clear where they live - for example one scenario says that your cloud
management layer holds some of them, another says you don't trust your
cloud management layer and you keep them separate.

So I think all we're actually interested in at the moment, is (d) and
(e) and the way for (g) to get the secret back to the guest.

Unfortunately the comms and the contents of them varies heavily with
technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES)
while in some you're talking to the guest after boot (SEV-SNP/TDX maybe
SEV-ES in some cases).

So my expectation at the moment is libvirt needs to provide a transport
layer for the comms, to enable an external validator to retrieve the
measurements from the guest/hypervisor and provide data back if
necessary.  Once this shakes out a bit, we might want libvirt to be
able to invoke the validator; however I expect (f) and (g) to be much
more complex things that don't feel like they belong in libvirt.

Dave

> Regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-24 17:49 ` Dr. David Alan Gilbert
@ 2021-11-24 18:29   ` Tyler Fanelli
  0 siblings, 0 replies; 26+ messages in thread
From: Tyler Fanelli @ 2021-11-24 18:29 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, dovmurik
  Cc: John Ferlan, Daniel P. Berrange, qemu-devel

On 11/24/21 12:49 PM, Dr. David Alan Gilbert wrote:
> * Tyler Fanelli (tfanelli@redhat.com) wrote:
>> Hi,
>>
>> We recently discussed a way for remote SEV guest attestation through QEMU.
>> My initial approach was to get data needed for attestation through different
>> QMP commands (all of which are already available, so no changes required
>> there), deriving hashes and certificate data; and collecting all of this
>> into a new QMP struct (SevLaunchStart, which would include the VM's policy,
>> secret, and GPA) which would need to be upstreamed into QEMU. Once this is
>> provided, QEMU would then need to have support for attestation before a VM
>> is started. Upon speaking to Dave about this proposal, he mentioned that
>> this may not be the best approach, as some situations would render the
>> attestation unavailable, such as the instance where a VM is running in a
>> cloud, and a guest owner would like to perform attestation via QMP (a likely
>> scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
>> commands, as this could be an issue.
>>
>> So I ask, does anyone involved in QEMU's SEV implementation have any input
>> on a quality way to perform guest attestation? If so, I'd be interested.
>> Thanks.
> QMP is the right way to talk to QEMU; the question is whether something
> sits between qemu and the attestation program - e.g. libvirt or possibly
> subsequently something even higher level.
>
> Can we start by you putting down what your interfaces look like at the
> moment?

Basically, I just establish a connection with a QMP socket at the 
beginning, serialize different QMP structs to get the data I need 
(query-sev, query-sev-capabilities, etc..), get the results and 
deserialize that data. In the original attempt, I would keep this 
protocol for issuing "sev-launch-start", "sev-inject-secret", and 
others. From a mgmt app perspective (in my case, I'm looking at it from 
a sevctl perspective), it's relatively straightforward. Any work 
required for getting certificates, sessions, measurements, and OVMF data 
is handled by sevctl.

> Dave

Tyler.

-- 
Tyler Fanelli (tfanelli)



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-24 18:29   ` Dr. David Alan Gilbert
@ 2021-11-25  7:14     ` Sergio Lopez
  2021-11-25 12:44       ` Dov Murik
                         ` (3 more replies)
  2021-11-25 13:27     ` Daniel P. Berrangé
  1 sibling, 4 replies; 26+ messages in thread
From: Sergio Lopez @ 2021-11-25  7:14 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Daniel P. Berrangé, afrosi, qemu-devel, dovmurik,
	Tyler Fanelli, dinechin, John Ferlan

[-- Attachment #1: Type: text/plain, Size: 8290 bytes --]

On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
> * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
> > > Hi,
> > > 
> > > We recently discussed a way for remote SEV guest attestation through QEMU.
> > > My initial approach was to get data needed for attestation through different
> > > QMP commands (all of which are already available, so no changes required
> > > there), deriving hashes and certificate data; and collecting all of this
> > > into a new QMP struct (SevLaunchStart, which would include the VM's policy,
> > > secret, and GPA) which would need to be upstreamed into QEMU. Once this is
> > > provided, QEMU would then need to have support for attestation before a VM
> > > is started. Upon speaking to Dave about this proposal, he mentioned that
> > > this may not be the best approach, as some situations would render the
> > > attestation unavailable, such as the instance where a VM is running in a
> > > cloud, and a guest owner would like to perform attestation via QMP (a likely
> > > scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
> > > commands, as this could be an issue.
> > 
> > As a general point, QMP is a low level QEMU implementation detail,
> > which is generally expected to be consumed exclusively on the host
> > by a privileged mgmt layer, which will in turn expose its own higher
> > level APIs to users or other apps. I would not expect to see QMP
> > exposed to anything outside of the privileged host layer.
> > 
> > We also use the QAPI protocol for QEMU guest agent commmunication,
> > however, that is a distinct service from QMP on the host. It shares
> > most infra with QMP but has a completely diffent command set. On the
> > host it is not consumed inside QEMU, but instead consumed by a
> > mgmt app like libvirt. 
> > 
> > > So I ask, does anyone involved in QEMU's SEV implementation have any input
> > > on a quality way to perform guest attestation? If so, I'd be interested.
> > 
> > I think what's missing is some clearer illustrations of how this
> > feature is expected to be consumed in some real world application
> > and the use cases we're trying to solve.
> > 
> > I'd like to understand how it should fit in with common libvirt
> > applications across the different virtualization management
> > scenarios - eg virsh (command line),  virt-manger (local desktop
> > GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
> > And of course any non-traditional virt use cases that might be
> > relevant such as Kata.
> 
> That's still not that clear; I know Alice and Sergio have some ideas
> (cc'd).
> There's also some standardisation efforts (e.g. https://www.potaroo.net/ietf/html/ids-wg-rats.html 
> and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html
> ) - that I can't claim to fully understand.
> However, there are some themes that are emerging:
> 
>   a) One use is to only allow a VM to access some private data once we
> prove it's the VM we expect running in a secure/confidential system
>   b) (a) normally involves requesting some proof from the VM and then
> providing it some confidential data/a key if it's OK
>   c) RATs splits the problem up:
>     https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html#name-architectural-overview
>     I don't fully understand the split yet, but in principal there are
> at least a few different things:
> 
>   d) The comms layer
>   e) Something that validates the attestation message (i.e. the
> signatures are valid, the hashes all add up etc)
>   f) Something that knows what hashes to expect (i.e. oh that's a RHEL
> 8.4 kernel, or that's a valid kernel command line)
>   g) Something that holds some secrets that can be handed out if e & f
> are happy.
> 
>   There have also been proposals (e.g. Intel HTTPA) for an attestable
> connection after a VM is running; that's probably quite different from
> (g) but still involves (e) & (f).
> 
> In the simpler setups d,e,f,g probably live in one place; but it's not
> clear where they live - for example one scenario says that your cloud
> management layer holds some of them, another says you don't trust your
> cloud management layer and you keep them separate.
> 
> So I think all we're actually interested in at the moment, is (d) and
> (e) and the way for (g) to get the secret back to the guest.
> 
> Unfortunately the comms and the contents of them varies heavily with
> technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES)
> while in some you're talking to the guest after boot (SEV-SNP/TDX maybe
> SEV-ES in some cases).
> 
> So my expectation at the moment is libvirt needs to provide a transport
> layer for the comms, to enable an external validator to retrieve the
> measurements from the guest/hypervisor and provide data back if
> necessary.  Once this shakes out a bit, we might want libvirt to be
> able to invoke the validator; however I expect (f) and (g) to be much
> more complex things that don't feel like they belong in libvirt.

We experimented with the attestation flow quite a bit while working on
SEV-ES support for libkrun-tee. One important aspect we noticed quite
early, is that there's more data that's needed to be exchange of top
of the attestation itself.

For instance, even before you start the VM, the management layer in
charge of coordinating the confidential VM launch needs to obtain the
Virtualization TEE capabilities of the Host (SEV-ES vs. SEV-SNP
vs. TDX) and the platform version, to know which features are
available and whether that host is a candidate for running the VM at
all.

With that information, the mgmt layer can build a guest policy (this
is SEV's terminology, but I guess we'll have something similar in
TDX) and feed it to component launching the VMM (libvirt, in this
case).

For SEV-SNP, this is pretty much the end of the story, because the
attestation exchange is driven by an agent inside the guest. Well,
there's also the need to have in the VM a well-known vNIC bridged to a
network that's routed to the Attestation Server, that everyone seems
to consider a given, but to me, from a CSP perspective, looks like
quite a headache. In fact, I'd go as far as to suggest this
communication should happen through an alternative channel, such as
vsock, having a proxy on the Host, but I guess that depends on the CSP
infrastructure.

For SEV/SEV-ES, as the attestation happens at the VMM level, there's
still the need to have some interactions with it. As Tyler pointed
out, we basically need to retrieve the measurement and, if valid,
inject the secret. If the measurement isn't valid, the VM must be shut
down immediately.

In libkrun-tee, this operation is driven by the VMM in libkrun, which
contacts the Attestation Server with the measurement and receives the
secret in exchange. I guess for QEMU/libvirt we expect this to be
driven by the upper management layer through a delegated component in
the Host, such as NOVA. In this case, NOVA would need to:

 - Based on the upper management layer info and the Host properties,
   generate a guest policy and use it while generating the compute
   instance XML.

 - Ask libvirt to launch the VM.

 - Wait for the VM to be in SEV_STATE_LAUNCH_SECRET state *.

 - Retrieve the measurement *.

 - Contact the Attestation Server and provide it with some kind of
   information to uniquely identify the VM (needed to determine what's
   the expected measurement) and the measurement itself.

   * If the measurement if valid, inject the secret *.

     + The secret is pre-encrypted with a key that only the PSP has,
       so there's no need to do any special handling of it.

 - Ask libvirt to either destroy the VM (if the measurement wasn't
   valid or there was some kind of communication error with the
   Attestation Server) or continue the execution of the VM (this will
   be the first time kvm_vcpu_run() is entered).

The operations marked with (*) are the ones that I'm not sure if
NOVA should communicate with libvirt or talk directly to QEMU.

Sergio.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25  7:14     ` Sergio Lopez
@ 2021-11-25 12:44       ` Dov Murik
  2021-11-25 13:42         ` Daniel P. Berrangé
  2021-11-25 15:11         ` Sergio Lopez
  2021-11-25 13:20       ` Dr. David Alan Gilbert
                         ` (2 subsequent siblings)
  3 siblings, 2 replies; 26+ messages in thread
From: Dov Murik @ 2021-11-25 12:44 UTC (permalink / raw)
  To: Sergio Lopez, Dr. David Alan Gilbert, Tyler Fanelli
  Cc: Daniel P. Berrangé, afrosi, James Bottomley, qemu-devel,
	Dov Murik, Hubertus Franke, Tobin Feldman-Fitzthum, Jim Cadden,
	dinechin, John Ferlan

[+cc jejb, tobin, jim, hubertus]


On 25/11/2021 9:14, Sergio Lopez wrote:
> On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
>> * Daniel P. Berrangé (berrange@redhat.com) wrote:
>>> On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
>>>> Hi,
>>>>
>>>> We recently discussed a way for remote SEV guest attestation through QEMU.
>>>> My initial approach was to get data needed for attestation through different
>>>> QMP commands (all of which are already available, so no changes required
>>>> there), deriving hashes and certificate data; and collecting all of this
>>>> into a new QMP struct (SevLaunchStart, which would include the VM's policy,
>>>> secret, and GPA) which would need to be upstreamed into QEMU. Once this is
>>>> provided, QEMU would then need to have support for attestation before a VM
>>>> is started. Upon speaking to Dave about this proposal, he mentioned that
>>>> this may not be the best approach, as some situations would render the
>>>> attestation unavailable, such as the instance where a VM is running in a
>>>> cloud, and a guest owner would like to perform attestation via QMP (a likely
>>>> scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
>>>> commands, as this could be an issue.
>>>
>>> As a general point, QMP is a low level QEMU implementation detail,
>>> which is generally expected to be consumed exclusively on the host
>>> by a privileged mgmt layer, which will in turn expose its own higher
>>> level APIs to users or other apps. I would not expect to see QMP
>>> exposed to anything outside of the privileged host layer.
>>>
>>> We also use the QAPI protocol for QEMU guest agent commmunication,
>>> however, that is a distinct service from QMP on the host. It shares
>>> most infra with QMP but has a completely diffent command set. On the
>>> host it is not consumed inside QEMU, but instead consumed by a
>>> mgmt app like libvirt. 
>>>
>>>> So I ask, does anyone involved in QEMU's SEV implementation have any input
>>>> on a quality way to perform guest attestation? If so, I'd be interested.
>>>
>>> I think what's missing is some clearer illustrations of how this
>>> feature is expected to be consumed in some real world application
>>> and the use cases we're trying to solve.
>>>
>>> I'd like to understand how it should fit in with common libvirt
>>> applications across the different virtualization management
>>> scenarios - eg virsh (command line),  virt-manger (local desktop
>>> GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
>>> And of course any non-traditional virt use cases that might be
>>> relevant such as Kata.
>>
>> That's still not that clear; I know Alice and Sergio have some ideas
>> (cc'd).
>> There's also some standardisation efforts (e.g. https://www.potaroo.net/ietf/html/ids-wg-rats.html 
>> and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html
>> ) - that I can't claim to fully understand.
>> However, there are some themes that are emerging:
>>
>>   a) One use is to only allow a VM to access some private data once we
>> prove it's the VM we expect running in a secure/confidential system
>>   b) (a) normally involves requesting some proof from the VM and then
>> providing it some confidential data/a key if it's OK
>>   c) RATs splits the problem up:
>>     https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html#name-architectural-overview
>>     I don't fully understand the split yet, but in principal there are
>> at least a few different things:
>>
>>   d) The comms layer
>>   e) Something that validates the attestation message (i.e. the
>> signatures are valid, the hashes all add up etc)
>>   f) Something that knows what hashes to expect (i.e. oh that's a RHEL
>> 8.4 kernel, or that's a valid kernel command line)
>>   g) Something that holds some secrets that can be handed out if e & f
>> are happy.
>>
>>   There have also been proposals (e.g. Intel HTTPA) for an attestable
>> connection after a VM is running; that's probably quite different from
>> (g) but still involves (e) & (f).
>>
>> In the simpler setups d,e,f,g probably live in one place; but it's not
>> clear where they live - for example one scenario says that your cloud
>> management layer holds some of them, another says you don't trust your
>> cloud management layer and you keep them separate.
>>
>> So I think all we're actually interested in at the moment, is (d) and
>> (e) and the way for (g) to get the secret back to the guest.
>>
>> Unfortunately the comms and the contents of them varies heavily with
>> technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES)
>> while in some you're talking to the guest after boot (SEV-SNP/TDX maybe
>> SEV-ES in some cases).

SEV-ES has pre-launch measurement and secret injection, just like SEV
(except that the measurement includes the initial states of all vcpus,
that is, their VMSAs.  BTW that means that in order to calculate the
measurement the Attestation Server must know exactly how many vcpus are
in the VM).


>>
>> So my expectation at the moment is libvirt needs to provide a transport
>> layer for the comms, to enable an external validator to retrieve the
>> measurements from the guest/hypervisor and provide data back if
>> necessary.  Once this shakes out a bit, we might want libvirt to be
>> able to invoke the validator; however I expect (f) and (g) to be much
>> more complex things that don't feel like they belong in libvirt.
> 
> We experimented with the attestation flow quite a bit while working on
> SEV-ES support for libkrun-tee. One important aspect we noticed quite
> early, is that there's more data that's needed to be exchange of top
> of the attestation itself.
> 
> For instance, even before you start the VM, the management layer in
> charge of coordinating the confidential VM launch needs to obtain the
> Virtualization TEE capabilities of the Host (SEV-ES vs. SEV-SNP
> vs. TDX) and the platform version, to know which features are
> available and whether that host is a candidate for running the VM at
> all.
> 
> With that information, the mgmt layer can build a guest policy (this
> is SEV's terminology, but I guess we'll have something similar in
> TDX) and feed it to component launching the VMM (libvirt, in this
> case).
> 
> For SEV-SNP, this is pretty much the end of the story, because the
> attestation exchange is driven by an agent inside the guest. Well,
> there's also the need to have in the VM a well-known vNIC bridged to a
> network that's routed to the Attestation Server, that everyone seems
> to consider a given, but to me, from a CSP perspective, looks like
> quite a headache. In fact, I'd go as far as to suggest this
> communication should happen through an alternative channel, such as
> vsock, having a proxy on the Host, but I guess that depends on the CSP
> infrastructure.

If we have an alternative channel (vsock?) and a proxy on the host,
maybe we can share parts of the solution between SEV and SNP.


> 
> For SEV/SEV-ES, as the attestation happens at the VMM level, there's
> still the need to have some interactions with it. As Tyler pointed
> out, we basically need to retrieve the measurement and, if valid,
> inject the secret. If the measurement isn't valid, the VM must be shut
> down immediately.
> 
> In libkrun-tee, this operation is driven by the VMM in libkrun, which
> contacts the Attestation Server with the measurement and receives the
> secret in exchange. I guess for QEMU/libvirt we expect this to be
> driven by the upper management layer through a delegated component in
> the Host, such as NOVA. In this case, NOVA would need to:
> 
>  - Based on the upper management layer info and the Host properties,
>    generate a guest policy and use it while generating the compute
>    instance XML.
> 
>  - Ask libvirt to launch the VM.

Launch the VM with -S (suspended; so it doesn't actually begin running
guest instructions).


> 
>  - Wait for the VM to be in SEV_STATE_LAUNCH_SECRET state *.
> 
>  - Retrieve the measurement *.

Note that libvirt holds the QMP socket to QEMU.  So whoever fetches the
measurement needs either (a) to ask libvirt to it; or (b) to connect to
another QMP listening socket for getting the measurement and injecting
the secret.

In Kata, Jim Cadden (cc'd) worked on adding this second QMP socket (if
I'm not mistaken) to the kata-runtime (which is the process that starts
QEMU and later controls it with QMP).


> 
>  - Contact the Attestation Server and provide it with some kind of
>    information to uniquely identify the VM (needed to determine what's
>    the expected measurement) and the measurement itself.
> 
>    * If the measurement if valid, inject the secret *.
> 
>      + The secret is pre-encrypted with a key that only the PSP has,
>        so there's no need to do any special handling of it.
> 
>  - Ask libvirt to either destroy the VM (if the measurement wasn't
>    valid or there was some kind of communication error with the
>    Attestation Server) or continue the execution of the VM (this will
>    be the first time kvm_vcpu_run() is entered).
> 
> The operations marked with (*) are the ones that I'm not sure if
> NOVA should communicate with libvirt or talk directly to QEMU.
> 
> Sergio.
> 

On top of what's written above, note that with direct boot (with
-kernel/-initrd/-append) the hashes of these 3 elements may be inserted
into the guest measurement (with kernel-hashes=on option on object
sev-guest; upcoming in QEMU 6.2).  This means that the Attestation
Server must know the OVMF hash as well as the hashes of kernel, initrd,
and cmdline in order to construct the expected measurement.


-Dov


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25  7:14     ` Sergio Lopez
  2021-11-25 12:44       ` Dov Murik
@ 2021-11-25 13:20       ` Dr. David Alan Gilbert
  2021-11-25 13:36       ` Daniel P. Berrangé
  2021-11-25 13:52       ` Daniel P. Berrangé
  3 siblings, 0 replies; 26+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-25 13:20 UTC (permalink / raw)
  To: Sergio Lopez
  Cc: Daniel P. Berrangé, afrosi, qemu-devel, dovmurik,
	Tyler Fanelli, dinechin, John Ferlan

* Sergio Lopez (slp@redhat.com) wrote:
> On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
> > * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > > On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
> > > > Hi,
> > > > 
> > > > We recently discussed a way for remote SEV guest attestation through QEMU.
> > > > My initial approach was to get data needed for attestation through different
> > > > QMP commands (all of which are already available, so no changes required
> > > > there), deriving hashes and certificate data; and collecting all of this
> > > > into a new QMP struct (SevLaunchStart, which would include the VM's policy,
> > > > secret, and GPA) which would need to be upstreamed into QEMU. Once this is
> > > > provided, QEMU would then need to have support for attestation before a VM
> > > > is started. Upon speaking to Dave about this proposal, he mentioned that
> > > > this may not be the best approach, as some situations would render the
> > > > attestation unavailable, such as the instance where a VM is running in a
> > > > cloud, and a guest owner would like to perform attestation via QMP (a likely
> > > > scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
> > > > commands, as this could be an issue.
> > > 
> > > As a general point, QMP is a low level QEMU implementation detail,
> > > which is generally expected to be consumed exclusively on the host
> > > by a privileged mgmt layer, which will in turn expose its own higher
> > > level APIs to users or other apps. I would not expect to see QMP
> > > exposed to anything outside of the privileged host layer.
> > > 
> > > We also use the QAPI protocol for QEMU guest agent commmunication,
> > > however, that is a distinct service from QMP on the host. It shares
> > > most infra with QMP but has a completely diffent command set. On the
> > > host it is not consumed inside QEMU, but instead consumed by a
> > > mgmt app like libvirt. 
> > > 
> > > > So I ask, does anyone involved in QEMU's SEV implementation have any input
> > > > on a quality way to perform guest attestation? If so, I'd be interested.
> > > 
> > > I think what's missing is some clearer illustrations of how this
> > > feature is expected to be consumed in some real world application
> > > and the use cases we're trying to solve.
> > > 
> > > I'd like to understand how it should fit in with common libvirt
> > > applications across the different virtualization management
> > > scenarios - eg virsh (command line),  virt-manger (local desktop
> > > GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
> > > And of course any non-traditional virt use cases that might be
> > > relevant such as Kata.
> > 
> > That's still not that clear; I know Alice and Sergio have some ideas
> > (cc'd).
> > There's also some standardisation efforts (e.g. https://www.potaroo.net/ietf/html/ids-wg-rats.html 
> > and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html
> > ) - that I can't claim to fully understand.
> > However, there are some themes that are emerging:
> > 
> >   a) One use is to only allow a VM to access some private data once we
> > prove it's the VM we expect running in a secure/confidential system
> >   b) (a) normally involves requesting some proof from the VM and then
> > providing it some confidential data/a key if it's OK
> >   c) RATs splits the problem up:
> >     https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html#name-architectural-overview
> >     I don't fully understand the split yet, but in principal there are
> > at least a few different things:
> > 
> >   d) The comms layer
> >   e) Something that validates the attestation message (i.e. the
> > signatures are valid, the hashes all add up etc)
> >   f) Something that knows what hashes to expect (i.e. oh that's a RHEL
> > 8.4 kernel, or that's a valid kernel command line)
> >   g) Something that holds some secrets that can be handed out if e & f
> > are happy.
> > 
> >   There have also been proposals (e.g. Intel HTTPA) for an attestable
> > connection after a VM is running; that's probably quite different from
> > (g) but still involves (e) & (f).
> > 
> > In the simpler setups d,e,f,g probably live in one place; but it's not
> > clear where they live - for example one scenario says that your cloud
> > management layer holds some of them, another says you don't trust your
> > cloud management layer and you keep them separate.
> > 
> > So I think all we're actually interested in at the moment, is (d) and
> > (e) and the way for (g) to get the secret back to the guest.
> > 
> > Unfortunately the comms and the contents of them varies heavily with
> > technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES)
> > while in some you're talking to the guest after boot (SEV-SNP/TDX maybe
> > SEV-ES in some cases).
> > 
> > So my expectation at the moment is libvirt needs to provide a transport
> > layer for the comms, to enable an external validator to retrieve the
> > measurements from the guest/hypervisor and provide data back if
> > necessary.  Once this shakes out a bit, we might want libvirt to be
> > able to invoke the validator; however I expect (f) and (g) to be much
> > more complex things that don't feel like they belong in libvirt.
> 
> We experimented with the attestation flow quite a bit while working on
> SEV-ES support for libkrun-tee. One important aspect we noticed quite
> early, is that there's more data that's needed to be exchange of top
> of the attestation itself.
> 
> For instance, even before you start the VM, the management layer in
> charge of coordinating the confidential VM launch needs to obtain the
> Virtualization TEE capabilities of the Host (SEV-ES vs. SEV-SNP
> vs. TDX) and the platform version, to know which features are
> available and whether that host is a candidate for running the VM at
> all.

> With that information, the mgmt layer can build a guest policy (this
> is SEV's terminology, but I guess we'll have something similar in
> TDX) and feed it to component launching the VMM (libvirt, in this
> case).

That's normal day-to-day business for something like libvirt?

> 
> For SEV-SNP, this is pretty much the end of the story, because the
> attestation exchange is driven by an agent inside the guest. Well,
> there's also the need to have in the VM a well-known vNIC bridged to a
> network that's routed to the Attestation Server, that everyone seems
> to consider a given, but to me, from a CSP perspective, looks like
> quite a headache. In fact, I'd go as far as to suggest this
> communication should happen through an alternative channel, such as
> vsock, having a proxy on the Host, but I guess that depends on the CSP
> infrastructure.

Do we know if TDX describe the plans for this anywhere?
Again, maybe libvirt could be taught to wire that socket upto a proxy.
Also, which direction is the connection here - does the VM wait for the
attestor or does it ask to be attested?

> For SEV/SEV-ES, as the attestation happens at the VMM level, there's
> still the need to have some interactions with it. As Tyler pointed
> out, we basically need to retrieve the measurement and, if valid,
> inject the secret. If the measurement isn't valid, the VM must be shut
> down immediately.
> 
> In libkrun-tee, this operation is driven by the VMM in libkrun, which
> contacts the Attestation Server with the measurement and receives the
> secret in exchange. I guess for QEMU/libvirt we expect this to be
> driven by the upper management layer through a delegated component in
> the Host, such as NOVA. In this case, NOVA would need to:
> 
>  - Based on the upper management layer info and the Host properties,
>    generate a guest policy and use it while generating the compute
>    instance XML.
> 
>  - Ask libvirt to launch the VM.
> 
>  - Wait for the VM to be in SEV_STATE_LAUNCH_SECRET state *.
> 
>  - Retrieve the measurement *.
> 
>  - Contact the Attestation Server and provide it with some kind of
>    information to uniquely identify the VM (needed to determine what's
>    the expected measurement) and the measurement itself.
> 
>    * If the measurement if valid, inject the secret *.
> 
>      + The secret is pre-encrypted with a key that only the PSP has,
>        so there's no need to do any special handling of it.
> 
>  - Ask libvirt to either destroy the VM (if the measurement wasn't
>    valid or there was some kind of communication error with the
>    Attestation Server) or continue the execution of the VM (this will
>    be the first time kvm_vcpu_run() is entered).
> 
> The operations marked with (*) are the ones that I'm not sure if
> NOVA should communicate with libvirt or talk directly to QEMU.

My preference is for there to be a way to go via libvirt

Dave

> Sergio.


-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-24 18:29   ` Dr. David Alan Gilbert
  2021-11-25  7:14     ` Sergio Lopez
@ 2021-11-25 13:27     ` Daniel P. Berrangé
  2021-11-25 13:50       ` Dov Murik
  2021-11-25 15:19       ` Dr. David Alan Gilbert
  1 sibling, 2 replies; 26+ messages in thread
From: Daniel P. Berrangé @ 2021-11-25 13:27 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: slp, afrosi, qemu-devel, dovmurik, Tyler Fanelli, dinechin,
	John Ferlan

On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
> * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
> > > Hi,
> > > 
> > > We recently discussed a way for remote SEV guest attestation through QEMU.
> > > My initial approach was to get data needed for attestation through different
> > > QMP commands (all of which are already available, so no changes required
> > > there), deriving hashes and certificate data; and collecting all of this
> > > into a new QMP struct (SevLaunchStart, which would include the VM's policy,
> > > secret, and GPA) which would need to be upstreamed into QEMU. Once this is
> > > provided, QEMU would then need to have support for attestation before a VM
> > > is started. Upon speaking to Dave about this proposal, he mentioned that
> > > this may not be the best approach, as some situations would render the
> > > attestation unavailable, such as the instance where a VM is running in a
> > > cloud, and a guest owner would like to perform attestation via QMP (a likely
> > > scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
> > > commands, as this could be an issue.
> > 
> > As a general point, QMP is a low level QEMU implementation detail,
> > which is generally expected to be consumed exclusively on the host
> > by a privileged mgmt layer, which will in turn expose its own higher
> > level APIs to users or other apps. I would not expect to see QMP
> > exposed to anything outside of the privileged host layer.
> > 
> > We also use the QAPI protocol for QEMU guest agent commmunication,
> > however, that is a distinct service from QMP on the host. It shares
> > most infra with QMP but has a completely diffent command set. On the
> > host it is not consumed inside QEMU, but instead consumed by a
> > mgmt app like libvirt. 
> > 
> > > So I ask, does anyone involved in QEMU's SEV implementation have any input
> > > on a quality way to perform guest attestation? If so, I'd be interested.
> > 
> > I think what's missing is some clearer illustrations of how this
> > feature is expected to be consumed in some real world application
> > and the use cases we're trying to solve.
> > 
> > I'd like to understand how it should fit in with common libvirt
> > applications across the different virtualization management
> > scenarios - eg virsh (command line),  virt-manger (local desktop
> > GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
> > And of course any non-traditional virt use cases that might be
> > relevant such as Kata.
> 
> That's still not that clear; I know Alice and Sergio have some ideas
> (cc'd).
> There's also some standardisation efforts (e.g. https://www.potaroo.net/ietf/html/ids-wg-rats.html 
> and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html
> ) - that I can't claim to fully understand.
> However, there are some themes that are emerging:
> 
>   a) One use is to only allow a VM to access some private data once we
> prove it's the VM we expect running in a secure/confidential system
>   b) (a) normally involves requesting some proof from the VM and then
> providing it some confidential data/a key if it's OK

I guess I'm wondering what the threat we're protecting against is,
and / or which pieces of the stack we can trust ?

eg, if the host has 2 VMs running, we verify the 1st and provide
its confidental data back to the host, what stops the host giving
that dat to the 2nd non-verified VM ? 

Presumably the data has to be encrypted with a key that is uniquely
tied to this specific boot attempt of the verified VM, and not
accessible to any other VM, or to future boots of this VM ?


>   c) RATs splits the problem up:
>     https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html#name-architectural-overview
>     I don't fully understand the split yet, but in principal there are
> at least a few different things:
> 
>   d) The comms layer
>   e) Something that validates the attestation message (i.e. the
> signatures are valid, the hashes all add up etc)
>   f) Something that knows what hashes to expect (i.e. oh that's a RHEL
> 8.4 kernel, or that's a valid kernel command line)
>   g) Something that holds some secrets that can be handed out if e & f
> are happy.
> 
>   There have also been proposals (e.g. Intel HTTPA) for an attestable
> connection after a VM is running; that's probably quite different from
> (g) but still involves (e) & (f).
> 
> In the simpler setups d,e,f,g probably live in one place; but it's not
> clear where they live - for example one scenario says that your cloud
> management layer holds some of them, another says you don't trust your
> cloud management layer and you keep them separate.

Yep, again I'm wondering what the specific threats are that we're
trying to mitigate. Whether we trust the cloud mgmt APIs, but don't
trust the compute hosts, or whether we trust neither the cloud
mgmt APIs or the compute hosts.

If we don't trust the compute hosts, does that include the part
of the cloud mgmt API that is  running on the compute host, or
does that just mean the execution environment of the VM, or something
else?

> So I think all we're actually interested in at the moment, is (d) and
> (e) and the way for (g) to get the secret back to the guest.
> 
> Unfortunately the comms and the contents of them varies heavily with
> technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES)
> while in some you're talking to the guest after boot (SEV-SNP/TDX maybe
> SEV-ES in some cases).
> 
> So my expectation at the moment is libvirt needs to provide a transport
> layer for the comms, to enable an external validator to retrieve the
> measurements from the guest/hypervisor and provide data back if
> necessary.  Once this shakes out a bit, we might want libvirt to be
> able to invoke the validator; however I expect (f) and (g) to be much
> more complex things that don't feel like they belong in libvirt.

Yep, I don't think (f) & (g) belong in libvirt, since libvirt is
deployed per compute host, while (f) / (g) are something that is
likely to be deployed in a separate trusted host, at least for
data center / cloud deployments. May be there's a case where they
can all be same-host for more specialized use cases.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25  7:14     ` Sergio Lopez
  2021-11-25 12:44       ` Dov Murik
  2021-11-25 13:20       ` Dr. David Alan Gilbert
@ 2021-11-25 13:36       ` Daniel P. Berrangé
  2021-11-25 13:52       ` Daniel P. Berrangé
  3 siblings, 0 replies; 26+ messages in thread
From: Daniel P. Berrangé @ 2021-11-25 13:36 UTC (permalink / raw)
  To: Sergio Lopez
  Cc: afrosi, Dr. David Alan Gilbert, qemu-devel, dovmurik,
	Tyler Fanelli, dinechin, John Ferlan

On Thu, Nov 25, 2021 at 08:14:28AM +0100, Sergio Lopez wrote:
> On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
> > * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > > On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
> > > > Hi,
> > > > 
> > > > We recently discussed a way for remote SEV guest attestation through QEMU.
> > > > My initial approach was to get data needed for attestation through different
> > > > QMP commands (all of which are already available, so no changes required
> > > > there), deriving hashes and certificate data; and collecting all of this
> > > > into a new QMP struct (SevLaunchStart, which would include the VM's policy,
> > > > secret, and GPA) which would need to be upstreamed into QEMU. Once this is
> > > > provided, QEMU would then need to have support for attestation before a VM
> > > > is started. Upon speaking to Dave about this proposal, he mentioned that
> > > > this may not be the best approach, as some situations would render the
> > > > attestation unavailable, such as the instance where a VM is running in a
> > > > cloud, and a guest owner would like to perform attestation via QMP (a likely
> > > > scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
> > > > commands, as this could be an issue.
> > > 
> > > As a general point, QMP is a low level QEMU implementation detail,
> > > which is generally expected to be consumed exclusively on the host
> > > by a privileged mgmt layer, which will in turn expose its own higher
> > > level APIs to users or other apps. I would not expect to see QMP
> > > exposed to anything outside of the privileged host layer.
> > > 
> > > We also use the QAPI protocol for QEMU guest agent commmunication,
> > > however, that is a distinct service from QMP on the host. It shares
> > > most infra with QMP but has a completely diffent command set. On the
> > > host it is not consumed inside QEMU, but instead consumed by a
> > > mgmt app like libvirt. 
> > > 
> > > > So I ask, does anyone involved in QEMU's SEV implementation have any input
> > > > on a quality way to perform guest attestation? If so, I'd be interested.
> > > 
> > > I think what's missing is some clearer illustrations of how this
> > > feature is expected to be consumed in some real world application
> > > and the use cases we're trying to solve.
> > > 
> > > I'd like to understand how it should fit in with common libvirt
> > > applications across the different virtualization management
> > > scenarios - eg virsh (command line),  virt-manger (local desktop
> > > GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
> > > And of course any non-traditional virt use cases that might be
> > > relevant such as Kata.
> > 
> > That's still not that clear; I know Alice and Sergio have some ideas
> > (cc'd).
> > There's also some standardisation efforts (e.g. https://www.potaroo.net/ietf/html/ids-wg-rats.html 
> > and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html
> > ) - that I can't claim to fully understand.
> > However, there are some themes that are emerging:
> > 
> >   a) One use is to only allow a VM to access some private data once we
> > prove it's the VM we expect running in a secure/confidential system
> >   b) (a) normally involves requesting some proof from the VM and then
> > providing it some confidential data/a key if it's OK
> >   c) RATs splits the problem up:
> >     https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html#name-architectural-overview
> >     I don't fully understand the split yet, but in principal there are
> > at least a few different things:
> > 
> >   d) The comms layer
> >   e) Something that validates the attestation message (i.e. the
> > signatures are valid, the hashes all add up etc)
> >   f) Something that knows what hashes to expect (i.e. oh that's a RHEL
> > 8.4 kernel, or that's a valid kernel command line)
> >   g) Something that holds some secrets that can be handed out if e & f
> > are happy.
> > 
> >   There have also been proposals (e.g. Intel HTTPA) for an attestable
> > connection after a VM is running; that's probably quite different from
> > (g) but still involves (e) & (f).
> > 
> > In the simpler setups d,e,f,g probably live in one place; but it's not
> > clear where they live - for example one scenario says that your cloud
> > management layer holds some of them, another says you don't trust your
> > cloud management layer and you keep them separate.
> > 
> > So I think all we're actually interested in at the moment, is (d) and
> > (e) and the way for (g) to get the secret back to the guest.
> > 
> > Unfortunately the comms and the contents of them varies heavily with
> > technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES)
> > while in some you're talking to the guest after boot (SEV-SNP/TDX maybe
> > SEV-ES in some cases).
> > 
> > So my expectation at the moment is libvirt needs to provide a transport
> > layer for the comms, to enable an external validator to retrieve the
> > measurements from the guest/hypervisor and provide data back if
> > necessary.  Once this shakes out a bit, we might want libvirt to be
> > able to invoke the validator; however I expect (f) and (g) to be much
> > more complex things that don't feel like they belong in libvirt.
> 
> We experimented with the attestation flow quite a bit while working on
> SEV-ES support for libkrun-tee. One important aspect we noticed quite
> early, is that there's more data that's needed to be exchange of top
> of the attestation itself.
> 
> For instance, even before you start the VM, the management layer in
> charge of coordinating the confidential VM launch needs to obtain the
> Virtualization TEE capabilities of the Host (SEV-ES vs. SEV-SNP
> vs. TDX) and the platform version, to know which features are
> available and whether that host is a candidate for running the VM at
> all.

Libvirt already reports a wide variety of information about a
compute host, that is used for placement decisions by mgmt apps,
and this includes SEV data (obtained from qemu's query-sev-capabilities)

> With that information, the mgmt layer can build a guest policy (this
> is SEV's terminology, but I guess we'll have something similar in
> TDX) and feed it to component launching the VMM (libvirt, in this
> case).
> 
> For SEV-SNP, this is pretty much the end of the story, because the
> attestation exchange is driven by an agent inside the guest. Well,
> there's also the need to have in the VM a well-known vNIC bridged to a
> network that's routed to the Attestation Server, that everyone seems
> to consider a given, but to me, from a CSP perspective, looks like
> quite a headache. In fact, I'd go as far as to suggest this
> communication should happen through an alternative channel, such as
> vsock, having a proxy on the Host, but I guess that depends on the CSP
> infrastructure.
> 
> For SEV/SEV-ES, as the attestation happens at the VMM level, there's
> still the need to have some interactions with it. As Tyler pointed
> out, we basically need to retrieve the measurement and, if valid,
> inject the secret. If the measurement isn't valid, the VM must be shut
> down immediately.

Is that really 'must' be shut down or merely 'should' be shut down.
I'm expecting the latter, as if we're faced with an untrustworthy
compute host, we can't guarantee it will ever be shut dwon.

> 
> In libkrun-tee, this operation is driven by the VMM in libkrun, which
> contacts the Attestation Server with the measurement and receives the
> secret in exchange. I guess for QEMU/libvirt we expect this to be
> driven by the upper management layer through a delegated component in
> the Host, such as NOVA. In this case, NOVA would need to:
> 
>  - Based on the upper management layer info and the Host properties,
>    generate a guest policy and use it while generating the compute
>    instance XML.
> 
>  - Ask libvirt to launch the VM.
> 
>  - Wait for the VM to be in SEV_STATE_LAUNCH_SECRET state *.
> 
>  - Retrieve the measurement *.
> 
>  - Contact the Attestation Server and provide it with some kind of
>    information to uniquely identify the VM (needed to determine what's
>    the expected measurement) and the measurement itself.
> 
>    * If the measurement if valid, inject the secret *.
> 
>      + The secret is pre-encrypted with a key that only the PSP has,
>        so there's no need to do any special handling of it.
> 
>  - Ask libvirt to either destroy the VM (if the measurement wasn't
>    valid or there was some kind of communication error with the
>    Attestation Server) or continue the execution of the VM (this will
>    be the first time kvm_vcpu_run() is entered).
> 
> The operations marked with (*) are the ones that I'm not sure if
> NOVA should communicate with libvirt or talk directly to QEMU.

Nova must always talk with libvirt, never QEMU, because it needs to
be insulated from low level implementation details that change over
time. The QEMU binary we are invoking today, might be completely
replaced with something new tomorrow, with totally different CLI /
APIs, and livirt will isolate apps from this.

Also we need to bear in mind the complexity we're putting on users
and mgmt apps.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25 12:44       ` Dov Murik
@ 2021-11-25 13:42         ` Daniel P. Berrangé
  2021-11-25 13:59           ` Dov Murik
  2021-11-25 15:11         ` Sergio Lopez
  1 sibling, 1 reply; 26+ messages in thread
From: Daniel P. Berrangé @ 2021-11-25 13:42 UTC (permalink / raw)
  To: Dov Murik
  Cc: Sergio Lopez, afrosi, James Bottomley, Dr. David Alan Gilbert,
	qemu-devel, Hubertus Franke, Tyler Fanelli,
	Tobin Feldman-Fitzthum, Jim Cadden, dinechin, John Ferlan

On Thu, Nov 25, 2021 at 02:44:51PM +0200, Dov Murik wrote:
> [+cc jejb, tobin, jim, hubertus]
> 
> 
> On 25/11/2021 9:14, Sergio Lopez wrote:
> > On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
> >> * Daniel P. Berrangé (berrange@redhat.com) wrote:
> >>> On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
> >>>> Hi,
> >>>>
> >>>> We recently discussed a way for remote SEV guest attestation through QEMU.
> >>>> My initial approach was to get data needed for attestation through different
> >>>> QMP commands (all of which are already available, so no changes required
> >>>> there), deriving hashes and certificate data; and collecting all of this
> >>>> into a new QMP struct (SevLaunchStart, which would include the VM's policy,
> >>>> secret, and GPA) which would need to be upstreamed into QEMU. Once this is
> >>>> provided, QEMU would then need to have support for attestation before a VM
> >>>> is started. Upon speaking to Dave about this proposal, he mentioned that
> >>>> this may not be the best approach, as some situations would render the
> >>>> attestation unavailable, such as the instance where a VM is running in a
> >>>> cloud, and a guest owner would like to perform attestation via QMP (a likely
> >>>> scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
> >>>> commands, as this could be an issue.
> >>>
> >>> As a general point, QMP is a low level QEMU implementation detail,
> >>> which is generally expected to be consumed exclusively on the host
> >>> by a privileged mgmt layer, which will in turn expose its own higher
> >>> level APIs to users or other apps. I would not expect to see QMP
> >>> exposed to anything outside of the privileged host layer.
> >>>
> >>> We also use the QAPI protocol for QEMU guest agent commmunication,
> >>> however, that is a distinct service from QMP on the host. It shares
> >>> most infra with QMP but has a completely diffent command set. On the
> >>> host it is not consumed inside QEMU, but instead consumed by a
> >>> mgmt app like libvirt. 
> >>>
> >>>> So I ask, does anyone involved in QEMU's SEV implementation have any input
> >>>> on a quality way to perform guest attestation? If so, I'd be interested.
> >>>
> >>> I think what's missing is some clearer illustrations of how this
> >>> feature is expected to be consumed in some real world application
> >>> and the use cases we're trying to solve.
> >>>
> >>> I'd like to understand how it should fit in with common libvirt
> >>> applications across the different virtualization management
> >>> scenarios - eg virsh (command line),  virt-manger (local desktop
> >>> GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
> >>> And of course any non-traditional virt use cases that might be
> >>> relevant such as Kata.
> >>
> >> That's still not that clear; I know Alice and Sergio have some ideas
> >> (cc'd).
> >> There's also some standardisation efforts (e.g. https://www.potaroo.net/ietf/html/ids-wg-rats.html 
> >> and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html
> >> ) - that I can't claim to fully understand.
> >> However, there are some themes that are emerging:
> >>
> >>   a) One use is to only allow a VM to access some private data once we
> >> prove it's the VM we expect running in a secure/confidential system
> >>   b) (a) normally involves requesting some proof from the VM and then
> >> providing it some confidential data/a key if it's OK
> >>   c) RATs splits the problem up:
> >>     https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html#name-architectural-overview
> >>     I don't fully understand the split yet, but in principal there are
> >> at least a few different things:
> >>
> >>   d) The comms layer
> >>   e) Something that validates the attestation message (i.e. the
> >> signatures are valid, the hashes all add up etc)
> >>   f) Something that knows what hashes to expect (i.e. oh that's a RHEL
> >> 8.4 kernel, or that's a valid kernel command line)
> >>   g) Something that holds some secrets that can be handed out if e & f
> >> are happy.
> >>
> >>   There have also been proposals (e.g. Intel HTTPA) for an attestable
> >> connection after a VM is running; that's probably quite different from
> >> (g) but still involves (e) & (f).
> >>
> >> In the simpler setups d,e,f,g probably live in one place; but it's not
> >> clear where they live - for example one scenario says that your cloud
> >> management layer holds some of them, another says you don't trust your
> >> cloud management layer and you keep them separate.
> >>
> >> So I think all we're actually interested in at the moment, is (d) and
> >> (e) and the way for (g) to get the secret back to the guest.
> >>
> >> Unfortunately the comms and the contents of them varies heavily with
> >> technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES)
> >> while in some you're talking to the guest after boot (SEV-SNP/TDX maybe
> >> SEV-ES in some cases).
> 
> SEV-ES has pre-launch measurement and secret injection, just like SEV
> (except that the measurement includes the initial states of all vcpus,
> that is, their VMSAs.  BTW that means that in order to calculate the
> measurement the Attestation Server must know exactly how many vcpus are
> in the VM).

Does that work with CPU hotplug ? ie cold boot with -smp 4,maxcpus=8
and some time later try to enable the extra 4 cpus at runtime ?


> >> So my expectation at the moment is libvirt needs to provide a transport
> >> layer for the comms, to enable an external validator to retrieve the
> >> measurements from the guest/hypervisor and provide data back if
> >> necessary.  Once this shakes out a bit, we might want libvirt to be
> >> able to invoke the validator; however I expect (f) and (g) to be much
> >> more complex things that don't feel like they belong in libvirt.
> > 
> > We experimented with the attestation flow quite a bit while working on
> > SEV-ES support for libkrun-tee. One important aspect we noticed quite
> > early, is that there's more data that's needed to be exchange of top
> > of the attestation itself.
> > 
> > For instance, even before you start the VM, the management layer in
> > charge of coordinating the confidential VM launch needs to obtain the
> > Virtualization TEE capabilities of the Host (SEV-ES vs. SEV-SNP
> > vs. TDX) and the platform version, to know which features are
> > available and whether that host is a candidate for running the VM at
> > all.
> > 
> > With that information, the mgmt layer can build a guest policy (this
> > is SEV's terminology, but I guess we'll have something similar in
> > TDX) and feed it to component launching the VMM (libvirt, in this
> > case).
> > 
> > For SEV-SNP, this is pretty much the end of the story, because the
> > attestation exchange is driven by an agent inside the guest. Well,
> > there's also the need to have in the VM a well-known vNIC bridged to a
> > network that's routed to the Attestation Server, that everyone seems
> > to consider a given, but to me, from a CSP perspective, looks like
> > quite a headache. In fact, I'd go as far as to suggest this
> > communication should happen through an alternative channel, such as
> > vsock, having a proxy on the Host, but I guess that depends on the CSP
> > infrastructure.
> 
> If we have an alternative channel (vsock?) and a proxy on the host,
> maybe we can share parts of the solution between SEV and SNP.
> 
> 
> > For SEV/SEV-ES, as the attestation happens at the VMM level, there's
> > still the need to have some interactions with it. As Tyler pointed
> > out, we basically need to retrieve the measurement and, if valid,
> > inject the secret. If the measurement isn't valid, the VM must be shut
> > down immediately.
> > 
> > In libkrun-tee, this operation is driven by the VMM in libkrun, which
> > contacts the Attestation Server with the measurement and receives the
> > secret in exchange. I guess for QEMU/libvirt we expect this to be
> > driven by the upper management layer through a delegated component in
> > the Host, such as NOVA. In this case, NOVA would need to:
> > 
> >  - Based on the upper management layer info and the Host properties,
> >    generate a guest policy and use it while generating the compute
> >    instance XML.
> > 
> >  - Ask libvirt to launch the VM.
> 
> Launch the VM with -S (suspended; so it doesn't actually begin running
> guest instructions).
> 
> 
> > 
> >  - Wait for the VM to be in SEV_STATE_LAUNCH_SECRET state *.
> > 
> >  - Retrieve the measurement *.
> 
> Note that libvirt holds the QMP socket to QEMU.  So whoever fetches the
> measurement needs either (a) to ask libvirt to it; or (b) to connect to
> another QMP listening socket for getting the measurement and injecting
> the secret.

Libvirt would not be particularly happy with allowing (b) because it
enables the 3rd parties to change the VM state behind libvirt's back
in ways that can ultimately confuse its understanding of the state
of the VM. If there's some task that needs  interaction with a QEMU
managed by libvirt, we need to expose suitable APIs in libvirt (if
they don't already exist).


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25 13:27     ` Daniel P. Berrangé
@ 2021-11-25 13:50       ` Dov Murik
  2021-11-25 13:56         ` Daniel P. Berrangé
  2021-11-25 15:19       ` Dr. David Alan Gilbert
  1 sibling, 1 reply; 26+ messages in thread
From: Dov Murik @ 2021-11-25 13:50 UTC (permalink / raw)
  To: Daniel P. Berrangé, Dr. David Alan Gilbert
  Cc: slp, afrosi, qemu-devel, Dov Murik, Tyler Fanelli, dinechin,
	John Ferlan



On 25/11/2021 15:27, Daniel P. Berrangé wrote:
> On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
>> * Daniel P. Berrangé (berrange@redhat.com) wrote:
>>> On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
>>>> Hi,
>>>>
>>>> We recently discussed a way for remote SEV guest attestation through QEMU.
>>>> My initial approach was to get data needed for attestation through different
>>>> QMP commands (all of which are already available, so no changes required
>>>> there), deriving hashes and certificate data; and collecting all of this
>>>> into a new QMP struct (SevLaunchStart, which would include the VM's policy,
>>>> secret, and GPA) which would need to be upstreamed into QEMU. Once this is
>>>> provided, QEMU would then need to have support for attestation before a VM
>>>> is started. Upon speaking to Dave about this proposal, he mentioned that
>>>> this may not be the best approach, as some situations would render the
>>>> attestation unavailable, such as the instance where a VM is running in a
>>>> cloud, and a guest owner would like to perform attestation via QMP (a likely
>>>> scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
>>>> commands, as this could be an issue.
>>>
>>> As a general point, QMP is a low level QEMU implementation detail,
>>> which is generally expected to be consumed exclusively on the host
>>> by a privileged mgmt layer, which will in turn expose its own higher
>>> level APIs to users or other apps. I would not expect to see QMP
>>> exposed to anything outside of the privileged host layer.
>>>
>>> We also use the QAPI protocol for QEMU guest agent commmunication,
>>> however, that is a distinct service from QMP on the host. It shares
>>> most infra with QMP but has a completely diffent command set. On the
>>> host it is not consumed inside QEMU, but instead consumed by a
>>> mgmt app like libvirt. 
>>>
>>>> So I ask, does anyone involved in QEMU's SEV implementation have any input
>>>> on a quality way to perform guest attestation? If so, I'd be interested.
>>>
>>> I think what's missing is some clearer illustrations of how this
>>> feature is expected to be consumed in some real world application
>>> and the use cases we're trying to solve.
>>>
>>> I'd like to understand how it should fit in with common libvirt
>>> applications across the different virtualization management
>>> scenarios - eg virsh (command line),  virt-manger (local desktop
>>> GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
>>> And of course any non-traditional virt use cases that might be
>>> relevant such as Kata.
>>
>> That's still not that clear; I know Alice and Sergio have some ideas
>> (cc'd).
>> There's also some standardisation efforts (e.g. https://www.potaroo.net/ietf/html/ids-wg-rats.html 
>> and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html
>> ) - that I can't claim to fully understand.
>> However, there are some themes that are emerging:
>>
>>   a) One use is to only allow a VM to access some private data once we
>> prove it's the VM we expect running in a secure/confidential system
>>   b) (a) normally involves requesting some proof from the VM and then
>> providing it some confidential data/a key if it's OK
> 
> I guess I'm wondering what the threat we're protecting against is,
> and / or which pieces of the stack we can trust ?
> 
> eg, if the host has 2 VMs running, we verify the 1st and provide
> its confidental data back to the host, what stops the host giving
> that dat to the 2nd non-verified VM ? 

The host can't read the injected secret: It is encrypted with a key that
is available only to the PSP.  The PSP receives it and writes it in a
guest-encrypted memory (which the host also cannot read; for the guest
it's a simple memory access with C-bit=1).  So it's a per-vm-invocation
secret.


> 
> Presumably the data has to be encrypted with a key that is uniquely
> tied to this specific boot attempt of the verified VM, and not
> accessible to any other VM, or to future boots of this VM ?

Yes, launch blob, which (if I recall correctly) the Guest Owner should
generate and give to the Cloud Provider so it can start a VM with it
(this is one of the options on the sev-guest object).

-Dov


> 
> 
>>   c) RATs splits the problem up:
>>     https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html#name-architectural-overview
>>     I don't fully understand the split yet, but in principal there are
>> at least a few different things:
>>
>>   d) The comms layer
>>   e) Something that validates the attestation message (i.e. the
>> signatures are valid, the hashes all add up etc)
>>   f) Something that knows what hashes to expect (i.e. oh that's a RHEL
>> 8.4 kernel, or that's a valid kernel command line)
>>   g) Something that holds some secrets that can be handed out if e & f
>> are happy.
>>
>>   There have also been proposals (e.g. Intel HTTPA) for an attestable
>> connection after a VM is running; that's probably quite different from
>> (g) but still involves (e) & (f).
>>
>> In the simpler setups d,e,f,g probably live in one place; but it's not
>> clear where they live - for example one scenario says that your cloud
>> management layer holds some of them, another says you don't trust your
>> cloud management layer and you keep them separate.
> 
> Yep, again I'm wondering what the specific threats are that we're
> trying to mitigate. Whether we trust the cloud mgmt APIs, but don't
> trust the compute hosts, or whether we trust neither the cloud
> mgmt APIs or the compute hosts.
> 
> If we don't trust the compute hosts, does that include the part
> of the cloud mgmt API that is  running on the compute host, or
> does that just mean the execution environment of the VM, or something
> else?
> 
>> So I think all we're actually interested in at the moment, is (d) and
>> (e) and the way for (g) to get the secret back to the guest.
>>
>> Unfortunately the comms and the contents of them varies heavily with
>> technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES)
>> while in some you're talking to the guest after boot (SEV-SNP/TDX maybe
>> SEV-ES in some cases).
>>
>> So my expectation at the moment is libvirt needs to provide a transport
>> layer for the comms, to enable an external validator to retrieve the
>> measurements from the guest/hypervisor and provide data back if
>> necessary.  Once this shakes out a bit, we might want libvirt to be
>> able to invoke the validator; however I expect (f) and (g) to be much
>> more complex things that don't feel like they belong in libvirt.
> 
> Yep, I don't think (f) & (g) belong in libvirt, since libvirt is
> deployed per compute host, while (f) / (g) are something that is
> likely to be deployed in a separate trusted host, at least for
> data center / cloud deployments. May be there's a case where they
> can all be same-host for more specialized use cases.
> 
> Regards,
> Daniel
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25  7:14     ` Sergio Lopez
                         ` (2 preceding siblings ...)
  2021-11-25 13:36       ` Daniel P. Berrangé
@ 2021-11-25 13:52       ` Daniel P. Berrangé
  2021-11-25 13:55         ` Dov Murik
  2021-11-25 15:00         ` Dr. David Alan Gilbert
  3 siblings, 2 replies; 26+ messages in thread
From: Daniel P. Berrangé @ 2021-11-25 13:52 UTC (permalink / raw)
  To: Sergio Lopez
  Cc: afrosi, Dr. David Alan Gilbert, qemu-devel, dovmurik,
	Tyler Fanelli, dinechin, John Ferlan

On Thu, Nov 25, 2021 at 08:14:28AM +0100, Sergio Lopez wrote:
> For SEV-SNP, this is pretty much the end of the story, because the
> attestation exchange is driven by an agent inside the guest. Well,
> there's also the need to have in the VM a well-known vNIC bridged to a
> network that's routed to the Attestation Server, that everyone seems
> to consider a given, but to me, from a CSP perspective, looks like
> quite a headache. In fact, I'd go as far as to suggest this
> communication should happen through an alternative channel, such as
> vsock, having a proxy on the Host, but I guess that depends on the CSP
> infrastructure.

Allowing network connections from inside the VM, to any kind
of host side mgmt LAN services is a big no for some cloud hosts.

They usually desire for any guest network connectivity to be
associated with a VLAN/network segment that is strictly isolated
from any host mgmt LAN.

OpenStack provides a virtual CCDROM for injecting cloud-init
metadata as an alternative to the network based metadata REST
service, since they latter often isn't deployed.

Similarly for virtual filesystems, we've designed virtiofs,
rather than relying on a 2nd NIC combined with NFS.

We cannot assume availability of a real network device for the
attestation. If one does exist fine, but there needs to be an
alternative option that can be used.


On a slightly different topic - if the attestation is driven
from an agent inside the guest, this seems to imply we let the
guest vCPUs start beforre attestation is done. Contrary to
the SEV/SEV-ES where we seem to be wanting vCPUs to remain
in the stopped state until attestation is complete & secrets
provided.  If the vCPUs are started, is there some mechanism
to restrict what can be done  before attestation is complete?

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25 13:52       ` Daniel P. Berrangé
@ 2021-11-25 13:55         ` Dov Murik
  2021-11-25 15:00         ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 26+ messages in thread
From: Dov Murik @ 2021-11-25 13:55 UTC (permalink / raw)
  To: Daniel P. Berrangé, Sergio Lopez
  Cc: afrosi, James Bottomley, Dr. David Alan Gilbert, qemu-devel,
	Dov Murik, Tyler Fanelli, dinechin, John Ferlan



On 25/11/2021 15:52, Daniel P. Berrangé wrote:
> On Thu, Nov 25, 2021 at 08:14:28AM +0100, Sergio Lopez wrote:
>> For SEV-SNP, this is pretty much the end of the story, because the
>> attestation exchange is driven by an agent inside the guest. Well,
>> there's also the need to have in the VM a well-known vNIC bridged to a
>> network that's routed to the Attestation Server, that everyone seems
>> to consider a given, but to me, from a CSP perspective, looks like
>> quite a headache. In fact, I'd go as far as to suggest this
>> communication should happen through an alternative channel, such as
>> vsock, having a proxy on the Host, but I guess that depends on the CSP
>> infrastructure.
> 
> Allowing network connections from inside the VM, to any kind
> of host side mgmt LAN services is a big no for some cloud hosts.
> 
> They usually desire for any guest network connectivity to be
> associated with a VLAN/network segment that is strictly isolated
> from any host mgmt LAN.
> 
> OpenStack provides a virtual CCDROM for injecting cloud-init
> metadata as an alternative to the network based metadata REST
> service, since they latter often isn't deployed.
> 
> Similarly for virtual filesystems, we've designed virtiofs,
> rather than relying on a 2nd NIC combined with NFS.
> 
> We cannot assume availability of a real network device for the
> attestation. If one does exist fine, but there needs to be an
> alternative option that can be used.
> 
> 
> On a slightly different topic - if the attestation is driven
> from an agent inside the guest, this seems to imply we let the
> guest vCPUs start beforre attestation is done. Contrary to
> the SEV/SEV-ES where we seem to be wanting vCPUs to remain
> in the stopped state until attestation is complete & secrets
> provided.  If the vCPUs are started, is there some mechanism
> to restrict what can be done  before attestation is complete?

The only mechanism is to design the workload in the Guest in a way that
it can't do anything meaningful until the secret is injected, and the
Attestation Server will release the secret only if a proper attestation
report is presented.

James (cc'd) wants to move this attestation check as early as possible
--> "to restrict what can be done before attestation is complete".


-Dov


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25 13:50       ` Dov Murik
@ 2021-11-25 13:56         ` Daniel P. Berrangé
  0 siblings, 0 replies; 26+ messages in thread
From: Daniel P. Berrangé @ 2021-11-25 13:56 UTC (permalink / raw)
  To: Dov Murik
  Cc: slp, afrosi, Dr. David Alan Gilbert, qemu-devel, Tyler Fanelli,
	dinechin, John Ferlan

On Thu, Nov 25, 2021 at 03:50:46PM +0200, Dov Murik wrote:
> 
> 
> On 25/11/2021 15:27, Daniel P. Berrangé wrote:
> > On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
> >> * Daniel P. Berrangé (berrange@redhat.com) wrote:
> >>> On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
> >>>> Hi,
> >>>>
> >>>> We recently discussed a way for remote SEV guest attestation through QEMU.
> >>>> My initial approach was to get data needed for attestation through different
> >>>> QMP commands (all of which are already available, so no changes required
> >>>> there), deriving hashes and certificate data; and collecting all of this
> >>>> into a new QMP struct (SevLaunchStart, which would include the VM's policy,
> >>>> secret, and GPA) which would need to be upstreamed into QEMU. Once this is
> >>>> provided, QEMU would then need to have support for attestation before a VM
> >>>> is started. Upon speaking to Dave about this proposal, he mentioned that
> >>>> this may not be the best approach, as some situations would render the
> >>>> attestation unavailable, such as the instance where a VM is running in a
> >>>> cloud, and a guest owner would like to perform attestation via QMP (a likely
> >>>> scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
> >>>> commands, as this could be an issue.
> >>>
> >>> As a general point, QMP is a low level QEMU implementation detail,
> >>> which is generally expected to be consumed exclusively on the host
> >>> by a privileged mgmt layer, which will in turn expose its own higher
> >>> level APIs to users or other apps. I would not expect to see QMP
> >>> exposed to anything outside of the privileged host layer.
> >>>
> >>> We also use the QAPI protocol for QEMU guest agent commmunication,
> >>> however, that is a distinct service from QMP on the host. It shares
> >>> most infra with QMP but has a completely diffent command set. On the
> >>> host it is not consumed inside QEMU, but instead consumed by a
> >>> mgmt app like libvirt. 
> >>>
> >>>> So I ask, does anyone involved in QEMU's SEV implementation have any input
> >>>> on a quality way to perform guest attestation? If so, I'd be interested.
> >>>
> >>> I think what's missing is some clearer illustrations of how this
> >>> feature is expected to be consumed in some real world application
> >>> and the use cases we're trying to solve.
> >>>
> >>> I'd like to understand how it should fit in with common libvirt
> >>> applications across the different virtualization management
> >>> scenarios - eg virsh (command line),  virt-manger (local desktop
> >>> GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
> >>> And of course any non-traditional virt use cases that might be
> >>> relevant such as Kata.
> >>
> >> That's still not that clear; I know Alice and Sergio have some ideas
> >> (cc'd).
> >> There's also some standardisation efforts (e.g. https://www.potaroo.net/ietf/html/ids-wg-rats.html 
> >> and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html
> >> ) - that I can't claim to fully understand.
> >> However, there are some themes that are emerging:
> >>
> >>   a) One use is to only allow a VM to access some private data once we
> >> prove it's the VM we expect running in a secure/confidential system
> >>   b) (a) normally involves requesting some proof from the VM and then
> >> providing it some confidential data/a key if it's OK
> > 
> > I guess I'm wondering what the threat we're protecting against is,
> > and / or which pieces of the stack we can trust ?
> > 
> > eg, if the host has 2 VMs running, we verify the 1st and provide
> > its confidental data back to the host, what stops the host giving
> > that dat to the 2nd non-verified VM ? 
> 
> The host can't read the injected secret: It is encrypted with a key that
> is available only to the PSP.  The PSP receives it and writes it in a
> guest-encrypted memory (which the host also cannot read; for the guest
> it's a simple memory access with C-bit=1).  So it's a per-vm-invocation
> secret.

Is there some way the PSP verifies which VM is supposed to receive
the injected data. ie the host can't read it, but it can tell the
PSP to inject it to VM B instead of VM A.

> > Presumably the data has to be encrypted with a key that is uniquely
> > tied to this specific boot attempt of the verified VM, and not
> > accessible to any other VM, or to future boots of this VM ?
> 
> Yes, launch blob, which (if I recall correctly) the Guest Owner should
> generate and give to the Cloud Provider so it can start a VM with it
> (this is one of the options on the sev-guest object).

Does something stop the host from booting a 2nd VM on the side with
the same launch blob, and thus be able to also tell the PSP to inject
the secret data into this 2nd VM later too ?

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25 13:42         ` Daniel P. Berrangé
@ 2021-11-25 13:59           ` Dov Murik
  2021-11-29 14:29             ` Brijesh Singh
  0 siblings, 1 reply; 26+ messages in thread
From: Dov Murik @ 2021-11-25 13:59 UTC (permalink / raw)
  To: Daniel P. Berrangé, Tom Lendacky, Brijesh Singh
  Cc: Dov Murik, Sergio Lopez, afrosi, James Bottomley,
	Dr. David Alan Gilbert, qemu-devel, Hubertus Franke,
	Tyler Fanelli, Tobin Feldman-Fitzthum, Jim Cadden, dinechin,
	John Ferlan

[+cc Tom, Brijesh]

On 25/11/2021 15:42, Daniel P. Berrangé wrote:
> On Thu, Nov 25, 2021 at 02:44:51PM +0200, Dov Murik wrote:
>> [+cc jejb, tobin, jim, hubertus]
>>
>>
>> On 25/11/2021 9:14, Sergio Lopez wrote:
>>> On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
>>>> * Daniel P. Berrangé (berrange@redhat.com) wrote:
>>>>> On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
>>>>>> Hi,
>>>>>>
>>>>>> We recently discussed a way for remote SEV guest attestation through QEMU.
>>>>>> My initial approach was to get data needed for attestation through different
>>>>>> QMP commands (all of which are already available, so no changes required
>>>>>> there), deriving hashes and certificate data; and collecting all of this
>>>>>> into a new QMP struct (SevLaunchStart, which would include the VM's policy,
>>>>>> secret, and GPA) which would need to be upstreamed into QEMU. Once this is
>>>>>> provided, QEMU would then need to have support for attestation before a VM
>>>>>> is started. Upon speaking to Dave about this proposal, he mentioned that
>>>>>> this may not be the best approach, as some situations would render the
>>>>>> attestation unavailable, such as the instance where a VM is running in a
>>>>>> cloud, and a guest owner would like to perform attestation via QMP (a likely
>>>>>> scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
>>>>>> commands, as this could be an issue.
>>>>>
>>>>> As a general point, QMP is a low level QEMU implementation detail,
>>>>> which is generally expected to be consumed exclusively on the host
>>>>> by a privileged mgmt layer, which will in turn expose its own higher
>>>>> level APIs to users or other apps. I would not expect to see QMP
>>>>> exposed to anything outside of the privileged host layer.
>>>>>
>>>>> We also use the QAPI protocol for QEMU guest agent commmunication,
>>>>> however, that is a distinct service from QMP on the host. It shares
>>>>> most infra with QMP but has a completely diffent command set. On the
>>>>> host it is not consumed inside QEMU, but instead consumed by a
>>>>> mgmt app like libvirt. 
>>>>>
>>>>>> So I ask, does anyone involved in QEMU's SEV implementation have any input
>>>>>> on a quality way to perform guest attestation? If so, I'd be interested.
>>>>>
>>>>> I think what's missing is some clearer illustrations of how this
>>>>> feature is expected to be consumed in some real world application
>>>>> and the use cases we're trying to solve.
>>>>>
>>>>> I'd like to understand how it should fit in with common libvirt
>>>>> applications across the different virtualization management
>>>>> scenarios - eg virsh (command line),  virt-manger (local desktop
>>>>> GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
>>>>> And of course any non-traditional virt use cases that might be
>>>>> relevant such as Kata.
>>>>
>>>> That's still not that clear; I know Alice and Sergio have some ideas
>>>> (cc'd).
>>>> There's also some standardisation efforts (e.g. https://www.potaroo.net/ietf/html/ids-wg-rats.html 
>>>> and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html
>>>> ) - that I can't claim to fully understand.
>>>> However, there are some themes that are emerging:
>>>>
>>>>   a) One use is to only allow a VM to access some private data once we
>>>> prove it's the VM we expect running in a secure/confidential system
>>>>   b) (a) normally involves requesting some proof from the VM and then
>>>> providing it some confidential data/a key if it's OK
>>>>   c) RATs splits the problem up:
>>>>     https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html#name-architectural-overview
>>>>     I don't fully understand the split yet, but in principal there are
>>>> at least a few different things:
>>>>
>>>>   d) The comms layer
>>>>   e) Something that validates the attestation message (i.e. the
>>>> signatures are valid, the hashes all add up etc)
>>>>   f) Something that knows what hashes to expect (i.e. oh that's a RHEL
>>>> 8.4 kernel, or that's a valid kernel command line)
>>>>   g) Something that holds some secrets that can be handed out if e & f
>>>> are happy.
>>>>
>>>>   There have also been proposals (e.g. Intel HTTPA) for an attestable
>>>> connection after a VM is running; that's probably quite different from
>>>> (g) but still involves (e) & (f).
>>>>
>>>> In the simpler setups d,e,f,g probably live in one place; but it's not
>>>> clear where they live - for example one scenario says that your cloud
>>>> management layer holds some of them, another says you don't trust your
>>>> cloud management layer and you keep them separate.
>>>>
>>>> So I think all we're actually interested in at the moment, is (d) and
>>>> (e) and the way for (g) to get the secret back to the guest.
>>>>
>>>> Unfortunately the comms and the contents of them varies heavily with
>>>> technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES)
>>>> while in some you're talking to the guest after boot (SEV-SNP/TDX maybe
>>>> SEV-ES in some cases).
>>
>> SEV-ES has pre-launch measurement and secret injection, just like SEV
>> (except that the measurement includes the initial states of all vcpus,
>> that is, their VMSAs.  BTW that means that in order to calculate the
>> measurement the Attestation Server must know exactly how many vcpus are
>> in the VM).
> 
> Does that work with CPU hotplug ? ie cold boot with -smp 4,maxcpus=8
> and some time later try to enable the extra 4 cpus at runtime ?
> 

AFAIK all generations of SEV don't support CPU hotplug. Tom, Brijesh -
is that indeed the case?

I don't know about TDX.

-Dov


> 
>>>> So my expectation at the moment is libvirt needs to provide a transport
>>>> layer for the comms, to enable an external validator to retrieve the
>>>> measurements from the guest/hypervisor and provide data back if
>>>> necessary.  Once this shakes out a bit, we might want libvirt to be
>>>> able to invoke the validator; however I expect (f) and (g) to be much
>>>> more complex things that don't feel like they belong in libvirt.
>>>
>>> We experimented with the attestation flow quite a bit while working on
>>> SEV-ES support for libkrun-tee. One important aspect we noticed quite
>>> early, is that there's more data that's needed to be exchange of top
>>> of the attestation itself.
>>>
>>> For instance, even before you start the VM, the management layer in
>>> charge of coordinating the confidential VM launch needs to obtain the
>>> Virtualization TEE capabilities of the Host (SEV-ES vs. SEV-SNP
>>> vs. TDX) and the platform version, to know which features are
>>> available and whether that host is a candidate for running the VM at
>>> all.
>>>
>>> With that information, the mgmt layer can build a guest policy (this
>>> is SEV's terminology, but I guess we'll have something similar in
>>> TDX) and feed it to component launching the VMM (libvirt, in this
>>> case).
>>>
>>> For SEV-SNP, this is pretty much the end of the story, because the
>>> attestation exchange is driven by an agent inside the guest. Well,
>>> there's also the need to have in the VM a well-known vNIC bridged to a
>>> network that's routed to the Attestation Server, that everyone seems
>>> to consider a given, but to me, from a CSP perspective, looks like
>>> quite a headache. In fact, I'd go as far as to suggest this
>>> communication should happen through an alternative channel, such as
>>> vsock, having a proxy on the Host, but I guess that depends on the CSP
>>> infrastructure.
>>
>> If we have an alternative channel (vsock?) and a proxy on the host,
>> maybe we can share parts of the solution between SEV and SNP.
>>
>>
>>> For SEV/SEV-ES, as the attestation happens at the VMM level, there's
>>> still the need to have some interactions with it. As Tyler pointed
>>> out, we basically need to retrieve the measurement and, if valid,
>>> inject the secret. If the measurement isn't valid, the VM must be shut
>>> down immediately.
>>>
>>> In libkrun-tee, this operation is driven by the VMM in libkrun, which
>>> contacts the Attestation Server with the measurement and receives the
>>> secret in exchange. I guess for QEMU/libvirt we expect this to be
>>> driven by the upper management layer through a delegated component in
>>> the Host, such as NOVA. In this case, NOVA would need to:
>>>
>>>  - Based on the upper management layer info and the Host properties,
>>>    generate a guest policy and use it while generating the compute
>>>    instance XML.
>>>
>>>  - Ask libvirt to launch the VM.
>>
>> Launch the VM with -S (suspended; so it doesn't actually begin running
>> guest instructions).
>>
>>
>>>
>>>  - Wait for the VM to be in SEV_STATE_LAUNCH_SECRET state *.
>>>
>>>  - Retrieve the measurement *.
>>
>> Note that libvirt holds the QMP socket to QEMU.  So whoever fetches the
>> measurement needs either (a) to ask libvirt to it; or (b) to connect to
>> another QMP listening socket for getting the measurement and injecting
>> the secret.
> 
> Libvirt would not be particularly happy with allowing (b) because it
> enables the 3rd parties to change the VM state behind libvirt's back
> in ways that can ultimately confuse its understanding of the state
> of the VM. If there's some task that needs  interaction with a QEMU
> managed by libvirt, we need to expose suitable APIs in libvirt (if
> they don't already exist).
> 
> 
> Regards,
> Daniel
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25 13:52       ` Daniel P. Berrangé
  2021-11-25 13:55         ` Dov Murik
@ 2021-11-25 15:00         ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 26+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-25 15:00 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Sergio Lopez, afrosi, qemu-devel, dovmurik, Tyler Fanelli,
	dinechin, John Ferlan

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Thu, Nov 25, 2021 at 08:14:28AM +0100, Sergio Lopez wrote:
> > For SEV-SNP, this is pretty much the end of the story, because the
> > attestation exchange is driven by an agent inside the guest. Well,
> > there's also the need to have in the VM a well-known vNIC bridged to a
> > network that's routed to the Attestation Server, that everyone seems
> > to consider a given, but to me, from a CSP perspective, looks like
> > quite a headache. In fact, I'd go as far as to suggest this
> > communication should happen through an alternative channel, such as
> > vsock, having a proxy on the Host, but I guess that depends on the CSP
> > infrastructure.
> 
> Allowing network connections from inside the VM, to any kind
> of host side mgmt LAN services is a big no for some cloud hosts.
> 
> They usually desire for any guest network connectivity to be
> associated with a VLAN/network segment that is strictly isolated
> from any host mgmt LAN.
> 
> OpenStack provides a virtual CCDROM for injecting cloud-init
> metadata as an alternative to the network based metadata REST
> service, since they latter often isn't deployed.
> 
> Similarly for virtual filesystems, we've designed virtiofs,
> rather than relying on a 2nd NIC combined with NFS.
> 
> We cannot assume availability of a real network device for the
> attestation. If one does exist fine, but there needs to be an
> alternative option that can be used.
> 
> 
> On a slightly different topic - if the attestation is driven
> from an agent inside the guest, this seems to imply we let the
> guest vCPUs start beforre attestation is done. Contrary to
> the SEV/SEV-ES where we seem to be wanting vCPUs to remain
> in the stopped state until attestation is complete & secrets
> provided.

That's right; SEV/SEV-ES is the odd case here.

> If the vCPUs are started, is there some mechanism
> to restrict what can be done  before attestation is complete?

Just the fact you haven't provided it the keys to decrypt it's disk to
do anything interesting; there's the potential to add extra if you
wanted (e.g. 802.1X network auth).

Dave

> 
> Regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25 12:44       ` Dov Murik
  2021-11-25 13:42         ` Daniel P. Berrangé
@ 2021-11-25 15:11         ` Sergio Lopez
  2021-11-25 15:40           ` Dr. David Alan Gilbert
  1 sibling, 1 reply; 26+ messages in thread
From: Sergio Lopez @ 2021-11-25 15:11 UTC (permalink / raw)
  To: Dov Murik
  Cc: Daniel P. Berrangé, afrosi, James Bottomley,
	Dr. David Alan Gilbert, qemu-devel, Hubertus Franke,
	Tyler Fanelli, Tobin Feldman-Fitzthum, Jim Cadden, dinechin,
	John Ferlan

[-- Attachment #1: Type: text/plain, Size: 5757 bytes --]

On Thu, Nov 25, 2021 at 02:44:51PM +0200, Dov Murik wrote:
> [+cc jejb, tobin, jim, hubertus]
> 
> 
> On 25/11/2021 9:14, Sergio Lopez wrote:
> > On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
> >> * Daniel P. Berrangé (berrange@redhat.com) wrote:
> >>> On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
> >>>> Hi,
> >>>>
> >>>> We recently discussed a way for remote SEV guest attestation through QEMU.
> >>>> My initial approach was to get data needed for attestation through different
> >>>> QMP commands (all of which are already available, so no changes required
> >>>> there), deriving hashes and certificate data; and collecting all of this
> >>>> into a new QMP struct (SevLaunchStart, which would include the VM's policy,
> >>>> secret, and GPA) which would need to be upstreamed into QEMU. Once this is
> >>>> provided, QEMU would then need to have support for attestation before a VM
> >>>> is started. Upon speaking to Dave about this proposal, he mentioned that
> >>>> this may not be the best approach, as some situations would render the
> >>>> attestation unavailable, such as the instance where a VM is running in a
> >>>> cloud, and a guest owner would like to perform attestation via QMP (a likely
> >>>> scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
> >>>> commands, as this could be an issue.
> >>>
> >>> As a general point, QMP is a low level QEMU implementation detail,
> >>> which is generally expected to be consumed exclusively on the host
> >>> by a privileged mgmt layer, which will in turn expose its own higher
> >>> level APIs to users or other apps. I would not expect to see QMP
> >>> exposed to anything outside of the privileged host layer.
> >>>
> >>> We also use the QAPI protocol for QEMU guest agent commmunication,
> >>> however, that is a distinct service from QMP on the host. It shares
> >>> most infra with QMP but has a completely diffent command set. On the
> >>> host it is not consumed inside QEMU, but instead consumed by a
> >>> mgmt app like libvirt. 
> >>>
> >>>> So I ask, does anyone involved in QEMU's SEV implementation have any input
> >>>> on a quality way to perform guest attestation? If so, I'd be interested.
> >>>
> >>> I think what's missing is some clearer illustrations of how this
> >>> feature is expected to be consumed in some real world application
> >>> and the use cases we're trying to solve.
> >>>
> >>> I'd like to understand how it should fit in with common libvirt
> >>> applications across the different virtualization management
> >>> scenarios - eg virsh (command line),  virt-manger (local desktop
> >>> GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
> >>> And of course any non-traditional virt use cases that might be
> >>> relevant such as Kata.
> >>
> >> That's still not that clear; I know Alice and Sergio have some ideas
> >> (cc'd).
> >> There's also some standardisation efforts (e.g. https://www.potaroo.net/ietf/html/ids-wg-rats.html 
> >> and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html
> >> ) - that I can't claim to fully understand.
> >> However, there are some themes that are emerging:
> >>
> >>   a) One use is to only allow a VM to access some private data once we
> >> prove it's the VM we expect running in a secure/confidential system
> >>   b) (a) normally involves requesting some proof from the VM and then
> >> providing it some confidential data/a key if it's OK
> >>   c) RATs splits the problem up:
> >>     https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html#name-architectural-overview
> >>     I don't fully understand the split yet, but in principal there are
> >> at least a few different things:
> >>
> >>   d) The comms layer
> >>   e) Something that validates the attestation message (i.e. the
> >> signatures are valid, the hashes all add up etc)
> >>   f) Something that knows what hashes to expect (i.e. oh that's a RHEL
> >> 8.4 kernel, or that's a valid kernel command line)
> >>   g) Something that holds some secrets that can be handed out if e & f
> >> are happy.
> >>
> >>   There have also been proposals (e.g. Intel HTTPA) for an attestable
> >> connection after a VM is running; that's probably quite different from
> >> (g) but still involves (e) & (f).
> >>
> >> In the simpler setups d,e,f,g probably live in one place; but it's not
> >> clear where they live - for example one scenario says that your cloud
> >> management layer holds some of them, another says you don't trust your
> >> cloud management layer and you keep them separate.
> >>
> >> So I think all we're actually interested in at the moment, is (d) and
> >> (e) and the way for (g) to get the secret back to the guest.
> >>
> >> Unfortunately the comms and the contents of them varies heavily with
> >> technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES)
> >> while in some you're talking to the guest after boot (SEV-SNP/TDX maybe
> >> SEV-ES in some cases).
> 
> SEV-ES has pre-launch measurement and secret injection, just like SEV
> (except that the measurement includes the initial states of all vcpus,
> that is, their VMSAs.  BTW that means that in order to calculate the
> measurement the Attestation Server must know exactly how many vcpus are
> in the VM).

You need the number of vCPUs and an idea of what their initial state
is going to be, to be able to reproduce the same VMSA struct in the
Attestation Server.

This may tie the Attestation Server with a particular version of both
QEMU and KVM. I haven't checked if configuration changes in QEMU may
also have an impact on it.

Sergio.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25 13:27     ` Daniel P. Berrangé
  2021-11-25 13:50       ` Dov Murik
@ 2021-11-25 15:19       ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 26+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-25 15:19 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: slp, afrosi, qemu-devel, dovmurik, Tyler Fanelli, dinechin,
	John Ferlan

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
> > * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > > On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
> > > > Hi,
> > > > 
> > > > We recently discussed a way for remote SEV guest attestation through QEMU.
> > > > My initial approach was to get data needed for attestation through different
> > > > QMP commands (all of which are already available, so no changes required
> > > > there), deriving hashes and certificate data; and collecting all of this
> > > > into a new QMP struct (SevLaunchStart, which would include the VM's policy,
> > > > secret, and GPA) which would need to be upstreamed into QEMU. Once this is
> > > > provided, QEMU would then need to have support for attestation before a VM
> > > > is started. Upon speaking to Dave about this proposal, he mentioned that
> > > > this may not be the best approach, as some situations would render the
> > > > attestation unavailable, such as the instance where a VM is running in a
> > > > cloud, and a guest owner would like to perform attestation via QMP (a likely
> > > > scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
> > > > commands, as this could be an issue.
> > > 
> > > As a general point, QMP is a low level QEMU implementation detail,
> > > which is generally expected to be consumed exclusively on the host
> > > by a privileged mgmt layer, which will in turn expose its own higher
> > > level APIs to users or other apps. I would not expect to see QMP
> > > exposed to anything outside of the privileged host layer.
> > > 
> > > We also use the QAPI protocol for QEMU guest agent commmunication,
> > > however, that is a distinct service from QMP on the host. It shares
> > > most infra with QMP but has a completely diffent command set. On the
> > > host it is not consumed inside QEMU, but instead consumed by a
> > > mgmt app like libvirt. 
> > > 
> > > > So I ask, does anyone involved in QEMU's SEV implementation have any input
> > > > on a quality way to perform guest attestation? If so, I'd be interested.
> > > 
> > > I think what's missing is some clearer illustrations of how this
> > > feature is expected to be consumed in some real world application
> > > and the use cases we're trying to solve.
> > > 
> > > I'd like to understand how it should fit in with common libvirt
> > > applications across the different virtualization management
> > > scenarios - eg virsh (command line),  virt-manger (local desktop
> > > GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
> > > And of course any non-traditional virt use cases that might be
> > > relevant such as Kata.
> > 
> > That's still not that clear; I know Alice and Sergio have some ideas
> > (cc'd).
> > There's also some standardisation efforts (e.g. https://www.potaroo.net/ietf/html/ids-wg-rats.html 
> > and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html
> > ) - that I can't claim to fully understand.
> > However, there are some themes that are emerging:
> > 
> >   a) One use is to only allow a VM to access some private data once we
> > prove it's the VM we expect running in a secure/confidential system
> >   b) (a) normally involves requesting some proof from the VM and then
> > providing it some confidential data/a key if it's OK
> 
> I guess I'm wondering what the threat we're protecting against is,
> and / or which pieces of the stack we can trust ?

Yeh and that varies depending who you speak to.

> eg, if the host has 2 VMs running, we verify the 1st and provide
> its confidental data back to the host, what stops the host giving
> that dat to the 2nd non-verified VM ? 
> 
> Presumably the data has to be encrypted with a key that is uniquely
> tied to this specific boot attempt of the verified VM, and not
> accessible to any other VM, or to future boots of this VM ?

In the SEV/-ES case the attestation is uniquefied by a Nonce I think
and there's sometype of session key used (can't remember the details)
and the returning of the key to the VM is encrypted through that same
channel; so you know you're giving the key to the thing you attested.

However, since in SEV/ES you only measure the firmware (and number of
CPUs) all VMs look pretty much identical at that point - distinguishing
them relies either on:
  a) In the GRUB/OVMF case you are relying on the key you return to the
VM succesfully decrypting the disk and the embedded Grub being able to
load the kernel/initrd (You attested the embedded Grub, so you trust
it to do that)
  b) In the separate kernel/initrd case you do have the kernel command
line measured as well.

> >   c) RATs splits the problem up:
> >     https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html#name-architectural-overview
> >     I don't fully understand the split yet, but in principal there are
> > at least a few different things:
> > 
> >   d) The comms layer
> >   e) Something that validates the attestation message (i.e. the
> > signatures are valid, the hashes all add up etc)
> >   f) Something that knows what hashes to expect (i.e. oh that's a RHEL
> > 8.4 kernel, or that's a valid kernel command line)
> >   g) Something that holds some secrets that can be handed out if e & f
> > are happy.
> > 
> >   There have also been proposals (e.g. Intel HTTPA) for an attestable
> > connection after a VM is running; that's probably quite different from
> > (g) but still involves (e) & (f).
> > 
> > In the simpler setups d,e,f,g probably live in one place; but it's not
> > clear where they live - for example one scenario says that your cloud
> > management layer holds some of them, another says you don't trust your
> > cloud management layer and you keep them separate.
> 
> Yep, again I'm wondering what the specific threats are that we're
> trying to mitigate. Whether we trust the cloud mgmt APIs, but don't
> trust the compute hosts, or whether we trust neither the cloud
> mgmt APIs or the compute hosts.
> 
> If we don't trust the compute hosts, does that include the part
> of the cloud mgmt API that is  running on the compute host, or
> does that just mean the execution environment of the VM, or something
> else?

I think there's pretty good consensus you don't trust the compute host
at all.  How much of the rest of the cloud you trust varies
depending on who you ask.  Some suggest trusting one small part of the
cloud (some highly secure apparently trusted attestation box).
Some would rather not trust the cloud at all, so would want to do
attestation to do their own system;  the problem there is you have to do
an off-site attestation every time your VMs start.
Personally I think maybe a 2 level system would work;  you boot one [set
of ] VMs in the cloud that's attested to your offsite - and they then
run the attestation service for all your VMs in the cloud.

> > So I think all we're actually interested in at the moment, is (d) and
> > (e) and the way for (g) to get the secret back to the guest.
> > 
> > Unfortunately the comms and the contents of them varies heavily with
> > technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES)
> > while in some you're talking to the guest after boot (SEV-SNP/TDX maybe
> > SEV-ES in some cases).
> > 
> > So my expectation at the moment is libvirt needs to provide a transport
> > layer for the comms, to enable an external validator to retrieve the
> > measurements from the guest/hypervisor and provide data back if
> > necessary.  Once this shakes out a bit, we might want libvirt to be
> > able to invoke the validator; however I expect (f) and (g) to be much
> > more complex things that don't feel like they belong in libvirt.
> 
> Yep, I don't think (f) & (g) belong in libvirt, since libvirt is
> deployed per compute host, while (f) / (g) are something that is
> likely to be deployed in a separate trusted host, at least for
> data center / cloud deployments. May be there's a case where they
> can all be same-host for more specialized use cases.

Or even less specialised;  the easiest setup is where you run an
attestation server that does all this on your site, and then put the
compute nodes in a cloud somewhere.

Dave

> Regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25 15:11         ` Sergio Lopez
@ 2021-11-25 15:40           ` Dr. David Alan Gilbert
  2021-11-25 15:56             ` Daniel P. Berrangé
  0 siblings, 1 reply; 26+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-25 15:40 UTC (permalink / raw)
  To: Sergio Lopez
  Cc: Daniel P. Berrangé, Hubertus Franke, afrosi, James Bottomley,
	qemu-devel, Dov Murik, Tyler Fanelli, Tobin Feldman-Fitzthum,
	Jim Cadden, dinechin, John Ferlan

* Sergio Lopez (slp@redhat.com) wrote:
> On Thu, Nov 25, 2021 at 02:44:51PM +0200, Dov Murik wrote:
> > [+cc jejb, tobin, jim, hubertus]
> > 
> > 
> > On 25/11/2021 9:14, Sergio Lopez wrote:
> > > On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
> > >> * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > >>> On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
> > >>>> Hi,
> > >>>>
> > >>>> We recently discussed a way for remote SEV guest attestation through QEMU.
> > >>>> My initial approach was to get data needed for attestation through different
> > >>>> QMP commands (all of which are already available, so no changes required
> > >>>> there), deriving hashes and certificate data; and collecting all of this
> > >>>> into a new QMP struct (SevLaunchStart, which would include the VM's policy,
> > >>>> secret, and GPA) which would need to be upstreamed into QEMU. Once this is
> > >>>> provided, QEMU would then need to have support for attestation before a VM
> > >>>> is started. Upon speaking to Dave about this proposal, he mentioned that
> > >>>> this may not be the best approach, as some situations would render the
> > >>>> attestation unavailable, such as the instance where a VM is running in a
> > >>>> cloud, and a guest owner would like to perform attestation via QMP (a likely
> > >>>> scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
> > >>>> commands, as this could be an issue.
> > >>>
> > >>> As a general point, QMP is a low level QEMU implementation detail,
> > >>> which is generally expected to be consumed exclusively on the host
> > >>> by a privileged mgmt layer, which will in turn expose its own higher
> > >>> level APIs to users or other apps. I would not expect to see QMP
> > >>> exposed to anything outside of the privileged host layer.
> > >>>
> > >>> We also use the QAPI protocol for QEMU guest agent commmunication,
> > >>> however, that is a distinct service from QMP on the host. It shares
> > >>> most infra with QMP but has a completely diffent command set. On the
> > >>> host it is not consumed inside QEMU, but instead consumed by a
> > >>> mgmt app like libvirt. 
> > >>>
> > >>>> So I ask, does anyone involved in QEMU's SEV implementation have any input
> > >>>> on a quality way to perform guest attestation? If so, I'd be interested.
> > >>>
> > >>> I think what's missing is some clearer illustrations of how this
> > >>> feature is expected to be consumed in some real world application
> > >>> and the use cases we're trying to solve.
> > >>>
> > >>> I'd like to understand how it should fit in with common libvirt
> > >>> applications across the different virtualization management
> > >>> scenarios - eg virsh (command line),  virt-manger (local desktop
> > >>> GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
> > >>> And of course any non-traditional virt use cases that might be
> > >>> relevant such as Kata.
> > >>
> > >> That's still not that clear; I know Alice and Sergio have some ideas
> > >> (cc'd).
> > >> There's also some standardisation efforts (e.g. https://www.potaroo.net/ietf/html/ids-wg-rats.html 
> > >> and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html
> > >> ) - that I can't claim to fully understand.
> > >> However, there are some themes that are emerging:
> > >>
> > >>   a) One use is to only allow a VM to access some private data once we
> > >> prove it's the VM we expect running in a secure/confidential system
> > >>   b) (a) normally involves requesting some proof from the VM and then
> > >> providing it some confidential data/a key if it's OK
> > >>   c) RATs splits the problem up:
> > >>     https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html#name-architectural-overview
> > >>     I don't fully understand the split yet, but in principal there are
> > >> at least a few different things:
> > >>
> > >>   d) The comms layer
> > >>   e) Something that validates the attestation message (i.e. the
> > >> signatures are valid, the hashes all add up etc)
> > >>   f) Something that knows what hashes to expect (i.e. oh that's a RHEL
> > >> 8.4 kernel, or that's a valid kernel command line)
> > >>   g) Something that holds some secrets that can be handed out if e & f
> > >> are happy.
> > >>
> > >>   There have also been proposals (e.g. Intel HTTPA) for an attestable
> > >> connection after a VM is running; that's probably quite different from
> > >> (g) but still involves (e) & (f).
> > >>
> > >> In the simpler setups d,e,f,g probably live in one place; but it's not
> > >> clear where they live - for example one scenario says that your cloud
> > >> management layer holds some of them, another says you don't trust your
> > >> cloud management layer and you keep them separate.
> > >>
> > >> So I think all we're actually interested in at the moment, is (d) and
> > >> (e) and the way for (g) to get the secret back to the guest.
> > >>
> > >> Unfortunately the comms and the contents of them varies heavily with
> > >> technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES)
> > >> while in some you're talking to the guest after boot (SEV-SNP/TDX maybe
> > >> SEV-ES in some cases).
> > 
> > SEV-ES has pre-launch measurement and secret injection, just like SEV
> > (except that the measurement includes the initial states of all vcpus,
> > that is, their VMSAs.  BTW that means that in order to calculate the
> > measurement the Attestation Server must know exactly how many vcpus are
> > in the VM).
> 
> You need the number of vCPUs and an idea of what their initial state
> is going to be, to be able to reproduce the same VMSA struct in the
> Attestation Server.
> 
> This may tie the Attestation Server with a particular version of both
> QEMU and KVM. I haven't checked if configuration changes in QEMU may
> also have an impact on it.

That's all OK; I'm expecting the attestation server to be given a whole
pile of information about the apparent environment to check.

Dave

> Sergio.


-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25 15:40           ` Dr. David Alan Gilbert
@ 2021-11-25 15:56             ` Daniel P. Berrangé
  2021-11-25 16:08               ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 26+ messages in thread
From: Daniel P. Berrangé @ 2021-11-25 15:56 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Hubertus Franke, Sergio Lopez, afrosi, James Bottomley,
	qemu-devel, Dov Murik, Tyler Fanelli, Tobin Feldman-Fitzthum,
	Jim Cadden, dinechin, John Ferlan

On Thu, Nov 25, 2021 at 03:40:36PM +0000, Dr. David Alan Gilbert wrote:
> * Sergio Lopez (slp@redhat.com) wrote:
> > On Thu, Nov 25, 2021 at 02:44:51PM +0200, Dov Murik wrote:
> > > 
> > > SEV-ES has pre-launch measurement and secret injection, just like SEV
> > > (except that the measurement includes the initial states of all vcpus,
> > > that is, their VMSAs.  BTW that means that in order to calculate the
> > > measurement the Attestation Server must know exactly how many vcpus are
> > > in the VM).
> > 
> > You need the number of vCPUs and an idea of what their initial state
> > is going to be, to be able to reproduce the same VMSA struct in the
> > Attestation Server.
> > 
> > This may tie the Attestation Server with a particular version of both
> > QEMU and KVM. I haven't checked if configuration changes in QEMU may
> > also have an impact on it.
> 
> That's all OK; I'm expecting the attestation server to be given a whole
> pile of information about the apparent environment to check.

Generally though we try not to let a VM to tied to a specific
version of software. eg use machine types to ensure that the
guest can run on any QEMU version, and get the same environment.
This lets host admin upgrade the host software for bug/security
fixes without negatively impacting users. It'd be nice not to
loose that feature with SEV if possible.

IOW, if there are aspects of the vCPU initial state that might
vary over time with different QEMU versions, should we be looking
to tie that variance into the machine type version.

For KVM changes, this might again come back to the idea fo a
"host type version".

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25 15:56             ` Daniel P. Berrangé
@ 2021-11-25 16:08               ` Dr. David Alan Gilbert
  2021-11-29 13:33                 ` Dov Murik
  0 siblings, 1 reply; 26+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-25 16:08 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Hubertus Franke, Sergio Lopez, afrosi, James Bottomley,
	qemu-devel, Dov Murik, Tyler Fanelli, Tobin Feldman-Fitzthum,
	Jim Cadden, dinechin, John Ferlan

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Thu, Nov 25, 2021 at 03:40:36PM +0000, Dr. David Alan Gilbert wrote:
> > * Sergio Lopez (slp@redhat.com) wrote:
> > > On Thu, Nov 25, 2021 at 02:44:51PM +0200, Dov Murik wrote:
> > > > 
> > > > SEV-ES has pre-launch measurement and secret injection, just like SEV
> > > > (except that the measurement includes the initial states of all vcpus,
> > > > that is, their VMSAs.  BTW that means that in order to calculate the
> > > > measurement the Attestation Server must know exactly how many vcpus are
> > > > in the VM).
> > > 
> > > You need the number of vCPUs and an idea of what their initial state
> > > is going to be, to be able to reproduce the same VMSA struct in the
> > > Attestation Server.
> > > 
> > > This may tie the Attestation Server with a particular version of both
> > > QEMU and KVM. I haven't checked if configuration changes in QEMU may
> > > also have an impact on it.
> > 
> > That's all OK; I'm expecting the attestation server to be given a whole
> > pile of information about the apparent environment to check.
> 
> Generally though we try not to let a VM to tied to a specific
> version of software. eg use machine types to ensure that the
> guest can run on any QEMU version, and get the same environment.
> This lets host admin upgrade the host software for bug/security
> fixes without negatively impacting users. It'd be nice not to
> loose that feature with SEV if possible.
> 
> IOW, if there are aspects of the vCPU initial state that might
> vary over time with different QEMU versions, should we be looking
> to tie that variance into the machine type version.

It's not tied to a particular version; but you may need to let the
attesting server know what version it's using so that it can check
everything adds up.

Dave

> For KVM changes, this might again come back to the idea fo a
> "host type version".
> 
> Regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25 16:08               ` Dr. David Alan Gilbert
@ 2021-11-29 13:33                 ` Dov Murik
  0 siblings, 0 replies; 26+ messages in thread
From: Dov Murik @ 2021-11-29 13:33 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Daniel P. Berrangé
  Cc: Dov Murik, Sergio Lopez, afrosi, James Bottomley, qemu-devel,
	Hubertus Franke, Tyler Fanelli, Tobin Feldman-Fitzthum,
	Jim Cadden, dinechin, John Ferlan



On 25/11/2021 18:08, Dr. David Alan Gilbert wrote:
> * Daniel P. Berrangé (berrange@redhat.com) wrote:
>> On Thu, Nov 25, 2021 at 03:40:36PM +0000, Dr. David Alan Gilbert wrote:
>>> * Sergio Lopez (slp@redhat.com) wrote:
>>>> On Thu, Nov 25, 2021 at 02:44:51PM +0200, Dov Murik wrote:
>>>>>
>>>>> SEV-ES has pre-launch measurement and secret injection, just like SEV
>>>>> (except that the measurement includes the initial states of all vcpus,
>>>>> that is, their VMSAs.  BTW that means that in order to calculate the
>>>>> measurement the Attestation Server must know exactly how many vcpus are
>>>>> in the VM).
>>>>
>>>> You need the number of vCPUs and an idea of what their initial state
>>>> is going to be, to be able to reproduce the same VMSA struct in the
>>>> Attestation Server.
>>>>
>>>> This may tie the Attestation Server with a particular version of both
>>>> QEMU and KVM. I haven't checked if configuration changes in QEMU may
>>>> also have an impact on it.
>>>
>>> That's all OK; I'm expecting the attestation server to be given a whole
>>> pile of information about the apparent environment to check.
>>
>> Generally though we try not to let a VM to tied to a specific
>> version of software. eg use machine types to ensure that the
>> guest can run on any QEMU version, and get the same environment.
>> This lets host admin upgrade the host software for bug/security
>> fixes without negatively impacting users. It'd be nice not to
>> loose that feature with SEV if possible.
>>
>> IOW, if there are aspects of the vCPU initial state that might
>> vary over time with different QEMU versions, should we be looking
>> to tie that variance into the machine type version.
> 
> It's not tied to a particular version; but you may need to let the
> attesting server know what version it's using so that it can check
> everything adds up.


To further complicate things, note that in SEV-ES the reset vector
address (CS:IP) for all APs is not set by QEMU, but taken from GUIDed
tables in OVMF (towards the end of the image); QEMU parses the table and
takes the reset address from there.  So a benign-looking change in OVMF
(changing the AP reset vector address) might cause a change in the
VMSAs, and therefore a change in the measurement.

Of course the OVMF binary itself is part of the measurement as well.

-Dov



> 
> Dave
> 
>> For KVM changes, this might again come back to the idea fo a
>> "host type version".
>>
>> Regards,
>> Daniel
>> -- 
>> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
>> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
>> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
>>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-25 13:59           ` Dov Murik
@ 2021-11-29 14:29             ` Brijesh Singh
  2021-11-29 14:49               ` Brijesh Singh
  0 siblings, 1 reply; 26+ messages in thread
From: Brijesh Singh @ 2021-11-29 14:29 UTC (permalink / raw)
  To: Dov Murik, Daniel P. Berrangé, Tom Lendacky
  Cc: brijesh.singh, Sergio Lopez, Dr. David Alan Gilbert,
	Tyler Fanelli, afrosi, qemu-devel, dinechin, John Ferlan,
	James Bottomley, Tobin Feldman-Fitzthum, Jim Cadden,
	Hubertus Franke



On 11/25/21 7:59 AM, Dov Murik wrote:
> [+cc Tom, Brijesh]
> 
> On 25/11/2021 15:42, Daniel P. Berrangé wrote:
>> On Thu, Nov 25, 2021 at 02:44:51PM +0200, Dov Murik wrote:
>>> [+cc jejb, tobin, jim, hubertus]
>>>
>>>
>>> On 25/11/2021 9:14, Sergio Lopez wrote:
>>>> On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
>>>>> * Daniel P. Berrangé (berrange@redhat.com) wrote:
>>>>>> On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> We recently discussed a way for remote SEV guest attestation through QEMU.
>>>>>>> My initial approach was to get data needed for attestation through different
>>>>>>> QMP commands (all of which are already available, so no changes required
>>>>>>> there), deriving hashes and certificate data; and collecting all of this
>>>>>>> into a new QMP struct (SevLaunchStart, which would include the VM's policy,
>>>>>>> secret, and GPA) which would need to be upstreamed into QEMU. Once this is
>>>>>>> provided, QEMU would then need to have support for attestation before a VM
>>>>>>> is started. Upon speaking to Dave about this proposal, he mentioned that
>>>>>>> this may not be the best approach, as some situations would render the
>>>>>>> attestation unavailable, such as the instance where a VM is running in a
>>>>>>> cloud, and a guest owner would like to perform attestation via QMP (a likely
>>>>>>> scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP
>>>>>>> commands, as this could be an issue.
>>>>>>
>>>>>> As a general point, QMP is a low level QEMU implementation detail,
>>>>>> which is generally expected to be consumed exclusively on the host
>>>>>> by a privileged mgmt layer, which will in turn expose its own higher
>>>>>> level APIs to users or other apps. I would not expect to see QMP
>>>>>> exposed to anything outside of the privileged host layer.
>>>>>>
>>>>>> We also use the QAPI protocol for QEMU guest agent commmunication,
>>>>>> however, that is a distinct service from QMP on the host. It shares
>>>>>> most infra with QMP but has a completely diffent command set. On the
>>>>>> host it is not consumed inside QEMU, but instead consumed by a
>>>>>> mgmt app like libvirt.
>>>>>>
>>>>>>> So I ask, does anyone involved in QEMU's SEV implementation have any input
>>>>>>> on a quality way to perform guest attestation? If so, I'd be interested.
>>>>>>
>>>>>> I think what's missing is some clearer illustrations of how this
>>>>>> feature is expected to be consumed in some real world application
>>>>>> and the use cases we're trying to solve.
>>>>>>
>>>>>> I'd like to understand how it should fit in with common libvirt
>>>>>> applications across the different virtualization management
>>>>>> scenarios - eg virsh (command line),  virt-manger (local desktop
>>>>>> GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
>>>>>> And of course any non-traditional virt use cases that might be
>>>>>> relevant such as Kata.
>>>>>
>>>>> That's still not that clear; I know Alice and Sergio have some ideas
>>>>> (cc'd).
>>>>> There's also some standardisation efforts (e.g. https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.potaroo.net%2Fietf%2Fhtml%2Fids-wg-rats.html&data=04%7C01%7Cbrijesh.singh%40amd.com%7C3c94b09f0cd5450460a808d9b01be1f8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637734456065941078%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=E%2FeaI6JNF2ckosTeAbFRaCZUJOZ3zG0GNfKP8082INQ%3D&reserved=0
>>>>> and https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.ietf.org%2Farchive%2Fid%2Fdraft-ietf-rats-architecture-00.html&data=04%7C01%7Cbrijesh.singh%40amd.com%7C3c94b09f0cd5450460a808d9b01be1f8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637734456065951077%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=WEkMIZZp3O5Gyay5jZT8KSUH9fyarNfXy5O0Z%2FpHdnQ%3D&reserved=0
>>>>> ) - that I can't claim to fully understand.
>>>>> However, there are some themes that are emerging:
>>>>>
>>>>>    a) One use is to only allow a VM to access some private data once we
>>>>> prove it's the VM we expect running in a secure/confidential system
>>>>>    b) (a) normally involves requesting some proof from the VM and then
>>>>> providing it some confidential data/a key if it's OK
>>>>>    c) RATs splits the problem up:
>>>>>      https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.ietf.org%2Farchive%2Fid%2Fdraft-ietf-rats-architecture-00.html%23name-architectural-overview&data=04%7C01%7Cbrijesh.singh%40amd.com%7C3c94b09f0cd5450460a808d9b01be1f8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637734456065951077%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=%2FwNFMGAfojFZyGIj79D5%2BW%2BRPPuwumJiqIrf5UVrkPU%3D&reserved=0
>>>>>      I don't fully understand the split yet, but in principal there are
>>>>> at least a few different things:
>>>>>
>>>>>    d) The comms layer
>>>>>    e) Something that validates the attestation message (i.e. the
>>>>> signatures are valid, the hashes all add up etc)
>>>>>    f) Something that knows what hashes to expect (i.e. oh that's a RHEL
>>>>> 8.4 kernel, or that's a valid kernel command line)
>>>>>    g) Something that holds some secrets that can be handed out if e & f
>>>>> are happy.
>>>>>
>>>>>    There have also been proposals (e.g. Intel HTTPA) for an attestable
>>>>> connection after a VM is running; that's probably quite different from
>>>>> (g) but still involves (e) & (f).
>>>>>
>>>>> In the simpler setups d,e,f,g probably live in one place; but it's not
>>>>> clear where they live - for example one scenario says that your cloud
>>>>> management layer holds some of them, another says you don't trust your
>>>>> cloud management layer and you keep them separate.
>>>>>
>>>>> So I think all we're actually interested in at the moment, is (d) and
>>>>> (e) and the way for (g) to get the secret back to the guest.
>>>>>
>>>>> Unfortunately the comms and the contents of them varies heavily with
>>>>> technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES)
>>>>> while in some you're talking to the guest after boot (SEV-SNP/TDX maybe
>>>>> SEV-ES in some cases).
>>>
>>> SEV-ES has pre-launch measurement and secret injection, just like SEV
>>> (except that the measurement includes the initial states of all vcpus,
>>> that is, their VMSAs.  BTW that means that in order to calculate the
>>> measurement the Attestation Server must know exactly how many vcpus are
>>> in the VM).
>>
>> Does that work with CPU hotplug ? ie cold boot with -smp 4,maxcpus=8
>> and some time later try to enable the extra 4 cpus at runtime ?
>>
> 
> AFAIK all generations of SEV don't support CPU hotplug. Tom, Brijesh -
> is that indeed the case?
> 

I think we may able to do a CPU hotplug on SEV but hotplug will not work 
for the SEV-ES and SEV-SNP. This is mainly because the register state 
need to be measured before the boot.

> I don't know about TDX.
> 
> -Dov
> 
> 
>>
>>>>> So my expectation at the moment is libvirt needs to provide a transport
>>>>> layer for the comms, to enable an external validator to retrieve the
>>>>> measurements from the guest/hypervisor and provide data back if
>>>>> necessary.  Once this shakes out a bit, we might want libvirt to be
>>>>> able to invoke the validator; however I expect (f) and (g) to be much
>>>>> more complex things that don't feel like they belong in libvirt.
>>>>
>>>> We experimented with the attestation flow quite a bit while working on
>>>> SEV-ES support for libkrun-tee. One important aspect we noticed quite
>>>> early, is that there's more data that's needed to be exchange of top
>>>> of the attestation itself.
>>>>
>>>> For instance, even before you start the VM, the management layer in
>>>> charge of coordinating the confidential VM launch needs to obtain the
>>>> Virtualization TEE capabilities of the Host (SEV-ES vs. SEV-SNP
>>>> vs. TDX) and the platform version, to know which features are
>>>> available and whether that host is a candidate for running the VM at
>>>> all.
>>>>
>>>> With that information, the mgmt layer can build a guest policy (this
>>>> is SEV's terminology, but I guess we'll have something similar in
>>>> TDX) and feed it to component launching the VMM (libvirt, in this
>>>> case).
>>>>
>>>> For SEV-SNP, this is pretty much the end of the story, because the
>>>> attestation exchange is driven by an agent inside the guest. Well,
>>>> there's also the need to have in the VM a well-known vNIC bridged to a
>>>> network that's routed to the Attestation Server, that everyone seems
>>>> to consider a given, but to me, from a CSP perspective, looks like
>>>> quite a headache. In fact, I'd go as far as to suggest this
>>>> communication should happen through an alternative channel, such as
>>>> vsock, having a proxy on the Host, but I guess that depends on the CSP
>>>> infrastructure.
>>>
>>> If we have an alternative channel (vsock?) and a proxy on the host,
>>> maybe we can share parts of the solution between SEV and SNP.
>>>
>>>
>>>> For SEV/SEV-ES, as the attestation happens at the VMM level, there's
>>>> still the need to have some interactions with it. As Tyler pointed
>>>> out, we basically need to retrieve the measurement and, if valid,
>>>> inject the secret. If the measurement isn't valid, the VM must be shut
>>>> down immediately.
>>>>
>>>> In libkrun-tee, this operation is driven by the VMM in libkrun, which
>>>> contacts the Attestation Server with the measurement and receives the
>>>> secret in exchange. I guess for QEMU/libvirt we expect this to be
>>>> driven by the upper management layer through a delegated component in
>>>> the Host, such as NOVA. In this case, NOVA would need to:
>>>>
>>>>   - Based on the upper management layer info and the Host properties,
>>>>     generate a guest policy and use it while generating the compute
>>>>     instance XML.
>>>>
>>>>   - Ask libvirt to launch the VM.
>>>
>>> Launch the VM with -S (suspended; so it doesn't actually begin running
>>> guest instructions).
>>>
>>>
>>>>
>>>>   - Wait for the VM to be in SEV_STATE_LAUNCH_SECRET state *.
>>>>
>>>>   - Retrieve the measurement *.
>>>
>>> Note that libvirt holds the QMP socket to QEMU.  So whoever fetches the
>>> measurement needs either (a) to ask libvirt to it; or (b) to connect to
>>> another QMP listening socket for getting the measurement and injecting
>>> the secret.
>>
>> Libvirt would not be particularly happy with allowing (b) because it
>> enables the 3rd parties to change the VM state behind libvirt's back
>> in ways that can ultimately confuse its understanding of the state
>> of the VM. If there's some task that needs  interaction with a QEMU
>> managed by libvirt, we need to expose suitable APIs in libvirt (if
>> they don't already exist).
>>
>>
>> Regards,
>> Daniel
>>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: SEV guest attestation
  2021-11-29 14:29             ` Brijesh Singh
@ 2021-11-29 14:49               ` Brijesh Singh
  0 siblings, 0 replies; 26+ messages in thread
From: Brijesh Singh @ 2021-11-29 14:49 UTC (permalink / raw)
  To: Dov Murik, Daniel P. Berrangé, Tom Lendacky
  Cc: brijesh.singh, Sergio Lopez, Dr. David Alan Gilbert,
	Tyler Fanelli, afrosi, qemu-devel, dinechin, John Ferlan,
	James Bottomley, Tobin Feldman-Fitzthum, Jim Cadden,
	Hubertus Franke



On 11/29/21 8:29 AM, Brijesh Singh wrote:
> 
> 
> On 11/25/21 7:59 AM, Dov Murik wrote:
>> [+cc Tom, Brijesh]
>>
>> On 25/11/2021 15:42, Daniel P. Berrangé wrote:
>>> On Thu, Nov 25, 2021 at 02:44:51PM +0200, Dov Murik wrote:
>>>> [+cc jejb, tobin, jim, hubertus]
>>>>
>>>>
>>>> On 25/11/2021 9:14, Sergio Lopez wrote:
>>>>> On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert 
>>>>> wrote:
>>>>>> * Daniel P. Berrangé (berrange@redhat.com) wrote:
>>>>>>> On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> We recently discussed a way for remote SEV guest attestation 
>>>>>>>> through QEMU.
>>>>>>>> My initial approach was to get data needed for attestation 
>>>>>>>> through different
>>>>>>>> QMP commands (all of which are already available, so no changes 
>>>>>>>> required
>>>>>>>> there), deriving hashes and certificate data; and collecting all 
>>>>>>>> of this
>>>>>>>> into a new QMP struct (SevLaunchStart, which would include the 
>>>>>>>> VM's policy,
>>>>>>>> secret, and GPA) which would need to be upstreamed into QEMU. 
>>>>>>>> Once this is
>>>>>>>> provided, QEMU would then need to have support for attestation 
>>>>>>>> before a VM
>>>>>>>> is started. Upon speaking to Dave about this proposal, he 
>>>>>>>> mentioned that
>>>>>>>> this may not be the best approach, as some situations would 
>>>>>>>> render the
>>>>>>>> attestation unavailable, such as the instance where a VM is 
>>>>>>>> running in a
>>>>>>>> cloud, and a guest owner would like to perform attestation via 
>>>>>>>> QMP (a likely
>>>>>>>> scenario), yet a cloud provider cannot simply let anyone pass 
>>>>>>>> arbitrary QMP
>>>>>>>> commands, as this could be an issue.
>>>>>>>
>>>>>>> As a general point, QMP is a low level QEMU implementation detail,
>>>>>>> which is generally expected to be consumed exclusively on the host
>>>>>>> by a privileged mgmt layer, which will in turn expose its own higher
>>>>>>> level APIs to users or other apps. I would not expect to see QMP
>>>>>>> exposed to anything outside of the privileged host layer.
>>>>>>>
>>>>>>> We also use the QAPI protocol for QEMU guest agent commmunication,
>>>>>>> however, that is a distinct service from QMP on the host. It shares
>>>>>>> most infra with QMP but has a completely diffent command set. On the
>>>>>>> host it is not consumed inside QEMU, but instead consumed by a
>>>>>>> mgmt app like libvirt.
>>>>>>>
>>>>>>>> So I ask, does anyone involved in QEMU's SEV implementation have 
>>>>>>>> any input
>>>>>>>> on a quality way to perform guest attestation? If so, I'd be 
>>>>>>>> interested.
>>>>>>>
>>>>>>> I think what's missing is some clearer illustrations of how this
>>>>>>> feature is expected to be consumed in some real world application
>>>>>>> and the use cases we're trying to solve.
>>>>>>>
>>>>>>> I'd like to understand how it should fit in with common libvirt
>>>>>>> applications across the different virtualization management
>>>>>>> scenarios - eg virsh (command line),  virt-manger (local desktop
>>>>>>> GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
>>>>>>> And of course any non-traditional virt use cases that might be
>>>>>>> relevant such as Kata.
>>>>>>
>>>>>> That's still not that clear; I know Alice and Sergio have some ideas
>>>>>> (cc'd).
>>>>>> There's also some standardisation efforts (e.g. 
>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.potaroo.net%2Fietf%2Fhtml%2Fids-wg-rats.html&data=04%7C01%7Cbrijesh.singh%40amd.com%7C3c94b09f0cd5450460a808d9b01be1f8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637734456065941078%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=E%2FeaI6JNF2ckosTeAbFRaCZUJOZ3zG0GNfKP8082INQ%3D&reserved=0 
>>>>>>
>>>>>> and 
>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.ietf.org%2Farchive%2Fid%2Fdraft-ietf-rats-architecture-00.html&data=04%7C01%7Cbrijesh.singh%40amd.com%7C3c94b09f0cd5450460a808d9b01be1f8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637734456065951077%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=WEkMIZZp3O5Gyay5jZT8KSUH9fyarNfXy5O0Z%2FpHdnQ%3D&reserved=0 
>>>>>>
>>>>>> ) - that I can't claim to fully understand.
>>>>>> However, there are some themes that are emerging:
>>>>>>
>>>>>>    a) One use is to only allow a VM to access some private data 
>>>>>> once we
>>>>>> prove it's the VM we expect running in a secure/confidential system
>>>>>>    b) (a) normally involves requesting some proof from the VM and 
>>>>>> then
>>>>>> providing it some confidential data/a key if it's OK
>>>>>>    c) RATs splits the problem up:
>>>>>>      
>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.ietf.org%2Farchive%2Fid%2Fdraft-ietf-rats-architecture-00.html%23name-architectural-overview&data=04%7C01%7Cbrijesh.singh%40amd.com%7C3c94b09f0cd5450460a808d9b01be1f8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637734456065951077%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=%2FwNFMGAfojFZyGIj79D5%2BW%2BRPPuwumJiqIrf5UVrkPU%3D&reserved=0 
>>>>>>
>>>>>>      I don't fully understand the split yet, but in principal 
>>>>>> there are
>>>>>> at least a few different things:
>>>>>>
>>>>>>    d) The comms layer
>>>>>>    e) Something that validates the attestation message (i.e. the
>>>>>> signatures are valid, the hashes all add up etc)
>>>>>>    f) Something that knows what hashes to expect (i.e. oh that's a 
>>>>>> RHEL
>>>>>> 8.4 kernel, or that's a valid kernel command line)
>>>>>>    g) Something that holds some secrets that can be handed out if 
>>>>>> e & f
>>>>>> are happy.
>>>>>>
>>>>>>    There have also been proposals (e.g. Intel HTTPA) for an 
>>>>>> attestable
>>>>>> connection after a VM is running; that's probably quite different 
>>>>>> from
>>>>>> (g) but still involves (e) & (f).
>>>>>>
>>>>>> In the simpler setups d,e,f,g probably live in one place; but it's 
>>>>>> not
>>>>>> clear where they live - for example one scenario says that your cloud
>>>>>> management layer holds some of them, another says you don't trust 
>>>>>> your
>>>>>> cloud management layer and you keep them separate.
>>>>>>
>>>>>> So I think all we're actually interested in at the moment, is (d) and
>>>>>> (e) and the way for (g) to get the secret back to the guest.
>>>>>>
>>>>>> Unfortunately the comms and the contents of them varies heavily with
>>>>>> technology; in some you're talking to the qemu/hypervisor 
>>>>>> (SEV/SEV-ES)
>>>>>> while in some you're talking to the guest after boot (SEV-SNP/TDX 
>>>>>> maybe
>>>>>> SEV-ES in some cases).
>>>>
>>>> SEV-ES has pre-launch measurement and secret injection, just like SEV
>>>> (except that the measurement includes the initial states of all vcpus,
>>>> that is, their VMSAs.  BTW that means that in order to calculate the
>>>> measurement the Attestation Server must know exactly how many vcpus are
>>>> in the VM).
>>>
>>> Does that work with CPU hotplug ? ie cold boot with -smp 4,maxcpus=8
>>> and some time later try to enable the extra 4 cpus at runtime ?
>>>
>>
>> AFAIK all generations of SEV don't support CPU hotplug. Tom, Brijesh -
>> is that indeed the case?
>>
> 
> I think we may able to do a CPU hotplug on SEV but hotplug will not work 
> for the SEV-ES and SEV-SNP. This is mainly because the register state 
> need to be measured before the boot.

Tom just pointed me out, theoretically we could do a hotplug of CPUs 
under the SEV-SNP but I will need to check the security team just to be 
sure that we are good from the attestation flow. I can update you guys 
on it.

thanks

> 
>> I don't know about TDX.
>>
>> -Dov
>>
>>
>>>
>>>>>> So my expectation at the moment is libvirt needs to provide a 
>>>>>> transport
>>>>>> layer for the comms, to enable an external validator to retrieve the
>>>>>> measurements from the guest/hypervisor and provide data back if
>>>>>> necessary.  Once this shakes out a bit, we might want libvirt to be
>>>>>> able to invoke the validator; however I expect (f) and (g) to be much
>>>>>> more complex things that don't feel like they belong in libvirt.
>>>>>
>>>>> We experimented with the attestation flow quite a bit while working on
>>>>> SEV-ES support for libkrun-tee. One important aspect we noticed quite
>>>>> early, is that there's more data that's needed to be exchange of top
>>>>> of the attestation itself.
>>>>>
>>>>> For instance, even before you start the VM, the management layer in
>>>>> charge of coordinating the confidential VM launch needs to obtain the
>>>>> Virtualization TEE capabilities of the Host (SEV-ES vs. SEV-SNP
>>>>> vs. TDX) and the platform version, to know which features are
>>>>> available and whether that host is a candidate for running the VM at
>>>>> all.
>>>>>
>>>>> With that information, the mgmt layer can build a guest policy (this
>>>>> is SEV's terminology, but I guess we'll have something similar in
>>>>> TDX) and feed it to component launching the VMM (libvirt, in this
>>>>> case).
>>>>>
>>>>> For SEV-SNP, this is pretty much the end of the story, because the
>>>>> attestation exchange is driven by an agent inside the guest. Well,
>>>>> there's also the need to have in the VM a well-known vNIC bridged to a
>>>>> network that's routed to the Attestation Server, that everyone seems
>>>>> to consider a given, but to me, from a CSP perspective, looks like
>>>>> quite a headache. In fact, I'd go as far as to suggest this
>>>>> communication should happen through an alternative channel, such as
>>>>> vsock, having a proxy on the Host, but I guess that depends on the CSP
>>>>> infrastructure.
>>>>
>>>> If we have an alternative channel (vsock?) and a proxy on the host,
>>>> maybe we can share parts of the solution between SEV and SNP.
>>>>
>>>>
>>>>> For SEV/SEV-ES, as the attestation happens at the VMM level, there's
>>>>> still the need to have some interactions with it. As Tyler pointed
>>>>> out, we basically need to retrieve the measurement and, if valid,
>>>>> inject the secret. If the measurement isn't valid, the VM must be shut
>>>>> down immediately.
>>>>>
>>>>> In libkrun-tee, this operation is driven by the VMM in libkrun, which
>>>>> contacts the Attestation Server with the measurement and receives the
>>>>> secret in exchange. I guess for QEMU/libvirt we expect this to be
>>>>> driven by the upper management layer through a delegated component in
>>>>> the Host, such as NOVA. In this case, NOVA would need to:
>>>>>
>>>>>   - Based on the upper management layer info and the Host properties,
>>>>>     generate a guest policy and use it while generating the compute
>>>>>     instance XML.
>>>>>
>>>>>   - Ask libvirt to launch the VM.
>>>>
>>>> Launch the VM with -S (suspended; so it doesn't actually begin running
>>>> guest instructions).
>>>>
>>>>
>>>>>
>>>>>   - Wait for the VM to be in SEV_STATE_LAUNCH_SECRET state *.
>>>>>
>>>>>   - Retrieve the measurement *.
>>>>
>>>> Note that libvirt holds the QMP socket to QEMU.  So whoever fetches the
>>>> measurement needs either (a) to ask libvirt to it; or (b) to connect to
>>>> another QMP listening socket for getting the measurement and injecting
>>>> the secret.
>>>
>>> Libvirt would not be particularly happy with allowing (b) because it
>>> enables the 3rd parties to change the VM state behind libvirt's back
>>> in ways that can ultimately confuse its understanding of the state
>>> of the VM. If there's some task that needs  interaction with a QEMU
>>> managed by libvirt, we need to expose suitable APIs in libvirt (if
>>> they don't already exist).
>>>
>>>
>>> Regards,
>>> Daniel
>>>


^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2021-11-29 14:50 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2021-11-24 16:34 SEV guest attestation Tyler Fanelli
2021-11-24 17:27 ` Tyler Fanelli
2021-11-24 17:49 ` Dr. David Alan Gilbert
2021-11-24 18:29   ` Tyler Fanelli
2021-11-24 17:57 ` Daniel P. Berrangé
2021-11-24 18:29   ` Dr. David Alan Gilbert
2021-11-25  7:14     ` Sergio Lopez
2021-11-25 12:44       ` Dov Murik
2021-11-25 13:42         ` Daniel P. Berrangé
2021-11-25 13:59           ` Dov Murik
2021-11-29 14:29             ` Brijesh Singh
2021-11-29 14:49               ` Brijesh Singh
2021-11-25 15:11         ` Sergio Lopez
2021-11-25 15:40           ` Dr. David Alan Gilbert
2021-11-25 15:56             ` Daniel P. Berrangé
2021-11-25 16:08               ` Dr. David Alan Gilbert
2021-11-29 13:33                 ` Dov Murik
2021-11-25 13:20       ` Dr. David Alan Gilbert
2021-11-25 13:36       ` Daniel P. Berrangé
2021-11-25 13:52       ` Daniel P. Berrangé
2021-11-25 13:55         ` Dov Murik
2021-11-25 15:00         ` Dr. David Alan Gilbert
2021-11-25 13:27     ` Daniel P. Berrangé
2021-11-25 13:50       ` Dov Murik
2021-11-25 13:56         ` Daniel P. Berrangé
2021-11-25 15:19       ` Dr. David Alan Gilbert

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).