From: "Daniel P. Berrangé" <berrange@redhat.com>
To: Dov Murik <dovmurik@linux.ibm.com>
Cc: linux-coco@lists.linux.dev,
	Tobin Feldman-Fitzthum <tobin@linux.ibm.com>,
	James Bottomley <jejb@linux.ibm.com>,
	amd-sev-snp@lists.suse.com,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: Secure vTPMs for confidential VMs
Date: Wed, 21 Sep 2022 10:36:13 +0100	[thread overview]
Message-ID: <YyrbDbB1HJ9juGV1@redhat.com> (raw)
In-Reply-To: <84d6ee10-ff8a-a121-d62f-19becf400e75@linux.ibm.com>

On Tue, Sep 20, 2022 at 11:28:15PM +0300, Dov Murik wrote:
> Emulating hardware TPMs has an advantage that guest software already
> uses TPM devices to measure boot sequence components (firmware,
> bootloader, kernel, initrd) and runtime events (IMA in Linux).  We know
> that this currently works with vTPMs backed by the VMM implementation,
> such as QEMU's tpm device which is connected to swtpm running on the
> host.

Leveraging pre-existing support in the guest OS feels pretty compelling.
There is clearly a lot of maintainer activity across the Linux
software/distro stack around improving support for SecureBoot and
(v)TPMs in general. Being able to take advantage of this would be good
for confidential computing, by reducing the burden on software/distro
maintainers, and giving users technology that they are (in theory) at
least somewhat familiar with already.

If we can drive the confidential compute specific bits, including
the attestation of the confidential hardware, from the guest firmware,
then it ought to make it easier for guest OS images to be agnostic as
to whether they're running a non-confidential or confidential VM.

Whether to use a confidential VM then becomes a deployment decision
the user can make at each launch attempt. e.g. they could have one
image and run it in a non-confidential VM on their internal cloud,
while using a confidential VM on a public cloud when they need to
scale out their resources.


This would not be so straightforward with some of the alternative
proposals for confidential VM disk images. For example, another
proposal has been to have a bootloader like grub embedded in the
firmware, such that even /boot is encrypted in the disk image and
gets keys provided for unlock prior to the OS being launched.

This would make that disk image inherently incompatible with use in a
non-confidential VM, as well as requiring OS vendors to ship yet more
cloud disk image variants and support different boot processes in
their software stack.


So overall I'm heavily attracted to re-using existing technology
to the greatest extent that is practical. It makes confidential
computing "normal" and will facilitate its uptake.

> We so far recognized three issues that should be further researched in
> order to implement secure vTPMs for confidential VMs; these are TPM
> provisioning, implementations in TEEs, and guest enlightenment.
> 
> * TPM provisioning: The TPM contains sensitive information such as EK
> private key which should not be accessible to the host and to the guest.
> How should such information be delivered to the vTPM when starting a new
> VM?  If we provision encrypted NVDATA, who has the key to decrypt it?
> If we provision it with "classic" TEE secret injection, we need to do it
> quite early in the VM launch sequence (even before the firmware starts?).

For it to be transparent to the guest OS, the vTPM state would need
to be unlocked prior to the guest OS being launched. This points
towards the confidential VM firmware triggering an initial call to the
attestation service, and receiving a key to unlock the vTPM state
as a response.

It is likely that the guest OS owner would want the option to perform
another attestation later in boot, to validate the broader OS userspace
boot status.  IOW, the firmware-initiated attestation handles aspects
specific to bootstrapping the confidential VM environment, while an
OS-initiated attestation would handle the generic (pre-existing) use
cases for OS state validation, familiar to anyone already using (v)TPMs.
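
Very roughly, the split I have in mind is sketched below. To be clear
this is a toy: every function in it is a made-up placeholder, not any
real firmware, SVSM or attestation service API.

  /*
   * Toy sketch only: placeholders for the firmware / attestation
   * service interaction, not an actual API.
   */
  #include <stdio.h>

  /* Stage 1: before the OS runs, the confidential VM firmware sends the
   * hardware attestation evidence to the relying party and, on success,
   * receives the key protecting the persistent vTPM state. */
  static int firmware_stage(void)
  {
      printf("fw: fetch hardware attestation report\n");
      printf("fw: send report to attestation service\n");
      printf("fw: receive vTPM state key, unlock NVDATA\n");
      return 0;                 /* boot continues with a working vTPM */
  }

  /* Stage 2: the booted OS uses the vTPM exactly as it would use a
   * discrete TPM or swtpm-backed vTPM today: PCR quotes, IMA, unsealing. */
  static void os_stage(void)
  {
      printf("os: measured boot + IMA validated via TPM quote, as usual\n");
  }

  int main(void)
  {
      if (firmware_stage() == 0)
          os_stage();
      return 0;
  }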

> One suggestion is to use an ephemeral EK, generated at launch by the
> vTPM.  The system may start to boot using such a TPM, but once we want
> to unseal secrets (for example, to unlock a LUKS partition), we need
> something persistent inside the TPM (or re-seal the key for each TPM).
> Ephemeral TPMs might be a useful first step.

If the motivation for using vTPMs is to take advantage of pre-existing
TPM support in the guest OS, then IMHO we should be aiming for the vTPM
to be on a par with a vTPM in a non-confidential VM / on bare metal.  An
ephemeral-only vTPM would lose some (but not all) of the benefit of
targeting pre-existing TPM support in guests.
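
As a deliberately silly toy illustration of why that persistence
matters (no real TPM crypto below, just a stand-in for the way sealed
blobs are bound to the TPM's persistent seed):

  /* Toy model: a "sealed" blob is bound to the TPM's persistent seed,
   * so a seed regenerated at every VM launch (an ephemeral vTPM) cannot
   * unseal blobs created on a previous boot. */
  #include <stdio.h>
  #include <string.h>

  #define SEED_LEN 16

  static void toy_wrap(const unsigned char seed[SEED_LEN],
                       const unsigned char *in, unsigned char *out,
                       size_t len)
  {
      for (size_t i = 0; i < len; i++)
          out[i] = in[i] ^ seed[i % SEED_LEN]; /* stand-in for wrapping */
  }

  int main(void)
  {
      const unsigned char secret[] = "luks-unlock-key";
      unsigned char seed_boot1[SEED_LEN], seed_boot2[SEED_LEN];
      unsigned char blob[sizeof(secret)], out[sizeof(secret)];

      for (int i = 0; i < SEED_LEN; i++) {    /* two different "launches" */
          seed_boot1[i] = (unsigned char)(7 * i + 3);
          seed_boot2[i] = (unsigned char)(11 * i + 5);
      }

      toy_wrap(seed_boot1, secret, blob, sizeof(secret)); /* seal, boot 1 */

      toy_wrap(seed_boot1, blob, out, sizeof(secret));    /* same seed    */
      printf("persistent seed: %s\n",
             memcmp(out, secret, sizeof(secret)) == 0 ? "unsealed" : "garbage");

      toy_wrap(seed_boot2, blob, out, sizeof(secret));    /* fresh seed   */
      printf("ephemeral seed:  %s\n",
             memcmp(out, secret, sizeof(secret)) == 0 ? "unsealed" : "garbage");
      return 0;
  }

With an ephemeral vTPM the second case is what every relaunch looks
like, hence the need to re-seal for each TPM that Dov mentions.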


> * Implementation in TEEs: SNP introduced VMPLs, and AMD's linux-SVSM
> running in VMPL0 can also run vTPM code to handle TPM requests from the
> guest running in VMPL1.  Such a solution is not applicable as-is to
> other TEEs (SEV, TDX).  People suggested running vTPMs in separate
> confidential VMs, and somehow connect the tenant's guest to the TPM VM;
> but we'll need a way to secure this communication channel.

TDX is obviously an important target, but I'm not sure it's worth
worrying too much about SEV/SEV-ES, as that generation is inherently
limited & flawed compared to current SEV-SNP. The only thing in favour
of SEV/SEV-ES is broader hardware availability today, but that will be
a time-limited advantage that erodes as SEV-SNP deployment expands.

> * Guest enlightenment: Guest software currently interacts with the TPM by
> writing commands to a memory-mapped IO page (GPA 0xfed40000) and reading
> responses from that page.  We want such writes to trigger the code of
> our vTPM (for whatever implementation we choose).  Our current early
> experience with TPM running in linux-SVSM required adding "exit-guest"
> calls after writing commands to the IO page, in order to allow the SVSM
> to run and recognize the incoming command.  Ideally, we'd like a
> solution that doesn't require modifying all the TPM drivers out there
> (in Linux, Windows, OVMF, grub, ...).

As best I could tell from looking at the public Ubuntu confidential VM
image published in Azure, there were no modifications to the TPM-related
pieces of the stack. So it appears theoretically possible to achieve,
but I have no idea how they do it at a technical level.
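
For anyone who hasn't looked at that interface, the shape of what an
unmodified CRB-style driver does is roughly the following. This is a
simulated toy rather than real driver code: in reality those
"registers" live in the MMIO page Dov mentioned, and the interesting
question is what causes the backend to run at all.

  #include <stdint.h>
  #include <stdio.h>

  #define CRB_START_INVOKE 1u

  /* Stand-in for the control area + buffers at GPA 0xfed40000; plain
   * memory here, offsets and names are illustrative only. */
  struct fake_crb {
      volatile uint32_t ctrl_start; /* driver sets, device clears when done */
      uint8_t cmd_buf[4096];        /* TPM command written here             */
      uint8_t rsp_buf[4096];        /* TPM response read back from here     */
  };

  /* Stand-in for the vTPM backend.  The whole question in the SVSM case
   * is what makes this run: a plain write to private guest memory does
   * not trap anywhere by itself. */
  static void vtpm_backend_process(struct fake_crb *crb)
  {
      crb->rsp_buf[0] = 0x80;       /* pretend a response was produced */
      crb->ctrl_start = 0;          /* signal completion               */
  }

  int main(void)
  {
      struct fake_crb crb = { 0 };

      crb.cmd_buf[0] = 0x80;              /* driver: write the command    */
      crb.ctrl_start = CRB_START_INVOKE;  /* driver: kick the device      */

      vtpm_backend_process(&crb);         /* simulated; see comment above */

      while (crb.ctrl_start != 0)         /* driver: poll for completion  */
          ;
      printf("driver: response byte 0x%02x\n", crb.rsp_buf[0]);
      return 0;
  }

With swtpm behind QEMU the write to the start register traps to the
VMM, so the backend runs "for free"; with a vTPM in SVSM/VMPL0 a plain
write to private guest memory traps nothing, which is presumably why
the extra exit-guest calls were needed in the experiment described
above.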


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

