From: "Jörg Rödel" <jroedel@suse.de>
To: Tom Lendacky <thomas.lendacky@amd.com>
Cc: "linux-coco@lists.linux.dev" <linux-coco@lists.linux.dev>,
"amd-sev-snp@lists.suse.com" <amd-sev-snp@lists.suse.com>
Subject: Re: SVSM Attestation and vTPM specification additions - v0.61
Date: Fri, 24 Feb 2023 15:15:12 +0100
Message-ID: <Y/jGcMhXfEm8MCYK@suse.de>
In-Reply-To: <069c74a5-2735-adba-5748-090e80550c9e@amd.com>
On Tue, Feb 21, 2023 at 04:07:46PM -0600, Tom Lendacky wrote:
> VCPU create doesn't actually replace the VMSA from the hypervisor point of
> view. The guest is still required to invoke the GHCB interface to notify the
> hypervisor.
>
> The SVSM PoC leaves any current VMSA in the list of all VMSAs, but does
> replace the current / in-use version of the VMSA for the specified VCPU.
> This is used to examine the register associated with a request, so the guest
> can shoot itself in the foot if it doesn't switch that VCPU to the new VMSA.
>
> I'm ok with either doing that or, as you suggest below, checking if there's
> an existing VMSA for the VMPL level and failing the request until the
> existing VMSA is deleted. That does require the deletion/creation to be
> offloaded to another VCPU, though (unless we allow the self-deletion change
> talked about below).
Yeah, so the current protocol expects a bit too much from the code
running at VMPL1 for my taste. I think it would be better to put some
constraints on the handling of CREATE_VCPU and DELETE_VCPU calls so that
the number of pages marked as VMSA stays as small as possible.
With the current semantics the guest is expected to call DELETE_VCPU for
any VMSA it no longer uses, but nothing forces it to do so. There is
also a race window between switching to the new VMSA and deleting the
old one, in which the HV could run the old and the new VMSA in parallel.
The SVSM does not get a chance to retire the old VMSA (e.g. clear
EFER.SVME to make it invalid) until the DELETE_VCPU call, and by that
time the new VMSA could already be running.
I don't think we can fully prevent such attacks, but we can make them
more complicated by putting some constraints on who can create and
delete VCPUs, and when.
In particular:
	1) A VCPU can only delete itself. The exception is VCPU 0
	   (or whichever VCPU the BSP is), which can also delete
	   other VCPUs.
	2) A VMSA can be created for the same or a higher VMPL on any
	   VCPU in the guest, provided that no VMSA is currently
	   configured for that (VCPU, VMPL) combination.
These are constraints the SVSM can enforce, and they should keep the
number of VMSAs in the system minimal. They do require some changes to
the current OVMF patches for SNP and SVSM, though.
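
In code, the checks I have in mind look roughly like this (untested
sketch; all structure and error names are made up and do not match the
PoC):

#include <stdbool.h>
#include <stdint.h>

/* Made-up error codes, just for this sketch. */
#define SVSM_SUCCESS		0
#define SVSM_FAIL_PERMISSION	1
#define SVSM_FAIL_INUSE		2

#define NR_VMPLS		4

struct vcpu_state {
	uint32_t	apic_id;
	uint64_t	vmsa_gpa[NR_VMPLS];	/* 0 == no VMSA configured */
};

/* Clearing EFER.SVME and dropping the RMP VMSA attribute would happen
 * here; this stub only drops the bookkeeping. */
static int retire_vmsa(struct vcpu_state *vcpu, uint32_t vmpl)
{
	vcpu->vmsa_gpa[vmpl] = 0;
	return SVSM_SUCCESS;
}

/* Constraint 1: a VCPU may only delete its own VMSAs; only the BSP may
 * also delete VMSAs of other VCPUs. */
static int handle_delete_vcpu(const struct vcpu_state *caller,
			      struct vcpu_state *target, uint32_t vmpl,
			      uint32_t bsp_apic_id)
{
	if (caller->apic_id != target->apic_id &&
	    caller->apic_id != bsp_apic_id)
		return SVSM_FAIL_PERMISSION;

	if (vmpl >= NR_VMPLS || !target->vmsa_gpa[vmpl])
		return SVSM_FAIL_PERMISSION;

	return retire_vmsa(target, vmpl);
}

/* Constraint 2: a VMSA may only be created for the caller's own VMPL or
 * a higher (less privileged) one, and only if no VMSA is configured yet
 * for this (VCPU, VMPL) combination. */
static int handle_create_vcpu(uint32_t caller_vmpl, struct vcpu_state *target,
			      uint32_t vmpl, uint64_t vmsa_gpa)
{
	if (vmpl < caller_vmpl || vmpl >= NR_VMPLS)
		return SVSM_FAIL_PERMISSION;

	if (target->vmsa_gpa[vmpl])
		return SVSM_FAIL_INUSE;	/* delete the existing one first */

	target->vmsa_gpa[vmpl] = vmsa_gpa;
	return SVSM_SUCCESS;
}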
> It might be desirable to allow another APIC to delete a VMSA. Maybe kexec
> processing would be able to benefit from that.
Yeah, maybe. On the other hand, kexec would benefit if the VCPUs delete
themselves in the old kernel. Otherwise the new kernel needs to find the
VMSA GPAs and delete them manually.
There is already a code path for kexec/kdump where the non-boot CPUs are
parked; this could be instrumented. If a VCPU deletes itself, the SVSM
puts it into HLT until a new VMSA is created for it.
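
With the made-up structures from the sketch above, the parking path
would look roughly like this (again untested):

/* After self-deletion the SVSM parks the VCPU in HLT and only leaves
 * the loop once a CREATE_VCPU request from another VCPU has installed
 * a new VMSA for it. */
static void park_self(struct vcpu_state *self, uint32_t vmpl)
{
	retire_vmsa(self, vmpl);

	while (!self->vmsa_gpa[vmpl])
		asm volatile("hlt" ::: "memory");

	/* A new VMSA exists again; resume normal request handling. */
}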
> Should the clearing of the SVME bit fail, then a FAIL_INUSE return code
> would be returned. I wouldn't think that the synchronization would be that
> difficult, but I'm not sure. If we do allow self-deletion, then, I agree,
> that would work nicely.
The question here is, can we reliably detect within the SVSM that
clearing EFER.SVME failed?
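
To make the question more concrete, here is roughly where such a check
would have to sit (untested sketch; the RMPADJUST usage follows the
pattern from the Linux SNP code, and whether a failure there is a
reliable signal is exactly the open question):

#include <stdint.h>

#define EFER_SVME	(1ULL << 12)	/* EFER bit 12 */

/* RMPADJUST as used in the Linux SNP code: rax = target VA, rcx = page
 * size (0 == 4K), rdx = attributes (target VMPL, permission mask, VMSA
 * bit); the result comes back in rax. */
static inline uint64_t rmpadjust(uint64_t vaddr, uint64_t psize, uint64_t attrs)
{
	uint64_t ret;

	asm volatile(".byte 0xf3, 0x0f, 0x01, 0xfe"
		     : "=a" (ret)
		     : "a" (vaddr), "c" (psize), "d" (attrs)
		     : "memory", "cc");
	return ret;
}

/* vmsa_efer points at the EFER image inside the SVSM's mapping of the
 * guest VMSA page at vmsa_va. Clear SVME first, then try to revoke the
 * VMSA attribute (and permissions) for the target VMPL by passing attrs
 * without the VMSA bit. */
static int try_retire_vmsa(volatile uint64_t *vmsa_efer, uint64_t vmsa_va,
			   uint64_t target_vmpl)
{
	*vmsa_efer &= ~EFER_SVME;

	if (rmpadjust(vmsa_va, 0, target_vmpl))
		return -1;	/* report FAIL_INUSE to the caller */

	return 0;
}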
Regards,
--
Jörg Rödel
jroedel@suse.de
SUSE Software Solutions Germany GmbH
Frankenstraße 146
90461 Nürnberg
Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman