From: Tom Lendacky <thomas.lendacky@amd.com>
To: "Jörg Rödel" <jroedel@suse.de>
Cc: "linux-coco@lists.linux.dev" <linux-coco@lists.linux.dev>,
"amd-sev-snp@lists.suse.com" <amd-sev-snp@lists.suse.com>
Subject: Re: SVSM Attestation and vTPM specification additions - v0.61
Date: Wed, 1 Mar 2023 09:00:23 -0600
Message-ID: <88f87fe5-e503-5faf-c36e-13595443a69b@amd.com>
In-Reply-To: <Y/jGcMhXfEm8MCYK@suse.de>
On 2/24/23 08:15, Jörg Rödel wrote:
> On Tue, Feb 21, 2023 at 04:07:46PM -0600, Tom Lendacky wrote:
>> VCPU create doesn't actually replace the VMSA from the hypervisor point of
>> view. The guest is still required to invoke the GHCB interface to notify the
>> hypervisor.
>>
>> The SVSM PoC leaves any current VMSA in the list of all VMSAs, but does
>> replace the current / in-use version of the VMSA for the specified VCPU.
>> This is used to examine the register associated with a request, so the guest
>> can shoot itself in the foot if it doesn't switch that VCPU to the new VMSA.
>>
>> I'm ok with either doing that or, as you suggest below, checking if there's
>> an existing VMSA for the VMPL level and failing the request until the
>> existing VMSA is deleted. That does require the deletion/creation to be
>> offloaded to another VCPU, though (unless we allow the self-deletion change
>> talked about below).
>
> Yeah, so the current protocol expects a bit too much from the code
> running at VMPL1 for my personal taste. I think it would be better if we
> put some constraints into the handling of CREATE_VCPU and DELETE_VCPU
> calls to keep the number of pages marked as VMSA as small as possible.
>
> With the current semantics the guest is expected to call DELETE_VCPU on
> any VMSA it no longer uses, but it is not forced to do so. Also, there
> is a race window between switching to the new VMSA and deleting the old
> one where the HV could run the old and the new VMSA in parallel.
>
> The SVSM does not get to see the old VMSA in order to retire it (e.g.
> clear EFER.SVME and make it invalid) until the DELETE_VCPU call, but by
> that time the new VMSA could already be running.
We could specify that a create vCPU call, where there is an existing VMSA
at the specified VMPL level, results in the existing VMSA being "deleted"
(an implicit delete vCPU call) before the create vCPU is performed. If the
old VMSA is executing, this would block the creation until the old VMSA
can be deleted.
The guest can get itself into trouble no matter what if it does not
quiesce the target vCPU before calling either delete vCPU or create vCPU,
so I think the above would work ok.
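Roughly what I'm thinking of on the SVSM side, as a sketch only (the type,
helper and error names below are made up for illustration, they're not
from the PoC):

  /*
   * Sketch: create vCPU with an implicit delete of any existing VMSA for
   * the (vCPU, VMPL) pair.  All names are made up for illustration.
   */
  static int svsm_create_vcpu(struct svsm_vcpu *vcpu, unsigned int vmpl,
                              u64 new_vmsa_gpa)
  {
          struct vmsa_page *old = vcpu->vmsa[vmpl];

          if (old) {
                  /* Implicit delete vCPU: retire the old VMSA first. */
                  if (!vmsa_clear_svme(old))
                          return FAIL_INUSE;   /* old VMSA still executing */

                  /* RMPADJUST to clear the VMSA page attribute. */
                  vmsa_clear_page_attr(old);
                  vcpu->vmsa[vmpl] = NULL;
          }

          return vmsa_install(vcpu, vmpl, new_vmsa_gpa);
  }

That way an old VMSA can never be left behind still marked as a VMSA page.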
>
> I don't think we can fully prevent such attacks, but we can make them
> more complicated by putting some constraints on who can create and
> delete VCPUs, and when.
>
> In particular:
>
> 1) VCPUs can only delete themselves. Exception is VCPU 0
> (or whichever VCPU the BSP is).
I don't think we should have this limitation. We don't have the limitation
today when the kernel is running at VMPL0, so not sure we should have it
here. If the guest wants to do it, then it is really up to them as I don't
think it's an attack against the SVSM, only against itself.
>
> 2) VCPUs can only be created for the same or a higher (i.e. less
> privileged) VMPL on any VCPU in the guest, provided that there is no
> VMSA configured for this (VCPU, VMPL) combination.
Agreed, we need to verify that the caller is not trying to create a VMSA
at a higher VMPL privilege level than the one it is currently running at.
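i.e. something along these lines in the create vCPU path (variable and
error names are made up for illustration):

  /*
   * VMPL0 is the most privileged level, so a numerically lower VMPL means
   * a higher privilege.  Reject any request to create a VMSA that is more
   * privileged than the caller.
   */
  if (requested_vmpl < caller_vmpl)
          return FAIL_INVALID_PARAMETER;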
>
> These are constraints the SVSM can enforce and it should keep the number
> of VMSAs in the system minimal.
>
> It requires some changes to the current OVMF patches for SNP and SVSM,
> though.
>
>> It might be desirable to allow another vCPU to delete a VMSA. Maybe kexec
>> processing would be able to benefit from that.
>
> Yeah, maybe. On the other hand, kexec would benefit if VCPUs delete
> themselves in the old kernel. Otherwise the new kernel needs to find the
> VMSA GPAs and delete them manually.
>
> There is already a code path for kexec/kdump where the non-boot CPUs are
> parked. This could be extended to do the self-deletion. If a VCPU deletes
> itself the SVSM will go into HLT until a new VCPU is created.
Yes, this would need to be documented in the spec to be sure we get the
proper behavior on a self-deletion.
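E.g. something along these lines for the vCPU that just deleted its own
VMSA (sketch only, helper names are made up; how the parked vCPU is
eventually resumed is a separate question):

  /*
   * Self-deletion: retire this vCPU's own VMSA for the given VMPL, then
   * park in HLT until the guest creates a new VMSA for it.
   */
  vmsa_retire(vcpu, vmpl);

  while (!vcpu_has_vmsa(vcpu, vmpl))
          __asm__ volatile("hlt" ::: "memory");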
Thanks,
Tom
>
>> Should the clearing of the SVME bit fail, then a FAIL_INUSE return code
>> would be returned. I wouldn't think that the synchronization would be that
>> difficult, but I'm not sure. If we do allow self-deletion, then, I agree,
>> that would work nicely.
>
> The question here is, can we reliably detect within the SVSM that
> clearing EFER.SVME failed?
>
> Regards,
>