From: "Daniel P. Berrangé" <berrange@redhat.com>
To: Dov Murik <dovmurik@linux.ibm.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>,
Ashish Kalra <ashish.kalra@amd.com>,
Brijesh Singh <brijesh.singh@amd.com>,
Markus Armbruster <armbru@redhat.com>,
James Bottomley <jejb@linux.ibm.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
qemu-devel@nongnu.org,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
Tobin Feldman-Fitzthum <tobin@linux.ibm.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Eric Blake <eblake@redhat.com>
Subject: Re: [PATCH v2] qapi, target/i386/sev: Add cpu0-id to query-sev-capabilities
Date: Thu, 24 Feb 2022 08:59:22 +0000 [thread overview]
Message-ID: <YhdI6iRAFSvaL7wr@redhat.com> (raw)
In-Reply-To: <20220224061405.1959756-1-dovmurik@linux.ibm.com>
On Thu, Feb 24, 2022 at 06:14:05AM +0000, Dov Murik wrote:
> Add a new field 'cpu0-id' to the response of query-sev-capabilities QMP
> command. The value of the field is the base64-encoded 64-byte unique ID
> of CPU0 (socket 0), which can be used to retrieve the signed CEK of
> the CPU from AMD's Key Distribution Service (KDS).
>
> Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
>
> ---
>
> v2:
> - change encoding to Base64 (thanks Daniel)
> - rename constant to SEV_CPU_UNIQUE_ID_LEN
> ---
> qapi/misc-target.json | 4 ++++
> target/i386/sev.c | 27 +++++++++++++++++++++++++++
> 2 files changed, 31 insertions(+)
>
> diff --git a/qapi/misc-target.json b/qapi/misc-target.json
> index 4bc45d2474..c6d9ad69e1 100644
> --- a/qapi/misc-target.json
> +++ b/qapi/misc-target.json
> @@ -177,6 +177,8 @@
> #
> # @cert-chain: PDH certificate chain (base64 encoded)
> #
> +# @cpu0-id: 64-byte unique ID of CPU0 (base64 encoded) (since 7.0)
> +#
> # @cbitpos: C-bit location in page table entry
> #
> # @reduced-phys-bits: Number of physical Address bit reduction when SEV is
> @@ -187,6 +189,7 @@
> { 'struct': 'SevCapability',
> 'data': { 'pdh': 'str',
> 'cert-chain': 'str',
> + 'cpu0-id': 'str',
> 'cbitpos': 'int',
> 'reduced-phys-bits': 'int'},
> 'if': 'TARGET_I386' }
> @@ -205,6 +208,7 @@
> #
> # -> { "execute": "query-sev-capabilities" }
> # <- { "return": { "pdh": "8CCDD8DDD", "cert-chain": "888CCCDDDEE",
> +# "cpu0-id": "2lvmGwo+...61iEinw==",
> # "cbitpos": 47, "reduced-phys-bits": 5}}
> #
> ##
> diff --git a/target/i386/sev.c b/target/i386/sev.c
> index 025ff7a6f8..d3d2680e16 100644
> --- a/target/i386/sev.c
> +++ b/target/i386/sev.c
> @@ -82,6 +82,8 @@ struct SevGuestState {
> #define DEFAULT_GUEST_POLICY 0x1 /* disable debug */
> #define DEFAULT_SEV_DEVICE "/dev/sev"
>
> +#define SEV_CPU_UNIQUE_ID_LEN 64
Is this fixed size actually guaranteed by AMD? Reading the spec for
"GET_ID", it feels like they're intentionally not specifying a fixed
length, pushing towards querying once and then retrying with the
reported buffer size:

  "If the value of the ID_LEN field is too small, an INVALID_LENGTH
   error is returned and the minimum required length is written to
   the field"

I didn't find where it says 64 bytes exactly.
> +
> #define SEV_INFO_BLOCK_GUID "00f771de-1a7e-4fcb-890e-68c77e2fb44e"
> typedef struct __attribute__((__packed__)) SevInfoBlock {
> /* SEV-ES Reset Vector Address */
> @@ -531,11 +533,31 @@ e_free:
> return 1;
> }
>
> +static int
> +sev_get_id(int fd, guchar *id_buf, size_t id_buf_len, Error **errp)
> +{
> + struct sev_user_data_get_id2 id = {
> + .address = (unsigned long)id_buf,
> + .length = id_buf_len
> + };
> + int err, r;
> +
> + r = sev_platform_ioctl(fd, SEV_GET_ID2, &id, &err);
> + if (r < 0) {
> + error_setg(errp, "SEV: Failed to get ID ret=%d fw_err=%d (%s)",
> + r, err, fw_error_to_str(err));
> + return 1;
> + }
> +
> + return 0;
> +}
> +
> static SevCapability *sev_get_capabilities(Error **errp)
> {
> SevCapability *cap = NULL;
> guchar *pdh_data = NULL;
> guchar *cert_chain_data = NULL;
> + guchar cpu0_id[SEV_CPU_UNIQUE_ID_LEN];
> size_t pdh_len = 0, cert_chain_len = 0;
> uint32_t ebx;
> int fd;
> @@ -561,9 +583,14 @@ static SevCapability *sev_get_capabilities(Error **errp)
> goto out;
> }
>
> + if (sev_get_id(fd, cpu0_id, sizeof(cpu0_id), errp)) {
> + goto out;
> + }
> +
> cap = g_new0(SevCapability, 1);
> cap->pdh = g_base64_encode(pdh_data, pdh_len);
> cap->cert_chain = g_base64_encode(cert_chain_data, cert_chain_len);
> + cap->cpu0_id = g_base64_encode(cpu0_id, sizeof(cpu0_id));
>
> host_cpuid(0x8000001F, 0, NULL, &ebx, NULL, NULL);
> cap->cbitpos = ebx & 0x3f;
> --
> 2.25.1
>
Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
Thread overview: 3+ messages
2022-02-24 6:14 [PATCH v2] qapi, target/i386/sev: Add cpu0-id to query-sev-capabilities Dov Murik
2022-02-24 8:59 ` Daniel P. Berrangé [this message]
2022-02-24 10:27 ` Dov Murik