From: Vipin Sharma <vipinsh-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
To: Sean Christopherson
<seanjc-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>,
rientjes-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org
Cc: Janosch Frank <frankja-tEXmvtCZX7AybS5Ee8rs3A@public.gmane.org>,
	Christian Borntraeger <borntraeger-tA70FqPdS9bQT0dZR+AlfA@public.gmane.org>,
	Thomas Lendacky <thomas.lendacky-5C7GfCeVMHo@public.gmane.org>,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
	lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org,
	joro-zLv9SwRftAIdnm+yROfE0A@public.gmane.org,
	corbet-T1hC0tSOHrs@public.gmane.org,
	Brijesh Singh <brijesh.singh-5C7GfCeVMHo@public.gmane.org>,
	Jon Grimm <jon.grimm-5C7GfCeVMHo@public.gmane.org>,
	Eric VanTassell <eric.vantassell-5C7GfCeVMHo@public.gmane.org>,
	gingell-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org,
	kvm-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	x86-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
	cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	linux-doc-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Subject: Re: [RFC Patch 0/2] KVM: SVM: Cgroup support for SVM SEV ASIDs
Date: Tue, 24 Nov 2020 11:49:04 -0800 [thread overview]
Message-ID: <20201124194904.GA45519@google.com> (raw)
In-Reply-To: <20201124191629.GB235281-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
On Tue, Nov 24, 2020 at 07:16:29PM +0000, Sean Christopherson wrote:
> On Fri, Nov 13, 2020, David Rientjes wrote:
> >
> > On Mon, 2 Nov 2020, Sean Christopherson wrote:
> >
> > > On Fri, Oct 02, 2020 at 01:48:10PM -0700, Vipin Sharma wrote:
> > > > On Fri, Sep 25, 2020 at 03:22:20PM -0700, Vipin Sharma wrote:
> > > > > I agree with you that the abstract name is better than the concrete
> > > > > name, I also feel that we must provide HW extensions. Here is one
> > > > > approach:
> > > > >
> > > > > Cgroup name: cpu_encryption, encryption_slots, or memcrypt (open to
> > > > > suggestions)
> > > > >
> > > > > Control files: slots.{max, current, events}
> > >
> > > I don't particularly like the "slots" name, mostly because it could be confused
> > > with KVM's memslots. Maybe encryption_ids.ids.{max, current, events}? I don't
> > > love those names either, but "encryption" and "IDs" are the two obvious
> > > commonalities between TDX's encryption key IDs and SEV's encryption address
> > > space IDs.
> > >
> >
> > Looping Janosch and Christian back into the thread.
> >
> > I interpret this suggestion as
> > encryption.{sev,sev_es,keyids}.{max,current,events} for AMD and Intel
>
> I think it makes sense to use encryption_ids instead of simply encryption, that
> way it's clear the cgroup is accounting IDs as opposed to restricting which
> techs can be used on a yes/no basis.
>
> > offerings, which was my thought on this as well.
> >
> > Certainly the kernel could provide a single interface for all of these and
> > key value pairs depending on the underlying encryption technology but it
> > seems to only introduce additional complexity in the kernel in string
> > parsing that can otherwise be avoided. I think we all agree that a single
> > interface for all encryption keys or one-value-per-file could be done in
> > the kernel and handled by any userspace agent that is configuring these
> > values.
> >
> > I think Vipin is adding a root level file that describes how many keys we
> > have available on the platform for each technology. So I think this comes
> > down to, for example, a single encryption.max file vs
> > encryption.{sev,sev_es,keyid}.max. SEV and SEV-ES ASIDs are provisioned
>
> Are you suggesting that the cgroup omit "current" and "events"? I agree there's
> no need to enumerate platform total, but not knowing how many of the allowed IDs
> have been allocated seems problematic.
>
We will be showing encryption_ids.{sev,sev_es}.{max,current}.
I am inclined not to provide "events" since I am not using it; let me know
if this file is required and I can add it.
I will also provide an encryption_ids.{sev,sev_es}.stat file, which shows
the total number of IDs available on the platform. This will be useful for
scheduling jobs in cloud infrastructure based on total supported
capacity.
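
To make the proposed layout concrete, here is a short shell sketch of how
the files would be used. The paths are hypothetical (the controller is not
merged), so the cgroup directory is mocked under a temp dir purely for
illustration:

```shell
# Mock the proposed encryption_ids interface files (hypothetical names).
CG=$(mktemp -d)

echo 509 > "$CG/encryption_ids.sev.stat"     # platform total (example value)
echo max > "$CG/encryption_ids.sev.max"      # default: no limit
echo 0   > "$CG/encryption_ids.sev.current"  # ASIDs charged to this cgroup

# An admin limits this cgroup to 10 SEV ASIDs:
echo 10 > "$CG/encryption_ids.sev.max"
cat "$CG/encryption_ids.sev.max"             # -> 10
```

A scheduler would read the root-level stat file for capacity planning and
the per-cgroup max/current pair for admission control.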
> > separately so we treat them as their own resource here.
> >
> > So which is easier?
> >
> > $ cat encryption.sev.max
> > 10
> > $ echo -n 15 > encryption.sev.max
> >
> > or
> >
> > $ cat encryption.max
> > sev 10
> > sev_es 10
> > keyid 0
> > $ echo -n "sev 10" > encryption.max
> >
> > I would argue the former is simplest (always preferring
> > one-value-per-file) and avoids any string parsing or resource controller
> > lookups that need to match on that string in the kernel.
>
> Ya, I prefer individual files as well.
>
> I don't think "keyid" is the best name for TDX, it doesn't leave any wiggle room
> if there are other flavors of key IDs on Intel platform, e.g. private vs. shared
> in the future. It's also inconsistent with the SEV names, e.g. "asid" isn't
> mentioned anywhere. And "keyid" sort of reads as "max key id", rather than "max
> number of keyids". Maybe "tdx_private", or simply "tdx"? Doesn't have to be
> solved now though, there's plenty of time before TDX will be upstream. :-)
>
> > The set of encryption.{sev,sev_es,keyid} files that exist would depend on
> > CONFIG_CGROUP_ENCRYPTION and whether CONFIG_AMD_MEM_ENCRYPT or
> > CONFIG_INTEL_TDX is configured. Both can be configured so we have all
> > three files, but the root file will obviously indicate 0 keys available
> > for one of them (can't run on AMD and Intel at the same time :).
> >
> > So I'm inclined to suggest that the one-value-per-file format is the ideal
> > way to go unless there are objections to it.
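
Agreed. For illustration, the parsing difference between the two layouts
above can be mocked in a few lines of shell (hypothetical file names, since
nothing here is merged): the per-file layout is a plain read, while the
flat keyed file forces every consumer to pick its key out of the text.

```shell
# Mock both layouts under a temp dir (illustration only).
CG=$(mktemp -d)

# One-value-per-file: trivial to read and write.
echo 10 > "$CG/encryption.sev.max"
cat "$CG/encryption.sev.max"                      # -> 10

# Flat keyed file: every reader/writer must parse key/value pairs.
printf 'sev 10\nsev_es 10\nkeyid 0\n' > "$CG/encryption.max"
awk '$1 == "sev" { print $2 }' "$CG/encryption.max"   # -> 10
```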
Thread overview: 29+ messages
2020-09-22 0:40 [RFC Patch 0/2] KVM: SVM: Cgroup support for SVM SEV ASIDs Vipin Sharma
2020-09-22 0:40 ` [RFC Patch 1/2] KVM: SVM: Create SEV cgroup controller Vipin Sharma
[not found] ` <20200922004024.3699923-2-vipinsh-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2020-09-22 1:04 ` Randy Dunlap
2020-09-22 1:22 ` Sean Christopherson
[not found] ` <20200922012227.GA26483-VuQAYsv1563Yd54FQh9/CA@public.gmane.org>
2020-09-22 16:05 ` Vipin Sharma
2020-11-03 16:39 ` James Bottomley
2020-11-03 18:10 ` Sean Christopherson
2020-11-03 22:43 ` James Bottomley
[not found] ` <20200922004024.3699923-1-vipinsh-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2020-09-22 0:40 ` [RFC Patch 2/2] KVM: SVM: SEV cgroup controller documentation Vipin Sharma
2020-09-22 1:48 ` [RFC Patch 0/2] KVM: SVM: Cgroup support for SVM SEV ASIDs Sean Christopherson
[not found] ` <20200922014836.GA26507-VuQAYsv1563Yd54FQh9/CA@public.gmane.org>
2020-09-22 21:14 ` Vipin Sharma
[not found] ` <20200924192116.GC9649@linux.intel.com>
[not found] ` <20200924192116.GC9649-VuQAYsv1563Yd54FQh9/CA@public.gmane.org>
2020-09-24 19:55 ` Tom Lendacky
2020-09-25 22:22 ` Vipin Sharma
2020-10-02 20:48 ` Vipin Sharma
2020-11-03 2:06 ` Sean Christopherson
2020-11-14 0:26 ` David Rientjes
2020-11-24 19:16 ` Sean Christopherson
[not found] ` <20201124191629.GB235281-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2020-11-24 19:49 ` Vipin Sharma [this message]
2020-11-24 20:18 ` David Rientjes
2020-11-24 21:08 ` Vipin Sharma
[not found] ` <20201124210817.GA65542-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2020-11-24 21:27 ` Sean Christopherson
2020-11-24 22:21 ` Vipin Sharma
2020-11-24 23:18 ` Sean Christopherson
2020-11-27 18:01 ` Christian Borntraeger
2020-10-01 18:08 ` Peter Gonda
[not found] ` <CAMkAt6oX+18cZy_t3hm0zo-sLmTGeGs5H9YAWvj7WBU7_uwU5Q-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2020-10-01 22:44 ` Tom Lendacky
2020-09-23 12:47 ` Paolo Bonzini
2020-09-28 9:12 ` Janosch Frank
2020-09-28 9:21 ` Christian Borntraeger