From: Vipin Sharma <vipinsh@google.com>
To: Sean Christopherson <seanjc@google.com>, rientjes@google.com
Cc: Janosch Frank <frankja@linux.ibm.com>,
Christian Borntraeger <borntraeger@de.ibm.com>,
"Lendacky, Thomas" <thomas.lendacky@amd.com>,
pbonzini@redhat.com, tj@kernel.org, lizefan@huawei.com,
joro@8bytes.org, corbet@lwn.net,
"Singh, Brijesh" <brijesh.singh@amd.com>,
"Grimm, Jon" <jon.grimm@amd.com>,
"VanTassell, Eric" <eric.vantassell@amd.com>,
gingell@google.com, kvm@vger.kernel.org, x86@kernel.org,
cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [RFC Patch 0/2] KVM: SVM: Cgroup support for SVM SEV ASIDs
Date: Tue, 24 Nov 2020 11:49:04 -0800 [thread overview]
Message-ID: <20201124194904.GA45519@google.com> (raw)
In-Reply-To: <20201124191629.GB235281@google.com>
On Tue, Nov 24, 2020 at 07:16:29PM +0000, Sean Christopherson wrote:
> On Fri, Nov 13, 2020, David Rientjes wrote:
> >
> > On Mon, 2 Nov 2020, Sean Christopherson wrote:
> >
> > > On Fri, Oct 02, 2020 at 01:48:10PM -0700, Vipin Sharma wrote:
> > > > On Fri, Sep 25, 2020 at 03:22:20PM -0700, Vipin Sharma wrote:
> > > > > I agree with you that the abstract name is better than the concrete
> > > > > name; I also feel that we must provide HW extensions. Here is one
> > > > > approach:
> > > > >
> > > > > Cgroup name: cpu_encryption, encryption_slots, or memcrypt (open to
> > > > > suggestions)
> > > > >
> > > > > Control files: slots.{max, current, events}
> > >
> > > I don't particularly like the "slots" name, mostly because it could be confused
> > > with KVM's memslots. Maybe encryption_ids.ids.{max, current, events}? I don't
> > > love those names either, but "encryption" and "IDs" are the two obvious
> > > commonalities between TDX's encryption key IDs and SEV's encryption address
> > > space IDs.
> > >
> >
> > Looping Janosch and Christian back into the thread.
> >
> > I interpret this suggestion as
> > encryption.{sev,sev_es,keyids}.{max,current,events} for AMD and Intel
>
> I think it makes sense to use encryption_ids instead of simply encryption, that
> way it's clear the cgroup is accounting ids as opposed to restricting what
> techs can be used on a yes/no basis.
>
> > offerings, which was my thought on this as well.
> >
> > Certainly the kernel could provide a single interface for all of these,
> > with key-value pairs depending on the underlying encryption technology,
> > but that only seems to introduce additional string-parsing complexity in
> > the kernel that can otherwise be avoided. I think we all agree that a single
> > interface for all encryption keys or one-value-per-file could be done in
> > the kernel and handled by any userspace agent that is configuring these
> > values.
> >
> > I think Vipin is adding a root level file that describes how many keys we
> > have available on the platform for each technology. So I think this comes
> > down to, for example, a single encryption.max file vs
> > encryption.{sev,sev_es,keyid}.max. SEV and SEV-ES ASIDs are provisioned
>
> Are you suggesting that the cgroup omit "current" and "events"? I agree there's
> no need to enumerate platform total, but not knowing how many of the allowed IDs
> have been allocated seems problematic.
>
We will be showing encryption_ids.{sev,sev_es}.{max,current}.

I am inclined not to provide "events" since I am not using it; let me
know if this file is required and I can add it.

I will also provide an encryption_ids.{sev,sev_es}.stat file, which
shows the total number of IDs available on the platform. This will be
useful for scheduling jobs in cloud infrastructure based on the total
supported capacity.
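
For illustration, usage from inside a cgroup could look roughly like
this (file names follow the proposal above, and the numbers are made
up; the exact layout is still open per the naming discussion):

  # platform capacity, reported by the stat file
  $ cat encryption_ids.sev.stat
  509

  # limit this cgroup to 10 SEV ASIDs
  $ echo 10 > encryption_ids.sev.max
  $ cat encryption_ids.sev.max
  10

  # ASIDs currently charged to this cgroup
  $ cat encryption_ids.sev.current
  3
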
> > separately so we treat them as their own resource here.
> >
> > So which is easier?
> >
> > $ cat encryption.sev.max
> > 10
> > $ echo -n 15 > encryption.sev.max
> >
> > or
> >
> > $ cat encryption.max
> > sev 10
> > sev_es 10
> > keyid 0
> > $ echo -n "sev 10" > encryption.max
> >
> > I would argue the former is simplest (always preferring
> > one-value-per-file) and avoids any string parsing or resource controller
> > lookups that need to match on that string in the kernel.
>
> Ya, I prefer individual files as well.
>
> I don't think "keyid" is the best name for TDX, it doesn't leave any wiggle room
> if there are other flavors of key IDs on Intel platforms, e.g. private vs. shared
> in the future. It's also inconsistent with the SEV names, e.g. "asid" isn't
> mentioned anywhere. And "keyid" sort of reads as "max key id", rather than "max
> number of keyids". Maybe "tdx_private", or simply "tdx"? Doesn't have to be
> solved now though, there's plenty of time before TDX will be upstream. :-)
>
> > The set of encryption.{sev,sev_es,keyid} files that exist would depend on
> > CONFIG_CGROUP_ENCRYPTION and whether CONFIG_AMD_MEM_ENCRYPT or
> > CONFIG_INTEL_TDX is configured. Both can be configured so we have all
> > three files, but the root file will obviously indicate 0 keys available
> > for one of them (can't run on AMD and Intel at the same time :).
> >
> > So I'm inclined to suggest that the one-value-per-file format is the ideal
> > way to go unless there are objections to it.
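
One-value-per-file works for me as well. For completeness, here is a
rough sketch of configuring a hierarchy with these names; the
controller name written to cgroup.subtree_control ("encryption_ids")
is hypothetical at this point and the limits are made up:

  # enable the controller for children of the root
  $ echo +encryption_ids > /sys/fs/cgroup/cgroup.subtree_control
  $ mkdir /sys/fs/cgroup/vm-jobs
  $ echo 10 > /sys/fs/cgroup/vm-jobs/encryption_ids.sev.max

  # delegate part of that limit to a child cgroup
  $ echo +encryption_ids > /sys/fs/cgroup/vm-jobs/cgroup.subtree_control
  $ mkdir /sys/fs/cgroup/vm-jobs/tenant-1
  $ echo 4 > /sys/fs/cgroup/vm-jobs/tenant-1/encryption_ids.sev.max

  # tenant-1 can never hold more than 4 SEV ASIDs, and the vm-jobs
  # subtree never more than 10, regardless of platform capacity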
Thread overview: 29+ messages
2020-09-22 0:40 [RFC Patch 0/2] KVM: SVM: Cgroup support for SVM SEV ASIDs Vipin Sharma
2020-09-22 0:40 ` [RFC Patch 1/2] KVM: SVM: Create SEV cgroup controller Vipin Sharma
2020-09-22 1:04 ` Randy Dunlap
2020-09-22 1:22 ` Sean Christopherson
2020-09-22 16:05 ` Vipin Sharma
2020-11-03 16:39 ` James Bottomley
2020-11-03 18:10 ` Sean Christopherson
2020-11-03 22:43 ` James Bottomley
2020-09-22 0:40 ` [RFC Patch 2/2] KVM: SVM: SEV cgroup controller documentation Vipin Sharma
2020-09-22 1:48 ` [RFC Patch 0/2] KVM: SVM: Cgroup support for SVM SEV ASIDs Sean Christopherson
2020-09-22 21:14 ` Vipin Sharma
[not found] ` <20200924192116.GC9649@linux.intel.com>
2020-09-24 19:55 ` Tom Lendacky
2020-09-25 22:22 ` Vipin Sharma
2020-10-02 20:48 ` Vipin Sharma
2020-11-03 2:06 ` Sean Christopherson
2020-11-14 0:26 ` David Rientjes
2020-11-24 19:16 ` Sean Christopherson
2020-11-24 19:49 ` Vipin Sharma [this message]
2020-11-24 20:18 ` David Rientjes
2020-11-24 21:08 ` Vipin Sharma
2020-11-24 21:27 ` Sean Christopherson
2020-11-24 22:21 ` Vipin Sharma
2020-11-24 23:18 ` Sean Christopherson
2020-11-27 18:01 ` Christian Borntraeger
2020-10-01 18:08 ` Peter Gonda
2020-10-01 22:44 ` Tom Lendacky
2020-09-23 12:47 ` Paolo Bonzini
2020-09-28 9:12 ` Janosch Frank
2020-09-28 9:21 ` Christian Borntraeger