From: Jason Chen CJ <jason.cj.chen@intel.com>
To: Sean Christopherson <seanjc@google.com>
Cc: <kvm@vger.kernel.org>
Subject: Re: [RFC PATCH part-1 0/5] pKVM on Intel Platform Introduction
Date: Tue, 14 Mar 2023 16:17:35 +0000
Message-ID: <ZBCeH5JB14Gl3wOM@jiechen-ubuntu-dev>
In-Reply-To: <ZA9QZcADubkx/3Ev@google.com>
On Mon, Mar 13, 2023 at 09:33:41AM -0700, Sean Christopherson wrote:
> On Mon, Mar 13, 2023, Jason Chen CJ wrote:
> > There are similar use cases on x86 platforms requesting protected
> > environment which is isolated from host OS for confidential computing.
>
> What exactly are those use cases? The more details you can provide, the better.
> E.g. restricting the isolated VMs to 64-bit mode a la TDX would likely simplify
> the pKVM implementation.
Thanks, Sean, for your comments; I really appreciate them!
We expect to run protected VMs with a general-purpose OS, possibly with
pass-through secure device support.
Yes, restricting the isolated (protected) VMs to 64-bit mode could
simplify the pKVM implementation, and I think it should be considered.
In particular, it could benefit VMCS isolation for protected VMs, which
echoes your comments on VMX emulation.
But we have a pain point supporting normal VMs. TDX SEAM only takes
care of protected VMs; it has a dedicated secure EPT, TDCS, etc. per
protected VM, while normal VMs still go through the existing KVM logic,
where the legacy EPT, VMCS, and so on remain.
For pKVM, we must rely on the EPT, VMCS, and IOMMU to do the isolation,
so they move into the hypervisor, and KVM-high needs to manage them
through pKVM for both normal and protected VMs:
- for the EPT: technically, both the paravirtualization and emulation
  approaches work; we chose EPT emulation only because we did not want
  to change the KVM x86 MMU code. I am open to switching to the
  paravirtualization approach, especially after the TDX patches are
  merged; we can leverage them, with extra consideration for supporting
  normal VMs.
- for the VMCS: it is trickier. The best solution is for normal VMs to
  run with emulated VMX and see the full VMCS feature set, while
  protected VMs run with paravirtualized VMX and a limited feature set
  (which simplifies VMCS isolation and management in pKVM).
- for the IOMMU: the situation is similar to the EPT.
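To make the intended VMCS split concrete, here is a minimal, purely
illustrative C sketch (this is not actual pKVM code; the whitelist and
the `pkvm_vm` structure are assumptions made up for this example, though
the two VMCS field encodings are the real ones from the SDM). A normal
VM gets full emulated VMX access to any field, while a protected VM is
limited to a small paravirtualized subset:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative sketch only: real VMCS field encodings (Intel SDM
 * Vol. 3, Appendix B), but a made-up per-VM access policy.
 */
#define VMCS_EPT_POINTER 0x201AU
#define VMCS_GUEST_RIP   0x681EU

struct pkvm_vm {
	int protected_vm;	/* 1 = protected, 0 = normal */
};

static int vmcs_field_allowed(const struct pkvm_vm *vm, uint32_t field)
{
	if (!vm->protected_vm)
		return 1;	/* normal VM: full emulated VMX, any field */

	/* protected VM: only a small paravirtualized whitelist */
	switch (field) {
	case VMCS_GUEST_RIP:
		return 1;
	default:
		return 0;	/* e.g. the EPT pointer stays hypervisor-private */
	}
}
```

The point of the split is that the hypervisor's isolation logic only
has to reason about the small whitelisted set for protected VMs.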
>
> > HW solutions e.g. TDX [5] also exist to support above use cases. But
> > they are available only on very new platforms. Hence having a software
> > solution on massive existing platforms is also plausible.
>
> TDX is a software solution, not a hardware solution. TDX relies on hardware features
> that are only present in bleeding edge CPUs, e.g. SEAM, but TDX itself is software.
Agree.
>
> I bring that up because this RFC, especially since it's being posted by folks
> from Intel, raises the question: why not utilize SEAM to implement pKVM for x86?
Some feedback above. I suppose SEAM can be leveraged to support
protected VMs, but with some further questions:
- how do we support normal VMs? If we accept the tradeoff of limiting
  normal VMs' features (to the same set as protected VMs), things
  become easier, but I don't think that is friendly to end users. If we
  want normal VMs to run everything KVM can run today, we need to add
  extra code to SEAM.
- do we want to follow the same interface? My feeling is that TDX
  interfaces such as the SEAMCALLs for the SEPT (PAGE.ADD/AUG,
  SEPT.ADD, etc.) are complicated; for pKVM, we can use simpler, more
  straightforward hypercalls like host_donate_guest, host_donate_hyp,
  host_share_guest, and so on. Furthermore, in a protected VM (a TD
  guest in TDX terms), PAGE.ACCEPT may not be needed for pKVM, and page
  sharing (based on the SHARED_BIT) may also be implemented differently
  in pKVM.
- do we want to leverage a page ownership mechanism like the PAMT? I
  have to say that pKVM already has its own page-state management
  mechanism that can easily be used.
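A toy C model of the kind of page ownership tracking behind hypercalls
like host_donate_guest and host_share_guest (these names appear above;
everything else here, including the states and the flat metadata array,
is a simplified assumption, not the real pKVM implementation, which
keeps similar owned/shared state per page in its vmemmap):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative page-state tracker; not real pKVM code. */
enum page_owner { OWNER_HOST, OWNER_GUEST, OWNER_HYP };
enum page_state { STATE_OWNED, STATE_SHARED };

struct page_meta {
	enum page_owner owner;
	enum page_state state;
};

#define NR_PAGES 8
/* Zero-initialized: every page starts host-owned and unshared. */
static struct page_meta pages[NR_PAGES];

/* Host gives a page away entirely: only a host-owned, unshared page qualifies. */
static int host_donate_guest(unsigned long pfn)
{
	struct page_meta *p = &pages[pfn];

	if (p->owner != OWNER_HOST || p->state != STATE_OWNED)
		return -1;	/* would leak or double-donate the page */
	p->owner = OWNER_GUEST;
	return 0;
}

/* Host keeps ownership but lets the guest map the page as well. */
static int host_share_guest(unsigned long pfn)
{
	struct page_meta *p = &pages[pfn];

	if (p->owner != OWNER_HOST || p->state != STATE_OWNED)
		return -1;
	p->state = STATE_SHARED;
	return 0;
}
```

The value of such a tracker is that every donate/share hypercall can be
validated against the current owner and state, so a page can never be
mapped by two mutually distrusting entities at once.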
May I know whether your suggestion to "utilize SEAM" means following
the TDX spec to work out a SW-TDX solution, or just borrowing some code
from SEAM?
--
Thanks
Jason CJ Chen