From: Isaku Yamahata <isaku.yamahata@linux.intel.com>
To: "Wang, Wei W" <wei.w.wang@intel.com>
Cc: "Yamahata, Isaku" <isaku.yamahata@intel.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"isaku.yamahata@gmail.com" <isaku.yamahata@gmail.com>,
Paolo Bonzini <pbonzini@redhat.com>,
"Aktas, Erdem" <erdemaktas@google.com>, "Christopherson,,
Sean" <seanjc@google.com>, "Shahar, Sagi" <sagis@google.com>,
David Matlack <dmatlack@google.com>,
"Huang, Kai" <kai.huang@intel.com>,
Zhi Wang <zhi.wang.linux@gmail.com>,
"Chen, Bo2" <chen.bo@intel.com>,
"Yuan, Hang" <hang.yuan@intel.com>,
"Zhang, Tina" <tina.zhang@intel.com>,
"gkirkpatrick@google.com" <gkirkpatrick@google.com>,
isaku.yamahata@linux.intel.com
Subject: Re: [PATCH v16 059/116] KVM: TDX: Create initial guest memory
Date: Fri, 17 Nov 2023 12:15:23 -0800
Message-ID: <20231117201523.GD1109547@ls.amr.corp.intel.com>
In-Reply-To: <DS0PR11MB6373EC1033F88008D3B71568DCB7A@DS0PR11MB6373.namprd11.prod.outlook.com>
On Fri, Nov 17, 2023 at 12:56:32PM +0000,
"Wang, Wei W" <wei.w.wang@intel.com> wrote:
> On Tuesday, October 17, 2023 12:14 AM, isaku.yamahata@intel.com wrote:
> > Because guest memory is protected in TDX, creating the initial guest
> > memory requires a dedicated TDX module API, tdh_mem_page_add(), instead of
> > directly copying the memory contents into the guest memory as is done for
> > the default VM type. The KVM MMU page fault handler callback,
> > private_page_add, handles it.
> >
> > Define a new subcommand, KVM_TDX_INIT_MEM_REGION, of the VM-scoped
> > KVM_MEMORY_ENCRYPT_OP. It assigns the guest page, copies the initial
> > memory contents into the guest memory, and encrypts the guest memory. At
> > the same time, it optionally extends the memory measurement of the TDX
> > guest. It calls the KVM MMU page fault (EPT-violation) handler to trigger
> > the callbacks for it.
> >
> > Reported-by: gkirkpatrick@google.com
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> >
> > ---
> > v15 -> v16:
> > - add a check that nr_pages isn't too large using
> >   (nr_pages << PAGE_SHIFT) >> PAGE_SHIFT
> >
> > v14 -> v15:
> > - add a check if TD is finalized or not to tdx_init_mem_region()
> > - return -EAGAIN when partial population
> > ---
> > arch/x86/include/uapi/asm/kvm.h | 9 ++
> > arch/x86/kvm/mmu/mmu.c | 1 +
> > arch/x86/kvm/vmx/tdx.c | 167 +++++++++++++++++++++++++-
> > arch/x86/kvm/vmx/tdx.h | 2 +
> > tools/arch/x86/include/uapi/asm/kvm.h | 9 ++
> > 5 files changed, 185 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
> > index 311a7894b712..a1815fcbb0be 100644
> > --- a/arch/x86/include/uapi/asm/kvm.h
> > +++ b/arch/x86/include/uapi/asm/kvm.h
> > @@ -572,6 +572,7 @@ enum kvm_tdx_cmd_id {
> > KVM_TDX_CAPABILITIES = 0,
> > KVM_TDX_INIT_VM,
> > KVM_TDX_INIT_VCPU,
> > + KVM_TDX_INIT_MEM_REGION,
> >
> > KVM_TDX_CMD_NR_MAX,
> > };
> > @@ -645,4 +646,12 @@ struct kvm_tdx_init_vm {
> > struct kvm_cpuid2 cpuid;
> > };
> >
> > +#define KVM_TDX_MEASURE_MEMORY_REGION (1UL << 0)
> > +
> > +struct kvm_tdx_init_mem_region {
> > + __u64 source_addr;
> > + __u64 gpa;
> > + __u64 nr_pages;
> > +};
> > +
> > #endif /* _ASM_X86_KVM_H */
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 107cf27505fe..63a4efd1e40a 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -5652,6 +5652,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
> > out:
> > return r;
> > }
> > +EXPORT_SYMBOL(kvm_mmu_load);
> >
> > void kvm_mmu_unload(struct kvm_vcpu *vcpu)
> > {
> > diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> > index a5f1b3e75764..dc17c212cb38 100644
> > --- a/arch/x86/kvm/vmx/tdx.c
> > +++ b/arch/x86/kvm/vmx/tdx.c
> > @@ -470,6 +470,21 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
> > 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
> > }
> >
> > +static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa)
> > +{
> > + struct tdx_module_args out;
> > + u64 err;
> > + int i;
> > +
> > + for (i = 0; i < PAGE_SIZE; i += TDX_EXTENDMR_CHUNKSIZE) {
> > + err = tdh_mr_extend(kvm_tdx->tdr_pa, gpa + i, &out);
> > + if (KVM_BUG_ON(err, &kvm_tdx->kvm)) {
> > + pr_tdx_error(TDH_MR_EXTEND, err, &out);
> > + break;
> > + }
> > + }
> > +}
> > +
> > static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn)
> > {
> > 	struct page *page = pfn_to_page(pfn);
> > @@ -533,6 +548,61 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
> > return 0;
> > }
> >
> > +static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
> > +			     enum pg_level level, kvm_pfn_t pfn)
> > +{
> > + struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> > + hpa_t hpa = pfn_to_hpa(pfn);
> > + gpa_t gpa = gfn_to_gpa(gfn);
> > + struct tdx_module_args out;
> > + hpa_t source_pa;
> > + bool measure;
> > + u64 err;
> > +
> > +	/*
> > +	 * KVM_INIT_MEM_REGION, tdx_init_mem_region(), supports only 4K page
> > +	 * because tdh_mem_page_add() supports only 4K page.
> > +	 */
> > + if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
> > + return -EINVAL;
> > +
> > +	/*
> > +	 * In case of TDP MMU, fault handler can run concurrently.  Note
> > +	 * 'source_pa' is a TD scope variable, meaning if there are multiple
> > +	 * threads reaching here with all needing to access 'source_pa', it
> > +	 * will break.  Fortunately this won't happen, because the
> > +	 * TDH_MEM_PAGE_ADD code path below is only used while the VM is being
> > +	 * created, before it is running, via the KVM_TDX_INIT_MEM_REGION
> > +	 * ioctl (which always uses vcpu 0's page table and is protected by
> > +	 * vcpu->mutex).
> > +	 */
> > + if (KVM_BUG_ON(kvm_tdx->source_pa == INVALID_PAGE, kvm)) {
> > + tdx_unpin(kvm, pfn);
> > + return -EINVAL;
> > + }
> > +
> > +	source_pa = kvm_tdx->source_pa & ~KVM_TDX_MEASURE_MEMORY_REGION;
> > +	measure = kvm_tdx->source_pa & KVM_TDX_MEASURE_MEMORY_REGION;
> > +	kvm_tdx->source_pa = INVALID_PAGE;
> > +
> > +	do {
> > +		err = tdh_mem_page_add(kvm_tdx->tdr_pa, gpa, hpa, source_pa,
> > +				       &out);
> > +		/*
> > +		 * This path is executed while populating the initial guest
> > +		 * memory image, i.e. before running any vcpu.  Races are rare.
> > +		 */
> > +	} while (unlikely(err == TDX_ERROR_SEPT_BUSY));
> > + if (KVM_BUG_ON(err, kvm)) {
> > + pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
> > + tdx_unpin(kvm, pfn);
> > + return -EIO;
> > +	} else if (measure)
> > +		tdx_measure_page(kvm_tdx, gpa);
> > +
> > +	return 0;
> > +}
> > +
> > +
> > static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
> > 				     enum pg_level level, kvm_pfn_t pfn)
> > {
> > @@ -555,9 +625,7 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
> > if (likely(is_td_finalized(kvm_tdx)))
> > return tdx_sept_page_aug(kvm, gfn, level, pfn);
> >
> > - /* TODO: tdh_mem_page_add() comes here for the initial memory. */
> > -
> > - return 0;
> > + return tdx_sept_page_add(kvm, gfn, level, pfn);
> > }
> >
> > static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
> > @@ -1265,6 +1333,96 @@ void tdx_flush_tlb_current(struct kvm_vcpu *vcpu)
> > tdx_track(vcpu->kvm);
> > }
> >
> > +#define TDX_SEPT_PFERR	(PFERR_WRITE_MASK | PFERR_GUEST_ENC_MASK)
> > +
> > +static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
> > +{
> > + struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> > + struct kvm_tdx_init_mem_region region;
> > + struct kvm_vcpu *vcpu;
> > + struct page *page;
> > + int idx, ret = 0;
> > + bool added = false;
> > +
> > + /* Once TD is finalized, the initial guest memory is fixed. */
> > + if (is_td_finalized(kvm_tdx))
> > + return -EINVAL;
> > +
> > +	/* The BSP vCPU must be created before initializing memory regions. */
> > + if (!atomic_read(&kvm->online_vcpus))
> > + return -EINVAL;
> > +
> > + if (cmd->flags & ~KVM_TDX_MEASURE_MEMORY_REGION)
> > + return -EINVAL;
> > +
> > +	if (copy_from_user(&region, (void __user *)cmd->data, sizeof(region)))
> > +		return -EFAULT;
> > +
> > +	/* Sanity check */
> > +	if (!IS_ALIGNED(region.source_addr, PAGE_SIZE) ||
> > +	    !IS_ALIGNED(region.gpa, PAGE_SIZE) ||
> > +	    !region.nr_pages ||
> > +	    region.nr_pages & GENMASK_ULL(63, 63 - PAGE_SHIFT) ||
> > +	    region.gpa + (region.nr_pages << PAGE_SHIFT) <= region.gpa ||
> > +	    !kvm_is_private_gpa(kvm, region.gpa) ||
> > +	    !kvm_is_private_gpa(kvm, region.gpa + (region.nr_pages << PAGE_SHIFT)))
> > +		return -EINVAL;
> > +
> > + vcpu = kvm_get_vcpu(kvm, 0);
> > + if (mutex_lock_killable(&vcpu->mutex))
> > + return -EINTR;
> > +
> > + vcpu_load(vcpu);
> > + idx = srcu_read_lock(&kvm->srcu);
> > +
> > + kvm_mmu_reload(vcpu);
> > +
> > + while (region.nr_pages) {
> > + if (signal_pending(current)) {
> > + ret = -ERESTARTSYS;
> > + break;
> > + }
> > +
> > + if (need_resched())
> > + cond_resched();
> > +
> > + /* Pin the source page. */
> > + ret = get_user_pages_fast(region.source_addr, 1, 0, &page);
> > + if (ret < 0)
> > + break;
> > + if (ret != 1) {
> > + ret = -ENOMEM;
> > + break;
> > + }
> > +
> > +		kvm_tdx->source_pa = pfn_to_hpa(page_to_pfn(page)) |
> > +			(cmd->flags & KVM_TDX_MEASURE_MEMORY_REGION);
> > +
>
> Is it fundamentally correct to take a userspace mapped page to add as a TD private page?
> Maybe take the corresponding page from gmem and do a copy to it?
> For example:
> ret = get_user_pages_fast(region.source_addr, 1, 0, &user_page);
> ...
> kvm_gmem_get_pfn(kvm, gfn_to_memslot(kvm, gfn), gfn, &gmem_pfn, NULL);
> memcpy(__va(gmem_pfn << PAGE_SHIFT), page_to_virt(user_page), PAGE_SIZE);
> kvm_tdx->source_pa = pfn_to_hpa(gmem_pfn) |
> (cmd->flags & KVM_TDX_MEASURE_MEMORY_REGION);
Please refer to

static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
				     enum pg_level level, kvm_pfn_t pfn)

The guest memfd provides the page for gfn, which is different from
kvm_tdx->source_pa.  The function calls tdh_mem_page_add():

	tdh_mem_page_add(kvm_tdx->tdr_pa, gpa, hpa, source_pa, &out);

gpa: corresponds to the page from guest memfd
source_pa: corresponds to the page tdx_init_mem_region() pinned down

tdh_mem_page_add() copies the page contents from source_pa to gpa and
gives gpa to the TD guest, not source_pa.
--
Isaku Yamahata <isaku.yamahata@linux.intel.com>