* [PATCH v2 00/20] KVM: TDX: TDX "the rest" part
@ 2025-02-27 1:20 Binbin Wu
2025-02-27 1:20 ` [PATCH v2 01/20] KVM: TDX: Handle EPT violation/misconfig exit Binbin Wu
` (19 more replies)
0 siblings, 20 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
Hi,
This patch series adds support for EPT violation/misconfig handling and
several TDVMCALL leaves, adds a set of wrappers to ignore operations not
supported by TDX guests, and adds documentation.
This patch series is the last part needed to provide the ability to run a
functioning TD VM. We think this is in pretty good shape at this point and
ready for handoff to Paolo.
Base of this series
===================
This series is based on kvm-coco-queue up to the end of "TDX interrupts",
plus one PAT quirk series. Stack is:
- '31db5921f12d ("KVM: TDX: Handle EXIT_REASON_OTHER_SMI")' from
kvm-coco-queue.
- PAT quirk series
"KVM: x86: Introduce quirk KVM_X86_QUIRK_EPT_IGNORE_GUEST_PAT" [0].
Notable changes since v1 [1]
============================
Patch "KVM: x86: Add a switch_db_regs flag to handle TDX's auto-switched
behavior" is moved to "KVM: TDX: TD vcpu enter/exit" [2].
Rebased after tdcall_to_vmx_exit_reason() was added in [3], along with the
new way to get exit_qualification and ext_exit_qualification.
For EPT MISCONFIG, bug the VM and return -EIO. The handling is deferred
until tdx_handle_exit() because tdx_to_vmx_exit_reason() is called by
'noinstr' code with interrupts disabled.
Add SEPT local retry and wait for SEPT zap logic to provide a clean
solution to avoid the blind SEPT retries.
Morph the following guest requested exit reasons (via TDVMCALL) to KVM's
tracked exit reasons:
- Morph PV CPUID to EXIT_REASON_CPUID
- Morph PV HLT to EXIT_REASON_HLT
- Morph PV RDMSR to EXIT_REASON_RDMSR
- Morph PV WRMSR to EXIT_REASON_WRMSR
Check RVI pending (bit 0 of TD_VCPU_STATE_DETAILS_NON_ARCH field) only for
HALTED case with IRQ enabled in tdx_protected_apic_has_interrupt().
For PV RDMSR/WRMSR handling, marshal values to the appropriate x86
registers to leverage the existing kvm_emulate_{rdmsr,wrmsr}(), and
implement complete_emulated_msr() callback to set return value/code to
vp_enter_args.
Skip setting of return code when the value is TDVMCALL_STATUS_SUCCESS
because r10 is always 0 for standard TDVMCALL exit.
Get/set TDVMCALL inputs/outputs from/to vp_enter_args directly in struct
vcpu_tdx, after the helpers for reading/writing a0~a3 were dropped in [3].
Added back MTRR MSR access, but dropped the special handling for TDX
guests to align with what KVM does for normal VMs.
Dropped tdx_cache_reg().
Updated documentation.
TODO
====
Macrofy vt_x86_ops callbacks, as suggested by Sean [4].
Overview
========
EPT violation
-------------
An EPT violation for TDX triggers the x86 MMU code.
Note that an instruction fetch from shared memory is not allowed for TDX
guests; if one occurs, treat it as broken hardware, bug the VM, and return
an error.
(*Newly updated*)
SEPT local retry and wait for SEPT zap logic provides a clean solution to
avoid the blind SEPT retries.
EPT misconfiguration
--------------------
EPT misconfiguration shouldn't happen for TDX guests. If it occurs, bug the
VM and return an error.
TDVMCALL support
----------------
Support is added to allow TDX guests to issue CPUID, HLT, RDMSR/WRMSR and
GetTdVmCallInfo via TDVMCALL.
- CPUID
For TDX, most CPUID leaf/sub-leaf combinations are virtualized by the TDX
module while some trigger #VE. On #VE, a TDX guest can issue a TDVMCALL
with the leaf Instruction.CPUID to request the VMM to emulate the CPUID
operation.
- HLT
A TDX guest can issue a TDVMCALL with HLT, which passes the interrupt
blocked flag. Whether an interrupt can be delivered depends on the
interrupt blocked flag. For NMIs, KVM can't get the NMI blocked status of a
TDX guest, so it always assumes NMIs are allowed.
- MSRs
Some MSRs are virtualized by the TDX module directly, while others trigger
#VE when the guest accesses them. On #VE, TDX guests can issue a
TDVMCALL with WRMSR or RDMSR to request emulation in the VMM.
Operations ignored
------------------
TDX protects TDX guest state from the VMM, and some features are not
supported for TDX guests, so a number of operations are ignored for TDX
guests, including: accesses to CPU state, the VMX preemption timer,
accesses to TSC offset and multiplier, MCE setup for LMCE enable/disable,
and hypercall patching.
Repos
=====
Due to "KVM: VMX: Move common fields of struct" in "TDX vcpu enter/exit" v2
[2], subsequent patches require changes to use the new struct vcpu_vt;
refer to the full KVM branch below.
It requires TDX module 1.5.06.00.0744 [5] or later, as mentioned in [2].
A working edk2 commit is 95d8a1c ("UnitTestFrameworkPkg: Use TianoCore
mirror of subhook submodule").
The full KVM branch is here:
https://github.com/intel/tdx/tree/tdx_kvm_dev-2025-02-26
A matching QEMU is here:
https://github.com/intel-staging/qemu-tdx/tree/tdx-qemu-wip-2025-02-18
Testing
=======
It has been tested as part of the development branch for the TDX base
series. The testing consisted of TDX kvm-unit-tests, booting a Linux TD,
and TDX-enhanced KVM selftests. It also passed the TDX-related test
cases defined in the LKVS test suite as described in:
https://github.com/intel/lkvs/blob/main/KVM/docs/lkvs_on_avocado.md
[0] https://lore.kernel.org/kvm/20250224070716.31360-1-yan.y.zhao@intel.com
[1] https://lore.kernel.org/kvm/20241210004946.3718496-1-binbin.wu@linux.intel.com
[2] https://lore.kernel.org/kvm/20250129095902.16391-1-adrian.hunter@intel.com
[3] https://lore.kernel.org/kvm/20250222014225.897298-1-binbin.wu@linux.intel.com
[4] https://lore.kernel.org/kvm/Z6v9yjWLNTU6X90d@google.com
[5] https://github.com/intel/tdx-module/releases/tag/TDX_1.5.06
Binbin Wu (1):
KVM: TDX: Enable guest access to MTRR MSRs
Isaku Yamahata (16):
KVM: TDX: Handle EPT violation/misconfig exit
KVM: TDX: Handle TDX PV CPUID hypercall
KVM: TDX: Handle TDX PV HLT hypercall
KVM: x86: Move KVM_MAX_MCE_BANKS to header file
KVM: TDX: Implement callbacks for MSR operations
KVM: TDX: Handle TDX PV rdmsr/wrmsr hypercall
KVM: TDX: Enable guest access to LMCE related MSRs
KVM: TDX: Handle TDG.VP.VMCALL<GetTdVmCallInfo> hypercall
KVM: TDX: Add methods to ignore accesses to CPU state
KVM: TDX: Add method to ignore guest instruction emulation
KVM: TDX: Add methods to ignore VMX preemption timer
KVM: TDX: Add methods to ignore accesses to TSC
KVM: TDX: Ignore setting up mce
KVM: TDX: Add a method to ignore hypercall patching
KVM: TDX: Make TDX VM type supported
Documentation/virt/kvm: Document on Trust Domain Extensions (TDX)
Yan Zhao (3):
KVM: TDX: Detect unexpected SEPT violations due to pending SPTEs
KVM: TDX: Retry locally in TDX EPT violation handler on RET_PF_RETRY
KVM: TDX: Kick off vCPUs when SEAMCALL is busy during TD page removal
Documentation/virt/kvm/api.rst | 13 +-
Documentation/virt/kvm/x86/index.rst | 1 +
Documentation/virt/kvm/x86/intel-tdx.rst | 255 ++++++++++++
arch/x86/include/asm/shared/tdx.h | 1 +
arch/x86/include/asm/vmx.h | 2 +
arch/x86/kvm/vmx/main.c | 482 ++++++++++++++++++++---
arch/x86/kvm/vmx/posted_intr.c | 3 +-
arch/x86/kvm/vmx/tdx.c | 381 +++++++++++++++++-
arch/x86/kvm/vmx/tdx.h | 16 +
arch/x86/kvm/vmx/tdx_arch.h | 13 +
arch/x86/kvm/vmx/x86_ops.h | 6 +
arch/x86/kvm/x86.c | 1 -
arch/x86/kvm/x86.h | 2 +
13 files changed, 1113 insertions(+), 63 deletions(-)
create mode 100644 Documentation/virt/kvm/x86/intel-tdx.rst
--
2.46.0
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v2 01/20] KVM: TDX: Handle EPT violation/misconfig exit
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 02/20] KVM: TDX: Detect unexpected SEPT violations due to pending SPTEs Binbin Wu
` (18 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
For TDX, on EPT violation, call common __vmx_handle_ept_violation() to
trigger x86 MMU code; on EPT misconfiguration, bug the VM since it
shouldn't happen.
EPT violation due to instruction fetch should never be triggered from
shared memory in TDX guest. If such EPT violation occurs, treat it as
broken hardware.
EPT misconfiguration shouldn't happen on either shared or secure EPT for
TDX guests.
- TDX module guarantees no EPT misconfiguration on secure EPT. Per TDX
module v1.5 spec section 9.4 "Secure EPT Induced TD Exits":
"By design, since secure EPT is fully controlled by the TDX module, an
EPT misconfiguration on a private GPA indicates a TDX module bug and is
handled as a fatal error."
- For shared EPT, the MMIO caching optimization, which is the only case
where current KVM configures EPT entries to generate EPT
misconfiguration, is implemented in a different way for TDX guests. KVM
configures EPT entries to a non-present value without setting the
suppress #VE bit. This causes a #VE in the TDX guest, and the guest will
call TDG.VP.VMCALL to request MMIO emulation.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Co-developed-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
[binbin: rework changelog]
Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- KVM_BUG_ON() and return -EIO for real EXIT_REASON_EPT_MISCONFIG. (Sean)
- Defer the handling for real EXIT_REASON_EPT_MISCONFIG until
tdx_handle_exit() because tdx_to_vmx_exit_reason() is called by
non-instrumentable code with interrupt disabled.
- Rebased after adding tdcall_to_vmx_exit_reason().
TDX "the rest" v1:
- Renamed from "KVM: TDX: Handle ept violation/misconfig exit" to
"KVM: TDX: Handle EPT violation/misconfig exit" (Reinette)
- Removed WARN_ON_ONCE(1) in tdx_handle_ept_misconfig(). (Rick)
- Add comment above EPT_VIOLATION_ACC_INSTR check. (Chao)
https://lore.kernel.org/lkml/Zgoz0sizgEZhnQ98@chao-email/
https://lore.kernel.org/lkml/ZjiE+O9fct5zI4Sf@chao-email/
- Remove unnecessary define of TDX_SEPT_VIOLATION_EXIT_QUAL. (Sean)
- Replace pr_warn() and KVM_EXIT_EXCEPTION with KVM_BUG_ON(). (Sean)
- KVM_BUG_ON() for EPT misconfig. (Sean)
- Rework changelog.
v14 -> v15:
- use PFERR_GUEST_ENC_MASK to tell the fault is private
---
arch/x86/kvm/vmx/tdx.c | 47 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 47 insertions(+)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index f9eccee02a69..0e2c734070d6 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -860,6 +860,12 @@ static __always_inline u32 tdx_to_vmx_exit_reason(struct kvm_vcpu *vcpu)
return EXIT_REASON_VMCALL;
return tdcall_to_vmx_exit_reason(vcpu);
+ case EXIT_REASON_EPT_MISCONFIG:
+ /*
+ * Defer KVM_BUG_ON() until tdx_handle_exit() because this is in
+ * non-instrumentable code with interrupts disabled.
+ */
+ return -1u;
default:
break;
}
@@ -968,6 +974,9 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
vcpu->arch.regs_avail &= ~TDX_REGS_UNSUPPORTED_SET;
+ if (unlikely(tdx->vp_enter_ret == EXIT_REASON_EPT_MISCONFIG))
+ return EXIT_FASTPATH_NONE;
+
if (unlikely((tdx->vp_enter_ret & TDX_SW_ERROR) == TDX_SW_ERROR))
return EXIT_FASTPATH_NONE;
@@ -1674,6 +1683,37 @@ void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
trace_kvm_apicv_accept_irq(vcpu->vcpu_id, delivery_mode, trig_mode, vector);
}
+static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
+{
+ unsigned long exit_qual;
+ gpa_t gpa = to_tdx(vcpu)->exit_gpa;
+
+ if (vt_is_tdx_private_gpa(vcpu->kvm, gpa)) {
+ /*
+ * Always treat SEPT violations as write faults. Ignore the
+ * EXIT_QUALIFICATION reported by TDX-SEAM for SEPT violations.
+ * TD private pages are always RWX in the SEPT tables,
+ * i.e. they're always mapped writable. Just as importantly,
+ * treating SEPT violations as write faults is necessary to
+ * avoid COW allocations, which will cause TDAUGPAGE failures
+ * due to aliasing a single HPA to multiple GPAs.
+ */
+ exit_qual = EPT_VIOLATION_ACC_WRITE;
+ } else {
+ exit_qual = vmx_get_exit_qual(vcpu);
+ /*
+ * EPT violation due to instruction fetch should never be
+ * triggered from shared memory in TDX guest. If such EPT
+ * violation occurs, treat it as broken hardware.
+ */
+ if (KVM_BUG_ON(exit_qual & EPT_VIOLATION_ACC_INSTR, vcpu->kvm))
+ return -EIO;
+ }
+
+ trace_kvm_page_fault(vcpu, gpa, exit_qual);
+ return __vmx_handle_ept_violation(vcpu, gpa, exit_qual);
+}
+
int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
{
struct vcpu_tdx *tdx = to_tdx(vcpu);
@@ -1683,6 +1723,11 @@ int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
if (fastpath != EXIT_FASTPATH_NONE)
return 1;
+ if (unlikely(vp_enter_ret == EXIT_REASON_EPT_MISCONFIG)) {
+ KVM_BUG_ON(1, vcpu->kvm);
+ return -EIO;
+ }
+
/*
* Handle TDX SW errors, including TDX_SEAMCALL_UD, TDX_SEAMCALL_GP and
* TDX_SEAMCALL_VMFAILINVALID.
@@ -1732,6 +1777,8 @@ int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
return tdx_emulate_io(vcpu);
case EXIT_REASON_EPT_MISCONFIG:
return tdx_emulate_mmio(vcpu);
+ case EXIT_REASON_EPT_VIOLATION:
+ return tdx_handle_ept_violation(vcpu);
case EXIT_REASON_OTHER_SMI:
/*
* Unlike VMX, SMI in SEAM non-root mode (i.e. when
--
2.46.0
* [PATCH v2 02/20] KVM: TDX: Detect unexpected SEPT violations due to pending SPTEs
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
2025-02-27 1:20 ` [PATCH v2 01/20] KVM: TDX: Handle EPT violation/misconfig exit Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 03/20] KVM: TDX: Retry locally in TDX EPT violation handler on RET_PF_RETRY Binbin Wu
` (17 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Yan Zhao <yan.y.zhao@intel.com>
Detect SEPT violations that occur when an SEPT entry is in PENDING state
while the TD is configured not to receive #VE on SEPT violations.
A TD guest can be configured not to receive #VE by setting SEPT_VE_DISABLE
to 1 in tdh_mng_init() or modifying pending_ve_disable to 1 in TDCS when
flexible_pending_ve is permitted. In such cases, the TDX module will not
inject #VE into the TD upon encountering an EPT violation caused by an SEPT
entry in the PENDING state. Instead, TDX module will exit to VMM and set
extended exit qualification type to PENDING_EPT_VIOLATION and exit
qualification bit 6:3 to 0.
Since #VE will not be injected into such TDs, they cannot be notified to
accept a GPA. A TD access before accepting a private GPA is regarded as an
error within the guest.
Detect such a guest error by inspecting the (extended) exit qualification
bits and mark such a VM dead.
Cc: Xiaoyao Li <xiaoyao.li@intel.com>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- Rebased on getting exit_qualification, ext_exit_qualification.
TDX "the rest" v1:
- New patch
---
arch/x86/include/asm/vmx.h | 2 ++
arch/x86/kvm/vmx/tdx.c | 17 +++++++++++++++++
arch/x86/kvm/vmx/tdx_arch.h | 2 ++
3 files changed, 21 insertions(+)
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 9298fb9d4bb3..028f3b8db2af 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -585,12 +585,14 @@ enum vm_entry_failure_code {
#define EPT_VIOLATION_ACC_WRITE_BIT 1
#define EPT_VIOLATION_ACC_INSTR_BIT 2
#define EPT_VIOLATION_RWX_SHIFT 3
+#define EPT_VIOLATION_EXEC_R3_LIN_BIT 6
#define EPT_VIOLATION_GVA_IS_VALID_BIT 7
#define EPT_VIOLATION_GVA_TRANSLATED_BIT 8
#define EPT_VIOLATION_ACC_READ (1 << EPT_VIOLATION_ACC_READ_BIT)
#define EPT_VIOLATION_ACC_WRITE (1 << EPT_VIOLATION_ACC_WRITE_BIT)
#define EPT_VIOLATION_ACC_INSTR (1 << EPT_VIOLATION_ACC_INSTR_BIT)
#define EPT_VIOLATION_RWX_MASK (VMX_EPT_RWX_MASK << EPT_VIOLATION_RWX_SHIFT)
+#define EPT_VIOLATION_EXEC_FOR_RING3_LIN (1 << EPT_VIOLATION_EXEC_R3_LIN_BIT)
#define EPT_VIOLATION_GVA_IS_VALID (1 << EPT_VIOLATION_GVA_IS_VALID_BIT)
#define EPT_VIOLATION_GVA_TRANSLATED (1 << EPT_VIOLATION_GVA_TRANSLATED_BIT)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 0e2c734070d6..b8701e343e80 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1683,12 +1683,29 @@ void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
trace_kvm_apicv_accept_irq(vcpu->vcpu_id, delivery_mode, trig_mode, vector);
}
+static inline bool tdx_is_sept_violation_unexpected_pending(struct kvm_vcpu *vcpu)
+{
+ u64 eeq_type = to_tdx(vcpu)->ext_exit_qualification & TDX_EXT_EXIT_QUAL_TYPE_MASK;
+ u64 eq = vmx_get_exit_qual(vcpu);
+
+ if (eeq_type != TDX_EXT_EXIT_QUAL_TYPE_PENDING_EPT_VIOLATION)
+ return false;
+
+ return !(eq & EPT_VIOLATION_RWX_MASK) && !(eq & EPT_VIOLATION_EXEC_FOR_RING3_LIN);
+}
+
static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
{
unsigned long exit_qual;
gpa_t gpa = to_tdx(vcpu)->exit_gpa;
if (vt_is_tdx_private_gpa(vcpu->kvm, gpa)) {
+ if (tdx_is_sept_violation_unexpected_pending(vcpu)) {
+ pr_warn("Guest access before accepting 0x%llx on vCPU %d\n",
+ gpa, vcpu->vcpu_id);
+ kvm_vm_dead(vcpu->kvm);
+ return -EIO;
+ }
/*
* Always treat SEPT violations as write faults. Ignore the
* EXIT_QUALIFICATION reported by TDX-SEAM for SEPT violations.
diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
index a8071409498f..fcbf0d4abc5f 100644
--- a/arch/x86/kvm/vmx/tdx_arch.h
+++ b/arch/x86/kvm/vmx/tdx_arch.h
@@ -69,6 +69,8 @@ struct tdx_cpuid_value {
#define TDX_TD_ATTR_KL BIT_ULL(31)
#define TDX_TD_ATTR_PERFMON BIT_ULL(63)
+#define TDX_EXT_EXIT_QUAL_TYPE_MASK GENMASK(3, 0)
+#define TDX_EXT_EXIT_QUAL_TYPE_PENDING_EPT_VIOLATION 6
/*
* TD_PARAMS is provided as an input to TDH_MNG_INIT, the size of which is 1024B.
*/
--
2.46.0
* [PATCH v2 03/20] KVM: TDX: Retry locally in TDX EPT violation handler on RET_PF_RETRY
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
2025-02-27 1:20 ` [PATCH v2 01/20] KVM: TDX: Handle EPT violation/misconfig exit Binbin Wu
2025-02-27 1:20 ` [PATCH v2 02/20] KVM: TDX: Detect unexpected SEPT violations due to pending SPTEs Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 04/20] KVM: TDX: Kick off vCPUs when SEAMCALL is busy during TD page removal Binbin Wu
` (16 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Yan Zhao <yan.y.zhao@intel.com>
Retry locally in the TDX EPT violation handler for private memory to reduce
the chances for tdh_mem_sept_add()/tdh_mem_page_aug() to contend with
tdh_vp_enter().
TDX EPT violation installs private pages via tdh_mem_sept_add() and
tdh_mem_page_aug(). The two may have contention with tdh_vp_enter() or
TDCALLs.
Resources SHARED users EXCLUSIVE users
------------------------------------------------------------
SEPT tree tdh_mem_sept_add tdh_vp_enter(0-step mitigation)
tdh_mem_page_aug
------------------------------------------------------------
SEPT entry tdh_mem_sept_add (Host lock)
tdh_mem_page_aug (Host lock)
tdg_mem_page_accept (Guest lock)
tdg_mem_page_attr_rd (Guest lock)
tdg_mem_page_attr_wr (Guest lock)
Though the contention between tdh_mem_sept_add()/tdh_mem_page_aug() and
TDCALLs may be removed in a future TDX module, their contention with
tdh_vp_enter() due to 0-step mitigation still persists.
The TDX module may trigger 0-step mitigation in SEAMCALL TDH.VP.ENTER,
which works as follows:
0. Each TDH.VP.ENTER records the guest RIP on TD entry.
1. When the TDX module encounters a VM exit with reason EPT_VIOLATION, it
checks if the guest RIP is the same as last guest RIP on TD entry.
-if yes, it means the EPT violation is caused by the same instruction
that caused the last VM exit.
Then, the TDX module increases the guest RIP no-progress count.
When the count increases from 0 to the threshold (currently 6),
the TDX module records the faulting GPA into a
last_epf_gpa_list.
-if no, it means the guest RIP has made progress.
So, the TDX module resets the RIP no-progress count and the
last_epf_gpa_list.
2. On the next TDH.VP.ENTER, the TDX module (after saving the guest RIP on
TD entry) checks if the last_epf_gpa_list is empty.
-if yes, TD entry continues without acquiring the lock on the SEPT tree.
-if no, it triggers the 0-step mitigation by acquiring the exclusive
lock on SEPT tree, walking the EPT tree to check if all page
faults caused by the GPAs in the last_epf_gpa_list have been
resolved before continuing TD entry.
Since KVM TDP MMU usually re-enters guest whenever it exits to userspace
(e.g. for KVM_EXIT_MEMORY_FAULT) or encounters a BUSY, it is possible for a
tdh_vp_enter() to be called more than the threshold count before a page
fault is addressed, triggering contention when tdh_vp_enter() attempts to
acquire exclusive lock on SEPT tree.
Retry locally in TDX EPT violation handler to reduce the count of invoking
tdh_vp_enter(), hence reducing the possibility of its contention with
tdh_mem_sept_add()/tdh_mem_page_aug(). However, the 0-step mitigation and
the contention are still not eliminated due to KVM_EXIT_MEMORY_FAULT,
signals/interrupts, and cases when one instruction faults more GFNs than
the threshold count.
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
Changes from "KVM: TDX SEPT SEAMCALL retry" series [1]
- Use kvm_vcpu_has_events() to replace "pi_has_pending_interrupt(vcpu) ||
kvm_test_request(KVM_REQ_NMI, vcpu) || vcpu->arch.nmi_pending)" [2].
(Sean)
- Update code comment to explain why break out for false positive of
interrupt injection is fine (Sean).
[1]: https://lore.kernel.org/all/20250113021218.18922-1-yan.y.zhao@intel.com
[2]: https://lore.kernel.org/all/Z4rIGv4E7Jdmhl8P@google.com
---
arch/x86/kvm/vmx/tdx.c | 57 +++++++++++++++++++++++++++++++++++++++++-
1 file changed, 56 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index b8701e343e80..6fa6f7e13e15 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1698,6 +1698,8 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
{
unsigned long exit_qual;
gpa_t gpa = to_tdx(vcpu)->exit_gpa;
+ bool local_retry = false;
+ int ret;
if (vt_is_tdx_private_gpa(vcpu->kvm, gpa)) {
if (tdx_is_sept_violation_unexpected_pending(vcpu)) {
@@ -1716,6 +1718,9 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
* due to aliasing a single HPA to multiple GPAs.
*/
exit_qual = EPT_VIOLATION_ACC_WRITE;
+
+ /* Only private GPA triggers zero-step mitigation */
+ local_retry = true;
} else {
exit_qual = vmx_get_exit_qual(vcpu);
/*
@@ -1728,7 +1733,57 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
}
trace_kvm_page_fault(vcpu, gpa, exit_qual);
- return __vmx_handle_ept_violation(vcpu, gpa, exit_qual);
+
+ /*
+ * To minimize TDH.VP.ENTER invocations, retry locally for private GPA
+ * mapping in TDX.
+ *
+ * KVM may return RET_PF_RETRY for private GPA due to
+ * - contentions when atomically updating SPTEs of the mirror page table
+ * - in-progress GFN invalidation or memslot removal.
+ * - TDX_OPERAND_BUSY error from TDH.MEM.PAGE.AUG or TDH.MEM.SEPT.ADD,
+ * caused by contentions with TDH.VP.ENTER (with zero-step mitigation)
+ * or certain TDCALLs.
+ *
+ * If TDH.VP.ENTER is invoked more times than the threshold set by the
+ * TDX module before KVM resolves the private GPA mapping, the TDX
+ * module will activate zero-step mitigation during TDH.VP.ENTER. This
+ * process acquires an SEPT tree lock in the TDX module, leading to
+ * further contentions with TDH.MEM.PAGE.AUG or TDH.MEM.SEPT.ADD
+ * operations on other vCPUs.
+ *
+ * Breaking out of local retries for kvm_vcpu_has_events() is for
+ * interrupt injection. kvm_vcpu_has_events() should not see pending
+ * events for TDX. Since KVM can't determine if IRQs (or NMIs) are
+ * blocked by TDs, false positives are inevitable i.e., KVM may re-enter
+ * the guest even if the IRQ/NMI can't be delivered.
+ *
+ * Note: even without breaking out of local retries, zero-step
+ * mitigation may still occur due to
+ * - invoking of TDH.VP.ENTER after KVM_EXIT_MEMORY_FAULT,
+ * - a single RIP causing EPT violations for more GFNs than the
+ * threshold count.
+ * This is safe, as triggering zero-step mitigation only introduces
+ * contentions to page installation SEAMCALLs on other vCPUs, which will
+ * handle retries locally in their EPT violation handlers.
+ */
+ while (1) {
+ ret = __vmx_handle_ept_violation(vcpu, gpa, exit_qual);
+
+ if (ret != RET_PF_RETRY || !local_retry)
+ break;
+
+ if (kvm_vcpu_has_events(vcpu) || signal_pending(current))
+ break;
+
+ if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu)) {
+ ret = -EIO;
+ break;
+ }
+
+ cond_resched();
+ }
+ return ret;
}
int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
--
2.46.0
* [PATCH v2 04/20] KVM: TDX: Kick off vCPUs when SEAMCALL is busy during TD page removal
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (2 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 03/20] KVM: TDX: Retry locally in TDX EPT violation handler on RET_PF_RETRY Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 05/20] KVM: TDX: Handle TDX PV CPUID hypercall Binbin Wu
` (15 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Yan Zhao <yan.y.zhao@intel.com>
Kick off all vCPUs and prevent tdh_vp_enter() from executing whenever
tdh_mem_range_block()/tdh_mem_track()/tdh_mem_page_remove() encounters
contention, since the page removal path does not expect errors and is less
sensitive to the performance penalty caused by kicking off vCPUs.
Although KVM has protected SEPT zap-related SEAMCALLs with kvm->mmu_lock,
KVM may still encounter TDX_OPERAND_BUSY due to the contention in the TDX
module.
- tdh_mem_track() may contend with tdh_vp_enter().
- tdh_mem_range_block()/tdh_mem_page_remove() may contend with
tdh_vp_enter() and TDCALLs.
Resources SHARED users EXCLUSIVE users
------------------------------------------------------------
TDCS epoch tdh_vp_enter tdh_mem_track
------------------------------------------------------------
SEPT tree tdh_mem_page_remove tdh_vp_enter (0-step mitigation)
tdh_mem_range_block
------------------------------------------------------------
SEPT entry tdh_mem_range_block (Host lock)
tdh_mem_page_remove (Host lock)
tdg_mem_page_accept (Guest lock)
tdg_mem_page_attr_rd (Guest lock)
tdg_mem_page_attr_wr (Guest lock)
Use a TDX specific per-VM flag wait_for_sept_zap along with
KVM_REQ_OUTSIDE_GUEST_MODE to kick off vCPUs and prevent them from entering
TD, thereby avoiding the potential contention. Apply the kick-off and no
vCPU entering only after each SEAMCALL busy error to minimize the window of
no TD entry, as the contention due to 0-step mitigation or TDCALLs is
expected to be rare.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
Rebased to use helper tdx_operand_busy().
---
arch/x86/kvm/vmx/tdx.c | 63 ++++++++++++++++++++++++++++++++++++------
arch/x86/kvm/vmx/tdx.h | 7 +++++
2 files changed, 61 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 6fa6f7e13e15..25ae2c2826d0 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -294,6 +294,26 @@ static void tdx_clear_page(struct page *page)
__mb();
}
+static void tdx_no_vcpus_enter_start(struct kvm *kvm)
+{
+ struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+
+ lockdep_assert_held_write(&kvm->mmu_lock);
+
+ WRITE_ONCE(kvm_tdx->wait_for_sept_zap, true);
+
+ kvm_make_all_cpus_request(kvm, KVM_REQ_OUTSIDE_GUEST_MODE);
+}
+
+static void tdx_no_vcpus_enter_stop(struct kvm *kvm)
+{
+ struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+
+ lockdep_assert_held_write(&kvm->mmu_lock);
+
+ WRITE_ONCE(kvm_tdx->wait_for_sept_zap, false);
+}
+
/* TDH.PHYMEM.PAGE.RECLAIM is allowed only when destroying the TD. */
static int __tdx_reclaim_page(struct page *page)
{
@@ -953,6 +973,14 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
return EXIT_FASTPATH_NONE;
}
+ /*
+ * Wait until retry of SEPT-zap-related SEAMCALL completes before
+ * allowing vCPU entry to avoid contention with tdh_vp_enter() and
+ * TDCALLs.
+ */
+ if (unlikely(READ_ONCE(to_kvm_tdx(vcpu->kvm)->wait_for_sept_zap)))
+ return EXIT_FASTPATH_EXIT_HANDLED;
+
trace_kvm_entry(vcpu, force_immediate_exit);
if (pi_test_on(&vt->pi_desc)) {
@@ -1467,15 +1495,24 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
if (KVM_BUG_ON(!is_hkid_assigned(kvm_tdx), kvm))
return -EINVAL;
- do {
+ /*
+ * When zapping private page, write lock is held. So no race condition
+ * with other vcpu sept operation.
+ * Race with TDH.VP.ENTER due to (0-step mitigation) and Guest TDCALLs.
+ */
+ err = tdh_mem_page_remove(&kvm_tdx->td, gpa, tdx_level, &entry,
+ &level_state);
+
+ if (unlikely(tdx_operand_busy(err))) {
/*
- * When zapping private page, write lock is held. So no race
- * condition with other vcpu sept operation. Race only with
- * TDH.VP.ENTER.
+ * The second retry is expected to succeed after kicking off all
+ * other vCPUs and prevent them from invoking TDH.VP.ENTER.
*/
+ tdx_no_vcpus_enter_start(kvm);
err = tdh_mem_page_remove(&kvm_tdx->td, gpa, tdx_level, &entry,
&level_state);
- } while (unlikely(tdx_operand_busy(err)));
+ tdx_no_vcpus_enter_stop(kvm);
+ }
if (KVM_BUG_ON(err, kvm)) {
pr_tdx_error_2(TDH_MEM_PAGE_REMOVE, err, entry, level_state);
@@ -1559,9 +1596,13 @@ static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
WARN_ON_ONCE(level != PG_LEVEL_4K);
err = tdh_mem_range_block(&kvm_tdx->td, gpa, tdx_level, &entry, &level_state);
- if (unlikely(tdx_operand_busy(err)))
- return -EBUSY;
+ if (unlikely(tdx_operand_busy(err))) {
+ /* After no vCPUs enter, the second retry is expected to succeed */
+ tdx_no_vcpus_enter_start(kvm);
+ err = tdh_mem_range_block(&kvm_tdx->td, gpa, tdx_level, &entry, &level_state);
+ tdx_no_vcpus_enter_stop(kvm);
+ }
if (tdx_is_sept_zap_err_due_to_premap(kvm_tdx, err, entry, level) &&
!KVM_BUG_ON(!atomic64_read(&kvm_tdx->nr_premapped), kvm)) {
atomic64_dec(&kvm_tdx->nr_premapped);
@@ -1611,9 +1652,13 @@ static void tdx_track(struct kvm *kvm)
lockdep_assert_held_write(&kvm->mmu_lock);
- do {
+ err = tdh_mem_track(&kvm_tdx->td);
+ if (unlikely(tdx_operand_busy(err))) {
+ /* After no vCPUs enter, the second retry is expected to succeed */
+ tdx_no_vcpus_enter_start(kvm);
err = tdh_mem_track(&kvm_tdx->td);
- } while (unlikely(tdx_operand_busy(err)));
+ tdx_no_vcpus_enter_stop(kvm);
+ }
if (KVM_BUG_ON(err, kvm))
pr_tdx_error(TDH_MEM_TRACK, err);
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 4e1e71d5d8e8..c6bafac31d4d 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -36,6 +36,13 @@ struct kvm_tdx {
/* For KVM_TDX_INIT_MEM_REGION. */
atomic64_t nr_premapped;
+
+ /*
+ * Prevent vCPUs from TD entry to ensure SEPT zap related SEAMCALLs do
+ * not contend with tdh_vp_enter() and TDCALLs.
+ * Set/unset is protected with kvm->mmu_lock.
+ */
+ bool wait_for_sept_zap;
};
/* TDX module vCPU states */
--
2.46.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH v2 05/20] KVM: TDX: Handle TDX PV CPUID hypercall
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (3 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 04/20] KVM: TDX: Kick off vCPUs when SEAMCALL is busy during TD page removal Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 06/20] KVM: TDX: Handle TDX PV HLT hypercall Binbin Wu
` (14 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
Handle the TDX PV CPUID hypercall for the CPUID leaves virtualized by the
VMM according to the TDX Guest-Host Communication Interface (GHCI).
For TDX, most CPUID leaf/sub-leaf combinations are virtualized by the
TDX module while some trigger #VE. On #VE, the TDX guest can issue
TDG.VP.VMCALL<INSTRUCTION.CPUID> (same value as EXIT_REASON_CPUID)
to request that the VMM emulate the CPUID operation.
Morph the PV CPUID hypercall to EXIT_REASON_CPUID and wire it up to the
KVM backend function.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
[binbin: rewrite changelog]
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- Morph PV CPUID hypercall to EXIT_REASON_CPUID. (Sean)
- Rebased to use tdcall_to_vmx_exit_reason().
- Use vp_enter_args directly instead of helpers.
- Skip setting return code as TDVMCALL_STATUS_SUCCESS.
TDX "the rest" v1:
- Rewrite changelog.
- Use TDVMCALL_STATUS prefix for TDX call status codes (Binbin)
---
arch/x86/kvm/vmx/tdx.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 25ae2c2826d0..8019bf553ca5 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -845,6 +845,7 @@ static bool tdx_guest_state_is_invalid(struct kvm_vcpu *vcpu)
static __always_inline u32 tdcall_to_vmx_exit_reason(struct kvm_vcpu *vcpu)
{
switch (tdvmcall_leaf(vcpu)) {
+ case EXIT_REASON_CPUID:
case EXIT_REASON_IO_INSTRUCTION:
return tdvmcall_leaf(vcpu);
case EXIT_REASON_EPT_VIOLATION:
@@ -1212,6 +1213,25 @@ static int tdx_report_fatal_error(struct kvm_vcpu *vcpu)
return 0;
}
+static int tdx_emulate_cpuid(struct kvm_vcpu *vcpu)
+{
+ u32 eax, ebx, ecx, edx;
+ struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+ /* EAX and ECX for CPUID are stored in R12 and R13. */
+ eax = tdx->vp_enter_args.r12;
+ ecx = tdx->vp_enter_args.r13;
+
+ kvm_cpuid(vcpu, &eax, &ebx, &ecx, &edx, false);
+
+ tdx->vp_enter_args.r12 = eax;
+ tdx->vp_enter_args.r13 = ebx;
+ tdx->vp_enter_args.r14 = ecx;
+ tdx->vp_enter_args.r15 = edx;
+
+ return 1;
+}
+
static int tdx_complete_pio_out(struct kvm_vcpu *vcpu)
{
vcpu->arch.pio.count = 0;
@@ -1886,6 +1906,8 @@ int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
case EXIT_REASON_EXTERNAL_INTERRUPT:
++vcpu->stat.irq_exits;
return 1;
+ case EXIT_REASON_CPUID:
+ return tdx_emulate_cpuid(vcpu);
case EXIT_REASON_TDCALL:
return handle_tdvmcall(vcpu);
case EXIT_REASON_VMCALL:
--
2.46.0
* [PATCH v2 06/20] KVM: TDX: Handle TDX PV HLT hypercall
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (4 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 05/20] KVM: TDX: Handle TDX PV CPUID hypercall Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 07/20] KVM: x86: Move KVM_MAX_MCE_BANKS to header file Binbin Wu
` (13 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
Handle the TDX PV HLT hypercall and the interrupt status resulting from it.
TDX guest state is protected; KVM can't get the interrupt status of a TDX
guest, so it assumes interrupts are always allowed unless the TDX guest
calls TDVMCALL with HLT, which passes the interrupt blocked flag.
If the guest halted with interrupts enabled, also query pending RVI by
checking bit 0 of the TD_VCPU_STATE_DETAILS_NON_ARCH field via a SEAMCALL.
Update vt_interrupt_allowed() for TDX based on the interrupt blocked flag
passed by the HLT TDVMCALL. Do not wake up a TD vCPU for VT-d posted
interrupts if interrupts are blocked.
For NMIs, KVM cannot determine the NMI blocking status for TDX guests,
so KVM always assumes NMIs are not blocked. In the unlikely scenario
where a guest invokes the PV HLT hypercall within an NMI handler, this
could result in a spurious wakeup. The guest should issue the PV HLT
hypercall within a loop if it truly requires no interruptions, since NMIs
could be unblocked by an IRET due to an exception occurring before the
PV HLT is executed in the NMI handler.
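The spurious-wakeup-tolerant halt loop described above can be sketched as a
standalone simulation; work_pending() and pv_hlt() here are hypothetical
stand-ins for the guest's real wait condition and its
TDG.VP.VMCALL<INSTRUCTION.HLT> primitive, not actual guest code:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical guest-side primitives, for illustration only. */
static int wakeups;                 /* counts (possibly spurious) wakeups */

static bool work_pending(void)      /* the condition the guest waits on */
{
	return wakeups >= 3;        /* pretend real work arrives on wakeup 3 */
}

static void pv_hlt(void)            /* stand-in for the PV HLT hypercall */
{
	wakeups++;                  /* each return may be a spurious wakeup */
}

/*
 * Re-check the wait condition after every wakeup: a spurious wakeup
 * (e.g. NMIs unblocked by an IRET before the PV HLT) simply loops back
 * into the hypercall instead of being treated as real work.
 */
static int tdx_safe_halt_wait(void)
{
	while (!work_pending())
		pv_hlt();
	return wakeups;
}
```

The loop absorbs any number of spurious wakeups and only exits once the
condition actually holds.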
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- Morph PV HLT hypercall to EXIT_REASON_HLT. (Sean)
- Rebased to use tdcall_to_vmx_exit_reason().
- Use vp_enter_args directly instead of helpers.
- Check RVI pending for the halted case and update the changelog
accordingly.
TDX "the rest" v1:
- Update the changelog.
- Remove the interrupt_disabled_hlt field (Sean)
https://lore.kernel.org/kvm/Zg1seIaTmM94IyR8@google.com/
- Move the logic of interrupt status to vt_interrupt_allowed() (Chao)
https://lore.kernel.org/kvm/ZhIX7K0WK+gYtcan@chao-email/
- Add suggested-by tag.
- Use tdx_check_exit_reason()
- Use TDVMCALL_STATUS prefix for TDX call status codes (Binbin)
v19:
- move tdvps_state_non_arch_check() to this patch
v18:
- drop buggy_hlt_workaround and use TDH.VP.RD(TD_VCPU_STATE_DETAILS)
---
arch/x86/kvm/vmx/main.c | 2 +-
arch/x86/kvm/vmx/posted_intr.c | 3 ++-
arch/x86/kvm/vmx/tdx.c | 39 ++++++++++++++++++++++++++++++----
arch/x86/kvm/vmx/tdx.h | 7 ++++++
arch/x86/kvm/vmx/tdx_arch.h | 11 ++++++++++
5 files changed, 56 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 55c8507d60fe..d383f4510c15 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -418,7 +418,7 @@ static void vt_cancel_injection(struct kvm_vcpu *vcpu)
static int vt_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
{
if (is_td_vcpu(vcpu))
- return true;
+ return tdx_interrupt_allowed(vcpu);
return vmx_interrupt_allowed(vcpu, for_injection);
}
diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
index 895bbe85b818..f2ca37b3f606 100644
--- a/arch/x86/kvm/vmx/posted_intr.c
+++ b/arch/x86/kvm/vmx/posted_intr.c
@@ -203,7 +203,8 @@ void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu)
return;
if (kvm_vcpu_is_blocking(vcpu) &&
- (is_td_vcpu(vcpu) || !vmx_interrupt_blocked(vcpu)))
+ ((is_td_vcpu(vcpu) && tdx_interrupt_allowed(vcpu)) ||
+ (!is_td_vcpu(vcpu) && !vmx_interrupt_blocked(vcpu))))
pi_enable_wakeup_handler(vcpu);
/*
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 8019bf553ca5..30f0ceeed7d2 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -720,9 +720,39 @@ void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
local_irq_enable();
}
+bool tdx_interrupt_allowed(struct kvm_vcpu *vcpu)
+{
+ /*
+ * KVM can't get the interrupt status of TDX guest and it assumes
+ * interrupt is always allowed unless TDX guest calls TDVMCALL with HLT,
+ * which passes the interrupt blocked flag.
+ */
+ return vmx_get_exit_reason(vcpu).basic != EXIT_REASON_HLT ||
+ !to_tdx(vcpu)->vp_enter_args.r12;
+}
+
bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu)
{
- return pi_has_pending_interrupt(vcpu);
+ u64 vcpu_state_details;
+
+ if (pi_has_pending_interrupt(vcpu))
+ return true;
+
+ /*
+ * Only check RVI pending for HALTED case with IRQ enabled.
+ * For non-HLT cases, KVM doesn't care about STI/SS shadows. And if the
+ * interrupt was pending before TD exit, then it _must_ be blocked,
+ * otherwise the interrupt would have been serviced at the instruction
+ * boundary.
+ */
+ if (vmx_get_exit_reason(vcpu).basic != EXIT_REASON_HLT ||
+ to_tdx(vcpu)->vp_enter_args.r12)
+ return false;
+
+ vcpu_state_details =
+ td_state_non_arch_read64(to_tdx(vcpu), TD_VCPU_STATE_DETAILS_NON_ARCH);
+
+ return tdx_vcpu_state_details_intr_pending(vcpu_state_details);
}
/*
@@ -846,6 +876,7 @@ static __always_inline u32 tdcall_to_vmx_exit_reason(struct kvm_vcpu *vcpu)
{
switch (tdvmcall_leaf(vcpu)) {
case EXIT_REASON_CPUID:
+ case EXIT_REASON_HLT:
case EXIT_REASON_IO_INSTRUCTION:
return tdvmcall_leaf(vcpu);
case EXIT_REASON_EPT_VIOLATION:
@@ -1103,9 +1134,7 @@ static int tdx_complete_vmcall_map_gpa(struct kvm_vcpu *vcpu)
/*
* Stop processing the remaining part if there is a pending interrupt,
* which could be qualified to deliver. Skip checking pending RVI for
- * TDVMCALL_MAP_GPA.
- * TODO: Add a comment to link the reason when the target function is
- * implemented.
+ * TDVMCALL_MAP_GPA, see comments in tdx_protected_apic_has_interrupt().
*/
if (kvm_vcpu_has_events(vcpu)) {
tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_RETRY);
@@ -1908,6 +1937,8 @@ int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
return 1;
case EXIT_REASON_CPUID:
return tdx_emulate_cpuid(vcpu);
+ case EXIT_REASON_HLT:
+ return kvm_emulate_halt_noskip(vcpu);
case EXIT_REASON_TDCALL:
return handle_tdvmcall(vcpu);
case EXIT_REASON_VMCALL:
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index c6bafac31d4d..cb9014b7a4f1 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -121,6 +121,7 @@ static __always_inline void tdvps_vmcs_check(u32 field, u8 bits)
}
static __always_inline void tdvps_management_check(u64 field, u8 bits) {}
+static __always_inline void tdvps_state_non_arch_check(u64 field, u8 bits) {}
#define TDX_BUILD_TDVPS_ACCESSORS(bits, uclass, lclass) \
static __always_inline u##bits td_##lclass##_read##bits(struct vcpu_tdx *tdx, \
@@ -168,11 +169,15 @@ static __always_inline void td_##lclass##_clearbit##bits(struct vcpu_tdx *tdx, \
tdh_vp_wr_failed(tdx, #uclass, " &= ~", field, bit, err);\
}
+
+bool tdx_interrupt_allowed(struct kvm_vcpu *vcpu);
+
TDX_BUILD_TDVPS_ACCESSORS(16, VMCS, vmcs);
TDX_BUILD_TDVPS_ACCESSORS(32, VMCS, vmcs);
TDX_BUILD_TDVPS_ACCESSORS(64, VMCS, vmcs);
TDX_BUILD_TDVPS_ACCESSORS(8, MANAGEMENT, management);
+TDX_BUILD_TDVPS_ACCESSORS(64, STATE_NON_ARCH, state_non_arch);
#else
static inline int tdx_bringup(void) { return 0; }
@@ -188,6 +193,8 @@ struct vcpu_tdx {
struct kvm_vcpu vcpu;
};
+static inline bool tdx_interrupt_allowed(struct kvm_vcpu *vcpu) { return false; }
+
#endif
#endif
diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
index fcbf0d4abc5f..e1f636ef5d0e 100644
--- a/arch/x86/kvm/vmx/tdx_arch.h
+++ b/arch/x86/kvm/vmx/tdx_arch.h
@@ -36,6 +36,17 @@ enum tdx_tdcs_execution_control {
TD_TDCS_EXEC_TSC_OFFSET = 10,
};
+enum tdx_vcpu_guest_other_state {
+ TD_VCPU_STATE_DETAILS_NON_ARCH = 0x100,
+};
+
+#define TDX_VCPU_STATE_DETAILS_INTR_PENDING BIT_ULL(0)
+
+static inline bool tdx_vcpu_state_details_intr_pending(u64 vcpu_state_details)
+{
+ return !!(vcpu_state_details & TDX_VCPU_STATE_DETAILS_INTR_PENDING);
+}
+
/* @field is any of enum tdx_tdcs_execution_control */
#define TDCS_EXEC(field) BUILD_TDX_FIELD(TD_CLASS_EXECUTION_CONTROLS, (field))
--
2.46.0
* [PATCH v2 07/20] KVM: x86: Move KVM_MAX_MCE_BANKS to header file
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (5 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 06/20] KVM: TDX: Handle TDX PV HLT hypercall Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 08/20] KVM: TDX: Implement callbacks for MSR operations Binbin Wu
` (12 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
Move KVM_MAX_MCE_BANKS to a header file so that it can be used for TDX in
a future patch.
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
[binbin: split into new patch]
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- No Change.
TDX "the rest" v1:
- New patch. Split from "KVM: TDX: Implement callbacks for MSR operations for TDX". (Sean)
https://lore.kernel.org/kvm/Zg1yPIV6cVJrwGxX@google.com/
---
arch/x86/kvm/x86.c | 1 -
arch/x86/kvm/x86.h | 2 ++
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 33b64b65c166..1bd7977006c9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -90,7 +90,6 @@
#include "trace.h"
#define MAX_IO_MSRS 256
-#define KVM_MAX_MCE_BANKS 32
/*
* Note, kvm_caps fields should *never* have default values, all fields must be
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 72b4cee27d02..772d5c320be1 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -10,6 +10,8 @@
#include "kvm_emulate.h"
#include "cpuid.h"
+#define KVM_MAX_MCE_BANKS 32
+
struct kvm_caps {
/* control of guest tsc rate supported? */
bool has_tsc_control;
--
2.46.0
* [PATCH v2 08/20] KVM: TDX: Implement callbacks for MSR operations
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (6 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 07/20] KVM: x86: Move KVM_MAX_MCE_BANKS to header file Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 09/20] KVM: TDX: Handle TDX PV rdmsr/wrmsr hypercall Binbin Wu
` (11 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
Add functions to implement the MSR-related callbacks .set_msr(), .get_msr(),
and .has_emulated_msr() in preparation for handling PV RDMSR and WRMSR
hypercalls from a TDX guest. Ignore KVM_REQ_MSR_FILTER_CHANGED for TDX.
There are three classes of MSR virtualization for TDX.
- Non-configurable: TDX module directly virtualizes it. VMM can't configure
it, the value set by KVM_SET_MSRS is ignored.
- Configurable: TDX module directly virtualizes it. VMM can configure it at
VM creation time. The value set by KVM_SET_MSRS is used.
- #VE case: TDX guest would issue TDG.VP.VMCALL<INSTRUCTION.{WRMSR,RDMSR}>
and VMM handles the MSR hypercall. The value set by KVM_SET_MSRS is used.
For the MSRs belonging to the #VE case, the TDX module injects #VE to the
TDX guest upon RDMSR or WRMSR. The exact list of such MSRs is defined in
TDX Module ABI Spec.
Upon #VE, the TDX guest may call TDG.VP.VMCALL<INSTRUCTION.{WRMSR,RDMSR}>,
which are defined in GHCI (Guest-Host Communication Interface) so that the
host VMM (e.g. KVM) can virtualize the MSRs.
TDX doesn't allow the VMM to configure interception of MSR accesses. Ignore
KVM_REQ_MSR_FILTER_CHANGED for TDX guests. If userspace has set any MSR
filters, they will be applied when handling
TDG.VP.VMCALL<INSTRUCTION.{WRMSR,RDMSR}> in a later patch.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
TDX "the rest" v2:
- Typo fix in changelog. (Kai)
https://lore.kernel.org/kvm/34ed02a6aedc7dcb35df9ce639e427e4d1a61f72.camel@intel.com
TDX "the rest" v1:
- Renamed from "KVM: TDX: Implement callbacks for MSR operations for TDX"
to "KVM: TDX: Implement callbacks for MSR operations"
- Update changelog.
- Remove the @write parameter of tdx_has_emulated_msr(), check whether the
MSR is readonly or not in tdx_set_msr(). (Sean)
https://lore.kernel.org/kvm/Zg1yPIV6cVJrwGxX@google.com/
- Change the code to align with the pattern "make the happy path the
not-taken path". (Sean)
- Open code the handling for an emulated KVM MSR MSR_KVM_POLL_CONTROL,
and let others go through the default statement. (Sean)
- Split macro KVM_MAX_MCE_BANKS move to a separate patch. (Sean)
- Add comments in vt_msr_filter_changed().
- Add Suggested-by tag.
---
arch/x86/kvm/vmx/main.c | 50 +++++++++++++++++++++++++---
arch/x86/kvm/vmx/tdx.c | 67 ++++++++++++++++++++++++++++++++++++++
arch/x86/kvm/vmx/x86_ops.h | 6 ++++
3 files changed, 119 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index d383f4510c15..463de3add3bd 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -193,6 +193,48 @@ static int vt_handle_exit(struct kvm_vcpu *vcpu,
return vmx_handle_exit(vcpu, fastpath);
}
+static int vt_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+{
+ if (unlikely(is_td_vcpu(vcpu)))
+ return tdx_set_msr(vcpu, msr_info);
+
+ return vmx_set_msr(vcpu, msr_info);
+}
+
+/*
+ * The kvm parameter can be NULL (module initialization, or invocation before
+ * VM creation). Be sure to check the kvm parameter before using it.
+ */
+static bool vt_has_emulated_msr(struct kvm *kvm, u32 index)
+{
+ if (kvm && is_td(kvm))
+ return tdx_has_emulated_msr(index);
+
+ return vmx_has_emulated_msr(kvm, index);
+}
+
+static int vt_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+{
+ if (unlikely(is_td_vcpu(vcpu)))
+ return tdx_get_msr(vcpu, msr_info);
+
+ return vmx_get_msr(vcpu, msr_info);
+}
+
+static void vt_msr_filter_changed(struct kvm_vcpu *vcpu)
+{
+ /*
+ * TDX doesn't allow VMM to configure interception of MSR accesses.
+ * TDX guest requests MSR accesses by calling TDVMCALL. The MSR
+ * filters will be applied when handling the TDVMCALL for RDMSR/WRMSR
+ * if the userspace has set any.
+ */
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_msr_filter_changed(vcpu);
+}
+
#ifdef CONFIG_KVM_SMM
static int vt_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
{
@@ -516,7 +558,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.disable_virtualization_cpu = vt_disable_virtualization_cpu,
.emergency_disable_virtualization_cpu = vmx_emergency_disable_virtualization_cpu,
- .has_emulated_msr = vmx_has_emulated_msr,
+ .has_emulated_msr = vt_has_emulated_msr,
.vm_size = sizeof(struct kvm_vmx),
@@ -535,8 +577,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.update_exception_bitmap = vmx_update_exception_bitmap,
.get_feature_msr = vmx_get_feature_msr,
- .get_msr = vmx_get_msr,
- .set_msr = vmx_set_msr,
+ .get_msr = vt_get_msr,
+ .set_msr = vt_set_msr,
.get_segment_base = vmx_get_segment_base,
.get_segment = vmx_get_segment,
.set_segment = vmx_set_segment,
@@ -643,7 +685,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.apic_init_signal_blocked = vt_apic_init_signal_blocked,
.migrate_timers = vmx_migrate_timers,
- .msr_filter_changed = vmx_msr_filter_changed,
+ .msr_filter_changed = vt_msr_filter_changed,
.complete_emulated_msr = kvm_complete_insn_gp,
.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 30f0ceeed7d2..5b556af0f139 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -2002,6 +2002,73 @@ void tdx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
*error_code = 0;
}
+bool tdx_has_emulated_msr(u32 index)
+{
+ switch (index) {
+ case MSR_IA32_UCODE_REV:
+ case MSR_IA32_ARCH_CAPABILITIES:
+ case MSR_IA32_POWER_CTL:
+ case MSR_IA32_CR_PAT:
+ case MSR_IA32_TSC_DEADLINE:
+ case MSR_IA32_MISC_ENABLE:
+ case MSR_PLATFORM_INFO:
+ case MSR_MISC_FEATURES_ENABLES:
+ case MSR_IA32_APICBASE:
+ case MSR_EFER:
+ case MSR_IA32_MCG_CAP:
+ case MSR_IA32_MCG_STATUS:
+ case MSR_IA32_MCG_CTL:
+ case MSR_IA32_MCG_EXT_CTL:
+ case MSR_IA32_MC0_CTL ... MSR_IA32_MCx_CTL(KVM_MAX_MCE_BANKS) - 1:
+ case MSR_IA32_MC0_CTL2 ... MSR_IA32_MCx_CTL2(KVM_MAX_MCE_BANKS) - 1:
+ /* MSR_IA32_MCx_{CTL, STATUS, ADDR, MISC, CTL2} */
+ case MSR_KVM_POLL_CONTROL:
+ return true;
+ case APIC_BASE_MSR ... APIC_BASE_MSR + 0xff:
+ /*
+ * x2APIC registers that are virtualized by the CPU can't be
+ * emulated, KVM doesn't have access to the virtual APIC page.
+ */
+ switch (index) {
+ case X2APIC_MSR(APIC_TASKPRI):
+ case X2APIC_MSR(APIC_PROCPRI):
+ case X2APIC_MSR(APIC_EOI):
+ case X2APIC_MSR(APIC_ISR) ... X2APIC_MSR(APIC_ISR + APIC_ISR_NR):
+ case X2APIC_MSR(APIC_TMR) ... X2APIC_MSR(APIC_TMR + APIC_ISR_NR):
+ case X2APIC_MSR(APIC_IRR) ... X2APIC_MSR(APIC_IRR + APIC_ISR_NR):
+ return false;
+ default:
+ return true;
+ }
+ default:
+ return false;
+ }
+}
+
+static bool tdx_is_read_only_msr(u32 index)
+{
+ return index == MSR_IA32_APICBASE || index == MSR_EFER;
+}
+
+int tdx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+{
+ if (!tdx_has_emulated_msr(msr->index))
+ return 1;
+
+ return kvm_get_msr_common(vcpu, msr);
+}
+
+int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+{
+ if (tdx_is_read_only_msr(msr->index))
+ return 1;
+
+ if (!tdx_has_emulated_msr(msr->index))
+ return 1;
+
+ return kvm_set_msr_common(vcpu, msr);
+}
+
static int tdx_get_capabilities(struct kvm_tdx_cmd *cmd)
{
const struct tdx_sys_info_td_conf *td_conf = &tdx_sysinfo->td_conf;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index e5d0c8a03566..19f770b0fc81 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -145,6 +145,9 @@ void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
void tdx_inject_nmi(struct kvm_vcpu *vcpu);
void tdx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
u64 *info1, u64 *info2, u32 *intr_info, u32 *error_code);
+bool tdx_has_emulated_msr(u32 index);
+int tdx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr);
+int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr);
int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -188,6 +191,9 @@ static inline void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mo
static inline void tdx_inject_nmi(struct kvm_vcpu *vcpu) {}
static inline void tdx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason, u64 *info1,
u64 *info2, u32 *intr_info, u32 *error_code) {}
+static inline bool tdx_has_emulated_msr(u32 index) { return false; }
+static inline int tdx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) { return 1; }
+static inline int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) { return 1; }
static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
--
2.46.0
* [PATCH v2 09/20] KVM: TDX: Handle TDX PV rdmsr/wrmsr hypercall
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (7 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 08/20] KVM: TDX: Implement callbacks for MSR operations Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 10/20] KVM: TDX: Enable guest access to LMCE related MSRs Binbin Wu
` (10 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
Morph PV RDMSR/WRMSR hypercall to EXIT_REASON_MSR_{READ,WRITE} and
wire up KVM backend functions.
For the complete_emulated_msr() callback, instead of injecting #GP on error,
implement tdx_complete_emulated_msr() to set the return code on error. Also
set the return value on MSR read according to the values of the KVM x86
registers.
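The register marshalling this patch performs reduces to a 64-bit split/join
between the TDVMCALL value register (r13) and the EDX:EAX pair. A minimal
standalone sketch (plain C mirroring the kvm_rax_write()/kvm_rdx_write()
and kvm_read_edx_eax() arithmetic, not the kernel code itself):

```c
#include <assert.h>
#include <stdint.h>

/* WRMSR path: split the 64-bit TDVMCALL value (r13) into EAX/EDX. */
static void split_msr_value(uint64_t r13, uint32_t *eax, uint32_t *edx)
{
	*eax = (uint32_t)(r13 & -1u);   /* low 32 bits -> RAX */
	*edx = (uint32_t)(r13 >> 32);   /* high 32 bits -> RDX */
}

/* RDMSR completion: join EDX:EAX back into one 64-bit return value. */
static uint64_t read_edx_eax(uint32_t eax, uint32_t edx)
{
	return ((uint64_t)edx << 32) | eax;
}
```

The split and join are exact inverses, so a value written by the guest via
PV WRMSR round-trips unchanged through the EDX:EAX convention.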
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
TDX "the rest" v2:
- Morph PV RDMSR/WRMSR hypercall to EXIT_REASON_MSR_{READ,WRITE}. (Sean)
- Rebased to use tdcall_to_vmx_exit_reason().
- Marshall values to the appropriate x86 registers to leverage the existing
kvm_emulate_{rdmsr,wrmsr}(). (Sean)
- Implement complete_emulated_msr() callback for TDX to set return value
and return code to vp_enter_args.
- Update changelog.
TDX "the rest" v1:
- Use TDVMCALL_STATUS prefix for TDX call status codes (Binbin)
---
arch/x86/kvm/vmx/main.c | 10 +++++++++-
arch/x86/kvm/vmx/tdx.c | 24 ++++++++++++++++++++++++
arch/x86/kvm/vmx/tdx.h | 2 ++
3 files changed, 35 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 463de3add3bd..7f3be1b65ce1 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -235,6 +235,14 @@ static void vt_msr_filter_changed(struct kvm_vcpu *vcpu)
vmx_msr_filter_changed(vcpu);
}
+static int vt_complete_emulated_msr(struct kvm_vcpu *vcpu, int err)
+{
+ if (is_td_vcpu(vcpu))
+ return tdx_complete_emulated_msr(vcpu, err);
+
+ return kvm_complete_insn_gp(vcpu, err);
+}
+
#ifdef CONFIG_KVM_SMM
static int vt_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
{
@@ -686,7 +694,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.migrate_timers = vmx_migrate_timers,
.msr_filter_changed = vt_msr_filter_changed,
- .complete_emulated_msr = kvm_complete_insn_gp,
+ .complete_emulated_msr = vt_complete_emulated_msr,
.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 5b556af0f139..85ff6e040cf3 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -878,6 +878,8 @@ static __always_inline u32 tdcall_to_vmx_exit_reason(struct kvm_vcpu *vcpu)
case EXIT_REASON_CPUID:
case EXIT_REASON_HLT:
case EXIT_REASON_IO_INSTRUCTION:
+ case EXIT_REASON_MSR_READ:
+ case EXIT_REASON_MSR_WRITE:
return tdvmcall_leaf(vcpu);
case EXIT_REASON_EPT_VIOLATION:
return EXIT_REASON_EPT_MISCONFIG;
@@ -1880,6 +1882,20 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
return ret;
}
+int tdx_complete_emulated_msr(struct kvm_vcpu *vcpu, int err)
+{
+ if (err) {
+ tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND);
+ return 1;
+ }
+
+ if (vmx_get_exit_reason(vcpu).basic == EXIT_REASON_MSR_READ)
+ tdvmcall_set_return_val(vcpu, kvm_read_edx_eax(vcpu));
+
+ return 1;
+}
+
+
int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
{
struct vcpu_tdx *tdx = to_tdx(vcpu);
@@ -1945,6 +1961,14 @@ int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
return tdx_emulate_vmcall(vcpu);
case EXIT_REASON_IO_INSTRUCTION:
return tdx_emulate_io(vcpu);
+ case EXIT_REASON_MSR_READ:
+ kvm_rcx_write(vcpu, tdx->vp_enter_args.r12);
+ return kvm_emulate_rdmsr(vcpu);
+ case EXIT_REASON_MSR_WRITE:
+ kvm_rcx_write(vcpu, tdx->vp_enter_args.r12);
+ kvm_rax_write(vcpu, tdx->vp_enter_args.r13 & -1u);
+ kvm_rdx_write(vcpu, tdx->vp_enter_args.r13 >> 32);
+ return kvm_emulate_wrmsr(vcpu);
case EXIT_REASON_EPT_MISCONFIG:
return tdx_emulate_mmio(vcpu);
case EXIT_REASON_EPT_VIOLATION:
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index cb9014b7a4f1..8f8070d0f55e 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -171,6 +171,7 @@ static __always_inline void td_##lclass##_clearbit##bits(struct vcpu_tdx *tdx, \
bool tdx_interrupt_allowed(struct kvm_vcpu *vcpu);
+int tdx_complete_emulated_msr(struct kvm_vcpu *vcpu, int err);
TDX_BUILD_TDVPS_ACCESSORS(16, VMCS, vmcs);
TDX_BUILD_TDVPS_ACCESSORS(32, VMCS, vmcs);
@@ -194,6 +195,7 @@ struct vcpu_tdx {
};
static inline bool tdx_interrupt_allowed(struct kvm_vcpu *vcpu) { return false; }
+static inline int tdx_complete_emulated_msr(struct kvm_vcpu *vcpu, int err) { return 0; }
#endif
--
2.46.0
* [PATCH v2 10/20] KVM: TDX: Enable guest access to LMCE related MSRs
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (8 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 09/20] KVM: TDX: Handle TDX PV rdmsr/wrmsr hypercall Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 11/20] KVM: TDX: Handle TDG.VP.VMCALL<GetTdVmCallInfo> hypercall Binbin Wu
` (9 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
Allow TDX guest to configure LMCE (Local Machine Check Event) by handling
MSR IA32_FEAT_CTL and IA32_MCG_EXT_CTL.
MCE and MCA are advertised via CPUID based on the TDX module spec. The
guest kernel can access IA32_FEAT_CTL to check whether LMCE is opted in
by the platform or not. If LMCE is opted in by the platform, the guest
kernel can access IA32_MCG_EXT_CTL to enable/disable LMCE.
Handle MSR IA32_FEAT_CTL and IA32_MCG_EXT_CTL for TDX guests to avoid
failure when a guest accesses them with TDG.VP.VMCALL<MSR> on #VE. E.g.,
Linux guest will treat the failure as a #GP(0).
Userspace VMM may not opt-in LMCE by default, e.g., QEMU disables it by
default, "-cpu lmce=on" is needed in QEMU command line to opt-in it.
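The IA32_FEAT_CTL value the guest reads amounts to composing two bits. A
hedged standalone sketch, assuming the SDM bit positions (bit 0 is the lock
bit, bit 20 is LMCE-enable) and with lmce_opted_in standing in for the
kernel's vcpu->arch.mcg_cap & MCG_LMCE_P check:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define FEAT_CTL_LOCKED        (1ULL << 0)   /* lock bit, always set */
#define FEAT_CTL_LMCE_ENABLED  (1ULL << 20)  /* LMCE opted in by platform */

/* Value a TDX guest reads from IA32_FEAT_CTL in this sketch. */
static uint64_t tdx_feat_ctl_value(bool lmce_opted_in)
{
	uint64_t data = FEAT_CTL_LOCKED;

	if (lmce_opted_in)
		data |= FEAT_CTL_LMCE_ENABLED;
	return data;
}
```

Reporting the lock bit as always set matches the read-only treatment of
IA32_FEAT_CTL in this patch: the guest can observe the LMCE opt-in but
cannot change it.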
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
[binbin: rework changelog]
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- No Change.
TDX "the rest" v1:
- Renamed from "KVM: TDX: Handle MSR IA32_FEAT_CTL MSR and IA32_MCG_EXT_CTL"
to "KVM: TDX: Enable guest access to LMCE related MSRs".
- Update changelog.
- Check that reserved bits are not set when setting MSR_IA32_MCG_EXT_CTL.
---
arch/x86/kvm/vmx/tdx.c | 46 +++++++++++++++++++++++++++++++++---------
1 file changed, 37 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 85ff6e040cf3..76764bf5ba29 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -2039,6 +2039,7 @@ bool tdx_has_emulated_msr(u32 index)
case MSR_MISC_FEATURES_ENABLES:
case MSR_IA32_APICBASE:
case MSR_EFER:
+ case MSR_IA32_FEAT_CTL:
case MSR_IA32_MCG_CAP:
case MSR_IA32_MCG_STATUS:
case MSR_IA32_MCG_CTL:
@@ -2071,26 +2072,53 @@ bool tdx_has_emulated_msr(u32 index)
static bool tdx_is_read_only_msr(u32 index)
{
- return index == MSR_IA32_APICBASE || index == MSR_EFER;
+ return index == MSR_IA32_APICBASE || index == MSR_EFER ||
+ index == MSR_IA32_FEAT_CTL;
}
int tdx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
{
- if (!tdx_has_emulated_msr(msr->index))
- return 1;
+ switch (msr->index) {
+ case MSR_IA32_FEAT_CTL:
+ /*
+ * MCE and MCA are advertised via cpuid. Guest kernel could
+ * check if LMCE is enabled or not.
+ */
+ msr->data = FEAT_CTL_LOCKED;
+ if (vcpu->arch.mcg_cap & MCG_LMCE_P)
+ msr->data |= FEAT_CTL_LMCE_ENABLED;
+ return 0;
+ case MSR_IA32_MCG_EXT_CTL:
+ if (!msr->host_initiated && !(vcpu->arch.mcg_cap & MCG_LMCE_P))
+ return 1;
+ msr->data = vcpu->arch.mcg_ext_ctl;
+ return 0;
+ default:
+ if (!tdx_has_emulated_msr(msr->index))
+ return 1;
- return kvm_get_msr_common(vcpu, msr);
+ return kvm_get_msr_common(vcpu, msr);
+ }
}
int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
{
- if (tdx_is_read_only_msr(msr->index))
- return 1;
+ switch (msr->index) {
+ case MSR_IA32_MCG_EXT_CTL:
+ if ((!msr->host_initiated && !(vcpu->arch.mcg_cap & MCG_LMCE_P)) ||
+ (msr->data & ~MCG_EXT_CTL_LMCE_EN))
+ return 1;
+ vcpu->arch.mcg_ext_ctl = msr->data;
+ return 0;
+ default:
+ if (tdx_is_read_only_msr(msr->index))
+ return 1;
- if (!tdx_has_emulated_msr(msr->index))
- return 1;
+ if (!tdx_has_emulated_msr(msr->index))
+ return 1;
- return kvm_set_msr_common(vcpu, msr);
+ return kvm_set_msr_common(vcpu, msr);
+ }
}
static int tdx_get_capabilities(struct kvm_tdx_cmd *cmd)
--
2.46.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH v2 11/20] KVM: TDX: Handle TDG.VP.VMCALL<GetTdVmCallInfo> hypercall
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (9 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 10/20] KVM: TDX: Enable guest access to LMCE related MSRs Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 12/20] KVM: TDX: Add methods to ignore accesses to CPU state Binbin Wu
` (8 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
Implement the TDG.VP.VMCALL<GetTdVmCallInfo> hypercall. If the input value is
zero, return a success code and zero in the output registers.
The TDG.VP.VMCALL<GetTdVmCallInfo> hypercall is a subleaf of TDG.VP.VMCALL
that enumerates which TDG.VP.VMCALL subleaves are supported. This hypercall
is reserved for future enhancement of the Guest-Host-Communication Interface
(GHCI) specification. GHCI version 344426-001US defines it to require
input R12 to be zero and to return zero in the output registers R11, R12,
R13, and R14, so that the guest TD enumerates no enhancements.
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- Use vp_enter_args directly instead of helpers.
- Skip setting return code as TDVMCALL_STATUS_SUCCESS.
- Skip setting tdx->vp_enter_args.r12 to 0 because it already is 0.
TDX "the rest" v1:
- Use TDVMCALL_STATUS prefix for TDX call status codes (Binbin)
v19:
- rename TDG_VP_VMCALL_GET_TD_VM_CALL_INFO => TDVMCALL_GET_TD_VM_CALL_INFO
---
arch/x86/include/asm/shared/tdx.h | 1 +
arch/x86/kvm/vmx/tdx.c | 16 ++++++++++++++++
2 files changed, 17 insertions(+)
diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h
index f23657350d28..606d93a1cbac 100644
--- a/arch/x86/include/asm/shared/tdx.h
+++ b/arch/x86/include/asm/shared/tdx.h
@@ -67,6 +67,7 @@
#define TD_CTLS_LOCK BIT_ULL(TD_CTLS_LOCK_BIT)
/* TDX hypercall Leaf IDs */
+#define TDVMCALL_GET_TD_VM_CALL_INFO 0x10000
#define TDVMCALL_MAP_GPA 0x10001
#define TDVMCALL_GET_QUOTE 0x10002
#define TDVMCALL_REPORT_FATAL_ERROR 0x10003
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 76764bf5ba29..dbc9fffcbc26 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1422,6 +1422,20 @@ static int tdx_emulate_mmio(struct kvm_vcpu *vcpu)
return 1;
}
+static int tdx_get_td_vm_call_info(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+ if (tdx->vp_enter_args.r12)
+ tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND);
+ else {
+ tdx->vp_enter_args.r11 = 0;
+ tdx->vp_enter_args.r13 = 0;
+ tdx->vp_enter_args.r14 = 0;
+ }
+ return 1;
+}
+
static int handle_tdvmcall(struct kvm_vcpu *vcpu)
{
switch (tdvmcall_leaf(vcpu)) {
@@ -1429,6 +1443,8 @@ static int handle_tdvmcall(struct kvm_vcpu *vcpu)
return tdx_map_gpa(vcpu);
case TDVMCALL_REPORT_FATAL_ERROR:
return tdx_report_fatal_error(vcpu);
+ case TDVMCALL_GET_TD_VM_CALL_INFO:
+ return tdx_get_td_vm_call_info(vcpu);
default:
break;
}
--
2.46.0
* [PATCH v2 12/20] KVM: TDX: Add methods to ignore accesses to CPU state
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (10 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 11/20] KVM: TDX: Handle TDG.VP.VMCALL<GetTdVmCallInfo> hypercall Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 13/20] KVM: TDX: Add method to ignore guest instruction emulation Binbin Wu
` (7 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
TDX protects TDX guest state from the VMM. Implement access methods for TDX
guest state that either ignore the access or return zero. Because these
methods can be called by KVM ioctls to set/get CPU registers, they don't have
KVM_BUG_ON().
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Co-developed-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- Discard the mistakenly introduced change.
https://lore.kernel.org/kvm/67dd44fb-0211-4435-a294-b9f00dc681d8@linux.intel.com
- Drop tdx_cache_reg() (Adrian, Sean)
TDX "the rest" v1:
- Dropped KVM_BUG_ON() in vt_sync_dirty_debug_regs(). (Rick)
Since the KVM_BUG_ON() is removed, change the changelog accordingly.
---
arch/x86/kvm/vmx/main.c | 307 ++++++++++++++++++++++++++++++++++++----
1 file changed, 278 insertions(+), 29 deletions(-)
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 7f3be1b65ce1..b76f39cc56fb 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -335,6 +335,214 @@ static void vt_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
vmx_deliver_interrupt(apic, delivery_mode, trig_mode, vector);
}
+static void vt_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
+{
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_vcpu_after_set_cpuid(vcpu);
+}
+
+static void vt_update_exception_bitmap(struct kvm_vcpu *vcpu)
+{
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_update_exception_bitmap(vcpu);
+}
+
+static u64 vt_get_segment_base(struct kvm_vcpu *vcpu, int seg)
+{
+ if (is_td_vcpu(vcpu))
+ return 0;
+
+ return vmx_get_segment_base(vcpu, seg);
+}
+
+static void vt_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var,
+ int seg)
+{
+ if (is_td_vcpu(vcpu)) {
+ memset(var, 0, sizeof(*var));
+ return;
+ }
+
+ vmx_get_segment(vcpu, var, seg);
+}
+
+static void vt_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var,
+ int seg)
+{
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_set_segment(vcpu, var, seg);
+}
+
+static int vt_get_cpl(struct kvm_vcpu *vcpu)
+{
+ if (is_td_vcpu(vcpu))
+ return 0;
+
+ return vmx_get_cpl(vcpu);
+}
+
+static int vt_get_cpl_no_cache(struct kvm_vcpu *vcpu)
+{
+ if (is_td_vcpu(vcpu))
+ return 0;
+
+ return vmx_get_cpl_no_cache(vcpu);
+}
+
+static void vt_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
+{
+ if (is_td_vcpu(vcpu)) {
+ *db = 0;
+ *l = 0;
+ return;
+ }
+
+ vmx_get_cs_db_l_bits(vcpu, db, l);
+}
+
+static bool vt_is_valid_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
+{
+ if (is_td_vcpu(vcpu))
+ return true;
+
+ return vmx_is_valid_cr0(vcpu, cr0);
+}
+
+static void vt_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
+{
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_set_cr0(vcpu, cr0);
+}
+
+static bool vt_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+{
+ if (is_td_vcpu(vcpu))
+ return true;
+
+ return vmx_is_valid_cr4(vcpu, cr4);
+}
+
+static void vt_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+{
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_set_cr4(vcpu, cr4);
+}
+
+static int vt_set_efer(struct kvm_vcpu *vcpu, u64 efer)
+{
+ if (is_td_vcpu(vcpu))
+ return 0;
+
+ return vmx_set_efer(vcpu, efer);
+}
+
+static void vt_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+{
+ if (is_td_vcpu(vcpu)) {
+ memset(dt, 0, sizeof(*dt));
+ return;
+ }
+
+ vmx_get_idt(vcpu, dt);
+}
+
+static void vt_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+{
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_set_idt(vcpu, dt);
+}
+
+static void vt_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+{
+ if (is_td_vcpu(vcpu)) {
+ memset(dt, 0, sizeof(*dt));
+ return;
+ }
+
+ vmx_get_gdt(vcpu, dt);
+}
+
+static void vt_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+{
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_set_gdt(vcpu, dt);
+}
+
+static void vt_set_dr6(struct kvm_vcpu *vcpu, unsigned long val)
+{
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_set_dr6(vcpu, val);
+}
+
+static void vt_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
+{
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_set_dr7(vcpu, val);
+}
+
+static void vt_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
+{
+ /*
+ * MOV-DR exiting is always cleared for TD guest, even in debug mode.
+ * Thus KVM_DEBUGREG_WONT_EXIT can never be set and it should never
+ * reach here for TD vcpu.
+ */
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_sync_dirty_debug_regs(vcpu);
+}
+
+static void vt_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
+{
+ if (WARN_ON_ONCE(is_td_vcpu(vcpu)))
+ return;
+
+ vmx_cache_reg(vcpu, reg);
+}
+
+static unsigned long vt_get_rflags(struct kvm_vcpu *vcpu)
+{
+ if (is_td_vcpu(vcpu))
+ return 0;
+
+ return vmx_get_rflags(vcpu);
+}
+
+static void vt_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
+{
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_set_rflags(vcpu, rflags);
+}
+
+static bool vt_get_if_flag(struct kvm_vcpu *vcpu)
+{
+ if (is_td_vcpu(vcpu))
+ return false;
+
+ return vmx_get_if_flag(vcpu);
+}
+
static void vt_flush_tlb_all(struct kvm_vcpu *vcpu)
{
if (is_td_vcpu(vcpu)) {
@@ -457,6 +665,14 @@ static void vt_inject_irq(struct kvm_vcpu *vcpu, bool reinjected)
vmx_inject_irq(vcpu, reinjected);
}
+static void vt_inject_exception(struct kvm_vcpu *vcpu)
+{
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_inject_exception(vcpu);
+}
+
static void vt_cancel_injection(struct kvm_vcpu *vcpu)
{
if (is_td_vcpu(vcpu))
@@ -504,6 +720,14 @@ static void vt_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
vmx_get_exit_info(vcpu, reason, info1, info2, intr_info, error_code);
}
+static void vt_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
+{
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_update_cr8_intercept(vcpu, tpr, irr);
+}
+
static void vt_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
{
if (is_td_vcpu(vcpu))
@@ -522,6 +746,30 @@ static void vt_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
vmx_refresh_apicv_exec_ctrl(vcpu);
}
+static void vt_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
+{
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_load_eoi_exitmap(vcpu, eoi_exit_bitmap);
+}
+
+static int vt_set_tss_addr(struct kvm *kvm, unsigned int addr)
+{
+ if (is_td(kvm))
+ return 0;
+
+ return vmx_set_tss_addr(kvm, addr);
+}
+
+static int vt_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
+{
+ if (is_td(kvm))
+ return 0;
+
+ return vmx_set_identity_map_addr(kvm, ident_addr);
+}
+
static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
{
if (!is_td(kvm))
@@ -583,32 +831,33 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.vcpu_load = vt_vcpu_load,
.vcpu_put = vt_vcpu_put,
- .update_exception_bitmap = vmx_update_exception_bitmap,
+ .update_exception_bitmap = vt_update_exception_bitmap,
.get_feature_msr = vmx_get_feature_msr,
.get_msr = vt_get_msr,
.set_msr = vt_set_msr,
- .get_segment_base = vmx_get_segment_base,
- .get_segment = vmx_get_segment,
- .set_segment = vmx_set_segment,
- .get_cpl = vmx_get_cpl,
- .get_cpl_no_cache = vmx_get_cpl_no_cache,
- .get_cs_db_l_bits = vmx_get_cs_db_l_bits,
- .is_valid_cr0 = vmx_is_valid_cr0,
- .set_cr0 = vmx_set_cr0,
- .is_valid_cr4 = vmx_is_valid_cr4,
- .set_cr4 = vmx_set_cr4,
- .set_efer = vmx_set_efer,
- .get_idt = vmx_get_idt,
- .set_idt = vmx_set_idt,
- .get_gdt = vmx_get_gdt,
- .set_gdt = vmx_set_gdt,
- .set_dr6 = vmx_set_dr6,
- .set_dr7 = vmx_set_dr7,
- .sync_dirty_debug_regs = vmx_sync_dirty_debug_regs,
- .cache_reg = vmx_cache_reg,
- .get_rflags = vmx_get_rflags,
- .set_rflags = vmx_set_rflags,
- .get_if_flag = vmx_get_if_flag,
+
+ .get_segment_base = vt_get_segment_base,
+ .get_segment = vt_get_segment,
+ .set_segment = vt_set_segment,
+ .get_cpl = vt_get_cpl,
+ .get_cpl_no_cache = vt_get_cpl_no_cache,
+ .get_cs_db_l_bits = vt_get_cs_db_l_bits,
+ .is_valid_cr0 = vt_is_valid_cr0,
+ .set_cr0 = vt_set_cr0,
+ .is_valid_cr4 = vt_is_valid_cr4,
+ .set_cr4 = vt_set_cr4,
+ .set_efer = vt_set_efer,
+ .get_idt = vt_get_idt,
+ .set_idt = vt_set_idt,
+ .get_gdt = vt_get_gdt,
+ .set_gdt = vt_set_gdt,
+ .set_dr6 = vt_set_dr6,
+ .set_dr7 = vt_set_dr7,
+ .sync_dirty_debug_regs = vt_sync_dirty_debug_regs,
+ .cache_reg = vt_cache_reg,
+ .get_rflags = vt_get_rflags,
+ .set_rflags = vt_set_rflags,
+ .get_if_flag = vt_get_if_flag,
.flush_tlb_all = vt_flush_tlb_all,
.flush_tlb_current = vt_flush_tlb_current,
@@ -625,7 +874,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.patch_hypercall = vmx_patch_hypercall,
.inject_irq = vt_inject_irq,
.inject_nmi = vt_inject_nmi,
- .inject_exception = vmx_inject_exception,
+ .inject_exception = vt_inject_exception,
.cancel_injection = vt_cancel_injection,
.interrupt_allowed = vt_interrupt_allowed,
.nmi_allowed = vt_nmi_allowed,
@@ -633,13 +882,13 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.set_nmi_mask = vt_set_nmi_mask,
.enable_nmi_window = vt_enable_nmi_window,
.enable_irq_window = vt_enable_irq_window,
- .update_cr8_intercept = vmx_update_cr8_intercept,
+ .update_cr8_intercept = vt_update_cr8_intercept,
.x2apic_icr_is_split = false,
.set_virtual_apic_mode = vt_set_virtual_apic_mode,
.set_apic_access_page_addr = vt_set_apic_access_page_addr,
.refresh_apicv_exec_ctrl = vt_refresh_apicv_exec_ctrl,
- .load_eoi_exitmap = vmx_load_eoi_exitmap,
+ .load_eoi_exitmap = vt_load_eoi_exitmap,
.apicv_pre_state_restore = vt_apicv_pre_state_restore,
.required_apicv_inhibits = VMX_REQUIRED_APICV_INHIBITS,
.hwapic_isr_update = vt_hwapic_isr_update,
@@ -647,14 +896,14 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.deliver_interrupt = vt_deliver_interrupt,
.dy_apicv_has_pending_interrupt = pi_has_pending_interrupt,
- .set_tss_addr = vmx_set_tss_addr,
- .set_identity_map_addr = vmx_set_identity_map_addr,
+ .set_tss_addr = vt_set_tss_addr,
+ .set_identity_map_addr = vt_set_identity_map_addr,
.get_mt_mask = vmx_get_mt_mask,
.get_exit_info = vt_get_exit_info,
.get_entry_info = vt_get_entry_info,
- .vcpu_after_set_cpuid = vmx_vcpu_after_set_cpuid,
+ .vcpu_after_set_cpuid = vt_vcpu_after_set_cpuid,
.has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,
--
2.46.0
* [PATCH v2 13/20] KVM: TDX: Add method to ignore guest instruction emulation
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (11 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 12/20] KVM: TDX: Add methods to ignore accesses to CPU state Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 14/20] KVM: TDX: Add methods to ignore VMX preemption timer Binbin Wu
` (6 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
Skip instruction emulation and let the TDX guest retry for MMIO emulation
after installing the MMIO SPTE with suppress #VE bit cleared.
TDX protects TDX guest state from the VMM, so instructions in guest memory
cannot be emulated. MMIO emulation is the only case that triggers the
instruction emulation code path for a TDX guest.
The MMIO emulation handling flow is as follows:
- The TDX guest issues a vMMIO instruction. (The GPA must be shared and is
not covered by KVM memory slot.)
- The default SPTE entry for shared-EPT by KVM has suppress #VE bit set. So
EPT violation causes TD exit to KVM.
- Trigger KVM page fault handler and install a new SPTE with suppress #VE
bit cleared.
- Skip instruction emulation and return X86EMUL_RETRY_INSTR to let the vCPU
retry.
- TDX guest re-executes the vMMIO instruction.
- TDX guest gets #VE because KVM has cleared the #VE suppress bit.
- TDX guest #VE handler converts the MMIO access into TDG.VP.VMCALL<MMIO>.
Return X86EMUL_RETRY_INSTR from the callback check_emulate_instruction() for
TDX guests to retry the MMIO instruction. Also, instruction emulation
handling will be skipped, so the callback check_intercept() will never
be called for a TDX guest.
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- No change.
TDX "the rest" v1:
- Dropped vt_check_intercept().
- Add a comment in vt_check_emulate_instruction().
- Update the changelog.
---
arch/x86/kvm/vmx/main.c | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index b76f39cc56fb..035c3ed263b7 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -278,6 +278,22 @@ static void vt_enable_smi_window(struct kvm_vcpu *vcpu)
}
#endif
+static int vt_check_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type,
+ void *insn, int insn_len)
+{
+ /*
+ * For TDX, this can only be triggered for MMIO emulation. Let the
+ * guest retry after installing the SPTE with suppress #VE bit cleared,
+ * so that the guest will receive #VE when retry. The guest is expected
+ * to call TDG.VP.VMCALL<MMIO> to request VMM to do MMIO emulation on
+ * #VE.
+ */
+ if (is_td_vcpu(vcpu))
+ return X86EMUL_RETRY_INSTR;
+
+ return vmx_check_emulate_instruction(vcpu, emul_type, insn, insn_len);
+}
+
static bool vt_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
{
/*
@@ -938,7 +954,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.enable_smi_window = vt_enable_smi_window,
#endif
- .check_emulate_instruction = vmx_check_emulate_instruction,
+ .check_emulate_instruction = vt_check_emulate_instruction,
.apic_init_signal_blocked = vt_apic_init_signal_blocked,
.migrate_timers = vmx_migrate_timers,
--
2.46.0
* [PATCH v2 14/20] KVM: TDX: Add methods to ignore VMX preemption timer
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (12 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 13/20] KVM: TDX: Add method to ignore guest instruction emulation Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 15/20] KVM: TDX: Add methods to ignore accesses to TSC Binbin Wu
` (5 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
TDX doesn't support the VMX-preemption timer. Implement access methods for
the VMM to ignore the VMX-preemption timer.
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- No change.
TDX "the rest" v1:
- Dropped KVM_BUG_ON() in vt_cancel_hv_timer(). (Rick)
---
arch/x86/kvm/vmx/main.c | 25 +++++++++++++++++++++++--
1 file changed, 23 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 035c3ed263b7..75e7fef7914e 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -786,6 +786,27 @@ static int vt_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
return vmx_set_identity_map_addr(kvm, ident_addr);
}
+#ifdef CONFIG_X86_64
+static int vt_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
+ bool *expired)
+{
+ /* VMX-preemption timer isn't available for TDX. */
+ if (is_td_vcpu(vcpu))
+ return -EINVAL;
+
+ return vmx_set_hv_timer(vcpu, guest_deadline_tsc, expired);
+}
+
+static void vt_cancel_hv_timer(struct kvm_vcpu *vcpu)
+{
+ /* VMX-preemption timer can't be set. See vt_set_hv_timer(). */
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_cancel_hv_timer(vcpu);
+}
+#endif
+
static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
{
if (!is_td(kvm))
@@ -941,8 +962,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.pi_start_assignment = vmx_pi_start_assignment,
#ifdef CONFIG_X86_64
- .set_hv_timer = vmx_set_hv_timer,
- .cancel_hv_timer = vmx_cancel_hv_timer,
+ .set_hv_timer = vt_set_hv_timer,
+ .cancel_hv_timer = vt_cancel_hv_timer,
#endif
.setup_mce = vmx_setup_mce,
--
2.46.0
* [PATCH v2 15/20] KVM: TDX: Add methods to ignore accesses to TSC
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (13 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 14/20] KVM: TDX: Add methods to ignore VMX preemption timer Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 16/20] KVM: TDX: Ignore setting up mce Binbin Wu
` (4 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
TDX protects the TDX guest's TSC state from the VMM. Implement access
methods that ignore accesses to the guest TSC.
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- No change.
TDX "the rest" v1:
- Dropped KVM_BUG_ON() in vt_get_l2_tsc_offset(). (Rick)
---
arch/x86/kvm/vmx/main.c | 44 +++++++++++++++++++++++++++++++++++++----
1 file changed, 40 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 75e7fef7914e..554975f053ff 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -786,6 +786,42 @@ static int vt_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
return vmx_set_identity_map_addr(kvm, ident_addr);
}
+static u64 vt_get_l2_tsc_offset(struct kvm_vcpu *vcpu)
+{
+ /* TDX doesn't support L2 guest at the moment. */
+ if (is_td_vcpu(vcpu))
+ return 0;
+
+ return vmx_get_l2_tsc_offset(vcpu);
+}
+
+static u64 vt_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu)
+{
+ /* TDX doesn't support L2 guest at the moment. */
+ if (is_td_vcpu(vcpu))
+ return 0;
+
+ return vmx_get_l2_tsc_multiplier(vcpu);
+}
+
+static void vt_write_tsc_offset(struct kvm_vcpu *vcpu)
+{
+ /* In TDX, tsc offset can't be changed. */
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_write_tsc_offset(vcpu);
+}
+
+static void vt_write_tsc_multiplier(struct kvm_vcpu *vcpu)
+{
+ /* In TDX, tsc multiplier can't be changed. */
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_write_tsc_multiplier(vcpu);
+}
+
#ifdef CONFIG_X86_64
static int vt_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
bool *expired)
@@ -944,10 +980,10 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,
- .get_l2_tsc_offset = vmx_get_l2_tsc_offset,
- .get_l2_tsc_multiplier = vmx_get_l2_tsc_multiplier,
- .write_tsc_offset = vmx_write_tsc_offset,
- .write_tsc_multiplier = vmx_write_tsc_multiplier,
+ .get_l2_tsc_offset = vt_get_l2_tsc_offset,
+ .get_l2_tsc_multiplier = vt_get_l2_tsc_multiplier,
+ .write_tsc_offset = vt_write_tsc_offset,
+ .write_tsc_multiplier = vt_write_tsc_multiplier,
.load_mmu_pgd = vt_load_mmu_pgd,
--
2.46.0
* [PATCH v2 16/20] KVM: TDX: Ignore setting up mce
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (14 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 15/20] KVM: TDX: Add methods to ignore accesses to TSC Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 17/20] KVM: TDX: Add a method to ignore hypercall patching Binbin Wu
` (3 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
The vmx_setup_mce() function is VMX specific and cannot be used for TDX.
Add a vt stub to ignore setting up MCE for TDX.
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- No change.
---
arch/x86/kvm/vmx/main.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 554975f053ff..d73ea9ce750d 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -843,6 +843,14 @@ static void vt_cancel_hv_timer(struct kvm_vcpu *vcpu)
}
#endif
+static void vt_setup_mce(struct kvm_vcpu *vcpu)
+{
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_setup_mce(vcpu);
+}
+
static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
{
if (!is_td(kvm))
@@ -1002,7 +1010,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.cancel_hv_timer = vt_cancel_hv_timer,
#endif
- .setup_mce = vmx_setup_mce,
+ .setup_mce = vt_setup_mce,
#ifdef CONFIG_KVM_SMM
.smi_allowed = vt_smi_allowed,
--
2.46.0
* [PATCH v2 17/20] KVM: TDX: Add a method to ignore hypercall patching
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (15 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 16/20] KVM: TDX: Ignore setting up mce Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 18/20] KVM: TDX: Enable guest access to MTRR MSRs Binbin Wu
` (2 subsequent siblings)
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
Because guest TD memory is protected, it isn't possible for the VMM to patch
the guest binary with the hypercall instruction. Add a method to ignore
hypercall patching. Note: the guest TD kernel needs to be modified to use
TDG.VP.VMCALL for hypercalls.
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- No change.
TDX "the rest" v1:
- Renamed from
"KVM: TDX: Add a method to ignore for TDX to ignore hypercall patch"
to "KVM: TDX: Add a method to ignore hypercall patching".
- Dropped KVM_BUG_ON() in vt_patch_hypercall(). (Rick)
- Remove "with a warning" from "Add a method to ignore hypercall
patching with a warning." in changelog to reflect code change.
---
arch/x86/kvm/vmx/main.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index d73ea9ce750d..fa8b5f609666 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -673,6 +673,19 @@ static u32 vt_get_interrupt_shadow(struct kvm_vcpu *vcpu)
return vmx_get_interrupt_shadow(vcpu);
}
+static void vt_patch_hypercall(struct kvm_vcpu *vcpu,
+ unsigned char *hypercall)
+{
+ /*
+ * Because guest memory is protected, guest can't be patched. TD kernel
+ * is modified to use TDG.VP.VMCALL for hypercall.
+ */
+ if (is_td_vcpu(vcpu))
+ return;
+
+ vmx_patch_hypercall(vcpu, hypercall);
+}
+
static void vt_inject_irq(struct kvm_vcpu *vcpu, bool reinjected)
{
if (is_td_vcpu(vcpu))
@@ -952,7 +965,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.update_emulated_instruction = vmx_update_emulated_instruction,
.set_interrupt_shadow = vt_set_interrupt_shadow,
.get_interrupt_shadow = vt_get_interrupt_shadow,
- .patch_hypercall = vmx_patch_hypercall,
+ .patch_hypercall = vt_patch_hypercall,
.inject_irq = vt_inject_irq,
.inject_nmi = vt_inject_nmi,
.inject_exception = vt_inject_exception,
--
2.46.0
* [PATCH v2 18/20] KVM: TDX: Enable guest access to MTRR MSRs
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (16 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 17/20] KVM: TDX: Add a method to ignore hypercall patching Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 19/20] KVM: TDX: Make TDX VM type supported Binbin Wu
2025-02-27 1:20 ` [PATCH v2 20/20] Documentation/virt/kvm: Document on Trust Domain Extensions (TDX) Binbin Wu
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
Allow TDX guests to access MTRR MSRs just as KVM does for normal VMs, i.e.,
KVM emulates accesses to MTRR MSRs but doesn't virtualize guest MTRR
memory types.
The TDX module exposes the MTRR feature to TDX guests unconditionally. KVM
needs to support MTRR MSR accesses for TDX guests to match the architectural
behavior.
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- Add the MTRR access support back, but drop the special handling for
TDX guests, just align with what KVM does for normal VMs.
---
arch/x86/kvm/vmx/tdx.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index dbc9fffcbc26..5b957e5e503d 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -2049,6 +2049,9 @@ bool tdx_has_emulated_msr(u32 index)
case MSR_IA32_ARCH_CAPABILITIES:
case MSR_IA32_POWER_CTL:
case MSR_IA32_CR_PAT:
+ case MSR_MTRRcap:
+ case MTRRphysBase_MSR(0) ... MSR_MTRRfix4K_F8000:
+ case MSR_MTRRdefType:
case MSR_IA32_TSC_DEADLINE:
case MSR_IA32_MISC_ENABLE:
case MSR_PLATFORM_INFO:
--
2.46.0
* [PATCH v2 19/20] KVM: TDX: Make TDX VM type supported
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (17 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 18/20] KVM: TDX: Enable guest access to MTRR MSRs Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
2025-02-27 1:20 ` [PATCH v2 20/20] Documentation/virt/kvm: Document on Trust Domain Extensions (TDX) Binbin Wu
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
Now that all the necessary code for TDX is in place, it's ready to run a TDX
guest. Advertise the VM type KVM_X86_TDX_VM so that a user space
VMM like QEMU can start to use it.
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- No change.
TDX "the rest" v1:
- Move down to the end of patch series.
---
arch/x86/kvm/vmx/main.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index fa8b5f609666..320c96e1e80a 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -1092,6 +1092,7 @@ static int __init vt_init(void)
vcpu_align = max_t(unsigned, vcpu_align,
__alignof__(struct vcpu_tdx));
kvm_caps.supported_quirks &= ~KVM_X86_QUIRK_EPT_IGNORE_GUEST_PAT;
+ kvm_caps.supported_vm_types |= BIT(KVM_X86_TDX_VM);
}
/*
--
2.46.0
* [PATCH v2 20/20] Documentation/virt/kvm: Document on Trust Domain Extensions (TDX)
2025-02-27 1:20 [PATCH v2 00/20] KVM: TDX: TDX "the rest" part Binbin Wu
` (18 preceding siblings ...)
2025-02-27 1:20 ` [PATCH v2 19/20] KVM: TDX: Make TDX VM type supported Binbin Wu
@ 2025-02-27 1:20 ` Binbin Wu
19 siblings, 0 replies; 21+ messages in thread
From: Binbin Wu @ 2025-02-27 1:20 UTC (permalink / raw)
To: pbonzini, seanjc, kvm
Cc: rick.p.edgecombe, kai.huang, adrian.hunter, reinette.chatre,
xiaoyao.li, tony.lindgren, isaku.yamahata, yan.y.zhao, chao.gao,
linux-kernel, binbin.wu
From: Isaku Yamahata <isaku.yamahata@intel.com>
Add documentation for Intel Trust Domain Extensions (TDX) support.
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
TDX "the rest" v2:
- Update for configurable CPUID comment (Tony)
- Add the documentation for KVM_TDX_GET_CPUID
- Add missing white space. (Kai)
- Consolidate description of KVM_MEMORY_ENCRYPT_OP for SEV and TDX. (Kai)
- Update the "Overview" part. (Kai)
- Sync description of struct kvm_tdx_cmd with source code. (Kai)
- Update description of KVM_TDX_CAPABILITIES. (Xiaoyao)
- Use hw_error instead of error and remove the description of "unused"
for TDX specific sub ioctl commands. (Kai)
- Add description for return values for TDX specific sub-ioctl() commands.
(Kai)
- Replace "SEAM call" with "SEAMCALL", and replace "sub ioctl" with
"sub-ioctl()" for consistency. (Kai)
- Remove the detailed SEAMCALLs when describing TDX specific sub-ioctl()
commands. (Kai, Xiaoyao)
- Update description for KVM_TDX_INIT_VM, KVM_TDX_INIT_VCPU, and
KVM_TDX_INIT_MEM_REGION. (Xiaoyao)
- Update "KVM TDX creation flow". (Kai, Xiaoyao)
- Fix typos in enum/struct comments (Xiaoyao)
- Remove design section (Kai)
TDX "the rest" v1:
- Updates to match code changes (Tony)
---
Documentation/virt/kvm/api.rst | 13 +-
Documentation/virt/kvm/x86/index.rst | 1 +
Documentation/virt/kvm/x86/intel-tdx.rst | 255 +++++++++++++++++++++++
3 files changed, 265 insertions(+), 4 deletions(-)
create mode 100644 Documentation/virt/kvm/x86/intel-tdx.rst
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 641d3537a38d..7e155977a8ed 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -1407,6 +1407,9 @@ the memory region are automatically reflected into the guest. For example, an
mmap() that affects the region will be made visible immediately. Another
example is madvise(MADV_DROP).
+For a TDX guest, deleting/moving a memory region loses guest memory contents.
+Read-only regions aren't supported. Only as-id 0 is supported.
+
Note: On arm64, a write generated by the page-table walker (to update
the Access and Dirty flags, for example) never results in a
KVM_EXIT_MMIO exit when the slot has the KVM_MEM_READONLY flag. This
@@ -4764,7 +4767,7 @@ H_GET_CPU_CHARACTERISTICS hypercall.
:Capability: basic
:Architectures: x86
-:Type: vm
+:Type: vm ioctl, vcpu ioctl
:Parameters: an opaque platform specific structure (in/out)
:Returns: 0 on success; -1 on error
@@ -4772,9 +4775,11 @@ If the platform supports creating encrypted VMs then this ioctl can be used
for issuing platform-specific memory encryption commands to manage those
encrypted VMs.
-Currently, this ioctl is used for issuing Secure Encrypted Virtualization
-(SEV) commands on AMD Processors. The SEV commands are defined in
-Documentation/virt/kvm/x86/amd-memory-encryption.rst.
+Currently, this ioctl is used for issuing both Secure Encrypted Virtualization
+(SEV) commands on AMD Processors and Trust Domain Extensions (TDX) commands
+on Intel Processors. The detailed commands are defined in
+Documentation/virt/kvm/x86/amd-memory-encryption.rst and
+Documentation/virt/kvm/x86/intel-tdx.rst.
4.111 KVM_MEMORY_ENCRYPT_REG_REGION
-----------------------------------
diff --git a/Documentation/virt/kvm/x86/index.rst b/Documentation/virt/kvm/x86/index.rst
index 9ece6b8dc817..851e99174762 100644
--- a/Documentation/virt/kvm/x86/index.rst
+++ b/Documentation/virt/kvm/x86/index.rst
@@ -11,6 +11,7 @@ KVM for x86 systems
cpuid
errata
hypercalls
+ intel-tdx
mmu
msr
nested-vmx
diff --git a/Documentation/virt/kvm/x86/intel-tdx.rst b/Documentation/virt/kvm/x86/intel-tdx.rst
new file mode 100644
index 000000000000..de41d4c01e5c
--- /dev/null
+++ b/Documentation/virt/kvm/x86/intel-tdx.rst
@@ -0,0 +1,255 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===================================
+Intel Trust Domain Extensions (TDX)
+===================================
+
+Overview
+========
+Intel's Trust Domain Extensions (TDX) protect confidential guest VMs from the
+host and from physical attacks. A CPU-attested software module called 'the TDX
+module' runs inside a new CPU-isolated range to provide the functionality to
+manage and run protected VMs, a.k.a. TDX guests or TDs.
+
+Please refer to [1] for the whitepaper, specifications and other resources.
+
+This documentation describes TDX-specific KVM ABIs. The TDX module needs to be
+initialized before it can be used by KVM to run any TDX guests. The host
+core kernel provides the support for initializing the TDX module, which is
+described in Documentation/arch/x86/tdx.rst.
+
+API description
+===============
+
+KVM_MEMORY_ENCRYPT_OP
+---------------------
+:Type: vm ioctl, vcpu ioctl
+
+For TDX operations, KVM_MEMORY_ENCRYPT_OP is repurposed to be a generic
+ioctl with TDX-specific sub-ioctl() commands.
+
+::
+
+ /* Trust Domain Extensions sub-ioctl() commands. */
+ enum kvm_tdx_cmd_id {
+ KVM_TDX_CAPABILITIES = 0,
+ KVM_TDX_INIT_VM,
+ KVM_TDX_INIT_VCPU,
+ KVM_TDX_INIT_MEM_REGION,
+ KVM_TDX_FINALIZE_VM,
+ KVM_TDX_GET_CPUID,
+
+ KVM_TDX_CMD_NR_MAX,
+ };
+
+ struct kvm_tdx_cmd {
+ /* enum kvm_tdx_cmd_id */
+ __u32 id;
+ /* flags for sub-command. If sub-command doesn't use this, set zero. */
+ __u32 flags;
+ /*
+ * data for each sub-command. An immediate or a pointer to the actual
+ * data in process virtual address. If sub-command doesn't use it,
+ * set zero.
+ */
+ __u64 data;
+ /*
+ * Auxiliary error code. The sub-command may return TDX SEAMCALL
+ * status code in addition to -Exxx.
+ */
+ __u64 hw_error;
+ };
+
+KVM_TDX_CAPABILITIES
+--------------------
+:Type: vm ioctl
+:Returns: 0 on success, <0 on error
+
+Return the TDX capabilities that the current KVM supports with the specific TDX
+module loaded in the system. It reports which features/capabilities are allowed
+to be configured for a TDX guest.
+
+- id: KVM_TDX_CAPABILITIES
+- flags: must be 0
+- data: pointer to struct kvm_tdx_capabilities
+- hw_error: must be 0
+
+::
+
+ struct kvm_tdx_capabilities {
+ __u64 supported_attrs;
+ __u64 supported_xfam;
+ __u64 reserved[254];
+
+ /* Configurable CPUID bits for userspace */
+ struct kvm_cpuid2 cpuid;
+ };
+
+
+KVM_TDX_INIT_VM
+---------------
+:Type: vm ioctl
+:Returns: 0 on success, <0 on error
+
+Perform TDX specific VM initialization. This needs to be called after
+KVM_CREATE_VM and before creating any VCPUs.
+
+- id: KVM_TDX_INIT_VM
+- flags: must be 0
+- data: pointer to struct kvm_tdx_init_vm
+- hw_error: must be 0
+
+::
+
+ struct kvm_tdx_init_vm {
+ __u64 attributes;
+ __u64 xfam;
+ __u64 mrconfigid[6]; /* sha384 digest */
+ __u64 mrowner[6]; /* sha384 digest */
+ __u64 mrownerconfig[6]; /* sha384 digest */
+
+ /* The total space for TD_PARAMS before the CPUIDs is 256 bytes */
+ __u64 reserved[12];
+
+ /*
+ * Call KVM_TDX_INIT_VM before vcpu creation, thus before
+ * KVM_SET_CPUID2.
+ * This configuration supersedes KVM_SET_CPUID2 for vCPUs because the
+ * TDX module directly virtualizes those CPUIDs without the VMM. The user
+ * space VMM, e.g. QEMU, should make KVM_SET_CPUID2 consistent with
+ * those values. If it doesn't, KVM may have a wrong idea of the guest's
+ * CPUIDs, and KVM may wrongly emulate CPUIDs or MSRs that the TDX
+ * module doesn't virtualize.
+ */
+ struct kvm_cpuid2 cpuid;
+ };
+
+
+KVM_TDX_INIT_VCPU
+-----------------
+:Type: vcpu ioctl
+:Returns: 0 on success, <0 on error
+
+Perform TDX specific VCPU initialization.
+
+- id: KVM_TDX_INIT_VCPU
+- flags: must be 0
+- data: initial value of the guest TD VCPU RCX
+- hw_error: must be 0
+
+KVM_TDX_INIT_MEM_REGION
+-----------------------
+:Type: vcpu ioctl
+:Returns: 0 on success, <0 on error
+
+Initialize @nr_pages of TDX guest private memory starting from @gpa with
+userspace-provided data from @source_addr.
+
+Note, before calling this sub-command, the memory attribute of the range
+[gpa, gpa + nr_pages] needs to be private. Userspace can use
+KVM_SET_MEMORY_ATTRIBUTES to set the attribute.
+
+If the KVM_TDX_MEASURE_MEMORY_REGION flag is specified, it also extends the
+measurement.
+
+- id: KVM_TDX_INIT_MEM_REGION
+- flags: currently only KVM_TDX_MEASURE_MEMORY_REGION is defined
+- data: pointer to struct kvm_tdx_init_mem_region
+- hw_error: must be 0
+
+::
+
+ #define KVM_TDX_MEASURE_MEMORY_REGION (1UL << 0)
+
+ struct kvm_tdx_init_mem_region {
+ __u64 source_addr;
+ __u64 gpa;
+ __u64 nr_pages;
+ };
+
+
+KVM_TDX_FINALIZE_VM
+-------------------
+:Type: vm ioctl
+:Returns: 0 on success, <0 on error
+
+Complete measurement of the initial TD contents and mark it ready to run.
+
+- id: KVM_TDX_FINALIZE_VM
+- flags: must be 0
+- data: must be 0
+- hw_error: must be 0
+
+
+KVM_TDX_GET_CPUID
+-----------------
+:Type: vcpu ioctl
+:Returns: 0 on success, <0 on error
+
+Get the CPUID values that the TDX module virtualizes for the TD guest.
+When it returns -E2BIG, userspace should allocate a larger buffer and
+retry. The minimum buffer size is updated in the nent field of the
+struct kvm_cpuid2.
+
+- id: KVM_TDX_GET_CPUID
+- flags: must be 0
+- data: pointer to struct kvm_cpuid2 (in/out)
+- hw_error: must be 0 (out)
+
+::
+
+ struct kvm_cpuid2 {
+ __u32 nent;
+ __u32 padding;
+ struct kvm_cpuid_entry2 entries[0];
+ };
+
+ struct kvm_cpuid_entry2 {
+ __u32 function;
+ __u32 index;
+ __u32 flags;
+ __u32 eax;
+ __u32 ebx;
+ __u32 ecx;
+ __u32 edx;
+ __u32 padding[3];
+ };
+
+KVM TDX creation flow
+=====================
+In addition to the standard KVM flow, new TDX ioctls need to be called. The
+control flow is as follows:
+
+#. Check system wide capability
+
 * KVM_CAP_VM_TYPES: Check that VM types are supported and that
 KVM_X86_TDX_VM is among the supported types.
+
+#. Create VM
+
+ * KVM_CREATE_VM
+ * KVM_TDX_CAPABILITIES: Query TDX capabilities for creating TDX guests.
 * KVM_CHECK_EXTENSION(KVM_CAP_MAX_VCPUS): Query the maximum number of vCPUs
 the TD can support at VM level (TDX has its own limit on this).
 * KVM_SET_TSC_KHZ: Configure the TD's TSC frequency if a TSC frequency
 different from the host's is desired. This is optional.
+ * KVM_TDX_INIT_VM: Pass TDX specific VM parameters.
+
+#. Create VCPU
+
+ * KVM_CREATE_VCPU
+ * KVM_TDX_INIT_VCPU: Pass TDX specific VCPU parameters.
+ * KVM_SET_CPUID2: Configure TD's CPUIDs.
+ * KVM_SET_MSRS: Configure TD's MSRs.
+
+#. Initialize initial guest memory
+
+ * Prepare content of initial guest memory.
+ * KVM_TDX_INIT_MEM_REGION: Add initial guest memory.
+ * KVM_TDX_FINALIZE_VM: Finalize the measurement of the TDX guest.
+
+#. Run VCPU
+
+References
+==========
+
+.. [1] https://www.intel.com/content/www/us/en/developer/tools/trust-domain-extensions/documentation.html
--
2.46.0