From: Paolo Bonzini <pbonzini@redhat.com>
To: isaku.yamahata@intel.com, kvm@vger.kernel.org,
linux-kernel@vger.kernel.org
Cc: isaku.yamahata@gmail.com, Jim Mattson <jmattson@google.com>,
erdemaktas@google.com, Connor Kuehl <ckuehl@redhat.com>,
Sean Christopherson <seanjc@google.com>
Subject: Re: [RFC PATCH v5 001/104] KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX
Date: Sun, 13 Mar 2022 14:45:59 +0100
Message-ID: <5700cdf1-906b-b6c1-ce6f-f21aadb87edd@redhat.com>
In-Reply-To: <62f4489b91fe88bf43737a44b548b9bd704111fc.1646422845.git.isaku.yamahata@intel.com>
On 3/4/22 20:48, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
>
> KVM accesses the Virtual Machine Control Structure (VMCS) with VMX instructions
> to operate on a VM. TDX instead defines its own data structures and TDX SEAMCALL
> APIs for the VMM to operate on a Trust Domain (TD).
>
> Trust Domain Virtual Processor State (TDVPS) is the root control structure
> of a TD VCPU. It helps the TDX module control the operation of the VCPU,
> and holds the VCPU state while the VCPU is not running. TDVPS is opaque to
> software and DMA access, accessible only by using the TDX module interface
> functions (such as TDH.VP.RD, TDH.VP.WR, ...). TDVPS includes the TD VMCS and
> TD VMCS auxiliary structures, such as the virtual APIC page, virtualization
> exception information, etc. TDVPS is composed of Trust Domain Virtual
> Processor Root (TDVPR), which is the root page of TDVPS, and Trust Domain
> Virtual Processor eXtension (TDVPX) pages, which extend TDVPR to provide
> enough physical space for the logical TDVPS structure.
>
> There is also a new structure, the Trust Domain Control Structure (TDCS),
> which is the main control structure of a guest TD and is encrypted using the
> guest TD's ephemeral private key. At a high level, TDCS holds information for
> controlling TD operation as a whole: execution controls, EPTP, MSR bitmaps, etc. KVM
> needs to set it up. Note that MSR bitmaps are held as part of TDCS (unlike
> VMX) because they are meant to have the same value for all VCPUs of the
> same TD. TDCS is a multi-page logical structure composed of multiple Trust
> Domain Control Extension (TDCX) physical pages. Trust Domain Root (TDR) is
> the root control structure of a guest TD and is encrypted using the TDX
> global private key. It holds a minimal set of state variables that enable
> guest TD control even during times when the TD's private key is not known,
> or when the TD's key management state does not permit access to memory
> encrypted using the TD's private key.
>
> The following shows the relationship between those structures.
>
> TDR --> TDCS                  per-TD
>  |        \--> TDCX
>  \
>   \--> TDVPS                  per-TD VCPU
>          \--> TDVPR and TDVPX
>
> The existing global struct kvm_x86_ops already defines an interface which
> fits with TDX. But kvm_x86_ops is a system-wide structure, not a per-VM one.
> To allow VMX to coexist with TDs, the kvm_x86_ops callbacks will have wrappers
> of the form "if (tdx) tdx_op() else vmx_op()" to switch between VMX and TDX at
> run time.
>
> To keep the runtime switch, the VMX implementation, and the TDX
> implementation separate, add main.c and move the vmx_x86_ops hooks out to it,
> in preparation for adding TDX, which can coexist with VMX, i.e. KVM can run
> both VMs and TDs. Use 'vt' for the naming scheme, as a nod to VT-x and as a
> concatenation of VmxTdx.
>
> The current code looks as follows.
>   In vmx.c
>     static vmx_op() { ... }
>     static struct kvm_x86_ops vmx_x86_ops = {
>             .op = vmx_op,
>     initialization code
>
> The eventually converted code will look like
>   In vmx.c, keep the VMX operations.
>     vmx_op() { ... }
>     VMX initialization
>   In tdx.c, define the TDX operations.
>     tdx_op() { ... }
>     TDX initialization
>   In x86_ops.h, declare the VMX and TDX operations.
>     vmx_op();
>     tdx_op();
>   In main.c, define common wrappers for VMX and TDX.
>     static vt_op() { if (tdx) tdx_op() else vmx_op() }
>     static struct kvm_x86_ops vt_x86_ops = {
>             .op = vt_op,
>     initialization to call VMX and TDX initialization
>
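[ For anyone skimming the series, here is a minimal sketch of what one such
  converted wrapper in main.c could eventually look like. It assumes an
  is_td_vcpu() helper; that name and the tdx_vcpu_reset() callee are
  illustrative placeholders, not necessarily the exact functions later
  patches add:

    static void vt_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
    {
            /* TD vcpus are routed to the TDX implementation... */
            if (is_td_vcpu(vcpu)) {
                    tdx_vcpu_reset(vcpu, init_event);
                    return;
            }

            /* ...everything else keeps using the existing VMX code. */
            vmx_vcpu_reset(vcpu, init_event);
    }

  with vt_x86_ops.vcpu_reset then pointing at vt_vcpu_reset rather than at
  the VMX function directly. This patch does not introduce any such wrappers
  yet; vt_x86_ops simply points straight at the vmx_* functions until the TDX
  implementation lands. ]
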
> Opportunistically, fix the naming inconsistency from vmx_create_vcpu() and
> vmx_free_vcpu() to vmx_vcpu_create() and vmx_vcpu_free().
>
> Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> arch/x86/kvm/Makefile | 2 +-
> arch/x86/kvm/vmx/main.c | 154 ++++++++++++++++
> arch/x86/kvm/vmx/vmx.c | 360 +++++++++++--------------------------
> arch/x86/kvm/vmx/x86_ops.h | 126 +++++++++++++
> 4 files changed, 385 insertions(+), 257 deletions(-)
> create mode 100644 arch/x86/kvm/vmx/main.c
> create mode 100644 arch/x86/kvm/vmx/x86_ops.h
>
> diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
> index 30f244b64523..ee4d0999f20f 100644
> --- a/arch/x86/kvm/Makefile
> +++ b/arch/x86/kvm/Makefile
> @@ -22,7 +22,7 @@ kvm-$(CONFIG_X86_64) += mmu/tdp_iter.o mmu/tdp_mmu.o
> kvm-$(CONFIG_KVM_XEN) += xen.o
>
> kvm-intel-y += vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \
> - vmx/evmcs.o vmx/nested.o vmx/posted_intr.o
> + vmx/evmcs.o vmx/nested.o vmx/posted_intr.o vmx/main.o
> kvm-intel-$(CONFIG_X86_SGX_KVM) += vmx/sgx.o
>
> kvm-amd-y += svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o svm/sev.o
> diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> new file mode 100644
> index 000000000000..b08ea9c42a11
> --- /dev/null
> +++ b/arch/x86/kvm/vmx/main.c
> @@ -0,0 +1,154 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <linux/moduleparam.h>
> +
> +#include "x86_ops.h"
> +#include "vmx.h"
> +#include "nested.h"
> +#include "pmu.h"
> +
> +struct kvm_x86_ops vt_x86_ops __initdata = {
> + .name = "kvm_intel",
> +
> + .hardware_unsetup = vmx_hardware_unsetup,
> +
> + .hardware_enable = vmx_hardware_enable,
> + .hardware_disable = vmx_hardware_disable,
> + .cpu_has_accelerated_tpr = report_flexpriority,
> + .has_emulated_msr = vmx_has_emulated_msr,
> +
> + .vm_size = sizeof(struct kvm_vmx),
> + .vm_init = vmx_vm_init,
> +
> + .vcpu_create = vmx_vcpu_create,
> + .vcpu_free = vmx_vcpu_free,
> + .vcpu_reset = vmx_vcpu_reset,
> +
> + .prepare_guest_switch = vmx_prepare_switch_to_guest,
> + .vcpu_load = vmx_vcpu_load,
> + .vcpu_put = vmx_vcpu_put,
> +
> + .update_exception_bitmap = vmx_update_exception_bitmap,
> + .get_msr_feature = vmx_get_msr_feature,
> + .get_msr = vmx_get_msr,
> + .set_msr = vmx_set_msr,
> + .get_segment_base = vmx_get_segment_base,
> + .get_segment = vmx_get_segment,
> + .set_segment = vmx_set_segment,
> + .get_cpl = vmx_get_cpl,
> + .get_cs_db_l_bits = vmx_get_cs_db_l_bits,
> + .set_cr0 = vmx_set_cr0,
> + .is_valid_cr4 = vmx_is_valid_cr4,
> + .set_cr4 = vmx_set_cr4,
> + .set_efer = vmx_set_efer,
> + .get_idt = vmx_get_idt,
> + .set_idt = vmx_set_idt,
> + .get_gdt = vmx_get_gdt,
> + .set_gdt = vmx_set_gdt,
> + .set_dr7 = vmx_set_dr7,
> + .sync_dirty_debug_regs = vmx_sync_dirty_debug_regs,
> + .cache_reg = vmx_cache_reg,
> + .get_rflags = vmx_get_rflags,
> + .set_rflags = vmx_set_rflags,
> + .get_if_flag = vmx_get_if_flag,
> +
> + .tlb_flush_all = vmx_flush_tlb_all,
> + .tlb_flush_current = vmx_flush_tlb_current,
> + .tlb_flush_gva = vmx_flush_tlb_gva,
> + .tlb_flush_guest = vmx_flush_tlb_guest,
> +
> + .vcpu_pre_run = vmx_vcpu_pre_run,
> + .run = vmx_vcpu_run,
> + .handle_exit = vmx_handle_exit,
> + .skip_emulated_instruction = vmx_skip_emulated_instruction,
> + .update_emulated_instruction = vmx_update_emulated_instruction,
> + .set_interrupt_shadow = vmx_set_interrupt_shadow,
> + .get_interrupt_shadow = vmx_get_interrupt_shadow,
> + .patch_hypercall = vmx_patch_hypercall,
> + .set_irq = vmx_inject_irq,
> + .set_nmi = vmx_inject_nmi,
> + .queue_exception = vmx_queue_exception,
> + .cancel_injection = vmx_cancel_injection,
> + .interrupt_allowed = vmx_interrupt_allowed,
> + .nmi_allowed = vmx_nmi_allowed,
> + .get_nmi_mask = vmx_get_nmi_mask,
> + .set_nmi_mask = vmx_set_nmi_mask,
> + .enable_nmi_window = vmx_enable_nmi_window,
> + .enable_irq_window = vmx_enable_irq_window,
> + .update_cr8_intercept = vmx_update_cr8_intercept,
> + .set_virtual_apic_mode = vmx_set_virtual_apic_mode,
> + .set_apic_access_page_addr = vmx_set_apic_access_page_addr,
> + .refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl,
> + .load_eoi_exitmap = vmx_load_eoi_exitmap,
> + .apicv_post_state_restore = vmx_apicv_post_state_restore,
> + .check_apicv_inhibit_reasons = vmx_check_apicv_inhibit_reasons,
> + .hwapic_irr_update = vmx_hwapic_irr_update,
> + .hwapic_isr_update = vmx_hwapic_isr_update,
> + .guest_apic_has_interrupt = vmx_guest_apic_has_interrupt,
> + .sync_pir_to_irr = vmx_sync_pir_to_irr,
> + .deliver_interrupt = vmx_deliver_interrupt,
> + .dy_apicv_has_pending_interrupt = pi_has_pending_interrupt,
> +
> + .set_tss_addr = vmx_set_tss_addr,
> + .set_identity_map_addr = vmx_set_identity_map_addr,
> + .get_mt_mask = vmx_get_mt_mask,
> +
> + .get_exit_info = vmx_get_exit_info,
> +
> + .vcpu_after_set_cpuid = vmx_vcpu_after_set_cpuid,
> +
> + .has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,
> +
> + .get_l2_tsc_offset = vmx_get_l2_tsc_offset,
> + .get_l2_tsc_multiplier = vmx_get_l2_tsc_multiplier,
> + .write_tsc_offset = vmx_write_tsc_offset,
> + .write_tsc_multiplier = vmx_write_tsc_multiplier,
> +
> + .load_mmu_pgd = vmx_load_mmu_pgd,
> +
> + .check_intercept = vmx_check_intercept,
> + .handle_exit_irqoff = vmx_handle_exit_irqoff,
> +
> + .request_immediate_exit = vmx_request_immediate_exit,
> +
> + .sched_in = vmx_sched_in,
> +
> + .cpu_dirty_log_size = PML_ENTITY_NUM,
> + .update_cpu_dirty_logging = vmx_update_cpu_dirty_logging,
> +
> + .pmu_ops = &intel_pmu_ops,
> + .nested_ops = &vmx_nested_ops,
> +
> + .update_pi_irte = pi_update_irte,
> + .start_assignment = vmx_pi_start_assignment,
> +
> +#ifdef CONFIG_X86_64
> + .set_hv_timer = vmx_set_hv_timer,
> + .cancel_hv_timer = vmx_cancel_hv_timer,
> +#endif
> +
> + .setup_mce = vmx_setup_mce,
> +
> + .smi_allowed = vmx_smi_allowed,
> + .enter_smm = vmx_enter_smm,
> + .leave_smm = vmx_leave_smm,
> + .enable_smi_window = vmx_enable_smi_window,
> +
> + .can_emulate_instruction = vmx_can_emulate_instruction,
> + .apic_init_signal_blocked = vmx_apic_init_signal_blocked,
> + .migrate_timers = vmx_migrate_timers,
> +
> + .msr_filter_changed = vmx_msr_filter_changed,
> + .complete_emulated_msr = kvm_complete_insn_gp,
> +
> + .vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
> +};
> +
> +struct kvm_x86_init_ops vt_init_ops __initdata = {
> + .cpu_has_kvm_support = vmx_cpu_has_kvm_support,
> + .disabled_by_bios = vmx_disabled_by_bios,
> + .check_processor_compatibility = vmx_check_processor_compat,
> + .hardware_setup = vmx_hardware_setup,
> + .handle_intel_pt_intr = NULL,
> +
> + .runtime_ops = &vt_x86_ops,
> +};
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index efda5e4d6247..f6f5d0dac579 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -66,6 +66,7 @@
> #include "vmcs12.h"
> #include "vmx.h"
> #include "x86.h"
> +#include "x86_ops.h"
>
> MODULE_AUTHOR("Qumranet");
> MODULE_LICENSE("GPL");
> @@ -541,7 +542,7 @@ static inline bool cpu_need_virtualize_apic_accesses(struct kvm_vcpu *vcpu)
> return flexpriority_enabled && lapic_in_kernel(vcpu);
> }
>
> -static inline bool report_flexpriority(void)
> +bool report_flexpriority(void)
> {
> return flexpriority_enabled;
> }
> @@ -1316,7 +1317,7 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
> * Switches to specified vcpu, until a matching vcpu_put(), but assumes
> * vcpu mutex is already taken.
> */
> -static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> +void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> @@ -1327,7 +1328,7 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> vmx->host_debugctlmsr = get_debugctlmsr();
> }
>
> -static void vmx_vcpu_put(struct kvm_vcpu *vcpu)
> +void vmx_vcpu_put(struct kvm_vcpu *vcpu)
> {
> vmx_vcpu_pi_put(vcpu);
>
> @@ -1381,7 +1382,7 @@ void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
> vmx->emulation_required = vmx_emulation_required(vcpu);
> }
>
> -static bool vmx_get_if_flag(struct kvm_vcpu *vcpu)
> +bool vmx_get_if_flag(struct kvm_vcpu *vcpu)
> {
> return vmx_get_rflags(vcpu) & X86_EFLAGS_IF;
> }
> @@ -1487,8 +1488,8 @@ static int vmx_rtit_ctl_check(struct kvm_vcpu *vcpu, u64 data)
> return 0;
> }
>
> -static bool vmx_can_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type,
> - void *insn, int insn_len)
> +bool vmx_can_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type,
> + void *insn, int insn_len)
> {
> /*
> * Emulation of instructions in SGX enclaves is impossible as RIP does
> @@ -1572,7 +1573,7 @@ static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
> * Recognizes a pending MTF VM-exit and records the nested state for later
> * delivery.
> */
> -static void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu)
> +void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu)
> {
> struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
> struct vcpu_vmx *vmx = to_vmx(vcpu);
> @@ -1595,7 +1596,7 @@ static void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu)
> vmx->nested.mtf_pending = false;
> }
>
> -static int vmx_skip_emulated_instruction(struct kvm_vcpu *vcpu)
> +int vmx_skip_emulated_instruction(struct kvm_vcpu *vcpu)
> {
> vmx_update_emulated_instruction(vcpu);
> return skip_emulated_instruction(vcpu);
> @@ -1614,7 +1615,7 @@ static void vmx_clear_hlt(struct kvm_vcpu *vcpu)
> vmcs_write32(GUEST_ACTIVITY_STATE, GUEST_ACTIVITY_ACTIVE);
> }
>
> -static void vmx_queue_exception(struct kvm_vcpu *vcpu)
> +void vmx_queue_exception(struct kvm_vcpu *vcpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
> unsigned nr = vcpu->arch.exception.nr;
> @@ -1727,12 +1728,12 @@ u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu)
> return kvm_default_tsc_scaling_ratio;
> }
>
> -static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
> +void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
> {
> vmcs_write64(TSC_OFFSET, offset);
> }
>
> -static void vmx_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
> +void vmx_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
> {
> vmcs_write64(TSC_MULTIPLIER, multiplier);
> }
> @@ -1756,7 +1757,7 @@ static inline bool vmx_feature_control_msr_valid(struct kvm_vcpu *vcpu,
> return !(val & ~valid_bits);
> }
>
> -static int vmx_get_msr_feature(struct kvm_msr_entry *msr)
> +int vmx_get_msr_feature(struct kvm_msr_entry *msr)
> {
> switch (msr->index) {
> case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
> @@ -1776,7 +1777,7 @@ static int vmx_get_msr_feature(struct kvm_msr_entry *msr)
> * Returns 0 on success, non-0 otherwise.
> * Assumes vcpu_load() was already called.
> */
> -static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> +int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
> struct vmx_uret_msr *msr;
> @@ -1954,7 +1955,7 @@ static u64 vcpu_supported_debugctl(struct kvm_vcpu *vcpu)
> * Returns 0 on success, non-0 otherwise.
> * Assumes vcpu_load() was already called.
> */
> -static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> +int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
> struct vmx_uret_msr *msr;
> @@ -2267,7 +2268,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> return ret;
> }
>
> -static void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
> +void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
> {
> unsigned long guest_owned_bits;
>
> @@ -2310,12 +2311,12 @@ static void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
> }
> }
>
> -static __init int cpu_has_kvm_support(void)
> +__init int vmx_cpu_has_kvm_support(void)
> {
> return cpu_has_vmx();
> }
>
> -static __init int vmx_disabled_by_bios(void)
> +__init int vmx_disabled_by_bios(void)
> {
> return !boot_cpu_has(X86_FEATURE_MSR_IA32_FEAT_CTL) ||
> !boot_cpu_has(X86_FEATURE_VMX);
> @@ -2341,7 +2342,7 @@ static int kvm_cpu_vmxon(u64 vmxon_pointer)
> return -EFAULT;
> }
>
> -static int hardware_enable(void)
> +int vmx_hardware_enable(void)
> {
> int cpu = raw_smp_processor_id();
> u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
> @@ -2382,7 +2383,7 @@ static void vmclear_local_loaded_vmcss(void)
> __loaded_vmcs_clear(v);
> }
>
> -static void hardware_disable(void)
> +void vmx_hardware_disable(void)
> {
> vmclear_local_loaded_vmcss();
>
> @@ -2924,7 +2925,7 @@ static void exit_lmode(struct kvm_vcpu *vcpu)
>
> #endif
>
> -static void vmx_flush_tlb_all(struct kvm_vcpu *vcpu)
> +void vmx_flush_tlb_all(struct kvm_vcpu *vcpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> @@ -2954,7 +2955,7 @@ static inline int vmx_get_current_vpid(struct kvm_vcpu *vcpu)
> return to_vmx(vcpu)->vpid;
> }
>
> -static void vmx_flush_tlb_current(struct kvm_vcpu *vcpu)
> +void vmx_flush_tlb_current(struct kvm_vcpu *vcpu)
> {
> struct kvm_mmu *mmu = vcpu->arch.mmu;
> u64 root_hpa = mmu->root_hpa;
> @@ -2970,7 +2971,7 @@ static void vmx_flush_tlb_current(struct kvm_vcpu *vcpu)
> vpid_sync_context(vmx_get_current_vpid(vcpu));
> }
>
> -static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
> +void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
> {
> /*
> * vpid_sync_vcpu_addr() is a nop if vpid==0, see the comment in
> @@ -2979,7 +2980,7 @@ static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
> vpid_sync_vcpu_addr(vmx_get_current_vpid(vcpu), addr);
> }
>
> -static void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu)
> +void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu)
> {
> /*
> * vpid_sync_context() is a nop if vpid==0, e.g. if enable_vpid==0 or a
> @@ -3134,8 +3135,7 @@ u64 construct_eptp(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level)
> return eptp;
> }
>
> -static void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
> - int root_level)
> +void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level)
> {
> struct kvm *kvm = vcpu->kvm;
> bool update_guest_cr3 = true;
> @@ -3163,8 +3163,7 @@ static void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
> vmcs_writel(GUEST_CR3, guest_cr3);
> }
>
> -
> -static bool vmx_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
> +bool vmx_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
> {
> /*
> * We operate under the default treatment of SMM, so VMX cannot be
> @@ -3280,7 +3279,7 @@ void vmx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
> var->g = (ar >> 15) & 1;
> }
>
> -static u64 vmx_get_segment_base(struct kvm_vcpu *vcpu, int seg)
> +u64 vmx_get_segment_base(struct kvm_vcpu *vcpu, int seg)
> {
> struct kvm_segment s;
>
> @@ -3360,14 +3359,14 @@ void __vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
> vmcs_write32(sf->ar_bytes, vmx_segment_access_rights(var));
> }
>
> -static void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
> +void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
> {
> __vmx_set_segment(vcpu, var, seg);
>
> to_vmx(vcpu)->emulation_required = vmx_emulation_required(vcpu);
> }
>
> -static void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
> +void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
> {
> u32 ar = vmx_read_guest_seg_ar(to_vmx(vcpu), VCPU_SREG_CS);
>
> @@ -3375,25 +3374,25 @@ static void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
> *l = (ar >> 13) & 1;
> }
>
> -static void vmx_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
> +void vmx_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
> {
> dt->size = vmcs_read32(GUEST_IDTR_LIMIT);
> dt->address = vmcs_readl(GUEST_IDTR_BASE);
> }
>
> -static void vmx_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
> +void vmx_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
> {
> vmcs_write32(GUEST_IDTR_LIMIT, dt->size);
> vmcs_writel(GUEST_IDTR_BASE, dt->address);
> }
>
> -static void vmx_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
> +void vmx_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
> {
> dt->size = vmcs_read32(GUEST_GDTR_LIMIT);
> dt->address = vmcs_readl(GUEST_GDTR_BASE);
> }
>
> -static void vmx_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
> +void vmx_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
> {
> vmcs_write32(GUEST_GDTR_LIMIT, dt->size);
> vmcs_writel(GUEST_GDTR_BASE, dt->address);
> @@ -3889,7 +3888,7 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
> }
> }
>
> -static bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
> +bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
> void *vapic_page;
> @@ -3909,7 +3908,7 @@ static bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
> return ((rvi & 0xf0) > (vppr & 0xf0));
> }
>
> -static void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
> +void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
> u32 i;
> @@ -4041,8 +4040,8 @@ static int vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
> return 0;
> }
>
> -static void vmx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
> - int trig_mode, int vector)
> +void vmx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
> + int trig_mode, int vector)
> {
> struct kvm_vcpu *vcpu = apic->vcpu;
>
> @@ -4185,7 +4184,7 @@ static u32 vmx_vmexit_ctrl(void)
> ~(VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | VM_EXIT_LOAD_IA32_EFER);
> }
>
> -static void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
> +void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> @@ -4508,7 +4507,7 @@ static void __vmx_vcpu_reset(struct kvm_vcpu *vcpu)
> vmx->pi_desc.sn = 1;
> }
>
> -static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
> +void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> @@ -4565,12 +4564,12 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
> vpid_sync_context(vmx->vpid);
> }
>
> -static void vmx_enable_irq_window(struct kvm_vcpu *vcpu)
> +void vmx_enable_irq_window(struct kvm_vcpu *vcpu)
> {
> exec_controls_setbit(to_vmx(vcpu), CPU_BASED_INTR_WINDOW_EXITING);
> }
>
> -static void vmx_enable_nmi_window(struct kvm_vcpu *vcpu)
> +void vmx_enable_nmi_window(struct kvm_vcpu *vcpu)
> {
> if (!enable_vnmi ||
> vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & GUEST_INTR_STATE_STI) {
> @@ -4581,7 +4580,7 @@ static void vmx_enable_nmi_window(struct kvm_vcpu *vcpu)
> exec_controls_setbit(to_vmx(vcpu), CPU_BASED_NMI_WINDOW_EXITING);
> }
>
> -static void vmx_inject_irq(struct kvm_vcpu *vcpu)
> +void vmx_inject_irq(struct kvm_vcpu *vcpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
> uint32_t intr;
> @@ -4609,7 +4608,7 @@ static void vmx_inject_irq(struct kvm_vcpu *vcpu)
> vmx_clear_hlt(vcpu);
> }
>
> -static void vmx_inject_nmi(struct kvm_vcpu *vcpu)
> +void vmx_inject_nmi(struct kvm_vcpu *vcpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> @@ -4687,7 +4686,7 @@ bool vmx_nmi_blocked(struct kvm_vcpu *vcpu)
> GUEST_INTR_STATE_NMI));
> }
>
> -static int vmx_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
> +int vmx_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
> {
> if (to_vmx(vcpu)->nested.nested_run_pending)
> return -EBUSY;
> @@ -4709,7 +4708,7 @@ bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu)
> (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
> }
>
> -static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
> +int vmx_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
> {
> if (to_vmx(vcpu)->nested.nested_run_pending)
> return -EBUSY;
> @@ -4724,7 +4723,7 @@ static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
> return !vmx_interrupt_blocked(vcpu);
> }
>
> -static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
> +int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
> {
> void __user *ret;
>
> @@ -4744,7 +4743,7 @@ static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
> return init_rmode_tss(kvm, ret);
> }
>
> -static int vmx_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
> +int vmx_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
> {
> to_kvm_vmx(kvm)->ept_identity_map_addr = ident_addr;
> return 0;
> @@ -5023,8 +5022,7 @@ static int handle_io(struct kvm_vcpu *vcpu)
> return kvm_fast_pio(vcpu, size, port, in);
> }
>
> -static void
> -vmx_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall)
> +void vmx_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall)
> {
> /*
> * Patch in the VMCALL instruction:
> @@ -5234,7 +5232,7 @@ static int handle_dr(struct kvm_vcpu *vcpu)
> return kvm_complete_insn_gp(vcpu, err);
> }
>
> -static void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
> +void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
> {
> get_debugreg(vcpu->arch.db[0], 0);
> get_debugreg(vcpu->arch.db[1], 1);
> @@ -5253,7 +5251,7 @@ static void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
> set_debugreg(DR6_RESERVED, 6);
> }
>
> -static void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
> +void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
> {
> vmcs_writel(GUEST_DR7, val);
> }
> @@ -5519,7 +5517,7 @@ static int handle_invalid_guest_state(struct kvm_vcpu *vcpu)
> return 1;
> }
>
> -static int vmx_vcpu_pre_run(struct kvm_vcpu *vcpu)
> +int vmx_vcpu_pre_run(struct kvm_vcpu *vcpu)
> {
> if (vmx_emulation_required_with_pending_exception(vcpu)) {
> kvm_prepare_emulation_failure_exit(vcpu);
> @@ -5756,9 +5754,8 @@ static int (*kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
> static const int kvm_vmx_max_exit_handlers =
> ARRAY_SIZE(kvm_vmx_exit_handlers);
>
> -static void vmx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
> - u64 *info1, u64 *info2,
> - u32 *intr_info, u32 *error_code)
> +void vmx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
> + u64 *info1, u64 *info2, u32 *intr_info, u32 *error_code)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> @@ -6191,7 +6188,7 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
> return 0;
> }
>
> -static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
> +int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
> {
> int ret = __vmx_handle_exit(vcpu, exit_fastpath);
>
> @@ -6279,7 +6276,7 @@ static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
> : "eax", "ebx", "ecx", "edx");
> }
>
> -static void vmx_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
> +void vmx_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
> {
> struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
> int tpr_threshold;
> @@ -6349,7 +6346,7 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
> vmx_update_msr_bitmap_x2apic(vcpu);
> }
>
> -static void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
> +void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
> {
> struct page *page;
>
> @@ -6377,7 +6374,7 @@ static void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
> put_page(page);
> }
>
> -static void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
> +void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
> {
> u16 status;
> u8 old;
> @@ -6411,7 +6408,7 @@ static void vmx_set_rvi(int vector)
> }
> }
>
> -static void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
> +void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
> {
> /*
> * When running L2, updating RVI is only relevant when
> @@ -6425,7 +6422,7 @@ static void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
> vmx_set_rvi(max_irr);
> }
>
> -static int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu)
> +int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
> int max_irr;
> @@ -6471,7 +6468,7 @@ static int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu)
> return max_irr;
> }
>
> -static void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
> +void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
> {
> if (!kvm_vcpu_apicv_active(vcpu))
> return;
> @@ -6482,7 +6479,7 @@ static void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
> vmcs_write64(EOI_EXIT_BITMAP3, eoi_exit_bitmap[3]);
> }
>
> -static void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu)
> +void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> @@ -6554,7 +6551,7 @@ static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu)
> handle_interrupt_nmi_irqoff(vcpu, gate_offset(desc));
> }
>
> -static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
> +void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> @@ -6571,7 +6568,7 @@ static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
> * The kvm parameter can be NULL (module initialization, or invocation before
> * VM creation). Be sure to check the kvm parameter before using it.
> */
> -static bool vmx_has_emulated_msr(struct kvm *kvm, u32 index)
> +bool vmx_has_emulated_msr(struct kvm *kvm, u32 index)
> {
> switch (index) {
> case MSR_IA32_SMBASE:
> @@ -6692,7 +6689,7 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
> IDT_VECTORING_ERROR_CODE);
> }
>
> -static void vmx_cancel_injection(struct kvm_vcpu *vcpu)
> +void vmx_cancel_injection(struct kvm_vcpu *vcpu)
> {
> __vmx_complete_interrupts(vcpu,
> vmcs_read32(VM_ENTRY_INTR_INFO_FIELD),
> @@ -6788,7 +6785,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
> guest_state_exit_irqoff();
> }
>
> -static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
> +fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
> unsigned long cr4;
> @@ -6969,7 +6966,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
> return vmx_exit_handlers_fastpath(vcpu);
> }
>
> -static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
> +void vmx_vcpu_free(struct kvm_vcpu *vcpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> @@ -6980,7 +6977,7 @@ static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
> free_loaded_vmcs(vmx->loaded_vmcs);
> }
>
> -static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
> +int vmx_vcpu_create(struct kvm_vcpu *vcpu)
> {
> struct vmx_uret_msr *tsx_ctrl;
> struct vcpu_vmx *vmx;
> @@ -7085,7 +7082,7 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
> #define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
> #define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
>
> -static int vmx_vm_init(struct kvm *kvm)
> +int vmx_vm_init(struct kvm *kvm)
> {
> if (!ple_gap)
> kvm->arch.pause_in_guest = true;
> @@ -7116,7 +7113,7 @@ static int vmx_vm_init(struct kvm *kvm)
> return 0;
> }
>
> -static int __init vmx_check_processor_compat(void)
> +int __init vmx_check_processor_compat(void)
> {
> struct vmcs_config vmcs_conf;
> struct vmx_capability vmx_cap;
> @@ -7139,7 +7136,7 @@ static int __init vmx_check_processor_compat(void)
> return 0;
> }
>
> -static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
> +u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
> {
> u8 cache;
>
> @@ -7328,7 +7325,7 @@ static void update_intel_pt_cfg(struct kvm_vcpu *vcpu)
> vmx->pt_desc.ctl_bitmask &= ~(0xfULL << (32 + i * 4));
> }
>
> -static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
> +void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> @@ -7433,7 +7430,7 @@ static __init void vmx_set_cpu_caps(void)
> kvm_cpu_cap_check_and_set(X86_FEATURE_WAITPKG);
> }
>
> -static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
> +void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
> {
> to_vmx(vcpu)->req_immediate_exit = true;
> }
> @@ -7472,10 +7469,10 @@ static int vmx_check_intercept_io(struct kvm_vcpu *vcpu,
> return intercept ? X86EMUL_UNHANDLEABLE : X86EMUL_CONTINUE;
> }
>
> -static int vmx_check_intercept(struct kvm_vcpu *vcpu,
> - struct x86_instruction_info *info,
> - enum x86_intercept_stage stage,
> - struct x86_exception *exception)
> +int vmx_check_intercept(struct kvm_vcpu *vcpu,
> + struct x86_instruction_info *info,
> + enum x86_intercept_stage stage,
> + struct x86_exception *exception)
> {
> struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
>
> @@ -7540,8 +7537,8 @@ static inline int u64_shl_div_u64(u64 a, unsigned int shift,
> return 0;
> }
>
> -static int vmx_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
> - bool *expired)
> +int vmx_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
> + bool *expired)
> {
> struct vcpu_vmx *vmx;
> u64 tscl, guest_tscl, delta_tsc, lapic_timer_advance_cycles;
> @@ -7580,13 +7577,13 @@ static int vmx_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
> return 0;
> }
>
> -static void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu)
> +void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu)
> {
> to_vmx(vcpu)->hv_deadline_tsc = -1;
> }
> #endif
>
> -static void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
> +void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
> {
> if (!kvm_pause_in_guest(vcpu->kvm))
> shrink_ple_window(vcpu);
> @@ -7612,7 +7609,7 @@ void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
> secondary_exec_controls_clearbit(vmx, SECONDARY_EXEC_ENABLE_PML);
> }
>
> -static void vmx_setup_mce(struct kvm_vcpu *vcpu)
> +void vmx_setup_mce(struct kvm_vcpu *vcpu)
> {
> if (vcpu->arch.mcg_cap & MCG_LMCE_P)
> to_vmx(vcpu)->msr_ia32_feature_control_valid_bits |=
> @@ -7622,7 +7619,7 @@ static void vmx_setup_mce(struct kvm_vcpu *vcpu)
> ~FEAT_CTL_LMCE_ENABLED;
> }
>
> -static int vmx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
> +int vmx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
> {
> /* we need a nested vmexit to enter SMM, postpone if run is pending */
> if (to_vmx(vcpu)->nested.nested_run_pending)
> @@ -7630,7 +7627,7 @@ static int vmx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
> return !is_smm(vcpu);
> }
>
> -static int vmx_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
> +int vmx_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> @@ -7644,7 +7641,7 @@ static int vmx_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
> return 0;
> }
>
> -static int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
> +int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
> int ret;
> @@ -7665,17 +7662,17 @@ static int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
> return 0;
> }
>
> -static void vmx_enable_smi_window(struct kvm_vcpu *vcpu)
> +void vmx_enable_smi_window(struct kvm_vcpu *vcpu)
> {
> /* RSM will cause a vmexit anyway. */
> }
>
> -static bool vmx_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
> +bool vmx_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
> {
> return to_vmx(vcpu)->nested.vmxon && !is_guest_mode(vcpu);
> }
>
> -static void vmx_migrate_timers(struct kvm_vcpu *vcpu)
> +void vmx_migrate_timers(struct kvm_vcpu *vcpu)
> {
> if (is_guest_mode(vcpu)) {
> struct hrtimer *timer = &to_vmx(vcpu)->nested.preemption_timer;
> @@ -7685,7 +7682,7 @@ static void vmx_migrate_timers(struct kvm_vcpu *vcpu)
> }
> }
>
> -static void hardware_unsetup(void)
> +void vmx_hardware_unsetup(void)
> {
> kvm_set_posted_intr_wakeup_handler(NULL);
>
> @@ -7695,7 +7692,7 @@ static void hardware_unsetup(void)
> free_kvm_area();
> }
>
> -static bool vmx_check_apicv_inhibit_reasons(ulong bit)
> +bool vmx_check_apicv_inhibit_reasons(ulong bit)
> {
> ulong supported = BIT(APICV_INHIBIT_REASON_DISABLE) |
> BIT(APICV_INHIBIT_REASON_ABSENT) |
> @@ -7705,143 +7702,6 @@ static bool vmx_check_apicv_inhibit_reasons(ulong bit)
> return supported & BIT(bit);
> }
>
> -static struct kvm_x86_ops vmx_x86_ops __initdata = {
> - .name = "kvm_intel",
> -
> - .hardware_unsetup = hardware_unsetup,
> -
> - .hardware_enable = hardware_enable,
> - .hardware_disable = hardware_disable,
> - .cpu_has_accelerated_tpr = report_flexpriority,
> - .has_emulated_msr = vmx_has_emulated_msr,
> -
> - .vm_size = sizeof(struct kvm_vmx),
> - .vm_init = vmx_vm_init,
> -
> - .vcpu_create = vmx_create_vcpu,
> - .vcpu_free = vmx_free_vcpu,
> - .vcpu_reset = vmx_vcpu_reset,
> -
> - .prepare_guest_switch = vmx_prepare_switch_to_guest,
> - .vcpu_load = vmx_vcpu_load,
> - .vcpu_put = vmx_vcpu_put,
> -
> - .update_exception_bitmap = vmx_update_exception_bitmap,
> - .get_msr_feature = vmx_get_msr_feature,
> - .get_msr = vmx_get_msr,
> - .set_msr = vmx_set_msr,
> - .get_segment_base = vmx_get_segment_base,
> - .get_segment = vmx_get_segment,
> - .set_segment = vmx_set_segment,
> - .get_cpl = vmx_get_cpl,
> - .get_cs_db_l_bits = vmx_get_cs_db_l_bits,
> - .set_cr0 = vmx_set_cr0,
> - .is_valid_cr4 = vmx_is_valid_cr4,
> - .set_cr4 = vmx_set_cr4,
> - .set_efer = vmx_set_efer,
> - .get_idt = vmx_get_idt,
> - .set_idt = vmx_set_idt,
> - .get_gdt = vmx_get_gdt,
> - .set_gdt = vmx_set_gdt,
> - .set_dr7 = vmx_set_dr7,
> - .sync_dirty_debug_regs = vmx_sync_dirty_debug_regs,
> - .cache_reg = vmx_cache_reg,
> - .get_rflags = vmx_get_rflags,
> - .set_rflags = vmx_set_rflags,
> - .get_if_flag = vmx_get_if_flag,
> -
> - .tlb_flush_all = vmx_flush_tlb_all,
> - .tlb_flush_current = vmx_flush_tlb_current,
> - .tlb_flush_gva = vmx_flush_tlb_gva,
> - .tlb_flush_guest = vmx_flush_tlb_guest,
> -
> - .vcpu_pre_run = vmx_vcpu_pre_run,
> - .run = vmx_vcpu_run,
> - .handle_exit = vmx_handle_exit,
> - .skip_emulated_instruction = vmx_skip_emulated_instruction,
> - .update_emulated_instruction = vmx_update_emulated_instruction,
> - .set_interrupt_shadow = vmx_set_interrupt_shadow,
> - .get_interrupt_shadow = vmx_get_interrupt_shadow,
> - .patch_hypercall = vmx_patch_hypercall,
> - .set_irq = vmx_inject_irq,
> - .set_nmi = vmx_inject_nmi,
> - .queue_exception = vmx_queue_exception,
> - .cancel_injection = vmx_cancel_injection,
> - .interrupt_allowed = vmx_interrupt_allowed,
> - .nmi_allowed = vmx_nmi_allowed,
> - .get_nmi_mask = vmx_get_nmi_mask,
> - .set_nmi_mask = vmx_set_nmi_mask,
> - .enable_nmi_window = vmx_enable_nmi_window,
> - .enable_irq_window = vmx_enable_irq_window,
> - .update_cr8_intercept = vmx_update_cr8_intercept,
> - .set_virtual_apic_mode = vmx_set_virtual_apic_mode,
> - .set_apic_access_page_addr = vmx_set_apic_access_page_addr,
> - .refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl,
> - .load_eoi_exitmap = vmx_load_eoi_exitmap,
> - .apicv_post_state_restore = vmx_apicv_post_state_restore,
> - .check_apicv_inhibit_reasons = vmx_check_apicv_inhibit_reasons,
> - .hwapic_irr_update = vmx_hwapic_irr_update,
> - .hwapic_isr_update = vmx_hwapic_isr_update,
> - .guest_apic_has_interrupt = vmx_guest_apic_has_interrupt,
> - .sync_pir_to_irr = vmx_sync_pir_to_irr,
> - .deliver_interrupt = vmx_deliver_interrupt,
> - .dy_apicv_has_pending_interrupt = pi_has_pending_interrupt,
> -
> - .set_tss_addr = vmx_set_tss_addr,
> - .set_identity_map_addr = vmx_set_identity_map_addr,
> - .get_mt_mask = vmx_get_mt_mask,
> -
> - .get_exit_info = vmx_get_exit_info,
> -
> - .vcpu_after_set_cpuid = vmx_vcpu_after_set_cpuid,
> -
> - .has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,
> -
> - .get_l2_tsc_offset = vmx_get_l2_tsc_offset,
> - .get_l2_tsc_multiplier = vmx_get_l2_tsc_multiplier,
> - .write_tsc_offset = vmx_write_tsc_offset,
> - .write_tsc_multiplier = vmx_write_tsc_multiplier,
> -
> - .load_mmu_pgd = vmx_load_mmu_pgd,
> -
> - .check_intercept = vmx_check_intercept,
> - .handle_exit_irqoff = vmx_handle_exit_irqoff,
> -
> - .request_immediate_exit = vmx_request_immediate_exit,
> -
> - .sched_in = vmx_sched_in,
> -
> - .cpu_dirty_log_size = PML_ENTITY_NUM,
> - .update_cpu_dirty_logging = vmx_update_cpu_dirty_logging,
> -
> - .pmu_ops = &intel_pmu_ops,
> - .nested_ops = &vmx_nested_ops,
> -
> - .update_pi_irte = pi_update_irte,
> - .start_assignment = vmx_pi_start_assignment,
> -
> -#ifdef CONFIG_X86_64
> - .set_hv_timer = vmx_set_hv_timer,
> - .cancel_hv_timer = vmx_cancel_hv_timer,
> -#endif
> -
> - .setup_mce = vmx_setup_mce,
> -
> - .smi_allowed = vmx_smi_allowed,
> - .enter_smm = vmx_enter_smm,
> - .leave_smm = vmx_leave_smm,
> - .enable_smi_window = vmx_enable_smi_window,
> -
> - .can_emulate_instruction = vmx_can_emulate_instruction,
> - .apic_init_signal_blocked = vmx_apic_init_signal_blocked,
> - .migrate_timers = vmx_migrate_timers,
> -
> - .msr_filter_changed = vmx_msr_filter_changed,
> - .complete_emulated_msr = kvm_complete_insn_gp,
> -
> - .vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
> -};
> -
> static unsigned int vmx_handle_intel_pt_intr(void)
> {
> struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
> @@ -7882,9 +7742,7 @@ static __init void vmx_setup_user_return_msrs(void)
> kvm_add_user_return_msr(vmx_uret_msrs_list[i]);
> }
>
> -static struct kvm_x86_init_ops vmx_init_ops __initdata;
> -
> -static __init int hardware_setup(void)
> +__init int vmx_hardware_setup(void)
> {
> unsigned long host_bndcfgs;
> struct desc_ptr dt;
> @@ -7944,16 +7802,16 @@ static __init int hardware_setup(void)
> * using the APIC_ACCESS_ADDR VMCS field.
> */
> if (!flexpriority_enabled)
> - vmx_x86_ops.set_apic_access_page_addr = NULL;
> + vt_x86_ops.set_apic_access_page_addr = NULL;
>
> if (!cpu_has_vmx_tpr_shadow())
> - vmx_x86_ops.update_cr8_intercept = NULL;
> + vt_x86_ops.update_cr8_intercept = NULL;
>
> #if IS_ENABLED(CONFIG_HYPERV)
> if (ms_hyperv.nested_features & HV_X64_NESTED_GUEST_MAPPING_FLUSH
> && enable_ept) {
> - vmx_x86_ops.tlb_remote_flush = hv_remote_flush_tlb;
> - vmx_x86_ops.tlb_remote_flush_with_range =
> + vt_x86_ops.tlb_remote_flush = hv_remote_flush_tlb;
> + vt_x86_ops.tlb_remote_flush_with_range =
> hv_remote_flush_tlb_with_range;
> }
> #endif
> @@ -7969,7 +7827,7 @@ static __init int hardware_setup(void)
> if (!cpu_has_vmx_apicv())
> enable_apicv = 0;
> if (!enable_apicv)
> - vmx_x86_ops.sync_pir_to_irr = NULL;
> + vt_x86_ops.sync_pir_to_irr = NULL;
>
> if (cpu_has_vmx_tsc_scaling()) {
> kvm_has_tsc_control = true;
> @@ -7996,7 +7854,7 @@ static __init int hardware_setup(void)
> enable_pml = 0;
>
> if (!enable_pml)
> - vmx_x86_ops.cpu_dirty_log_size = 0;
> + vt_x86_ops.cpu_dirty_log_size = 0;
>
> if (!cpu_has_vmx_preemption_timer())
> enable_preemption_timer = false;
> @@ -8023,9 +7881,9 @@ static __init int hardware_setup(void)
> }
>
> if (!enable_preemption_timer) {
> - vmx_x86_ops.set_hv_timer = NULL;
> - vmx_x86_ops.cancel_hv_timer = NULL;
> - vmx_x86_ops.request_immediate_exit = __kvm_request_immediate_exit;
> + vt_x86_ops.set_hv_timer = NULL;
> + vt_x86_ops.cancel_hv_timer = NULL;
> + vt_x86_ops.request_immediate_exit = __kvm_request_immediate_exit;
> }
>
> kvm_mce_cap_supported |= MCG_LMCE_P;
> @@ -8035,9 +7893,9 @@ static __init int hardware_setup(void)
> if (!enable_ept || !cpu_has_vmx_intel_pt())
> pt_mode = PT_MODE_SYSTEM;
> if (pt_mode == PT_MODE_HOST_GUEST)
> - vmx_init_ops.handle_intel_pt_intr = vmx_handle_intel_pt_intr;
> + vt_init_ops.handle_intel_pt_intr = vmx_handle_intel_pt_intr;
> else
> - vmx_init_ops.handle_intel_pt_intr = NULL;
> + vt_init_ops.handle_intel_pt_intr = NULL;
>
> setup_default_sgx_lepubkeyhash();
>
> @@ -8061,16 +7919,6 @@ static __init int hardware_setup(void)
> return r;
> }
>
> -static struct kvm_x86_init_ops vmx_init_ops __initdata = {
> - .cpu_has_kvm_support = cpu_has_kvm_support,
> - .disabled_by_bios = vmx_disabled_by_bios,
> - .check_processor_compatibility = vmx_check_processor_compat,
> - .hardware_setup = hardware_setup,
> - .handle_intel_pt_intr = NULL,
> -
> - .runtime_ops = &vmx_x86_ops,
> -};
> -
> static void vmx_cleanup_l1d_flush(void)
> {
> if (vmx_l1d_flush_pages) {
> @@ -8149,7 +7997,7 @@ static int __init vmx_init(void)
> }
>
> if (ms_hyperv.nested_features & HV_X64_NESTED_DIRECT_FLUSH)
> - vmx_x86_ops.enable_direct_tlbflush
> + vt_x86_ops.enable_direct_tlbflush
> = hv_enable_direct_tlbflush;
>
> } else {
> @@ -8157,8 +8005,8 @@ static int __init vmx_init(void)
> }
> #endif
>
> - r = kvm_init(&vmx_init_ops, sizeof(struct vcpu_vmx),
> - __alignof__(struct vcpu_vmx), THIS_MODULE);
> + r = kvm_init(&vt_init_ops, sizeof(struct vcpu_vmx),
> + __alignof__(struct vcpu_vmx), THIS_MODULE);
> if (r)
> return r;
>
> diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
> new file mode 100644
> index 000000000000..40c64fb1f505
> --- /dev/null
> +++ b/arch/x86/kvm/vmx/x86_ops.h
> @@ -0,0 +1,126 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __KVM_X86_VMX_X86_OPS_H
> +#define __KVM_X86_VMX_X86_OPS_H
> +
> +#include <linux/kvm_host.h>
> +
> +#include <asm/virtext.h>
> +
> +#include "x86.h"
> +
> +extern struct kvm_x86_init_ops vt_init_ops __initdata;
> +
> +__init int vmx_cpu_has_kvm_support(void);
> +__init int vmx_disabled_by_bios(void);
> +int __init vmx_check_processor_compat(void);
> +__init int vmx_hardware_setup(void);
> +
> +extern struct kvm_x86_ops vt_x86_ops __initdata;
> +extern struct kvm_x86_init_ops vt_init_ops __initdata;
> +
> +void vmx_hardware_unsetup(void);
> +int vmx_hardware_enable(void);
> +void vmx_hardware_disable(void);
> +bool report_flexpriority(void);
> +int vmx_vm_init(struct kvm *kvm);
> +int vmx_vcpu_create(struct kvm_vcpu *vcpu);
> +int vmx_vcpu_pre_run(struct kvm_vcpu *vcpu);
> +fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu);
> +void vmx_vcpu_free(struct kvm_vcpu *vcpu);
> +void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
> +void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
> +void vmx_vcpu_put(struct kvm_vcpu *vcpu);
> +int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath);
> +void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu);
> +int vmx_skip_emulated_instruction(struct kvm_vcpu *vcpu);
> +void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu);
> +int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
> +int vmx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection);
> +int vmx_enter_smm(struct kvm_vcpu *vcpu, char *smstate);
> +int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate);
> +void vmx_enable_smi_window(struct kvm_vcpu *vcpu);
> +bool vmx_can_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type,
> + void *insn, int insn_len);
> +int vmx_check_intercept(struct kvm_vcpu *vcpu,
> + struct x86_instruction_info *info,
> + enum x86_intercept_stage stage,
> + struct x86_exception *exception);
> +bool vmx_apic_init_signal_blocked(struct kvm_vcpu *vcpu);
> +void vmx_migrate_timers(struct kvm_vcpu *vcpu);
> +void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
> +void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu);
> +bool vmx_check_apicv_inhibit_reasons(ulong bit);
> +void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr);
> +void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr);
> +bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu);
> +int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu);
> +void vmx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
> + int trig_mode, int vector);
> +void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu);
> +bool vmx_has_emulated_msr(struct kvm *kvm, u32 index);
> +void vmx_msr_filter_changed(struct kvm_vcpu *vcpu);
> +void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
> +void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu);
> +int vmx_get_msr_feature(struct kvm_msr_entry *msr);
> +int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
> +u64 vmx_get_segment_base(struct kvm_vcpu *vcpu, int seg);
> +void vmx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
> +void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
> +int vmx_get_cpl(struct kvm_vcpu *vcpu);
> +void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l);
> +void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
> +void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level);
> +void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
> +bool vmx_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
> +int vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer);
> +void vmx_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
> +void vmx_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
> +void vmx_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
> +void vmx_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
> +void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val);
> +void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu);
> +void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg);
> +unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu);
> +void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
> +bool vmx_get_if_flag(struct kvm_vcpu *vcpu);
> +void vmx_flush_tlb_all(struct kvm_vcpu *vcpu);
> +void vmx_flush_tlb_current(struct kvm_vcpu *vcpu);
> +void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr);
> +void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu);
> +void vmx_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask);
> +u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu);
> +void vmx_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall);
> +void vmx_inject_irq(struct kvm_vcpu *vcpu);
> +void vmx_inject_nmi(struct kvm_vcpu *vcpu);
> +void vmx_queue_exception(struct kvm_vcpu *vcpu);
> +void vmx_cancel_injection(struct kvm_vcpu *vcpu);
> +int vmx_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection);
> +int vmx_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection);
> +bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu);
> +void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
> +void vmx_enable_nmi_window(struct kvm_vcpu *vcpu);
> +void vmx_enable_irq_window(struct kvm_vcpu *vcpu);
> +void vmx_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr);
> +void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu);
> +void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu);
> +void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
> +int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr);
> +int vmx_set_identity_map_addr(struct kvm *kvm, u64 ident_addr);
> +u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
> +void vmx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
> + u64 *info1, u64 *info2, u32 *intr_info, u32 *error_code);
> +u64 vmx_get_l2_tsc_offset(struct kvm_vcpu *vcpu);
> +u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu);
> +void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset);
> +void vmx_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier);
> +void vmx_request_immediate_exit(struct kvm_vcpu *vcpu);
> +void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu);
> +void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu);
> +#ifdef CONFIG_X86_64
> +int vmx_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
> + bool *expired);
> +void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu);
> +#endif
> +void vmx_setup_mce(struct kvm_vcpu *vcpu);
> +
> +#endif /* __KVM_X86_VMX_X86_OPS_H */
2022-04-05 14:53 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 047/104] KVM: x86/mmu: add a private pointer to struct kvm_mmu_page isaku.yamahata
2022-04-05 14:58 ` Paolo Bonzini
2022-04-06 23:43 ` Kai Huang
2022-04-07 13:52 ` Paolo Bonzini
2022-04-07 22:53 ` Kai Huang
2022-04-07 23:03 ` Paolo Bonzini
2022-04-07 23:24 ` Kai Huang
2022-03-04 19:49 ` [RFC PATCH v5 048/104] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU isaku.yamahata
2022-04-07 0:50 ` Kai Huang
2022-04-25 19:10 ` Sagi Shahar
2022-04-26 21:12 ` Isaku Yamahata
2022-04-29 0:28 ` Sagi Shahar
2022-04-29 0:46 ` Sean Christopherson
2022-03-04 19:49 ` [RFC PATCH v5 049/104] KVM: x86/tdp_mmu: Ignore unsupported mmu operation on private GFNs isaku.yamahata
2022-04-05 15:15 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 050/104] [MARKER] The start of TDX KVM patch series: TDX EPT violation isaku.yamahata
2022-03-04 19:49 ` [RFC PATCH v5 051/104] KVM: TDX: TDP MMU TDX support isaku.yamahata
2022-04-07 2:20 ` Kai Huang
2022-03-04 19:49 ` [RFC PATCH v5 052/104] [MARKER] The start of TDX KVM patch series: KVM TDP MMU MapGPA isaku.yamahata
2022-03-04 19:49 ` [RFC PATCH v5 053/104] KVM: x86/mmu: steal software usable bit for EPT to represent shared page isaku.yamahata
2022-04-15 15:21 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 054/104] KVM: x86/tdp_mmu: Keep PRIVATE_PROHIBIT bit when zapping isaku.yamahata
2022-04-07 1:43 ` Kai Huang
2022-03-04 19:49 ` [RFC PATCH v5 055/104] KVM: x86/tdp_mmu: prevent private/shared map based on PRIVATE_PROHIBIT isaku.yamahata
2022-03-04 19:49 ` [RFC PATCH v5 056/104] KVM: x86/tdp_mmu: implement MapGPA hypercall for TDX isaku.yamahata
2022-03-04 19:49 ` [RFC PATCH v5 057/104] KVM: x86/mmu: Introduce kvm_mmu_map_tdp_page() for use by TDX isaku.yamahata
2022-03-04 19:49 ` [RFC PATCH v5 058/104] KVM: x86/mmu: Forcibly use TDP MMU for TDX isaku.yamahata
2022-04-07 1:49 ` Kai Huang
2022-03-04 19:49 ` [RFC PATCH v5 059/104] [MARKER] The start of TDX KVM patch series: TD finalization isaku.yamahata
2022-03-04 19:49 ` [RFC PATCH v5 060/104] KVM: TDX: Create initial guest memory isaku.yamahata
2022-04-07 2:30 ` Kai Huang
2022-03-04 19:49 ` [RFC PATCH v5 061/104] KVM: TDX: Finalize VM initialization isaku.yamahata
2022-04-15 13:52 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 062/104] [MARKER] The start of TDX KVM patch series: TD vcpu enter/exit isaku.yamahata
2022-03-04 19:49 ` [RFC PATCH v5 063/104] KVM: TDX: Add helper assembly function to TDX vcpu isaku.yamahata
2022-03-04 19:49 ` [RFC PATCH v5 064/104] KVM: TDX: Implement TDX vcpu enter/exit path isaku.yamahata
2022-03-22 17:28 ` Erdem Aktas
2022-03-23 17:55 ` Isaku Yamahata
2022-03-23 20:05 ` Erdem Aktas
2022-03-23 22:48 ` Isaku Yamahata
2022-03-04 19:49 ` [RFC PATCH v5 065/104] KVM: TDX: vcpu_run: save/restore host state(host kernel gs) isaku.yamahata
2022-04-15 13:56 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 066/104] KVM: TDX: restore host xsave state when exit from the guest TD isaku.yamahata
2022-03-04 19:49 ` [RFC PATCH v5 067/104] KVM: x86: Allow to update cached values in kvm_user_return_msrs w/o wrmsr isaku.yamahata
2022-04-15 14:02 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 068/104] KVM: TDX: restore user ret MSRs isaku.yamahata
2022-04-15 14:06 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 069/104] [MARKER] The start of TDX KVM patch series: TD vcpu exits/interrupts/hypercalls isaku.yamahata
2022-03-04 19:49 ` [RFC PATCH v5 070/104] KVM: TDX: complete interrupts after tdexit isaku.yamahata
2022-04-15 14:07 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 071/104] KVM: TDX: restore debug store when TD exit isaku.yamahata
2022-04-15 14:20 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 072/104] KVM: TDX: handle vcpu migration over logical processor isaku.yamahata
2022-04-15 14:14 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 073/104] KVM: TDX: track LP tdx vcpu run and teardown vcpus on destroying the guest TD isaku.yamahata
2022-03-23 0:54 ` Erdem Aktas
2022-03-23 19:08 ` Isaku Yamahata
2022-03-23 20:17 ` Erdem Aktas
2022-03-04 19:49 ` [RFC PATCH v5 074/104] KVM: x86: Add a switch_db_regs flag to handle TDX's auto-switched behavior isaku.yamahata
2022-04-05 15:32 ` Paolo Bonzini
2022-04-06 23:28 ` Sean Christopherson
2022-03-04 19:49 ` [RFC PATCH v5 075/104] KVM: x86: Check for pending APICv interrupt in kvm_vcpu_has_events() isaku.yamahata
2022-04-08 16:24 ` Sean Christopherson
2022-04-15 14:20 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 076/104] KVM: x86: Add option to force LAPIC expiration wait isaku.yamahata
2022-04-05 15:33 ` Paolo Bonzini
2022-04-08 16:36 ` Sean Christopherson
2022-03-04 19:49 ` [RFC PATCH v5 077/104] KVM: TDX: Use vcpu_to_pi_desc() uniformly in posted_intr.c isaku.yamahata
2022-04-05 15:36 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 078/104] KVM: TDX: Implement interrupt injection isaku.yamahata
2022-04-06 11:47 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 079/104] KVM: TDX: Implements vcpu request_immediate_exit isaku.yamahata
2022-04-06 12:49 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 080/104] KVM: TDX: Implement methods to inject NMI isaku.yamahata
2022-04-06 12:47 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 081/104] KVM: VMX: Modify NMI and INTR handlers to take intr_info as function argument isaku.yamahata
2022-04-06 12:49 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 082/104] KVM: VMX: Move NMI/exception handler to common helper isaku.yamahata
2022-03-04 19:49 ` [RFC PATCH v5 083/104] KVM: x86: Split core of hypercall emulation to helper function isaku.yamahata
2022-03-21 18:32 ` Sagi Shahar
2022-03-23 17:53 ` Isaku Yamahata
2022-04-07 13:12 ` Paolo Bonzini
2022-04-08 5:34 ` Isaku Yamahata
2022-03-04 19:49 ` [RFC PATCH v5 084/104] KVM: TDX: Add a place holder to handle TDX VM exit isaku.yamahata
2022-04-15 14:20 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 085/104] KVM: TDX: handle EXIT_REASON_OTHER_SMI isaku.yamahata
2022-04-15 14:29 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 086/104] KVM: TDX: handle ept violation/misconfig exit isaku.yamahata
2022-04-06 20:50 ` Sagi Shahar
2022-04-07 1:09 ` Xiaoyao Li
2022-03-04 19:49 ` [RFC PATCH v5 087/104] KVM: TDX: handle EXCEPTION_NMI and EXTERNAL_INTERRUPT isaku.yamahata
2022-04-15 14:49 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 088/104] KVM: TDX: Add TDG.VP.VMCALL accessors to access guest vcpu registers isaku.yamahata
2022-04-07 4:06 ` Kai Huang
2022-04-15 14:50 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 089/104] KVM: TDX: Add a placeholder for handler of TDX hypercalls (TDG.VP.VMCALL) isaku.yamahata
2022-04-07 4:15 ` Kai Huang
2022-04-07 13:14 ` Paolo Bonzini
2022-04-07 14:39 ` Sean Christopherson
2022-04-07 18:04 ` Paolo Bonzini
2022-04-07 18:11 ` Sean Christopherson
2022-04-07 23:20 ` Kai Huang
2022-03-04 19:49 ` [RFC PATCH v5 090/104] KVM: TDX: handle KVM hypercall with TDG.VP.VMCALL isaku.yamahata
2022-03-04 19:49 ` [RFC PATCH v5 091/104] KVM: TDX: Handle TDX PV CPUID hypercall isaku.yamahata
2022-04-07 13:16 ` Paolo Bonzini
2022-04-07 14:48 ` Sean Christopherson
2022-04-07 18:03 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 092/104] KVM: TDX: Handle TDX PV HLT hypercall isaku.yamahata
2022-04-07 13:56 ` Paolo Bonzini
2022-04-07 15:02 ` Sean Christopherson
2022-04-07 15:56 ` Paolo Bonzini
2022-04-07 16:08 ` Sean Christopherson
2022-04-08 4:58 ` Isaku Yamahata
2022-04-08 9:57 ` Paolo Bonzini
2022-04-08 14:51 ` Sean Christopherson
2022-04-11 17:40 ` Paolo Bonzini
2022-04-14 17:09 ` Sean Christopherson
2022-04-07 14:51 ` Sean Christopherson
2022-03-04 19:49 ` [RFC PATCH v5 093/104] KVM: TDX: Handle TDX PV port io hypercall isaku.yamahata
2022-04-15 14:59 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 094/104] KVM: TDX: Handle TDX PV MMIO hypercall isaku.yamahata
2022-04-15 15:05 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 095/104] KVM: TDX: Implement callbacks for MSR operations for TDX isaku.yamahata
2022-04-15 15:07 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 096/104] KVM: TDX: Handle TDX PV rdmsr hypercall isaku.yamahata
2022-04-15 15:08 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 097/104] KVM: TDX: Handle TDX PV wrmsr hypercall isaku.yamahata
2022-03-04 19:49 ` [RFC PATCH v5 098/104] KVM: TDX: Handle TDX PV report fatal error hypercall isaku.yamahata
2022-04-15 15:13 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 099/104] KVM: TDX: Handle TDX PV map_gpa hypercall isaku.yamahata
2022-03-04 19:49 ` [RFC PATCH v5 100/104] KVM: TDX: Silently discard SMI request isaku.yamahata
2022-04-05 15:41 ` Paolo Bonzini
2022-03-04 19:49 ` [RFC PATCH v5 101/104] KVM: TDX: Silently ignore INIT/SIPI isaku.yamahata
2022-04-05 15:48 ` Paolo Bonzini
2022-04-05 17:53 ` Tom Lendacky
2022-04-07 11:09 ` Xiaoyao Li
2022-04-07 12:12 ` Paolo Bonzini
2022-04-08 3:40 ` Isaku Yamahata
2022-03-04 19:49 ` [RFC PATCH v5 102/104] KVM: TDX: Add methods to ignore accesses to CPU state isaku.yamahata
2022-04-05 15:56 ` Paolo Bonzini
2022-04-08 3:50 ` Isaku Yamahata
2022-04-12 6:49 ` Xiaoyao Li
2022-04-12 6:52 ` Paolo Bonzini
2022-04-12 7:31 ` Xiaoyao Li
2022-03-04 19:49 ` [RFC PATCH v5 103/104] Documentation/virtual/kvm: Document on Trust Domain Extensions (TDX) isaku.yamahata
2022-03-04 19:50 ` [RFC PATCH v5 104/104] KVM: x86: design documentation on TDX support of x86 KVM TDP MMU isaku.yamahata
2022-03-07 7:44 ` [RFC PATCH v5 000/104] KVM TDX basic feature support Christoph Hellwig
2022-03-13 14:00 ` Paolo Bonzini
2022-04-15 15:18 ` Paolo Bonzini
2022-04-15 17:05 ` Paolo Bonzini
2022-04-15 21:19 ` Isaku Yamahata