From: Vitaly Kuznetsov <vkuznets@redhat.com>
To: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
Wanpeng Li <wanpengli@tencent.com>,
Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
Xiaoyao Li <xiaoyao.li@intel.com>
Subject: Re: [PATCH v2 38/66] KVM: x86: Introduce kvm_cpu_caps to replace runtime CPUID masking
Date: Tue, 03 Mar 2020 16:51:35 +0100
Message-ID: <87imjlfnco.fsf@vitty.brq.redhat.com>
In-Reply-To: <20200302235709.27467-39-sean.j.christopherson@intel.com>
Sean Christopherson <sean.j.christopherson@intel.com> writes:
> Calculate the CPUID masks for KVM_GET_SUPPORTED_CPUID at load time using
> what is effectively a KVM-adjusted copy of boot_cpu_data, or more
> precisely, the x86_capability array in boot_cpu_data.
>
> In terms of KVM support, the vast majority of CPUID feature bits are
> constant, and *all* feature support is known at KVM load time. Rather
> than applying boot_cpu_data (effectively read-only after init) at runtime,
> copy it into a KVM-specific array and use *that* to mask CPUID registers.
>
> In addition to consolidating the masking, kvm_cpu_caps can be adjusted by
> SVM/VMX at load time, thus eliminating all feature bit manipulation in
> ->set_supported_cpuid().
>
> Opportunistically clean up a few warts:
>
> - Replace bare "unsigned" with "unsigned int" when a feature flag is
> captured in a local variable, e.g. f_nx.
>
> - Sort the CPUID masks by function, index and register (alphabetically
> for registers, i.e. EBX comes before ECX/EDX).
>
> - Remove the superfluous /* cpuid 7.0.ecx */ comments.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
> arch/x86/kvm/cpuid.c | 231 +++++++++++++++++++++++--------------------
> arch/x86/kvm/cpuid.h | 19 ++++
> arch/x86/kvm/x86.c | 2 +
> 3 files changed, 144 insertions(+), 108 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index c2e70cd0dbf1..f0b6885d2415 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -24,6 +24,13 @@
> #include "trace.h"
> #include "pmu.h"
>
> +/*
> + * Unlike "struct cpuinfo_x86.x86_capability", kvm_cpu_caps doesn't need to be
> + * aligned to sizeof(unsigned long) because it's not accessed via bitops.
> + */
> +u32 kvm_cpu_caps[NCAPINTS] __read_mostly;
> +EXPORT_SYMBOL_GPL(kvm_cpu_caps);
> +
> static u32 xstate_required_size(u64 xstate_bv, bool compacted)
> {
> int feature_bit = 0;
> @@ -259,7 +266,121 @@ static __always_inline void cpuid_entry_mask(struct kvm_cpuid_entry2 *entry,
> {
> u32 *reg = cpuid_entry_get_reg(entry, leaf * 32);
>
> - *reg &= boot_cpu_data.x86_capability[leaf];
> + BUILD_BUG_ON(leaf >= ARRAY_SIZE(kvm_cpu_caps));
> + *reg &= kvm_cpu_caps[leaf];
> +}
> +
> +static __always_inline void kvm_cpu_cap_mask(enum cpuid_leafs leaf, u32 mask)
> +{
> + reverse_cpuid_check(leaf);
> + kvm_cpu_caps[leaf] &= mask;
> +}
> +
> +void kvm_set_cpu_caps(void)
> +{
> + unsigned int f_nx = is_efer_nx() ? F(NX) : 0;
> +#ifdef CONFIG_X86_64
> + unsigned int f_gbpages = F(GBPAGES);
> + unsigned int f_lm = F(LM);
> +#else
> + unsigned int f_gbpages = 0;
> + unsigned int f_lm = 0;
> +#endif
> +
> + BUILD_BUG_ON(sizeof(kvm_cpu_caps) >
> + sizeof(boot_cpu_data.x86_capability));
> +
> + memcpy(&kvm_cpu_caps, &boot_cpu_data.x86_capability,
> + sizeof(kvm_cpu_caps));
> +
> + kvm_cpu_cap_mask(CPUID_1_ECX,
> + /*
> + * NOTE: MONITOR (and MWAIT) are emulated as NOP, but *not*
> + * advertised to guests via CPUID!
> + */
> + F(XMM3) | F(PCLMULQDQ) | 0 /* DTES64, MONITOR */ |
> + 0 /* DS-CPL, VMX, SMX, EST */ |
> + 0 /* TM2 */ | F(SSSE3) | 0 /* CNXT-ID */ | 0 /* Reserved */ |
> + F(FMA) | F(CX16) | 0 /* xTPR Update, PDCM */ |
> + F(PCID) | 0 /* Reserved, DCA */ | F(XMM4_1) |
> + F(XMM4_2) | F(X2APIC) | F(MOVBE) | F(POPCNT) |
> + 0 /* Reserved*/ | F(AES) | F(XSAVE) | 0 /* OSXSAVE */ | F(AVX) |
> + F(F16C) | F(RDRAND)
> + );
> +
> + kvm_cpu_cap_mask(CPUID_1_EDX,
> + F(FPU) | F(VME) | F(DE) | F(PSE) |
> + F(TSC) | F(MSR) | F(PAE) | F(MCE) |
> + F(CX8) | F(APIC) | 0 /* Reserved */ | F(SEP) |
> + F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
> + F(PAT) | F(PSE36) | 0 /* PSN */ | F(CLFLUSH) |
> + 0 /* Reserved, DS, ACPI */ | F(MMX) |
> + F(FXSR) | F(XMM) | F(XMM2) | F(SELFSNOOP) |
> + 0 /* HTT, TM, Reserved, PBE */
> + );
> +
> + kvm_cpu_cap_mask(CPUID_7_0_EBX,
> + F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
> + F(BMI2) | F(ERMS) | 0 /*INVPCID*/ | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
> + F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
> + F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
> + F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | 0 /*INTEL_PT*/
> + );
> +
> + kvm_cpu_cap_mask(CPUID_7_ECX,
> + F(AVX512VBMI) | F(LA57) | 0 /*PKU*/ | 0 /*OSPKE*/ | F(RDPID) |
> + F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) |
> + F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) |
> + F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/
> + );
> + /* Set LA57 based on hardware capability. */
> + if (cpuid_ecx(7) & F(LA57))
> + kvm_cpu_cap_set(X86_FEATURE_LA57);
> +
> + kvm_cpu_cap_mask(CPUID_7_EDX,
> + F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) |
> + F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
> + F(MD_CLEAR)
> + );
> +
> + kvm_cpu_cap_mask(CPUID_7_1_EAX,
> + F(AVX512_BF16)
> + );
> +
> + kvm_cpu_cap_mask(CPUID_D_1_EAX,
> + F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | F(XSAVES)
> + );
> +
> + kvm_cpu_cap_mask(CPUID_8000_0001_ECX,
> + F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ |
> + F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) |
> + F(3DNOWPREFETCH) | F(OSVW) | 0 /* IBS */ | F(XOP) |
> + 0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM) |
> + F(TOPOEXT) | F(PERFCTR_CORE)
> + );
> +
> + kvm_cpu_cap_mask(CPUID_8000_0001_EDX,
> + F(FPU) | F(VME) | F(DE) | F(PSE) |
> + F(TSC) | F(MSR) | F(PAE) | F(MCE) |
> + F(CX8) | F(APIC) | 0 /* Reserved */ | F(SYSCALL) |
> + F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
> + F(PAT) | F(PSE36) | 0 /* Reserved */ |
> + f_nx | 0 /* Reserved */ | F(MMXEXT) | F(MMX) |
> + F(FXSR) | F(FXSR_OPT) | f_gbpages | F(RDTSCP) |
> + 0 /* Reserved */ | f_lm | F(3DNOWEXT) | F(3DNOW)
> + );
> +
> + kvm_cpu_cap_mask(CPUID_8000_0008_EBX,
> + F(CLZERO) | F(XSAVEERPTR) |
> + F(WBNOINVD) | F(AMD_IBPB) | F(AMD_IBRS) | F(AMD_SSBD) | F(VIRT_SSBD) |
> + F(AMD_SSB_NO) | F(AMD_STIBP) | F(AMD_STIBP_ALWAYS_ON)
> + );
> +
> + kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
> + F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
> + F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) |
> + F(PMM) | F(PMM_EN)
> + );
> }
>
> struct kvm_cpuid_array {
> @@ -339,48 +460,13 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
>
> static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
> {
> - unsigned f_la57;
> -
> - /* cpuid 7.0.ebx */
> - const u32 kvm_cpuid_7_0_ebx_x86_features =
> - F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
> - F(BMI2) | F(ERMS) | 0 /*INVPCID*/ | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
> - F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
> - F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
> - F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | 0 /*INTEL_PT*/;
> -
> - /* cpuid 7.0.ecx*/
> - const u32 kvm_cpuid_7_0_ecx_x86_features =
> - F(AVX512VBMI) | F(LA57) | 0 /*PKU*/ | 0 /*OSPKE*/ | F(RDPID) |
> - F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) |
> - F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) |
> - F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/;
> -
> - /* cpuid 7.0.edx*/
> - const u32 kvm_cpuid_7_0_edx_x86_features =
> - F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) |
> - F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
> - F(MD_CLEAR);
> -
> - /* cpuid 7.1.eax */
> - const u32 kvm_cpuid_7_1_eax_x86_features =
> - F(AVX512_BF16);
> -
> switch (entry->index) {
> case 0:
> entry->eax = min(entry->eax, 1u);
> - entry->ebx &= kvm_cpuid_7_0_ebx_x86_features;
> cpuid_entry_mask(entry, CPUID_7_0_EBX);
> /* TSC_ADJUST is emulated */
> cpuid_entry_set(entry, X86_FEATURE_TSC_ADJUST);
> -
> - entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
> - f_la57 = cpuid_entry_get(entry, X86_FEATURE_LA57);
> cpuid_entry_mask(entry, CPUID_7_ECX);
> - /* Set LA57 based on hardware capability. */
> - entry->ecx |= f_la57;
> -
> - entry->edx &= kvm_cpuid_7_0_edx_x86_features;
> cpuid_entry_mask(entry, CPUID_7_EDX);
> if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
> cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL);
> @@ -395,7 +481,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
> cpuid_entry_set(entry, X86_FEATURE_ARCH_CAPABILITIES);
> break;
> case 1:
> - entry->eax &= kvm_cpuid_7_1_eax_x86_features;
> + cpuid_entry_mask(entry, CPUID_7_1_EAX);
> entry->ebx = 0;
> entry->ecx = 0;
> entry->edx = 0;
> @@ -414,72 +500,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
> {
> struct kvm_cpuid_entry2 *entry;
> int r, i, max_idx;
> - unsigned f_nx = is_efer_nx() ? F(NX) : 0;
> -#ifdef CONFIG_X86_64
> - unsigned f_gbpages = F(GBPAGES);
> - unsigned f_lm = F(LM);
> -#else
> - unsigned f_gbpages = 0;
> - unsigned f_lm = 0;
> -#endif
> unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
>
> - /* cpuid 1.edx */
> - const u32 kvm_cpuid_1_edx_x86_features =
> - F(FPU) | F(VME) | F(DE) | F(PSE) |
> - F(TSC) | F(MSR) | F(PAE) | F(MCE) |
> - F(CX8) | F(APIC) | 0 /* Reserved */ | F(SEP) |
> - F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
> - F(PAT) | F(PSE36) | 0 /* PSN */ | F(CLFLUSH) |
> - 0 /* Reserved, DS, ACPI */ | F(MMX) |
> - F(FXSR) | F(XMM) | F(XMM2) | F(SELFSNOOP) |
> - 0 /* HTT, TM, Reserved, PBE */;
> - /* cpuid 0x80000001.edx */
> - const u32 kvm_cpuid_8000_0001_edx_x86_features =
> - F(FPU) | F(VME) | F(DE) | F(PSE) |
> - F(TSC) | F(MSR) | F(PAE) | F(MCE) |
> - F(CX8) | F(APIC) | 0 /* Reserved */ | F(SYSCALL) |
> - F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
> - F(PAT) | F(PSE36) | 0 /* Reserved */ |
> - f_nx | 0 /* Reserved */ | F(MMXEXT) | F(MMX) |
> - F(FXSR) | F(FXSR_OPT) | f_gbpages | F(RDTSCP) |
> - 0 /* Reserved */ | f_lm | F(3DNOWEXT) | F(3DNOW);
> - /* cpuid 1.ecx */
> - const u32 kvm_cpuid_1_ecx_x86_features =
> - /* NOTE: MONITOR (and MWAIT) are emulated as NOP,
> - * but *not* advertised to guests via CPUID ! */
> - F(XMM3) | F(PCLMULQDQ) | 0 /* DTES64, MONITOR */ |
> - 0 /* DS-CPL, VMX, SMX, EST */ |
> - 0 /* TM2 */ | F(SSSE3) | 0 /* CNXT-ID */ | 0 /* Reserved */ |
> - F(FMA) | F(CX16) | 0 /* xTPR Update, PDCM */ |
> - F(PCID) | 0 /* Reserved, DCA */ | F(XMM4_1) |
> - F(XMM4_2) | F(X2APIC) | F(MOVBE) | F(POPCNT) |
> - 0 /* Reserved*/ | F(AES) | F(XSAVE) | 0 /* OSXSAVE */ | F(AVX) |
> - F(F16C) | F(RDRAND);
> - /* cpuid 0x80000001.ecx */
> - const u32 kvm_cpuid_8000_0001_ecx_x86_features =
> - F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ |
> - F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) |
> - F(3DNOWPREFETCH) | F(OSVW) | 0 /* IBS */ | F(XOP) |
> - 0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM) |
> - F(TOPOEXT) | F(PERFCTR_CORE);
> -
> - /* cpuid 0x80000008.ebx */
> - const u32 kvm_cpuid_8000_0008_ebx_x86_features =
> - F(CLZERO) | F(XSAVEERPTR) |
> - F(WBNOINVD) | F(AMD_IBPB) | F(AMD_IBRS) | F(AMD_SSBD) | F(VIRT_SSBD) |
> - F(AMD_SSB_NO) | F(AMD_STIBP) | F(AMD_STIBP_ALWAYS_ON);
> -
> - /* cpuid 0xC0000001.edx */
> - const u32 kvm_cpuid_C000_0001_edx_x86_features =
> - F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
> - F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) |
> - F(PMM) | F(PMM_EN);
> -
> - /* cpuid 0xD.1.eax */
> - const u32 kvm_cpuid_D_1_eax_x86_features =
> - F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | F(XSAVES);
> -
> /* all calls to cpuid_count() should be made on the same cpu */
> get_cpu();
>
> @@ -495,9 +517,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
> entry->eax = min(entry->eax, 0x1fU);
> break;
> case 1:
> - entry->edx &= kvm_cpuid_1_edx_x86_features;
> cpuid_entry_mask(entry, CPUID_1_EDX);
> - entry->ecx &= kvm_cpuid_1_ecx_x86_features;
> cpuid_entry_mask(entry, CPUID_1_ECX);
> /* we support x2apic emulation even if host does not support
> * it since we emulate x2apic in software */
> @@ -607,7 +627,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
> if (!entry)
> goto out;
>
> - entry->eax &= kvm_cpuid_D_1_eax_x86_features;
> cpuid_entry_mask(entry, CPUID_D_1_EAX);
>
> if (!kvm_x86_ops->xsaves_supported())
> @@ -691,9 +710,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
> entry->eax = min(entry->eax, 0x8000001f);
> break;
> case 0x80000001:
> - entry->edx &= kvm_cpuid_8000_0001_edx_x86_features;
> cpuid_entry_mask(entry, CPUID_8000_0001_EDX);
> - entry->ecx &= kvm_cpuid_8000_0001_ecx_x86_features;
> cpuid_entry_mask(entry, CPUID_8000_0001_ECX);
> break;
> case 0x80000007: /* Advanced power management */
> @@ -712,7 +729,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
> g_phys_as = phys_as;
> entry->eax = g_phys_as | (virt_as << 8);
> entry->edx = 0;
> - entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
> cpuid_entry_mask(entry, CPUID_8000_0008_EBX);
> /*
> * AMD has separate bits for each SPEC_CTRL bit.
> @@ -755,7 +771,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
> entry->eax = min(entry->eax, 0xC0000004);
> break;
> case 0xC0000001:
> - entry->edx &= kvm_cpuid_C000_0001_edx_x86_features;
> cpuid_entry_mask(entry, CPUID_C000_0001_EDX);
> break;
> case 3: /* Processor serial number */
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index de3c6c365a5a..b899ba4bc918 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -6,6 +6,9 @@
> #include <asm/cpu.h>
> #include <asm/processor.h>
>
> +extern u32 kvm_cpu_caps[NCAPINTS] __read_mostly;
> +void kvm_set_cpu_caps(void);
> +
> int kvm_update_cpuid(struct kvm_vcpu *vcpu);
> struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
> u32 function, u32 index);
> @@ -254,4 +257,20 @@ static inline bool cpuid_fault_enabled(struct kvm_vcpu *vcpu)
> MSR_MISC_FEATURES_ENABLES_CPUID_FAULT;
> }
>
> +static __always_inline void kvm_cpu_cap_clear(unsigned int x86_feature)
> +{
> + unsigned int x86_leaf = x86_feature / 32;
> +
> + reverse_cpuid_check(x86_leaf);
> + kvm_cpu_caps[x86_leaf] &= ~__feature_bit(x86_feature);
> +}
> +
> +static __always_inline void kvm_cpu_cap_set(unsigned int x86_feature)
> +{
> + unsigned int x86_leaf = x86_feature / 32;
> +
> + reverse_cpuid_check(x86_leaf);
> + kvm_cpu_caps[x86_leaf] |= __feature_bit(x86_feature);
> +}
> +
> #endif
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index e3598fe171a5..b032fd144073 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9560,6 +9560,8 @@ int kvm_arch_hardware_setup(void)
> {
> int r;
>
> + kvm_set_cpu_caps();
> +
> r = kvm_x86_ops->hardware_setup();
> if (r != 0)
> return r;
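
FWIW, for anyone skimming the series: the net effect is easy to model outside
the kernel. Below is a minimal userspace sketch of the new flow — snapshot the
boot capability words once at load time, mask them down to what KVM supports,
and have runtime guest-CPUID masking read only that snapshot. Note that
NCAPINTS, the leaf index and the feature bit here are made-up illustrative
values, not the kernel's real definitions:

```c
/*
 * Userspace sketch of the kvm_cpu_caps idea.  All constants are
 * illustrative stand-ins, not the actual kernel values.
 */
#include <stdint.h>
#include <string.h>

#define NCAPINTS    4
#define CPUID_1_ECX 0           /* illustrative leaf index */
#define F_XSAVE     (1u << 26)  /* illustrative feature bit */

/* stand-in for boot_cpu_data.x86_capability */
uint32_t boot_caps[NCAPINTS] = { 0xffffffffu, 0, 0, 0 };
uint32_t kvm_cpu_caps[NCAPINTS];

void kvm_cpu_cap_mask(int leaf, uint32_t mask)
{
	kvm_cpu_caps[leaf] &= mask;
}

/* one-time setup, the analogue of kvm_set_cpu_caps() */
void kvm_set_cpu_caps(void)
{
	memcpy(kvm_cpu_caps, boot_caps, sizeof(kvm_cpu_caps));
	kvm_cpu_cap_mask(CPUID_1_ECX, F_XSAVE);  /* keep only supported bits */
}

/* runtime guest-CPUID masking never touches boot_caps again */
uint32_t cpuid_entry_mask(uint32_t reg, int leaf)
{
	return reg & kvm_cpu_caps[leaf];
}
```

i.e. boot_cpu_data is consulted exactly once, and vendor code can later just
flip individual bits in kvm_cpu_caps at load time rather than adjusting every
entry in ->set_supported_cpuid().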
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
--
Vitaly