* [PATCH 1/2] selftests: kvm: introduce cpu_has_svm() check
From: Vitaly Kuznetsov @ 2020-05-29 13:04 UTC
To: kvm, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel

More tests may want to check whether the CPU is Intel or AMD in
guest code; separate out cpu_has_svm() and put it as a static
inline in svm_util.h.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 tools/testing/selftests/kvm/include/x86_64/svm_util.h | 10 ++++++++++
 tools/testing/selftests/kvm/x86_64/state_test.c       |  9 +--------
 2 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/svm_util.h b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
index cd037917fece..b1057773206a 100644
--- a/tools/testing/selftests/kvm/include/x86_64/svm_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
@@ -35,4 +35,14 @@ void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_r
 void run_guest(struct vmcb *vmcb, uint64_t vmcb_gpa);
 void nested_svm_check_supported(void);
 
+static inline bool cpu_has_svm(void)
+{
+	u32 eax = 0x80000001, ecx;
+
+	asm volatile("cpuid" :
+		     "=a" (eax), "=c" (ecx) : "0" (eax) : "ebx", "edx");
+
+	return ecx & CPUID_SVM;
+}
+
 #endif /* SELFTEST_KVM_SVM_UTILS_H */
diff --git a/tools/testing/selftests/kvm/x86_64/state_test.c b/tools/testing/selftests/kvm/x86_64/state_test.c
index af8b6df6a13e..d43b6f99b66c 100644
--- a/tools/testing/selftests/kvm/x86_64/state_test.c
+++ b/tools/testing/selftests/kvm/x86_64/state_test.c
@@ -137,20 +137,13 @@ static void vmx_l1_guest_code(struct vmx_pages *vmx_pages)
 	GUEST_ASSERT(vmresume());
 }
 
-static u32 cpuid_ecx(u32 eax)
-{
-	u32 ecx;
-	asm volatile("cpuid" : "=a" (eax), "=c" (ecx) : "0" (eax) : "ebx", "edx");
-	return ecx;
-}
-
 static void __attribute__((__flatten__)) guest_code(void *arg)
 {
 	GUEST_SYNC(1);
 	GUEST_SYNC(2);
 
 	if (arg) {
-		if (cpuid_ecx(0x80000001) & CPUID_SVM)
+		if (cpu_has_svm())
 			svm_l1_guest_code(arg);
 		else
 			vmx_l1_guest_code(arg);
-- 
2.25.4
* [PATCH 2/2] selftests: kvm: fix smm test on SVM
From: Vitaly Kuznetsov @ 2020-05-29 13:04 UTC
To: kvm, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel

KVM_CAP_NESTED_STATE is now supported for AMD too, but the smm test
still acts as if it were Intel-only.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 tools/testing/selftests/kvm/x86_64/smm_test.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/smm_test.c b/tools/testing/selftests/kvm/x86_64/smm_test.c
index 8230b6bc6b8f..6f8f478b3ceb 100644
--- a/tools/testing/selftests/kvm/x86_64/smm_test.c
+++ b/tools/testing/selftests/kvm/x86_64/smm_test.c
@@ -17,6 +17,7 @@
 #include "kvm_util.h"
 
 #include "vmx.h"
+#include "svm_util.h"
 
 #define VCPU_ID 1
 
@@ -58,7 +59,7 @@ void self_smi(void)
 		      APIC_DEST_SELF | APIC_INT_ASSERT | APIC_DM_SMI);
 }
 
-void guest_code(struct vmx_pages *vmx_pages)
+void guest_code(void *arg)
 {
 	uint64_t apicbase = rdmsr(MSR_IA32_APICBASE);
 
@@ -72,8 +73,11 @@ void guest_code(void *arg)
 
 	sync_with_host(4);
 
-	if (vmx_pages) {
-		GUEST_ASSERT(prepare_for_vmx_operation(vmx_pages));
+	if (arg) {
+		if (cpu_has_svm())
+			generic_svm_setup(arg, NULL, NULL);
+		else
+			GUEST_ASSERT(prepare_for_vmx_operation(arg));
 
 		sync_with_host(5);
 
@@ -87,7 +91,7 @@ void guest_code(void *arg)
 
 int main(int argc, char *argv[])
 {
-	vm_vaddr_t vmx_pages_gva = 0;
+	vm_vaddr_t nested_gva = 0;
 
 	struct kvm_regs regs;
 	struct kvm_vm *vm;
@@ -114,8 +118,11 @@ int main(int argc, char *argv[])
 	vcpu_set_msr(vm, VCPU_ID, MSR_IA32_SMBASE, SMRAM_GPA);
 
 	if (kvm_check_cap(KVM_CAP_NESTED_STATE)) {
-		vcpu_alloc_vmx(vm, &vmx_pages_gva);
-		vcpu_args_set(vm, VCPU_ID, 1, vmx_pages_gva);
+		if (kvm_get_supported_cpuid_entry(0x80000001)->ecx & CPUID_SVM)
+			vcpu_alloc_svm(vm, &nested_gva);
+		else
+			vcpu_alloc_vmx(vm, &nested_gva);
+		vcpu_args_set(vm, VCPU_ID, 1, nested_gva);
 	} else {
 		pr_info("will skip SMM test with VMX enabled\n");
 		vcpu_args_set(vm, VCPU_ID, 1, 0);
-- 
2.25.4
* Re: [PATCH 2/2] selftests: kvm: fix smm test on SVM
From: Paolo Bonzini @ 2020-05-29 15:28 UTC
To: Vitaly Kuznetsov, kvm
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel

On 29/05/20 15:04, Vitaly Kuznetsov wrote:
> KVM_CAP_NESTED_STATE is now supported for AMD too, but the smm test
> still acts as if it were Intel-only.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---
>  tools/testing/selftests/kvm/x86_64/smm_test.c | 19 +++++++++++++------
>  1 file changed, 13 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/x86_64/smm_test.c b/tools/testing/selftests/kvm/x86_64/smm_test.c
> index 8230b6bc6b8f..6f8f478b3ceb 100644
> --- a/tools/testing/selftests/kvm/x86_64/smm_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/smm_test.c
> @@ -17,6 +17,7 @@
>  #include "kvm_util.h"
>  
>  #include "vmx.h"
> +#include "svm_util.h"
>  
>  #define VCPU_ID 1
>  
> @@ -58,7 +59,7 @@ void self_smi(void)
>  		      APIC_DEST_SELF | APIC_INT_ASSERT | APIC_DM_SMI);
>  }
>  
> -void guest_code(struct vmx_pages *vmx_pages)
> +void guest_code(void *arg)
>  {
>  	uint64_t apicbase = rdmsr(MSR_IA32_APICBASE);
>  
> @@ -72,8 +73,11 @@ void guest_code(void *arg)
>  
>  	sync_with_host(4);
>  
> -	if (vmx_pages) {
> -		GUEST_ASSERT(prepare_for_vmx_operation(vmx_pages));
> +	if (arg) {
> +		if (cpu_has_svm())
> +			generic_svm_setup(arg, NULL, NULL);
> +		else
> +			GUEST_ASSERT(prepare_for_vmx_operation(arg));
>  
>  		sync_with_host(5);
>  
> @@ -87,7 +91,7 @@ void guest_code(void *arg)
>  
>  int main(int argc, char *argv[])
>  {
> -	vm_vaddr_t vmx_pages_gva = 0;
> +	vm_vaddr_t nested_gva = 0;
>  
>  	struct kvm_regs regs;
>  	struct kvm_vm *vm;
> @@ -114,8 +118,11 @@ int main(int argc, char *argv[])
>  	vcpu_set_msr(vm, VCPU_ID, MSR_IA32_SMBASE, SMRAM_GPA);
>  
>  	if (kvm_check_cap(KVM_CAP_NESTED_STATE)) {
> -		vcpu_alloc_vmx(vm, &vmx_pages_gva);
> -		vcpu_args_set(vm, VCPU_ID, 1, vmx_pages_gva);
> +		if (kvm_get_supported_cpuid_entry(0x80000001)->ecx & CPUID_SVM)
> +			vcpu_alloc_svm(vm, &nested_gva);
> +		else
> +			vcpu_alloc_vmx(vm, &nested_gva);
> +		vcpu_args_set(vm, VCPU_ID, 1, nested_gva);
>  	} else {
>  		pr_info("will skip SMM test with VMX enabled\n");
>  		vcpu_args_set(vm, VCPU_ID, 1, 0);

Thanks, I'll include this in v3 of the nSVM series.

Paolo
* Re: [PATCH 1/2] selftests: kvm: introduce cpu_has_svm() check
From: Sean Christopherson @ 2020-05-29 15:18 UTC
To: Vitaly Kuznetsov
Cc: kvm, Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel

On Fri, May 29, 2020 at 03:04:06PM +0200, Vitaly Kuznetsov wrote:
> More tests may want to check whether the CPU is Intel or AMD in
> guest code; separate out cpu_has_svm() and put it as a static
> inline in svm_util.h.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---
>  tools/testing/selftests/kvm/include/x86_64/svm_util.h | 10 ++++++++++
>  tools/testing/selftests/kvm/x86_64/state_test.c       |  9 +--------
>  2 files changed, 11 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/include/x86_64/svm_util.h b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
> index cd037917fece..b1057773206a 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/svm_util.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
> @@ -35,4 +35,14 @@ void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_r
>  void run_guest(struct vmcb *vmcb, uint64_t vmcb_gpa);
>  void nested_svm_check_supported(void);
>  
> +static inline bool cpu_has_svm(void)
> +{
> +	u32 eax = 0x80000001, ecx;
> +
> +	asm volatile("cpuid" :
> +		     "=a" (eax), "=c" (ecx) : "0" (eax) : "ebx", "edx");

	u32 eax, ecx;

	asm("cpuid" : "=a" (eax), "=c" (ecx) : "a" (0x80000001) : "ebx", "edx");

The volatile shouldn't be needed, e.g. no one should be using this purely
for its serialization properties, and I don't see any reason to put the leaf
number into a variable.

Alternatively, adding a proper cpuid framework to processor.h would likely
be useful in the long run.

> +
> +	return ecx & CPUID_SVM;
> +}
> +
* Re: [PATCH 1/2] selftests: kvm: introduce cpu_has_svm() check
From: Vitaly Kuznetsov @ 2020-06-01 8:14 UTC
To: Sean Christopherson
Cc: kvm, Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> On Fri, May 29, 2020 at 03:04:06PM +0200, Vitaly Kuznetsov wrote:
>> More tests may want to check whether the CPU is Intel or AMD in
>> guest code; separate out cpu_has_svm() and put it as a static
>> inline in svm_util.h.
>>
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>> ---
>>  tools/testing/selftests/kvm/include/x86_64/svm_util.h | 10 ++++++++++
>>  tools/testing/selftests/kvm/x86_64/state_test.c       |  9 +--------
>>  2 files changed, 11 insertions(+), 8 deletions(-)
>>
>> diff --git a/tools/testing/selftests/kvm/include/x86_64/svm_util.h b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
>> index cd037917fece..b1057773206a 100644
>> --- a/tools/testing/selftests/kvm/include/x86_64/svm_util.h
>> +++ b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
>> @@ -35,4 +35,14 @@ void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_r
>>  void run_guest(struct vmcb *vmcb, uint64_t vmcb_gpa);
>>  void nested_svm_check_supported(void);
>>
>> +static inline bool cpu_has_svm(void)
>> +{
>> +	u32 eax = 0x80000001, ecx;
>> +
>> +	asm volatile("cpuid" :
>> +		     "=a" (eax), "=c" (ecx) : "0" (eax) : "ebx", "edx");
>
> 	u32 eax, ecx;
>
> 	asm("cpuid" : "=a" (eax), "=c" (ecx) : "a" (0x80000001) : "ebx", "edx");
>
> The volatile shouldn't be needed, e.g. no one should be using this purely
> for its serialization properties, and I don't see any reason to put the leaf
> number into a variable.
>
> Alternatively, adding a proper cpuid framework to processor.h would likely
> be useful in the long run.

All true; even better would be to find a way to include the definition of
native_cpuid*() from arch/x86/include/asm/processor.h, but I haven't
explored these options yet, as I was trying to address the immediate issue
with Paolo's SVM series. It can probably be done when there is a second
user of cpuid in tests which needs to check something other than the SVM
bit.

-- 
Vitaly