public inbox for kvm@vger.kernel.org
* [PATCH 0/4] qemu-kvm: Add some nested svm tests
@ 2010-08-02 13:33 Joerg Roedel
  2010-08-02 13:33 ` [PATCH 1/4] test: Run tests with asid 1 Joerg Roedel
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Joerg Roedel @ 2010-08-02 13:33 UTC (permalink / raw)
  To: Avi Kivity, Marcelo Tosatti; +Cc: kvm

Hi Avi, Marcelo,

here are three additional nested svm tests. The first two exercise the
features/fixes I posted last week for nested svm. To keep the passing
rate at only 66.7%, I added the asid_zero test, which currently fails ;-)
I will post patches to fix the vmrun_intercept and asid_zero test
failures shortly.

	Joerg



^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 1/4] test: Run tests with asid 1
  2010-08-02 13:33 [PATCH 0/4] qemu-kvm: Add some nested svm tests Joerg Roedel
@ 2010-08-02 13:33 ` Joerg Roedel
  2010-08-02 13:33 ` [PATCH 2/4] test: Add nested svm next_rip test Joerg Roedel
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Joerg Roedel @ 2010-08-02 13:33 UTC (permalink / raw)
  To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, Joerg Roedel

Running the tests with ASID 1 is correct since VMRUN with ASID 0 is not
allowed and fails with VMEXIT_INVALID.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 kvm/test/x86/svm.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/kvm/test/x86/svm.c b/kvm/test/x86/svm.c
index 628f3aa..dd4a8da 100644
--- a/kvm/test/x86/svm.c
+++ b/kvm/test/x86/svm.c
@@ -43,6 +43,7 @@ static void vmcb_ident(struct vmcb *vmcb)
     vmcb_set_seg(&save->gdtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
     sidt(&desc_table_ptr);
     vmcb_set_seg(&save->idtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
+    ctrl->asid = 1;
     save->cpl = 0;
     save->efer = rdmsr(MSR_EFER);
     save->cr4 = read_cr4();
-- 
1.7.0.4




* [PATCH 2/4] test: Add nested svm next_rip test
  2010-08-02 13:33 [PATCH 0/4] qemu-kvm: Add some nested svm tests Joerg Roedel
  2010-08-02 13:33 ` [PATCH 1/4] test: Run tests with asid 1 Joerg Roedel
@ 2010-08-02 13:33 ` Joerg Roedel
  2010-08-02 13:33 ` [PATCH 3/4] test: Add mode-switch test for nested svm Joerg Roedel
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Joerg Roedel @ 2010-08-02 13:33 UTC (permalink / raw)
  To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, Joerg Roedel

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 kvm/test/x86/svm.c |   29 +++++++++++++++++++++++++++++
 1 files changed, 29 insertions(+), 0 deletions(-)

diff --git a/kvm/test/x86/svm.c b/kvm/test/x86/svm.c
index dd4a8da..4a7a662 100644
--- a/kvm/test/x86/svm.c
+++ b/kvm/test/x86/svm.c
@@ -209,6 +209,32 @@ static void test_cr3_intercept_bypass(struct test *test)
     test->scratch = a;
 }
 
+static bool next_rip_supported(void)
+{
+    return (cpuid(SVM_CPUID_FUNC).d & 8);
+}
+
+static void prepare_next_rip(struct test *test)
+{
+    test->vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC);
+}
+
+
+static void test_next_rip(struct test *test)
+{
+    asm volatile ("rdtsc\n\t"
+                  ".globl exp_next_rip\n\t"
+                  "exp_next_rip:\n\t" ::: "eax", "edx");
+}
+
+static bool check_next_rip(struct test *test)
+{
+    extern char exp_next_rip;
+    unsigned long address = (unsigned long)&exp_next_rip;
+
+    return address == test->vmcb->control.next_rip;
+}
+
 static struct test tests[] = {
     { "null", default_supported, default_prepare, null_test,
       default_finished, null_check },
@@ -223,6 +249,9 @@ static struct test tests[] = {
     { "cr3 read intercept emulate", default_supported,
       prepare_cr3_intercept_bypass, test_cr3_intercept_bypass,
       default_finished, check_cr3_intercept },
+    { "next_rip", next_rip_supported, prepare_next_rip, test_next_rip,
+      default_finished, check_next_rip },
+
 };
 
 int main(int ac, char **av)
-- 
1.7.0.4




* [PATCH 3/4] test: Add mode-switch test for nested svm
  2010-08-02 13:33 [PATCH 0/4] qemu-kvm: Add some nested svm tests Joerg Roedel
  2010-08-02 13:33 ` [PATCH 1/4] test: Run tests with asid 1 Joerg Roedel
  2010-08-02 13:33 ` [PATCH 2/4] test: Add nested svm next_rip test Joerg Roedel
@ 2010-08-02 13:33 ` Joerg Roedel
  2010-08-02 13:55   ` Avi Kivity
  2010-08-02 13:33 ` [PATCH 4/4] test: Add test to check if asid 0 is allowed Joerg Roedel
  2010-08-02 14:44 ` [PATCH 0/4] qemu-kvm: Add some nested svm tests Avi Kivity
  4 siblings, 1 reply; 10+ messages in thread
From: Joerg Roedel @ 2010-08-02 13:33 UTC (permalink / raw)
  To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, Joerg Roedel

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 kvm/test/x86/cstart64.S |    5 ++
 kvm/test/x86/svm.c      |  109 +++++++++++++++++++++++++++++++++++++++++++++++
 kvm/test/x86/types.h    |   20 +++++++++
 3 files changed, 134 insertions(+), 0 deletions(-)
 create mode 100644 kvm/test/x86/types.h

diff --git a/kvm/test/x86/cstart64.S b/kvm/test/x86/cstart64.S
index f1a9d09..46e9d5c 100644
--- a/kvm/test/x86/cstart64.S
+++ b/kvm/test/x86/cstart64.S
@@ -51,6 +51,11 @@ gdt64:
 	.quad 0x00cf93000000ffff // 64-bit data segment
 	.quad 0x00affb000000ffff // 64-bit code segment (user)
 	.quad 0x00cff3000000ffff // 64-bit data segment (user)
+	.quad 0x00cf9b000000ffff // 32-bit code segment
	.quad 0x00cf92000000ffff // 32-bit data segment
+	.quad 0x008F9A000000FFFF // 16-bit code segment
+	.quad 0x008F92000000FFFF // 16-bit data segment
+
 tss_descr:
 	.rept max_cpus
 	.quad 0x000089000000ffff // 64-bit avail tss
diff --git a/kvm/test/x86/svm.c b/kvm/test/x86/svm.c
index 4a7a662..fd98505 100644
--- a/kvm/test/x86/svm.c
+++ b/kvm/test/x86/svm.c
@@ -4,6 +4,7 @@
 #include "msr.h"
 #include "vm.h"
 #include "smp.h"
+#include "types.h"
 
 static void setup_svm(void)
 {
@@ -235,6 +236,112 @@ static bool check_next_rip(struct test *test)
     return address == test->vmcb->control.next_rip;
 }
 
+static void prepare_mode_switch(struct test *test)
+{
+    test->vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR)
+                                             |  (1ULL << UD_VECTOR)
+                                             |  (1ULL << DF_VECTOR)
+                                             |  (1ULL << PF_VECTOR);
+    test->scratch = 0;
+}
+
+static void test_mode_switch(struct test *test)
+{
+    asm volatile("	cli\n"
+		 "	ljmp *1f\n" /* jump to 32-bit code segment */
+		 "1:\n"
+		 "	.long 2f\n"
+		 "	.long 40\n"
+		 ".code32\n"
+		 "2:\n"
+		 "	movl %%cr0, %%eax\n"
+		 "	btcl  $31, %%eax\n" /* clear PG */
+		 "	movl %%eax, %%cr0\n"
+		 "	movl $0xc0000080, %%ecx\n" /* EFER */
+		 "	rdmsr\n"
+		 "	btcl $8, %%eax\n" /* clear LME */
+		 "	wrmsr\n"
+		 "	movl %%cr4, %%eax\n"
+		 "	btcl $5, %%eax\n" /* clear PAE */
+		 "	movl %%eax, %%cr4\n"
+		 "	movw $64, %%ax\n"
+		 "	movw %%ax, %%ds\n"
+		 "	ljmpl $56, $3f\n" /* jump to 16 bit protected-mode */
+		 ".code16\n"
+		 "3:\n"
+		 "	movl %%cr0, %%eax\n"
+		 "	btcl $0, %%eax\n" /* clear PE  */
+		 "	movl %%eax, %%cr0\n"
+		 "	ljmpl $0, $4f\n"   /* jump to real-mode */
+		 "4:\n"
+		 "	vmmcall\n"
+		 "	movl %%cr0, %%eax\n"
+		 "	btsl $0, %%eax\n" /* set PE  */
+		 "	movl %%eax, %%cr0\n"
+		 "	ljmpl $40, $5f\n" /* back to protected mode */
+		 ".code32\n"
+		 "5:\n"
+		 "	movl %%cr4, %%eax\n"
+		 "	btsl $5, %%eax\n" /* set PAE */
+		 "	movl %%eax, %%cr4\n"
+		 "	movl $0xc0000080, %%ecx\n" /* EFER */
+		 "	rdmsr\n"
+		 "	btsl $8, %%eax\n" /* set LME */
+		 "	wrmsr\n"
+		 "	movl %%cr0, %%eax\n"
+		 "	btsl  $31, %%eax\n" /* set PG */
+		 "	movl %%eax, %%cr0\n"
+		 "	ljmpl $8, $6f\n"    /* back to long mode */
+		 ".code64\n\t"
+		 "6:\n"
+		 "	vmmcall\n"
+		 ::: "rax", "rbx", "rcx", "rdx", "memory");
+}
+
+static bool mode_switch_finished(struct test *test)
+{
+    u64 cr0, cr4, efer;
+
+    cr0  = test->vmcb->save.cr0;
+    cr4  = test->vmcb->save.cr4;
+    efer = test->vmcb->save.efer;
+
+    /* Only expect VMMCALL intercepts */
+    if (test->vmcb->control.exit_code != SVM_EXIT_VMMCALL)
+	    return true;
+
+    /* Jump over VMMCALL instruction */
+    test->vmcb->save.rip += 3;
+
+    /* Do sanity checks */
+    switch (test->scratch) {
+    case 0:
+        /* Test should be in real mode now - check for this */
+        if ((cr0  & 0x80000001) || /* CR0.PG, CR0.PE */
+            (cr4  & 0x00000020) || /* CR4.PAE */
+            (efer & 0x00000500))   /* EFER.LMA, EFER.LME */
+                return true;
+        break;
+    case 2:
+        /* Test should be back in long-mode now - check for this */
+        if (((cr0  & 0x80000001) != 0x80000001) || /* CR0.PG, CR0.PE */
+            ((cr4  & 0x00000020) != 0x00000020) || /* CR4.PAE */
+            ((efer & 0x00000500) != 0x00000500))   /* EFER.LMA, EFER.LME */
+		    return true;
+	break;
+    }
+
+    /* one step forward */
+    test->scratch += 1;
+
+    return test->scratch == 2;
+}
+
+static bool check_mode_switch(struct test *test)
+{
+	return test->scratch == 2;
+}
+
 static struct test tests[] = {
     { "null", default_supported, default_prepare, null_test,
       default_finished, null_check },
@@ -251,6 +358,8 @@ static struct test tests[] = {
       default_finished, check_cr3_intercept },
     { "next_rip", next_rip_supported, prepare_next_rip, test_next_rip,
       default_finished, check_next_rip },
+    { "mode_switch", default_supported, prepare_mode_switch, test_mode_switch,
+       mode_switch_finished, check_mode_switch },
 
 };
 
diff --git a/kvm/test/x86/types.h b/kvm/test/x86/types.h
new file mode 100644
index 0000000..fd22743
--- /dev/null
+++ b/kvm/test/x86/types.h
@@ -0,0 +1,20 @@
+#ifndef __TYPES_H
+#define __TYPES_H
+
+#define DE_VECTOR 0
+#define DB_VECTOR 1
+#define BP_VECTOR 3
+#define OF_VECTOR 4
+#define BR_VECTOR 5
+#define UD_VECTOR 6
+#define NM_VECTOR 7
+#define DF_VECTOR 8
+#define TS_VECTOR 10
+#define NP_VECTOR 11
+#define SS_VECTOR 12
+#define GP_VECTOR 13
+#define PF_VECTOR 14
+#define MF_VECTOR 16
+#define MC_VECTOR 18
+
+#endif
-- 
1.7.0.4




* [PATCH 4/4] test: Add test to check if asid 0 is allowed
  2010-08-02 13:33 [PATCH 0/4] qemu-kvm: Add some nested svm tests Joerg Roedel
                   ` (2 preceding siblings ...)
  2010-08-02 13:33 ` [PATCH 3/4] test: Add mode-switch test for nested svm Joerg Roedel
@ 2010-08-02 13:33 ` Joerg Roedel
  2010-08-02 14:44 ` [PATCH 0/4] qemu-kvm: Add some nested svm tests Avi Kivity
  4 siblings, 0 replies; 10+ messages in thread
From: Joerg Roedel @ 2010-08-02 13:33 UTC (permalink / raw)
  To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, Joerg Roedel

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 kvm/test/x86/svm.c |   17 +++++++++++++++++
 1 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/kvm/test/x86/svm.c b/kvm/test/x86/svm.c
index fd98505..2f1c900 100644
--- a/kvm/test/x86/svm.c
+++ b/kvm/test/x86/svm.c
@@ -342,6 +342,21 @@ static bool check_mode_switch(struct test *test)
 	return test->scratch == 2;
 }
 
+static void prepare_asid_zero(struct test *test)
+{
+    test->vmcb->control.asid = 0;
+}
+
+static void test_asid_zero(struct test *test)
+{
+    asm volatile ("vmmcall\n\t");
+}
+
+static bool check_asid_zero(struct test *test)
+{
+    return test->vmcb->control.exit_code == SVM_EXIT_ERR;
+}
+
 static struct test tests[] = {
     { "null", default_supported, default_prepare, null_test,
       default_finished, null_check },
@@ -360,6 +375,8 @@ static struct test tests[] = {
       default_finished, check_next_rip },
     { "mode_switch", default_supported, prepare_mode_switch, test_mode_switch,
        mode_switch_finished, check_mode_switch },
+    { "asid_zero", default_supported, prepare_asid_zero, test_asid_zero,
+       default_finished, check_asid_zero },
 
 };
 
-- 
1.7.0.4




* Re: [PATCH 3/4] test: Add mode-switch test for nested svm
  2010-08-02 13:33 ` [PATCH 3/4] test: Add mode-switch test for nested svm Joerg Roedel
@ 2010-08-02 13:55   ` Avi Kivity
  2010-08-02 14:11     ` Roedel, Joerg
  0 siblings, 1 reply; 10+ messages in thread
From: Avi Kivity @ 2010-08-02 13:55 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Marcelo Tosatti, kvm

  On 08/02/2010 04:33 PM, Joerg Roedel wrote:
> Signed-off-by: Joerg Roedel<joerg.roedel@amd.com>
> ---
>   kvm/test/x86/cstart64.S |    5 ++
>   kvm/test/x86/svm.c      |  109 +++++++++++++++++++++++++++++++++++++++++++++++
>   kvm/test/x86/types.h    |   20 +++++++++
>   3 files changed, 134 insertions(+), 0 deletions(-)
>   create mode 100644 kvm/test/x86/types.h
>
> diff --git a/kvm/test/x86/cstart64.S b/kvm/test/x86/cstart64.S
> index f1a9d09..46e9d5c 100644
> --- a/kvm/test/x86/cstart64.S
> +++ b/kvm/test/x86/cstart64.S
> @@ -51,6 +51,11 @@ gdt64:
>   	.quad 0x00cf93000000ffff // 64-bit data segment
>   	.quad 0x00affb000000ffff // 64-bit code segment (user)
>   	.quad 0x00cff3000000ffff // 64-bit data segment (user)
> +	.quad 0x00cf9b000000ffff // 32-bit code segment
> +	.quad 0x00cf92000000ffff // 32-bit data segment
> +	.quad 0x008F9A000000FFFF // 16-bit code segment
> +	.quad 0x008F92000000FFFF // 16-bit data segment
> +
>   tss_descr:
>   	.rept max_cpus
>   	.quad 0x000089000000ffff // 64-bit avail tss
> diff --git a/kvm/test/x86/svm.c b/kvm/test/x86/svm.c
> index 4a7a662..fd98505 100644
> --- a/kvm/test/x86/svm.c
> +++ b/kvm/test/x86/svm.c
> @@ -4,6 +4,7 @@
>   #include "msr.h"
>   #include "vm.h"
>   #include "smp.h"
> +#include "types.h"
>
>   static void setup_svm(void)
>   {
> @@ -235,6 +236,112 @@ static bool check_next_rip(struct test *test)
>       return address == test->vmcb->control.next_rip;
>   }
>
> +static void prepare_mode_switch(struct test *test)
> +{
> +    test->vmcb->control.intercept_exceptions |= (1ULL<<  GP_VECTOR)
> +                                             |  (1ULL<<  UD_VECTOR)
> +                                             |  (1ULL<<  DF_VECTOR)
> +                                             |  (1ULL<<  PF_VECTOR);
> +    test->scratch = 0;
> +}
> +
> +static void test_mode_switch(struct test *test)
> +{
> +    asm volatile("	cli\n"
> +		 "	ljmp *1f\n" /* jump to 32-bit code segment */
> +		 "1:\n"
> +		 "	.long 2f\n"
> +		 "	.long 40\n"
> +		 ".code32\n"
> +		 "2:\n"
> +		 "	movl %%cr0, %%eax\n"
> +		 "	btcl  $31, %%eax\n" /* clear PG */
> +		 "	movl %%eax, %%cr0\n"
> +		 "	movl $0xc0000080, %%ecx\n" /* EFER */
> +		 "	rdmsr\n"
> +		 "	btcl $8, %%eax\n" /* clear LME */
> +		 "	wrmsr\n"
> +		 "	movl %%cr4, %%eax\n"
> +		 "	btcl $5, %%eax\n" /* clear PAE */
> +		 "	movl %%eax, %%cr4\n"
> +		 "	movw $64, %%ax\n"
> +		 "	movw %%ax, %%ds\n"
> +		 "	ljmpl $56, $3f\n" /* jump to 16 bit protected-mode */
> +		 ".code16\n"
> +		 "3:\n"
> +		 "	movl %%cr0, %%eax\n"
> +		 "	btcl $0, %%eax\n" /* clear PE  */
> +		 "	movl %%eax, %%cr0\n"
> +		 "	ljmpl $0, $4f\n"   /* jump to real-mode */
> +		 "4:\n"
> +		 "	vmmcall\n"
> +		 "	movl %%cr0, %%eax\n"
> +		 "	btsl $0, %%eax\n" /* set PE  */
> +		 "	movl %%eax, %%cr0\n"
> +		 "	ljmpl $40, $5f\n" /* back to protected mode */
> +		 ".code32\n"
> +		 "5:\n"
> +		 "	movl %%cr4, %%eax\n"
> +		 "	btsl $5, %%eax\n" /* set PAE */
> +		 "	movl %%eax, %%cr4\n"
> +		 "	movl $0xc0000080, %%ecx\n" /* EFER */
> +		 "	rdmsr\n"
> +		 "	btsl $8, %%eax\n" /* set LME */
> +		 "	wrmsr\n"
> +		 "	movl %%cr0, %%eax\n"
> +		 "	btsl  $31, %%eax\n" /* set PG */
> +		 "	movl %%eax, %%cr0\n"
> +		 "	ljmpl $8, $6f\n"    /* back to long mode */
> +		 ".code64\n\t"
> +		 "6:\n"
> +		 "	vmmcall\n"
> +		 ::: "rax", "rbx", "rcx", "rdx", "memory");
> +}
> +

What is this testing exactly?  There is no svm function directly 
associated with mode switch.  In fact, most L1s will intercept cr and 
efer access and emulate the mode switch, rather than letting L2 perform 
the mode switch directly.

So, it's testing that when cr and msr intercepts are disabled, those 
operations indeed aren't intercepted, and that writes to those registers 
are reflected in L1.  But this can be tested individually, like the cr3 
tests we already have, by picking a relatively unimportant bit (cr4.pge, 
cr0.ts) and playing with it with intercepts enabled and disabled.

-- 
error compiling committee.c: too many arguments to function



* Re: [PATCH 3/4] test: Add mode-switch test for nested svm
  2010-08-02 13:55   ` Avi Kivity
@ 2010-08-02 14:11     ` Roedel, Joerg
  2010-08-02 14:24       ` Avi Kivity
  0 siblings, 1 reply; 10+ messages in thread
From: Roedel, Joerg @ 2010-08-02 14:11 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Marcelo Tosatti, kvm@vger.kernel.org

On Mon, Aug 02, 2010 at 09:55:42AM -0400, Avi Kivity wrote:
>   On 08/02/2010 04:33 PM, Joerg Roedel wrote:
> > +static void test_mode_switch(struct test *test)
> > +{
> > +    asm volatile("	cli\n"
> > +		 "	ljmp *1f\n" /* jump to 32-bit code segment */
> > +		 "1:\n"
> > +		 "	.long 2f\n"
> > +		 "	.long 40\n"
> > +		 ".code32\n"
> > +		 "2:\n"
> > +		 "	movl %%cr0, %%eax\n"
> > +		 "	btcl  $31, %%eax\n" /* clear PG */
> > +		 "	movl %%eax, %%cr0\n"
> > +		 "	movl $0xc0000080, %%ecx\n" /* EFER */
> > +		 "	rdmsr\n"
> > +		 "	btcl $8, %%eax\n" /* clear LME */
> > +		 "	wrmsr\n"
> > +		 "	movl %%cr4, %%eax\n"
> > +		 "	btcl $5, %%eax\n" /* clear PAE */
> > +		 "	movl %%eax, %%cr4\n"
> > +		 "	movw $64, %%ax\n"
> > +		 "	movw %%ax, %%ds\n"
> > +		 "	ljmpl $56, $3f\n" /* jump to 16 bit protected-mode */
> > +		 ".code16\n"
> > +		 "3:\n"
> > +		 "	movl %%cr0, %%eax\n"
> > +		 "	btcl $0, %%eax\n" /* clear PE  */
> > +		 "	movl %%eax, %%cr0\n"
> > +		 "	ljmpl $0, $4f\n"   /* jump to real-mode */
> > +		 "4:\n"
> > +		 "	vmmcall\n"
> > +		 "	movl %%cr0, %%eax\n"
> > +		 "	btsl $0, %%eax\n" /* set PE  */
> > +		 "	movl %%eax, %%cr0\n"
> > +		 "	ljmpl $40, $5f\n" /* back to protected mode */
> > +		 ".code32\n"
> > +		 "5:\n"
> > +		 "	movl %%cr4, %%eax\n"
> > +		 "	btsl $5, %%eax\n" /* set PAE */
> > +		 "	movl %%eax, %%cr4\n"
> > +		 "	movl $0xc0000080, %%ecx\n" /* EFER */
> > +		 "	rdmsr\n"
> > +		 "	btsl $8, %%eax\n" /* set LME */
> > +		 "	wrmsr\n"
> > +		 "	movl %%cr0, %%eax\n"
> > +		 "	btsl  $31, %%eax\n" /* set PG */
> > +		 "	movl %%eax, %%cr0\n"
> > +		 "	ljmpl $8, $6f\n"    /* back to long mode */
> > +		 ".code64\n\t"
> > +		 "6:\n"
> > +		 "	vmmcall\n"
> > +		 ::: "rax", "rbx", "rcx", "rdx", "memory");
> > +}
> > +
> 
> What is this testing exactly?  There is no svm function directly 
> associated with mode switch.  In fact, most L1s will intercept cr and 
> efer access and emulate the mode switch, rather than letting L2 perform 
> the mode switch directly.

This tests the failure case addressed by the nested-svm EFER patch I
submitted last week: the sequence above, which switches from long mode
to real mode and back to long mode, fails without that patch.

	Joerg

-- 
AMD Operating System Research Center

Advanced Micro Devices GmbH Einsteinring 24 85609 Dornach
General Managers: Alberto Bozzo, Andrew Bowd
Registration: Dornach, Landkr. Muenchen; Registerger. Muenchen, HRB Nr. 43632



* Re: [PATCH 3/4] test: Add mode-switch test for nested svm
  2010-08-02 14:11     ` Roedel, Joerg
@ 2010-08-02 14:24       ` Avi Kivity
  2010-08-02 14:56         ` Roedel, Joerg
  0 siblings, 1 reply; 10+ messages in thread
From: Avi Kivity @ 2010-08-02 14:24 UTC (permalink / raw)
  To: Roedel, Joerg; +Cc: Marcelo Tosatti, kvm@vger.kernel.org

  On 08/02/2010 05:11 PM, Roedel, Joerg wrote:
>
>> What is this testing exactly?  There is no svm function directly
>> associated with mode switch.  In fact, most L1s will intercept cr and
>> efer access and emulate the mode switch, rather than letting L2 perform
>> the mode switch directly.
> This is testing the failure case without the nested-svm efer patch I
> submitted last week. The sequence above (which switches from long mode
> to real mode and back to long mode) fails without this patch.

A direct test would be to

   mov $MSR_EFER, %ecx
   rdmsr
   xor $EFER_NX, %eax
   wrmsr

and see that L1 EFER was updated.

I don't object to the more complicated test, but in general prefer 
simpler, direct tests so that when they fail we know exactly why.

-- 
error compiling committee.c: too many arguments to function



* Re: [PATCH 0/4] qemu-kvm: Add some nested svm tests
  2010-08-02 13:33 [PATCH 0/4] qemu-kvm: Add some nested svm tests Joerg Roedel
                   ` (3 preceding siblings ...)
  2010-08-02 13:33 ` [PATCH 4/4] test: Add test to check if asid 0 is allowed Joerg Roedel
@ 2010-08-02 14:44 ` Avi Kivity
  4 siblings, 0 replies; 10+ messages in thread
From: Avi Kivity @ 2010-08-02 14:44 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Marcelo Tosatti, kvm

  On 08/02/2010 04:33 PM, Joerg Roedel wrote:
> Hi Avi, Marcelo,
>
> here are three additional nested svm tests. The first two exercise the
> features/fixes I posted last week for nested svm. To keep the passing
> rate at only 66.7%, I added the asid_zero test, which currently fails ;-)
> I will post patches to fix the vmrun_intercept and asid_zero test
> failures shortly.
>

All applied, thanks.

-- 
error compiling committee.c: too many arguments to function



* Re: [PATCH 3/4] test: Add mode-switch test for nested svm
  2010-08-02 14:24       ` Avi Kivity
@ 2010-08-02 14:56         ` Roedel, Joerg
  0 siblings, 0 replies; 10+ messages in thread
From: Roedel, Joerg @ 2010-08-02 14:56 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Marcelo Tosatti, kvm@vger.kernel.org

On Mon, Aug 02, 2010 at 10:24:43AM -0400, Avi Kivity wrote:
>   On 08/02/2010 05:11 PM, Roedel, Joerg wrote:

> > This is testing the failure case without the nested-svm efer patch I
> > submitted last week. The sequence above (which switches from long mode
> > to real mode and back to long mode) fails without this patch.
> 
> A direct test would be to
> 
>    mov $MSR_EFER, %ecx
>    rdmsr
>    xor $EFER_NX, %eax
>    wrmsr
> 
> and see that L1 EFER was updated.
> 
> I don't object to the more complicated test, but in general prefer 
> simpler, direct tests so that when they fail we know exactly why.

True, smaller tests are generally better. But I think it's good to also
test that the whole sequence works. I can add a separate test for the
efer bug if you want.
Testing cr0 or cr4 writes is trickier because it requires putting the
nested guest into real mode or protected mode directly.

	Joerg

-- 
AMD Operating System Research Center

Advanced Micro Devices GmbH Einsteinring 24 85609 Dornach
General Managers: Alberto Bozzo, Andrew Bowd
Registration: Dornach, Landkr. Muenchen; Registerger. Muenchen, HRB Nr. 43632



end of thread, other threads:[~2010-08-02 15:52 UTC | newest]

Thread overview: 10+ messages
2010-08-02 13:33 [PATCH 0/4] qemu-kvm: Add some nested svm tests Joerg Roedel
2010-08-02 13:33 ` [PATCH 1/4] test: Run tests with asid 1 Joerg Roedel
2010-08-02 13:33 ` [PATCH 2/4] test: Add nested svm next_rip test Joerg Roedel
2010-08-02 13:33 ` [PATCH 3/4] test: Add mode-switch test for nested svm Joerg Roedel
2010-08-02 13:55   ` Avi Kivity
2010-08-02 14:11     ` Roedel, Joerg
2010-08-02 14:24       ` Avi Kivity
2010-08-02 14:56         ` Roedel, Joerg
2010-08-02 13:33 ` [PATCH 4/4] test: Add test to check if asid 0 is allowed Joerg Roedel
2010-08-02 14:44 ` [PATCH 0/4] qemu-kvm: Add some nested svm tests Avi Kivity
