xen-devel.lists.xenproject.org archive mirror
* [PATCH v2 0/3] x86/emul: MSR emulation improvements
@ 2017-02-20 10:28 Andrew Cooper
  2017-02-20 10:28 ` [PATCH v2 1/3] x86/hvm: Don't raise #GP behind the emulator's back for MSR accesses Andrew Cooper
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Andrew Cooper @ 2017-02-20 10:28 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper

Split out from a previous general emulation series.  Substantially rebased.

Andrew Cooper (3):
  x86/hvm: Don't raise #GP behind the emulator's back for MSR accesses
  x86/emul: Introduce common msr_val for emulation
  x86/emul: Support CPUID faulting via a speculative MSR read

 xen/arch/x86/hvm/emulate.c             |  23 +++----
 xen/arch/x86/hvm/hvm.c                 |   7 +-
 xen/arch/x86/hvm/svm/svm.c             |   4 +-
 xen/arch/x86/hvm/vmx/vmx.c             |  23 +++++--
 xen/arch/x86/hvm/vmx/vvmx.c            |  19 ++++--
 xen/arch/x86/traps.c                   |  18 +-----
 xen/arch/x86/x86_emulate/x86_emulate.c | 114 ++++++++++++++++-----------------
 xen/arch/x86/x86_emulate/x86_emulate.h |   7 +-
 xen/include/asm-x86/hvm/support.h      |  12 +++-
 9 files changed, 118 insertions(+), 109 deletions(-)

-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v2 1/3] x86/hvm: Don't raise #GP behind the emulator's back for MSR accesses
  2017-02-20 10:28 [PATCH v2 0/3] x86/emul: MSR emulation improvements Andrew Cooper
@ 2017-02-20 10:28 ` Andrew Cooper
  2017-02-20 10:34   ` Paul Durrant
  2017-02-21 13:46   ` Boris Ostrovsky
  2017-02-20 10:28 ` [PATCH v2 2/3] x86/emul: Introduce common msr_val for emulation Andrew Cooper
  2017-02-20 10:28 ` [PATCH v2 3/3] x86/emul: Support CPUID faulting via a speculative MSR read Andrew Cooper
  2 siblings, 2 replies; 12+ messages in thread
From: Andrew Cooper @ 2017-02-20 10:28 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Boris Ostrovsky, Paul Durrant,
	Suravee Suthikulpanit

The current hvm_msr_{read,write}_intercept() infrastructure calls
hvm_inject_hw_exception() directly to latch a fault, and returns
X86EMUL_EXCEPTION to its caller.

This behaviour is problematic for the hvmemul_{read,write}_msr() paths, as the
fault is raised behind the back of the x86 emulator.

Alter the behaviour so hvm_msr_{read,write}_intercept() simply returns
X86EMUL_EXCEPTION, leaving the callers to actually inject the #GP fault.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Paul Durrant <paul.durrant@citrix.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

v2:
 * Substantial rebase
 * Introduce __must_check for hvm_msr_{read,write}_intercept()
---
 xen/arch/x86/hvm/emulate.c        | 14 ++++++++++++--
 xen/arch/x86/hvm/hvm.c            |  7 ++++---
 xen/arch/x86/hvm/svm/svm.c        |  4 ++--
 xen/arch/x86/hvm/vmx/vmx.c        | 23 ++++++++++++++++++-----
 xen/arch/x86/hvm/vmx/vvmx.c       | 19 ++++++++++++++-----
 xen/include/asm-x86/hvm/support.h | 12 +++++++++---
 6 files changed, 59 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 14f9b43..edcae5e 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1544,7 +1544,12 @@ static int hvmemul_read_msr(
     uint64_t *val,
     struct x86_emulate_ctxt *ctxt)
 {
-    return hvm_msr_read_intercept(reg, val);
+    int rc = hvm_msr_read_intercept(reg, val);
+
+    if ( rc == X86EMUL_EXCEPTION )
+        x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
+
+    return rc;
 }
 
 static int hvmemul_write_msr(
@@ -1552,7 +1557,12 @@ static int hvmemul_write_msr(
     uint64_t val,
     struct x86_emulate_ctxt *ctxt)
 {
-    return hvm_msr_write_intercept(reg, val, 1);
+    int rc = hvm_msr_write_intercept(reg, val, 1);
+
+    if ( rc == X86EMUL_EXCEPTION )
+        x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
+
+    return rc;
 }
 
 static int hvmemul_wbinvd(
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 6621d62..08855c2 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -518,7 +518,10 @@ void hvm_do_resume(struct vcpu *v)
 
         if ( w->do_write.msr )
         {
-            hvm_msr_write_intercept(w->msr, w->value, 0);
+            if ( hvm_msr_write_intercept(w->msr, w->value, 0) ==
+                 X86EMUL_EXCEPTION )
+                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+
             w->do_write.msr = 0;
         }
 
@@ -3455,7 +3458,6 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     return ret;
 
  gp_fault:
-    hvm_inject_hw_exception(TRAP_gp_fault, 0);
     ret = X86EMUL_EXCEPTION;
     *msr_content = -1ull;
     goto out;
@@ -3600,7 +3602,6 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
     return ret;
 
 gp_fault:
-    hvm_inject_hw_exception(TRAP_gp_fault, 0);
     return X86EMUL_EXCEPTION;
 }
 
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 894c457..b864535 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1744,7 +1744,6 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     return X86EMUL_OKAY;
 
  gpf:
-    hvm_inject_hw_exception(TRAP_gp_fault, 0);
     return X86EMUL_EXCEPTION;
 }
 
@@ -1897,7 +1896,6 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
     return result;
 
  gpf:
-    hvm_inject_hw_exception(TRAP_gp_fault, 0);
     return X86EMUL_EXCEPTION;
 }
 
@@ -1924,6 +1922,8 @@ static void svm_do_msr_access(struct cpu_user_regs *regs)
 
     if ( rc == X86EMUL_OKAY )
         __update_guest_eip(regs, inst_len);
+    else if ( rc == X86EMUL_EXCEPTION )
+        hvm_inject_hw_exception(TRAP_gp_fault, 0);
 }
 
 static void svm_vmexit_do_hlt(struct vmcb_struct *vmcb,
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 597d7ac..b5bfa05 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2734,7 +2734,6 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     return X86EMUL_OKAY;
 
 gp_fault:
-    hvm_inject_hw_exception(TRAP_gp_fault, 0);
     return X86EMUL_EXCEPTION;
 }
 
@@ -2971,7 +2970,6 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
     return X86EMUL_OKAY;
 
 gp_fault:
-    hvm_inject_hw_exception(TRAP_gp_fault, 0);
     return X86EMUL_EXCEPTION;
 }
 
@@ -3664,18 +3662,33 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         break;
     case EXIT_REASON_MSR_READ:
     {
-        uint64_t msr_content;
-        if ( hvm_msr_read_intercept(regs->_ecx, &msr_content) == X86EMUL_OKAY )
+        uint64_t msr_content = 0;
+
+        switch ( hvm_msr_read_intercept(regs->_ecx, &msr_content) )
         {
+        case X86EMUL_OKAY:
             msr_split(regs, msr_content);
             update_guest_eip(); /* Safe: RDMSR */
+            break;
+
+        case X86EMUL_EXCEPTION:
+            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+            break;
         }
         break;
     }
 
     case EXIT_REASON_MSR_WRITE:
-        if ( hvm_msr_write_intercept(regs->_ecx, msr_fold(regs), 1) == X86EMUL_OKAY )
+        switch ( hvm_msr_write_intercept(regs->_ecx, msr_fold(regs), 1) )
+        {
+        case X86EMUL_OKAY:
             update_guest_eip(); /* Safe: WRMSR */
+            break;
+
+        case X86EMUL_EXCEPTION:
+            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+            break;
+        }
         break;
 
     case EXIT_REASON_VMXOFF:
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index f6a25a6..c830d16 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1032,6 +1032,7 @@ static void load_shadow_guest_state(struct vcpu *v)
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     u32 control;
     u64 cr_gh_mask, cr_read_shadow;
+    int rc;
 
     static const u16 vmentry_fields[] = {
         VM_ENTRY_INTR_INFO,
@@ -1053,8 +1054,12 @@ static void load_shadow_guest_state(struct vcpu *v)
     if ( control & VM_ENTRY_LOAD_GUEST_PAT )
         hvm_set_guest_pat(v, get_vvmcs(v, GUEST_PAT));
     if ( control & VM_ENTRY_LOAD_PERF_GLOBAL_CTRL )
-        hvm_msr_write_intercept(MSR_CORE_PERF_GLOBAL_CTRL,
-                                get_vvmcs(v, GUEST_PERF_GLOBAL_CTRL), 0);
+    {
+        rc = hvm_msr_write_intercept(MSR_CORE_PERF_GLOBAL_CTRL,
+                                     get_vvmcs(v, GUEST_PERF_GLOBAL_CTRL), 0);
+        if ( rc == X86EMUL_EXCEPTION )
+            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+    }
 
     hvm_funcs.set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset, 0);
 
@@ -1222,7 +1227,7 @@ static void sync_vvmcs_ro(struct vcpu *v)
 
 static void load_vvmcs_host_state(struct vcpu *v)
 {
-    int i;
+    int i, rc;
     u64 r;
     u32 control;
 
@@ -1240,8 +1245,12 @@ static void load_vvmcs_host_state(struct vcpu *v)
     if ( control & VM_EXIT_LOAD_HOST_PAT )
         hvm_set_guest_pat(v, get_vvmcs(v, HOST_PAT));
     if ( control & VM_EXIT_LOAD_PERF_GLOBAL_CTRL )
-        hvm_msr_write_intercept(MSR_CORE_PERF_GLOBAL_CTRL,
-                                get_vvmcs(v, HOST_PERF_GLOBAL_CTRL), 1);
+    {
+        rc = hvm_msr_write_intercept(MSR_CORE_PERF_GLOBAL_CTRL,
+                                     get_vvmcs(v, HOST_PERF_GLOBAL_CTRL), 1);
+        if ( rc == X86EMUL_EXCEPTION )
+            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+    }
 
     hvm_funcs.set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset, 0);
 
diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-x86/hvm/support.h
index 262955d..5e25698 100644
--- a/xen/include/asm-x86/hvm/support.h
+++ b/xen/include/asm-x86/hvm/support.h
@@ -121,13 +121,19 @@ int hvm_set_efer(uint64_t value);
 int hvm_set_cr0(unsigned long value, bool_t may_defer);
 int hvm_set_cr3(unsigned long value, bool_t may_defer);
 int hvm_set_cr4(unsigned long value, bool_t may_defer);
-int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content);
-int hvm_msr_write_intercept(
-    unsigned int msr, uint64_t msr_content, bool_t may_defer);
 int hvm_mov_to_cr(unsigned int cr, unsigned int gpr);
 int hvm_mov_from_cr(unsigned int cr, unsigned int gpr);
 void hvm_ud_intercept(struct cpu_user_regs *);
 
+/*
+ * May return X86EMUL_EXCEPTION, at which point the caller is responsible for
+ * injecting a #GP fault.  Used to support speculative reads.
+ */
+int __must_check hvm_msr_read_intercept(
+    unsigned int msr, uint64_t *msr_content);
+int __must_check hvm_msr_write_intercept(
+    unsigned int msr, uint64_t msr_content, bool_t may_defer);
+
 #endif /* __ASM_X86_HVM_SUPPORT_H__ */
 
 /*
-- 
2.1.4



* [PATCH v2 2/3] x86/emul: Introduce common msr_val for emulation
  2017-02-20 10:28 [PATCH v2 0/3] x86/emul: MSR emulation improvements Andrew Cooper
  2017-02-20 10:28 ` [PATCH v2 1/3] x86/hvm: Don't raise #GP behind the emulator's back for MSR accesses Andrew Cooper
@ 2017-02-20 10:28 ` Andrew Cooper
  2017-02-20 10:55   ` Jan Beulich
  2017-02-20 10:28 ` [PATCH v2 3/3] x86/emul: Support CPUID faulting via a speculative MSR read Andrew Cooper
  2 siblings, 1 reply; 12+ messages in thread
From: Andrew Cooper @ 2017-02-20 10:28 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Jan Beulich

Introduce a common msr_val variable and use it consistently in place of the
local tsc_aux, msr_content and val declarations, and replace open-coded tests
against 0 with explicit X86EMUL_OKAY comparisons.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>

v2:
 * New
---
 xen/arch/x86/x86_emulate/x86_emulate.c | 95 ++++++++++++++--------------------
 1 file changed, 40 insertions(+), 55 deletions(-)

diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index dc0c28a..f339d36 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -2841,6 +2841,7 @@ x86_emulate(
         struct segment_register cs, sreg;
         unsigned long cr4;
         struct cpuid_leaf cpuid_leaf;
+        uint64_t msr_val;
 
     case 0x00 ... 0x05: add: /* add */
         emulate_2op_SrcV("add", src, dst, _regs._eflags);
@@ -4691,14 +4692,12 @@ x86_emulate(
             goto complete_insn;
 
         case 0xf9: /* rdtscp */
-        {
-            uint64_t tsc_aux;
             fail_if(ops->read_msr == NULL);
-            if ( (rc = ops->read_msr(MSR_TSC_AUX, &tsc_aux, ctxt)) != 0 )
+            if ( (rc = ops->read_msr(MSR_TSC_AUX,
+                                     &msr_val, ctxt)) != X86EMUL_OKAY )
                 goto done;
-            _regs.r(cx) = (uint32_t)tsc_aux;
+            _regs.r(cx) = (uint32_t)msr_val;
             goto rdtsc;
-        }
 
         case 0xfc: /* clzero */
         {
@@ -4919,21 +4918,19 @@ x86_emulate(
             dst.type = OP_NONE;
         break;
 
-    case X86EMUL_OPC(0x0f, 0x05): /* syscall */ {
-        uint64_t msr_content;
-
+    case X86EMUL_OPC(0x0f, 0x05): /* syscall */
         generate_exception_if(!in_protmode(ctxt, ops), EXC_UD);
 
         /* Inject #UD if syscall/sysret are disabled. */
         fail_if(ops->read_msr == NULL);
-        if ( (rc = ops->read_msr(MSR_EFER, &msr_content, ctxt)) != 0 )
+        if ( (rc = ops->read_msr(MSR_EFER, &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
-        generate_exception_if((msr_content & EFER_SCE) == 0, EXC_UD);
+        generate_exception_if((msr_val & EFER_SCE) == 0, EXC_UD);
 
-        if ( (rc = ops->read_msr(MSR_STAR, &msr_content, ctxt)) != 0 )
+        if ( (rc = ops->read_msr(MSR_STAR, &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
 
-        cs.sel = (msr_content >> 32) & ~3; /* SELECTOR_RPL_MASK */
+        cs.sel = (msr_val >> 32) & ~3; /* SELECTOR_RPL_MASK */
         sreg.sel = cs.sel + 8;
 
         cs.base = sreg.base = 0; /* flat segment */
@@ -4952,13 +4949,14 @@ x86_emulate(
             _regs.r11 = _regs._eflags & ~X86_EFLAGS_RF;
 
             if ( (rc = ops->read_msr(mode_64bit() ? MSR_LSTAR : MSR_CSTAR,
-                                     &msr_content, ctxt)) != 0 )
+                                     &msr_val, ctxt)) != X86EMUL_OKAY )
                 goto done;
-            _regs.rip = msr_content;
+            _regs.rip = msr_val;
 
-            if ( (rc = ops->read_msr(MSR_SYSCALL_MASK, &msr_content, ctxt)) != 0 )
+            if ( (rc = ops->read_msr(MSR_SYSCALL_MASK,
+                                     &msr_val, ctxt)) != X86EMUL_OKAY )
                 goto done;
-            _regs._eflags &= ~(msr_content | X86_EFLAGS_RF);
+            _regs._eflags &= ~(msr_val | X86_EFLAGS_RF);
         }
         else
 #endif
@@ -4966,7 +4964,7 @@ x86_emulate(
             cs.attr.bytes = 0xc9b; /* G+DB+P+S+Code */
 
             _regs.r(cx) = _regs._eip;
-            _regs._eip = msr_content;
+            _regs._eip = msr_val;
             _regs._eflags &= ~(X86_EFLAGS_VM | X86_EFLAGS_IF | X86_EFLAGS_RF);
         }
 
@@ -4991,9 +4989,7 @@ x86_emulate(
          * #DB (or to not use enable EFER.SCE to start with).
          */
         singlestep = _regs._eflags & X86_EFLAGS_TF;
-
         break;
-    }
 
     case X86EMUL_OPC(0x0f, 0x06): /* clts */
         generate_exception_if(!mode_ring0(), EXC_GP, 0);
@@ -5169,9 +5165,7 @@ x86_emulate(
             goto done;
         break;
 
-    case X86EMUL_OPC(0x0f, 0x31): rdtsc: /* rdtsc */ {
-        uint64_t val;
-
+    case X86EMUL_OPC(0x0f, 0x31): rdtsc: /* rdtsc */
         if ( !mode_ring0() )
         {
             fail_if(ops->read_cr == NULL);
@@ -5180,23 +5174,21 @@ x86_emulate(
             generate_exception_if(cr4 & X86_CR4_TSD, EXC_GP, 0);
         }
         fail_if(ops->read_msr == NULL);
-        if ( (rc = ops->read_msr(MSR_IA32_TSC, &val, ctxt)) != 0 )
+        if ( (rc = ops->read_msr(MSR_IA32_TSC,
+                                 &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
-        _regs.r(dx) = val >> 32;
-        _regs.r(ax) = (uint32_t)val;
+        _regs.r(dx) = msr_val >> 32;
+        _regs.r(ax) = (uint32_t)msr_val;
         break;
-    }
 
-    case X86EMUL_OPC(0x0f, 0x32): /* rdmsr */ {
-        uint64_t val;
+    case X86EMUL_OPC(0x0f, 0x32): /* rdmsr */
         generate_exception_if(!mode_ring0(), EXC_GP, 0);
         fail_if(ops->read_msr == NULL);
-        if ( (rc = ops->read_msr(_regs._ecx, &val, ctxt)) != 0 )
+        if ( (rc = ops->read_msr(_regs._ecx, &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
-        _regs.r(dx) = val >> 32;
-        _regs.r(ax) = (uint32_t)val;
+        _regs.r(dx) = msr_val >> 32;
+        _regs.r(ax) = (uint32_t)msr_val;
         break;
-    }
 
     case X86EMUL_OPC(0x0f, 0x40) ... X86EMUL_OPC(0x0f, 0x4f): /* cmovcc */
         vcpu_must_have(cmov);
@@ -5205,7 +5197,6 @@ x86_emulate(
         break;
 
     case X86EMUL_OPC(0x0f, 0x34): /* sysenter */ {
-        uint64_t msr_content;
         int lm;
 
         vcpu_must_have(sep);
@@ -5213,18 +5204,18 @@ x86_emulate(
         generate_exception_if(!in_protmode(ctxt, ops), EXC_GP, 0);
 
         fail_if(ops->read_msr == NULL);
-        if ( (rc = ops->read_msr(MSR_IA32_SYSENTER_CS, &msr_content, ctxt))
-             != 0 )
+        if ( (rc = ops->read_msr(MSR_IA32_SYSENTER_CS,
+                                 &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
 
-        generate_exception_if(!(msr_content & 0xfffc), EXC_GP, 0);
+        generate_exception_if(!(msr_val & 0xfffc), EXC_GP, 0);
         lm = in_longmode(ctxt, ops);
         if ( lm < 0 )
             goto cannot_emulate;
 
         _regs._eflags &= ~(X86_EFLAGS_VM | X86_EFLAGS_IF | X86_EFLAGS_RF);
 
-        cs.sel = msr_content & ~3; /* SELECTOR_RPL_MASK */
+        cs.sel = msr_val & ~3; /* SELECTOR_RPL_MASK */
         cs.base = 0;   /* flat segment */
         cs.limit = ~0u;  /* 4GB limit */
         cs.attr.bytes = lm ? 0xa9b  /* G+L+P+S+Code */
@@ -5240,40 +5231,37 @@ x86_emulate(
              (rc = ops->write_segment(x86_seg_ss, &sreg, ctxt)) != 0 )
             goto done;
 
-        if ( (rc = ops->read_msr(MSR_IA32_SYSENTER_EIP, &msr_content, ctxt))
-             != 0 )
+        if ( (rc = ops->read_msr(MSR_IA32_SYSENTER_EIP,
+                                 &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
-        _regs.r(ip) = lm ? msr_content : (uint32_t)msr_content;
+        _regs.r(ip) = lm ? msr_val : (uint32_t)msr_val;
 
-        if ( (rc = ops->read_msr(MSR_IA32_SYSENTER_ESP, &msr_content, ctxt))
-             != 0 )
+        if ( (rc = ops->read_msr(MSR_IA32_SYSENTER_ESP,
+                                 &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
-        _regs.r(sp) = lm ? msr_content : (uint32_t)msr_content;
+        _regs.r(sp) = lm ? msr_val : (uint32_t)msr_val;
 
         singlestep = _regs._eflags & X86_EFLAGS_TF;
         break;
     }
 
     case X86EMUL_OPC(0x0f, 0x35): /* sysexit */
-    {
-        uint64_t msr_content;
-
         vcpu_must_have(sep);
         generate_exception_if(!mode_ring0(), EXC_GP, 0);
         generate_exception_if(!in_protmode(ctxt, ops), EXC_GP, 0);
 
         fail_if(ops->read_msr == NULL);
-        if ( (rc = ops->read_msr(MSR_IA32_SYSENTER_CS, &msr_content, ctxt))
-             != 0 )
+        if ( (rc = ops->read_msr(MSR_IA32_SYSENTER_CS,
+                                 &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
 
-        generate_exception_if(!(msr_content & 0xfffc), EXC_GP, 0);
+        generate_exception_if(!(msr_val & 0xfffc), EXC_GP, 0);
         generate_exception_if(op_bytes == 8 &&
                               (!is_canonical_address(_regs.r(dx)) ||
                                !is_canonical_address(_regs.r(cx))),
                               EXC_GP, 0);
 
-        cs.sel = (msr_content | 3) + /* SELECTOR_RPL_MASK */
+        cs.sel = (msr_val | 3) + /* SELECTOR_RPL_MASK */
                  (op_bytes == 8 ? 32 : 16);
         cs.base = 0;   /* flat segment */
         cs.limit = ~0u;  /* 4GB limit */
@@ -5295,7 +5283,6 @@ x86_emulate(
 
         singlestep = _regs._eflags & X86_EFLAGS_TF;
         break;
-    }
 
     case X86EMUL_OPC(0x0f, 0xe7):        /* movntq mm,m64 */
     case X86EMUL_OPC_66(0x0f, 0xe7):     /* movntdq xmm,m128 */
@@ -5780,16 +5767,14 @@ x86_emulate(
             case 7: /* rdseed / rdpid */
                 if ( repe_prefix() ) /* rdpid */
                 {
-                    uint64_t tsc_aux;
-
                     generate_exception_if(ea.type != OP_REG, EXC_UD);
                     vcpu_must_have(rdpid);
                     fail_if(!ops->read_msr);
-                    if ( (rc = ops->read_msr(MSR_TSC_AUX, &tsc_aux,
+                    if ( (rc = ops->read_msr(MSR_TSC_AUX, &msr_val,
                                              ctxt)) != X86EMUL_OKAY )
                         goto done;
                     dst = ea;
-                    dst.val = tsc_aux;
+                    dst.val = msr_val;
                     dst.bytes = 4;
                     break;
                 }
-- 
2.1.4



* [PATCH v2 3/3] x86/emul: Support CPUID faulting via a speculative MSR read
  2017-02-20 10:28 [PATCH v2 0/3] x86/emul: MSR emulation improvements Andrew Cooper
  2017-02-20 10:28 ` [PATCH v2 1/3] x86/hvm: Don't raise #GP behind the emulator's back for MSR accesses Andrew Cooper
  2017-02-20 10:28 ` [PATCH v2 2/3] x86/emul: Introduce common msr_val for emulation Andrew Cooper
@ 2017-02-20 10:28 ` Andrew Cooper
  2017-02-20 10:32   ` Paul Durrant
  2017-02-20 10:59   ` Jan Beulich
  2 siblings, 2 replies; 12+ messages in thread
From: Andrew Cooper @ 2017-02-20 10:28 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Paul Durrant, Jan Beulich

This removes the need for every cpuid() emulation hook to individually support
CPUID faulting.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Paul Durrant <paul.durrant@citrix.com>

v2:
 * Substantial rebase
 * Reimplement speculative reading by squashing the exception rather than
   trying to prevent it by using an extra function parameter.
---
 xen/arch/x86/hvm/emulate.c             |  9 ---------
 xen/arch/x86/traps.c                   | 18 +-----------------
 xen/arch/x86/x86_emulate/x86_emulate.c | 19 +++++++++++++++++--
 xen/arch/x86/x86_emulate/x86_emulate.h |  7 +------
 4 files changed, 19 insertions(+), 34 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index edcae5e..f24d289 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1575,15 +1575,6 @@ static int hvmemul_wbinvd(
 int hvmemul_cpuid(uint32_t leaf, uint32_t subleaf,
                   struct cpuid_leaf *res, struct x86_emulate_ctxt *ctxt)
 {
-    /*
-     * x86_emulate uses this function to query CPU features for its own internal
-     * use. Make sure we're actually emulating CPUID before emulating CPUID
-     * faulting.
-     */
-    if ( ctxt->opcode == X86EMUL_OPC(0x0f, 0xa2) &&
-         hvm_check_cpuid_faulting(current) )
-        return X86EMUL_EXCEPTION;
-
     guest_cpuid(current, leaf, subleaf, res);
     return X86EMUL_OKAY;
 }
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index ec8b002..75c89eb 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2884,23 +2884,7 @@ static int priv_op_wbinvd(struct x86_emulate_ctxt *ctxt)
 int pv_emul_cpuid(uint32_t leaf, uint32_t subleaf,
                   struct cpuid_leaf *res, struct x86_emulate_ctxt *ctxt)
 {
-    const struct vcpu *curr = current;
-
-    /*
-     * x86_emulate uses this function to query CPU features for its own
-     * internal use. Make sure we're actually emulating CPUID before checking
-     * for emulated CPUID faulting.
-     */
-    if ( ctxt->opcode == X86EMUL_OPC(0x0f, 0xa2) )
-    {
-
-        /* If cpuid faulting is enabled and CPL>0 leave the #GP untouched. */
-        if ( curr->arch.cpuid_faulting &&
-             !guest_kernel_mode(curr, ctxt->regs) )
-            return X86EMUL_EXCEPTION;
-    }
-
-    guest_cpuid(curr, leaf, subleaf, res);
+    guest_cpuid(current, leaf, subleaf, res);
 
     return X86EMUL_OKAY;
 }
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index f339d36..c3fc26a 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -5424,10 +5424,25 @@ x86_emulate(
         break;
 
     case X86EMUL_OPC(0x0f, 0xa2): /* cpuid */
+        msr_val = 0;
         fail_if(ops->cpuid == NULL);
+
+        /* Speculatively read MSR_INTEL_MISC_FEATURES_ENABLES. */
+        if ( ops->read_msr &&
+             (rc = ops->read_msr(MSR_INTEL_MISC_FEATURES_ENABLES,
+                                 &msr_val, ctxt)) == X86EMUL_EXCEPTION )
+        {
+            /* Not implemented.  Squash the exception and proceed normally. */
+            x86_emul_reset_event(ctxt);
+            rc = X86EMUL_OKAY;
+        }
+        if ( rc != X86EMUL_OKAY )
+            goto done;
+
+        generate_exception_if((msr_val & MSR_MISC_FEATURES_CPUID_FAULTING) &&
+                              !mode_ring0(), EXC_GP, 0); /* Faulting active? */
+
         rc = ops->cpuid(_regs._eax, _regs._ecx, &cpuid_leaf, ctxt);
-        generate_exception_if(rc == X86EMUL_EXCEPTION,
-                              EXC_GP, 0); /* CPUID Faulting? */
         if ( rc != X86EMUL_OKAY )
             goto done;
         _regs.r(ax) = cpuid_leaf.a;
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.h b/xen/arch/x86/x86_emulate/x86_emulate.h
index 071668d..c35873e 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -413,12 +413,7 @@ struct x86_emulate_ops
     int (*wbinvd)(
         struct x86_emulate_ctxt *ctxt);
 
-    /*
-     * cpuid: Emulate CPUID via given set of EAX-EDX inputs/outputs.
-     *
-     * May return X86EMUL_EXCEPTION, which causes the emulator to inject
-     * #GP[0].  Used to implement CPUID faulting.
-     */
+    /* cpuid: Emulate CPUID via given set of EAX-EDX inputs/outputs. */
     int (*cpuid)(
         uint32_t leaf,
         uint32_t subleaf,
-- 
2.1.4



* Re: [PATCH v2 3/3] x86/emul: Support CPUID faulting via a speculative MSR read
  2017-02-20 10:28 ` [PATCH v2 3/3] x86/emul: Support CPUID faulting via a speculative MSR read Andrew Cooper
@ 2017-02-20 10:32   ` Paul Durrant
  2017-02-20 10:59   ` Jan Beulich
  1 sibling, 0 replies; 12+ messages in thread
From: Paul Durrant @ 2017-02-20 10:32 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Jan Beulich

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: 20 February 2017 10:29
> To: Xen-devel <xen-devel@lists.xen.org>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Jan Beulich
> <JBeulich@suse.com>; Paul Durrant <Paul.Durrant@citrix.com>
> Subject: [PATCH v2 3/3] x86/emul: Support CPUID faulting via a speculative
> MSR read
> 
> This removes the need for every cpuid() emulation hook to individually
> support
> CPUID faulting.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Paul Durrant <paul.durrant@citrix.com>
> 

hvm/emulate.c change...

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> v2:
>  * Substantial rebase
>  * Reimplement speculative reading by squashing the exception rather than
>    trying to prevent it by using an extra function parameter.
> ---
>  xen/arch/x86/hvm/emulate.c             |  9 ---------
>  xen/arch/x86/traps.c                   | 18 +-----------------
>  xen/arch/x86/x86_emulate/x86_emulate.c | 19 +++++++++++++++++--
>  xen/arch/x86/x86_emulate/x86_emulate.h |  7 +------
>  4 files changed, 19 insertions(+), 34 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index edcae5e..f24d289 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1575,15 +1575,6 @@ static int hvmemul_wbinvd(
>  int hvmemul_cpuid(uint32_t leaf, uint32_t subleaf,
>                    struct cpuid_leaf *res, struct x86_emulate_ctxt *ctxt)
>  {
> -    /*
> -     * x86_emulate uses this function to query CPU features for its own
> internal
> -     * use. Make sure we're actually emulating CPUID before emulating CPUID
> -     * faulting.
> -     */
> -    if ( ctxt->opcode == X86EMUL_OPC(0x0f, 0xa2) &&
> -         hvm_check_cpuid_faulting(current) )
> -        return X86EMUL_EXCEPTION;
> -
>      guest_cpuid(current, leaf, subleaf, res);
>      return X86EMUL_OKAY;
>  }
> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> index ec8b002..75c89eb 100644
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -2884,23 +2884,7 @@ static int priv_op_wbinvd(struct x86_emulate_ctxt
> *ctxt)
>  int pv_emul_cpuid(uint32_t leaf, uint32_t subleaf,
>                    struct cpuid_leaf *res, struct x86_emulate_ctxt *ctxt)
>  {
> -    const struct vcpu *curr = current;
> -
> -    /*
> -     * x86_emulate uses this function to query CPU features for its own
> -     * internal use. Make sure we're actually emulating CPUID before checking
> -     * for emulated CPUID faulting.
> -     */
> -    if ( ctxt->opcode == X86EMUL_OPC(0x0f, 0xa2) )
> -    {
> -
> -        /* If cpuid faulting is enabled and CPL>0 leave the #GP untouched. */
> -        if ( curr->arch.cpuid_faulting &&
> -             !guest_kernel_mode(curr, ctxt->regs) )
> -            return X86EMUL_EXCEPTION;
> -    }
> -
> -    guest_cpuid(curr, leaf, subleaf, res);
> +    guest_cpuid(current, leaf, subleaf, res);
> 
>      return X86EMUL_OKAY;
>  }
> diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c
> b/xen/arch/x86/x86_emulate/x86_emulate.c
> index f339d36..c3fc26a 100644
> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -5424,10 +5424,25 @@ x86_emulate(
>          break;
> 
>      case X86EMUL_OPC(0x0f, 0xa2): /* cpuid */
> +        msr_val = 0;
>          fail_if(ops->cpuid == NULL);
> +
> +        /* Speculatively read MSR_INTEL_MISC_FEATURES_ENABLES. */
> +        if ( ops->read_msr &&
> +             (rc = ops->read_msr(MSR_INTEL_MISC_FEATURES_ENABLES,
> +                                 &msr_val, ctxt)) == X86EMUL_EXCEPTION )
> +        {
> +            /* Not implemented.  Squash the exception and proceed normally. */
> +            x86_emul_reset_event(ctxt);
> +            rc = X86EMUL_OKAY;
> +        }
> +        if ( rc != X86EMUL_OKAY )
> +            goto done;
> +
> +        generate_exception_if((msr_val &
> MSR_MISC_FEATURES_CPUID_FAULTING) &&
> +                              !mode_ring0(), EXC_GP, 0); /* Faulting active? */
> +
>          rc = ops->cpuid(_regs._eax, _regs._ecx, &cpuid_leaf, ctxt);
> -        generate_exception_if(rc == X86EMUL_EXCEPTION,
> -                              EXC_GP, 0); /* CPUID Faulting? */
>          if ( rc != X86EMUL_OKAY )
>              goto done;
>          _regs.r(ax) = cpuid_leaf.a;
> diff --git a/xen/arch/x86/x86_emulate/x86_emulate.h
> b/xen/arch/x86/x86_emulate/x86_emulate.h
> index 071668d..c35873e 100644
> --- a/xen/arch/x86/x86_emulate/x86_emulate.h
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.h
> @@ -413,12 +413,7 @@ struct x86_emulate_ops
>      int (*wbinvd)(
>          struct x86_emulate_ctxt *ctxt);
> 
> -    /*
> -     * cpuid: Emulate CPUID via given set of EAX-EDX inputs/outputs.
> -     *
> -     * May return X86EMUL_EXCEPTION, which causes the emulator to inject
> -     * #GP[0].  Used to implement CPUID faulting.
> -     */
> +    /* cpuid: Emulate CPUID via given set of EAX-EDX inputs/outputs. */
>      int (*cpuid)(
>          uint32_t leaf,
>          uint32_t subleaf,
> --
> 2.1.4
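
The guest-visible rule the hunk above implements is that CPUID raises #GP[0] when CPUID faulting is enabled and CPL > 0. That rule can be modelled in isolation as follows (an illustrative sketch with a stand-in constant, not Xen code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for the Xen constant (bit 0 of MISC_FEATURES_ENABLES). */
#define MSR_MISC_FEATURES_CPUID_FAULTING (1u << 0)

/*
 * Model of the check in the hunk above: emulated CPUID raises #GP[0]
 * when CPUID faulting is enabled and the vCPU is not in ring 0.
 * Returns true if the fault should be raised.
 */
static bool cpuid_should_fault(uint64_t msr_val, unsigned int cpl)
{
    return (msr_val & MSR_MISC_FEATURES_CPUID_FAULTING) && cpl != 0;
}
```

Note the asymmetry: ring-0 code is never faulted, which is what the `!mode_ring0()` term in the patch expresses.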


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v2 1/3] x86/hvm: Don't raise #GP behind the emulator's back for MSR accesses
  2017-02-20 10:28 ` [PATCH v2 1/3] x86/hvm: Don't raise #GP behind the emulator's back for MSR accesses Andrew Cooper
@ 2017-02-20 10:34   ` Paul Durrant
  2017-02-21 13:46   ` Boris Ostrovsky
  1 sibling, 0 replies; 12+ messages in thread
From: Paul Durrant @ 2017-02-20 10:34 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Boris Ostrovsky, Suravee Suthikulpanit

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: 20 February 2017 10:29
> To: Xen-devel <xen-devel@lists.xen.org>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul Durrant
> <Paul.Durrant@citrix.com>; Boris Ostrovsky <boris.ostrovsky@oracle.com>;
> Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> Subject: [PATCH v2 1/3] x86/hvm: Don't raise #GP behind the emulator's back
> for MSR accesses
> 
> The current hvm_msr_{read,write}_intercept() infrastructure calls
> hvm_inject_hw_exception() directly to latch a fault, and returns
> X86EMUL_EXCEPTION to its caller.
> 
> This behaviour is problematic for the hvmemul_{read,write}_msr() paths, as
> the
> fault is raised behind the back of the x86 emulator.
> 
> Alter the behaviour so hvm_msr_{read,write}_intercept() simply returns
> X86EMUL_EXCEPTION, leaving the callers to actually inject the #GP fault.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Kevin Tian <kevin.tian@intel.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> ---
> CC: Paul Durrant <paul.durrant@citrix.com>
> CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>

hvm/emulate.c changes...

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
 
> v2:
>  * Substantial rebase
>  * Introduce __must_check for hvm_msr_{read,write}_intercept()
> ---
>  xen/arch/x86/hvm/emulate.c        | 14 ++++++++++++--
>  xen/arch/x86/hvm/hvm.c            |  7 ++++---
>  xen/arch/x86/hvm/svm/svm.c        |  4 ++--
>  xen/arch/x86/hvm/vmx/vmx.c        | 23 ++++++++++++++++++-----
>  xen/arch/x86/hvm/vmx/vvmx.c       | 19 ++++++++++++++-----
>  xen/include/asm-x86/hvm/support.h | 12 +++++++++---
>  6 files changed, 59 insertions(+), 20 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 14f9b43..edcae5e 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1544,7 +1544,12 @@ static int hvmemul_read_msr(
>      uint64_t *val,
>      struct x86_emulate_ctxt *ctxt)
>  {
> -    return hvm_msr_read_intercept(reg, val);
> +    int rc = hvm_msr_read_intercept(reg, val);
> +
> +    if ( rc == X86EMUL_EXCEPTION )
> +        x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
> +
> +    return rc;
>  }
> 
>  static int hvmemul_write_msr(
> @@ -1552,7 +1557,12 @@ static int hvmemul_write_msr(
>      uint64_t val,
>      struct x86_emulate_ctxt *ctxt)
>  {
> -    return hvm_msr_write_intercept(reg, val, 1);
> +    int rc = hvm_msr_write_intercept(reg, val, 1);
> +
> +    if ( rc == X86EMUL_EXCEPTION )
> +        x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
> +
> +    return rc;
>  }
> 
>  static int hvmemul_wbinvd(
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 6621d62..08855c2 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -518,7 +518,10 @@ void hvm_do_resume(struct vcpu *v)
> 
>          if ( w->do_write.msr )
>          {
> -            hvm_msr_write_intercept(w->msr, w->value, 0);
> +            if ( hvm_msr_write_intercept(w->msr, w->value, 0) ==
> +                 X86EMUL_EXCEPTION )
> +                hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +
>              w->do_write.msr = 0;
>          }
> 
> @@ -3455,7 +3458,6 @@ int hvm_msr_read_intercept(unsigned int msr,
> uint64_t *msr_content)
>      return ret;
> 
>   gp_fault:
> -    hvm_inject_hw_exception(TRAP_gp_fault, 0);
>      ret = X86EMUL_EXCEPTION;
>      *msr_content = -1ull;
>      goto out;
> @@ -3600,7 +3602,6 @@ int hvm_msr_write_intercept(unsigned int msr,
> uint64_t msr_content,
>      return ret;
> 
>  gp_fault:
> -    hvm_inject_hw_exception(TRAP_gp_fault, 0);
>      return X86EMUL_EXCEPTION;
>  }
> 
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 894c457..b864535 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1744,7 +1744,6 @@ static int svm_msr_read_intercept(unsigned int
> msr, uint64_t *msr_content)
>      return X86EMUL_OKAY;
> 
>   gpf:
> -    hvm_inject_hw_exception(TRAP_gp_fault, 0);
>      return X86EMUL_EXCEPTION;
>  }
> 
> @@ -1897,7 +1896,6 @@ static int svm_msr_write_intercept(unsigned int
> msr, uint64_t msr_content)
>      return result;
> 
>   gpf:
> -    hvm_inject_hw_exception(TRAP_gp_fault, 0);
>      return X86EMUL_EXCEPTION;
>  }
> 
> @@ -1924,6 +1922,8 @@ static void svm_do_msr_access(struct
> cpu_user_regs *regs)
> 
>      if ( rc == X86EMUL_OKAY )
>          __update_guest_eip(regs, inst_len);
> +    else if ( rc == X86EMUL_EXCEPTION )
> +        hvm_inject_hw_exception(TRAP_gp_fault, 0);
>  }
> 
>  static void svm_vmexit_do_hlt(struct vmcb_struct *vmcb,
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 597d7ac..b5bfa05 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2734,7 +2734,6 @@ static int vmx_msr_read_intercept(unsigned int
> msr, uint64_t *msr_content)
>      return X86EMUL_OKAY;
> 
>  gp_fault:
> -    hvm_inject_hw_exception(TRAP_gp_fault, 0);
>      return X86EMUL_EXCEPTION;
>  }
> 
> @@ -2971,7 +2970,6 @@ static int vmx_msr_write_intercept(unsigned int
> msr, uint64_t msr_content)
>      return X86EMUL_OKAY;
> 
>  gp_fault:
> -    hvm_inject_hw_exception(TRAP_gp_fault, 0);
>      return X86EMUL_EXCEPTION;
>  }
> 
> @@ -3664,18 +3662,33 @@ void vmx_vmexit_handler(struct cpu_user_regs
> *regs)
>          break;
>      case EXIT_REASON_MSR_READ:
>      {
> -        uint64_t msr_content;
> -        if ( hvm_msr_read_intercept(regs->_ecx, &msr_content) ==
> X86EMUL_OKAY )
> +        uint64_t msr_content = 0;
> +
> +        switch ( hvm_msr_read_intercept(regs->_ecx, &msr_content) )
>          {
> +        case X86EMUL_OKAY:
>              msr_split(regs, msr_content);
>              update_guest_eip(); /* Safe: RDMSR */
> +            break;
> +
> +        case X86EMUL_EXCEPTION:
> +            hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +            break;
>          }
>          break;
>      }
> 
>      case EXIT_REASON_MSR_WRITE:
> -        if ( hvm_msr_write_intercept(regs->_ecx, msr_fold(regs), 1) ==
> X86EMUL_OKAY )
> +        switch ( hvm_msr_write_intercept(regs->_ecx, msr_fold(regs), 1) )
> +        {
> +        case X86EMUL_OKAY:
>              update_guest_eip(); /* Safe: WRMSR */
> +            break;
> +
> +        case X86EMUL_EXCEPTION:
> +            hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +            break;
> +        }
>          break;
> 
>      case EXIT_REASON_VMXOFF:
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c
> b/xen/arch/x86/hvm/vmx/vvmx.c
> index f6a25a6..c830d16 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1032,6 +1032,7 @@ static void load_shadow_guest_state(struct vcpu
> *v)
>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>      u32 control;
>      u64 cr_gh_mask, cr_read_shadow;
> +    int rc;
> 
>      static const u16 vmentry_fields[] = {
>          VM_ENTRY_INTR_INFO,
> @@ -1053,8 +1054,12 @@ static void load_shadow_guest_state(struct vcpu
> *v)
>      if ( control & VM_ENTRY_LOAD_GUEST_PAT )
>          hvm_set_guest_pat(v, get_vvmcs(v, GUEST_PAT));
>      if ( control & VM_ENTRY_LOAD_PERF_GLOBAL_CTRL )
> -        hvm_msr_write_intercept(MSR_CORE_PERF_GLOBAL_CTRL,
> -                                get_vvmcs(v, GUEST_PERF_GLOBAL_CTRL), 0);
> +    {
> +        rc = hvm_msr_write_intercept(MSR_CORE_PERF_GLOBAL_CTRL,
> +                                     get_vvmcs(v, GUEST_PERF_GLOBAL_CTRL), 0);
> +        if ( rc == X86EMUL_EXCEPTION )
> +            hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +    }
> 
>      hvm_funcs.set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset, 0);
> 
> @@ -1222,7 +1227,7 @@ static void sync_vvmcs_ro(struct vcpu *v)
> 
>  static void load_vvmcs_host_state(struct vcpu *v)
>  {
> -    int i;
> +    int i, rc;
>      u64 r;
>      u32 control;
> 
> @@ -1240,8 +1245,12 @@ static void load_vvmcs_host_state(struct vcpu *v)
>      if ( control & VM_EXIT_LOAD_HOST_PAT )
>          hvm_set_guest_pat(v, get_vvmcs(v, HOST_PAT));
>      if ( control & VM_EXIT_LOAD_PERF_GLOBAL_CTRL )
> -        hvm_msr_write_intercept(MSR_CORE_PERF_GLOBAL_CTRL,
> -                                get_vvmcs(v, HOST_PERF_GLOBAL_CTRL), 1);
> +    {
> +        rc = hvm_msr_write_intercept(MSR_CORE_PERF_GLOBAL_CTRL,
> +                                     get_vvmcs(v, HOST_PERF_GLOBAL_CTRL), 1);
> +        if ( rc == X86EMUL_EXCEPTION )
> +            hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +    }
> 
>      hvm_funcs.set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset, 0);
> 
> diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-
> x86/hvm/support.h
> index 262955d..5e25698 100644
> --- a/xen/include/asm-x86/hvm/support.h
> +++ b/xen/include/asm-x86/hvm/support.h
> @@ -121,13 +121,19 @@ int hvm_set_efer(uint64_t value);
>  int hvm_set_cr0(unsigned long value, bool_t may_defer);
>  int hvm_set_cr3(unsigned long value, bool_t may_defer);
>  int hvm_set_cr4(unsigned long value, bool_t may_defer);
> -int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content);
> -int hvm_msr_write_intercept(
> -    unsigned int msr, uint64_t msr_content, bool_t may_defer);
>  int hvm_mov_to_cr(unsigned int cr, unsigned int gpr);
>  int hvm_mov_from_cr(unsigned int cr, unsigned int gpr);
>  void hvm_ud_intercept(struct cpu_user_regs *);
> 
> +/*
> + * May return X86EMUL_EXCEPTION, at which point the caller is responsible
> for
> + * injecting a #GP fault.  Used to support speculative reads.
> + */
> +int __must_check hvm_msr_read_intercept(
> +    unsigned int msr, uint64_t *msr_content);
> +int __must_check hvm_msr_write_intercept(
> +    unsigned int msr, uint64_t msr_content, bool_t may_defer);
> +
>  #endif /* __ASM_X86_HVM_SUPPORT_H__ */
> 
>  /*
> --
> 2.1.4
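
The calling convention this patch establishes, in which the intercept functions simply return X86EMUL_EXCEPTION and each caller injects the #GP[0] itself, reduces to a pattern along these lines (a standalone sketch with stub stand-ins for the Xen functions, not the real code):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the Xen return codes and the injection hook. */
enum { X86EMUL_OKAY, X86EMUL_EXCEPTION };

static int injected_gp;                 /* records a #GP[0] injection */
static void inject_gp(void) { injected_gp = 1; }

/* Toy intercept: always fails, as the real code does for an unknown MSR. */
static int msr_read_intercept(unsigned int msr, uint64_t *val)
{
    (void)msr;
    *val = ~0ull;                       /* mirrors "*msr_content = -1ull" */
    return X86EMUL_EXCEPTION;
}

/*
 * The caller is now responsible for the fault, as in svm_do_msr_access()
 * and the vmx_vmexit_handler() MSR cases after this patch.
 */
static int do_msr_read(unsigned int msr, uint64_t *val)
{
    int rc = msr_read_intercept(msr, val);

    if ( rc == X86EMUL_EXCEPTION )
        inject_gp();

    return rc;
}
```

Keeping injection out of the intercepts is what lets the emulator squash the exception on a speculative read, as patch 3 relies on.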



* Re: [PATCH v2 2/3] x86/emul: Introduce common msr_val for emulation
  2017-02-20 10:28 ` [PATCH v2 2/3] x86/emul: Introduce common msr_val for emulation Andrew Cooper
@ 2017-02-20 10:55   ` Jan Beulich
  0 siblings, 0 replies; 12+ messages in thread
From: Jan Beulich @ 2017-02-20 10:55 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel

>>> On 20.02.17 at 11:28, <andrew.cooper3@citrix.com> wrote:
> Use it consistently in place of local tsc_aux, msr_content and val
> declarations, and replace opencoded uses of X86EMUL_OKAY.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

One thing less on my to-be-done-eventually list, thanks.

Jan



* Re: [PATCH v2 3/3] x86/emul: Support CPUID faulting via a speculative MSR read
  2017-02-20 10:28 ` [PATCH v2 3/3] x86/emul: Support CPUID faulting via a speculative MSR read Andrew Cooper
  2017-02-20 10:32   ` Paul Durrant
@ 2017-02-20 10:59   ` Jan Beulich
  2017-02-20 11:04     ` Andrew Cooper
  1 sibling, 1 reply; 12+ messages in thread
From: Jan Beulich @ 2017-02-20 10:59 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Paul Durrant, Xen-devel

>>> On 20.02.17 at 11:28, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -5424,10 +5424,25 @@ x86_emulate(
>          break;
>  
>      case X86EMUL_OPC(0x0f, 0xa2): /* cpuid */
> +        msr_val = 0;
>          fail_if(ops->cpuid == NULL);
> +
> +        /* Speculatively read MSR_INTEL_MISC_FEATURES_ENABLES. */
> +        if ( ops->read_msr &&
> +             (rc = ops->read_msr(MSR_INTEL_MISC_FEATURES_ENABLES,
> +                                 &msr_val, ctxt)) == X86EMUL_EXCEPTION )
> +        {
> +            /* Not implemented.  Squash the exception and proceed normally. */
> +            x86_emul_reset_event(ctxt);
> +            rc = X86EMUL_OKAY;
> +        }
> +        if ( rc != X86EMUL_OKAY )
> +            goto done;
> +
> +        generate_exception_if((msr_val & MSR_MISC_FEATURES_CPUID_FAULTING) &&
> +                              !mode_ring0(), EXC_GP, 0); /* Faulting active? */

Could you please move the mode_ring0() check up to bypass the
MSR read in the first place?

> --- a/xen/arch/x86/x86_emulate/x86_emulate.h
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.h
> @@ -413,12 +413,7 @@ struct x86_emulate_ops
>      int (*wbinvd)(
>          struct x86_emulate_ctxt *ctxt);
>  
> -    /*
> -     * cpuid: Emulate CPUID via given set of EAX-EDX inputs/outputs.
> -     *
> -     * May return X86EMUL_EXCEPTION, which causes the emulator to inject
> -     * #GP[0].  Used to implement CPUID faulting.
> -     */
> +    /* cpuid: Emulate CPUID via given set of EAX-EDX inputs/outputs. */
>      int (*cpuid)(
>          uint32_t leaf,
>          uint32_t subleaf,

Are there any ways left for the hook to fail? IOW, should its return
type become void now?

Jan



* Re: [PATCH v2 3/3] x86/emul: Support CPUID faulting via a speculative MSR read
  2017-02-20 10:59   ` Jan Beulich
@ 2017-02-20 11:04     ` Andrew Cooper
  2017-02-20 11:13       ` Jan Beulich
  0 siblings, 1 reply; 12+ messages in thread
From: Andrew Cooper @ 2017-02-20 11:04 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Paul Durrant, Xen-devel

On 20/02/17 10:59, Jan Beulich wrote:
>>>> On 20.02.17 at 11:28, <andrew.cooper3@citrix.com> wrote:
>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>> @@ -5424,10 +5424,25 @@ x86_emulate(
>>          break;
>>  
>>      case X86EMUL_OPC(0x0f, 0xa2): /* cpuid */
>> +        msr_val = 0;
>>          fail_if(ops->cpuid == NULL);
>> +
>> +        /* Speculatively read MSR_INTEL_MISC_FEATURES_ENABLES. */
>> +        if ( ops->read_msr &&
>> +             (rc = ops->read_msr(MSR_INTEL_MISC_FEATURES_ENABLES,
>> +                                 &msr_val, ctxt)) == X86EMUL_EXCEPTION )
>> +        {
>> +            /* Not implemented.  Squash the exception and proceed normally. */
>> +            x86_emul_reset_event(ctxt);
>> +            rc = X86EMUL_OKAY;
>> +        }
>> +        if ( rc != X86EMUL_OKAY )
>> +            goto done;
>> +
>> +        generate_exception_if((msr_val & MSR_MISC_FEATURES_CPUID_FAULTING) &&
>> +                              !mode_ring0(), EXC_GP, 0); /* Faulting active? */
> Could you please move the mode_ring0() check up to bypass the
> MSR read in the first place?

Ok.

>
>> --- a/xen/arch/x86/x86_emulate/x86_emulate.h
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.h
>> @@ -413,12 +413,7 @@ struct x86_emulate_ops
>>      int (*wbinvd)(
>>          struct x86_emulate_ctxt *ctxt);
>>  
>> -    /*
>> -     * cpuid: Emulate CPUID via given set of EAX-EDX inputs/outputs.
>> -     *
>> -     * May return X86EMUL_EXCEPTION, which causes the emulator to inject
>> -     * #GP[0].  Used to implement CPUID faulting.
>> -     */
>> +    /* cpuid: Emulate CPUID via given set of EAX-EDX inputs/outputs. */
>>      int (*cpuid)(
>>          uint32_t leaf,
>>          uint32_t subleaf,
> Are there any ways left for the hook to fail? IOW, should its return
> type become void now?

In principle, it should return RETRY when putting a cpuid entry into the
vmevent ring.  At the moment, that logic is wedged into the vmx hook, so
emulated cpuid instructions don't generate events, but I am looking to
fix this (mis)behaviour in the long run.

~Andrew
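
For illustration, the reordering agreed above, which skips the speculative MSR read entirely when already executing in ring 0, would look roughly like this (a sketch with a stub read hook, not the committed code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum { X86EMUL_OKAY, X86EMUL_EXCEPTION };

/* Stand-in for the Xen constant (bit 0 of MISC_FEATURES_ENABLES). */
#define MSR_MISC_FEATURES_CPUID_FAULTING (1u << 0)

/* Stub for ops->read_msr(); here, CPUID faulting is always enabled. */
static int stub_read_msr(uint64_t *val)
{
    *val = MSR_MISC_FEATURES_CPUID_FAULTING;
    return X86EMUL_OKAY;
}

/*
 * Returns true if emulated CPUID should raise #GP[0].  With the ring-0
 * check hoisted ahead of the read, kernel-mode callers never perform
 * the MSR read at all.
 */
static bool cpuid_faults(bool ring0)
{
    uint64_t msr_val = 0;

    if ( !ring0 &&
         stub_read_msr(&msr_val) == X86EMUL_OKAY &&
         (msr_val & MSR_MISC_FEATURES_CPUID_FAULTING) )
        return true;

    return false;
}
```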


* Re: [PATCH v2 3/3] x86/emul: Support CPUID faulting via a speculative MSR read
  2017-02-20 11:04     ` Andrew Cooper
@ 2017-02-20 11:13       ` Jan Beulich
  0 siblings, 0 replies; 12+ messages in thread
From: Jan Beulich @ 2017-02-20 11:13 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Paul Durrant, Xen-devel

>>> On 20.02.17 at 12:04, <andrew.cooper3@citrix.com> wrote:
> On 20/02/17 10:59, Jan Beulich wrote:
>>>>> On 20.02.17 at 11:28, <andrew.cooper3@citrix.com> wrote:
>>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>>> @@ -5424,10 +5424,25 @@ x86_emulate(
>>>          break;
>>>  
>>>      case X86EMUL_OPC(0x0f, 0xa2): /* cpuid */
>>> +        msr_val = 0;
>>>          fail_if(ops->cpuid == NULL);
>>> +
>>> +        /* Speculatively read MSR_INTEL_MISC_FEATURES_ENABLES. */
>>> +        if ( ops->read_msr &&
>>> +             (rc = ops->read_msr(MSR_INTEL_MISC_FEATURES_ENABLES,
>>> +                                 &msr_val, ctxt)) == X86EMUL_EXCEPTION )
>>> +        {
>>> +            /* Not implemented.  Squash the exception and proceed normally. */
>>> +            x86_emul_reset_event(ctxt);
>>> +            rc = X86EMUL_OKAY;
>>> +        }
>>> +        if ( rc != X86EMUL_OKAY )
>>> +            goto done;
>>> +
>>> +        generate_exception_if((msr_val & MSR_MISC_FEATURES_CPUID_FAULTING) &&
>>> +                              !mode_ring0(), EXC_GP, 0); /* Faulting active? */
>> Could you please move the mode_ring0() check up to bypass the
>> MSR read in the first place?
> 
> Ok.

With that
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH v2 1/3] x86/hvm: Don't raise #GP behind the emulator's back for MSR accesses
  2017-02-20 10:28 ` [PATCH v2 1/3] x86/hvm: Don't raise #GP behind the emulator's back for MSR accesses Andrew Cooper
  2017-02-20 10:34   ` Paul Durrant
@ 2017-02-21 13:46   ` Boris Ostrovsky
  2017-02-21 13:50     ` Andrew Cooper
  1 sibling, 1 reply; 12+ messages in thread
From: Boris Ostrovsky @ 2017-02-21 13:46 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Paul Durrant, Suravee Suthikulpanit

On 02/20/2017 05:28 AM, Andrew Cooper wrote:
> The current hvm_msr_{read,write}_intercept() infrastructure calls
> hvm_inject_hw_exception() directly to latch a fault, and returns
> X86EMUL_EXCEPTION to its caller.
>
> This behaviour is problematic for the hvmemul_{read,write}_msr() paths, as the
> fault is raised behind the back of the x86 emulator.
>
> Alter the behaviour so hvm_msr_{read,write}_intercept() simply returns
> X86EMUL_EXCEPTION, leaving the callers to actually inject the #GP fault.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Kevin Tian <kevin.tian@intel.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>


>  
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 894c457..b864535 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1744,7 +1744,6 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>      return X86EMUL_OKAY;
>  
>   gpf:
> -    hvm_inject_hw_exception(TRAP_gp_fault, 0);
>      return X86EMUL_EXCEPTION;
>  }
>  
> @@ -1897,7 +1896,6 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>      return result;
>  
>   gpf:
> -    hvm_inject_hw_exception(TRAP_gp_fault, 0);
>      return X86EMUL_EXCEPTION;
>  }

The label can be dropped with a direct return instead of a 'goto'.
Either way

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
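
The cleanup Boris suggests, a direct return in place of a one-line goto target, is a purely structural change, roughly (illustrative toy functions, not the Xen intercepts themselves):

```c
#include <assert.h>

enum { X86EMUL_OKAY, X86EMUL_EXCEPTION };

/* Current shape: the error path jumps to a shared label. */
static int read_with_label(int ok)
{
    if ( !ok )
        goto gpf;

    return X86EMUL_OKAY;

 gpf:
    return X86EMUL_EXCEPTION;
}

/* Suggested shape: the label disappears in favour of direct returns. */
static int read_direct(int ok)
{
    return ok ? X86EMUL_OKAY : X86EMUL_EXCEPTION;
}
```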



* Re: [PATCH v2 1/3] x86/hvm: Don't raise #GP behind the emulator's back for MSR accesses
  2017-02-21 13:46   ` Boris Ostrovsky
@ 2017-02-21 13:50     ` Andrew Cooper
  0 siblings, 0 replies; 12+ messages in thread
From: Andrew Cooper @ 2017-02-21 13:50 UTC (permalink / raw)
  To: Boris Ostrovsky, Xen-devel; +Cc: Paul Durrant, Suravee Suthikulpanit



On 21/02/17 13:46, Boris Ostrovsky wrote:
>> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
>> index 894c457..b864535 100644
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -1744,7 +1744,6 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>>      return X86EMUL_OKAY;
>>  
>>   gpf:
>> -    hvm_inject_hw_exception(TRAP_gp_fault, 0);
>>      return X86EMUL_EXCEPTION;
>>  }
>>  
>> @@ -1897,7 +1896,6 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>>      return result;
>>  
>>   gpf:
>> -    hvm_inject_hw_exception(TRAP_gp_fault, 0);
>>      return X86EMUL_EXCEPTION;
>>  }
> The label can be dropped with a direct return instead of a 'goto'.
> Either way

I will fold cleanup like that into the start of the MSR levelling work,
which will be bringing other changes to the functions as well.

~Andrew


end of thread, other threads:[~2017-02-21 13:50 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-02-20 10:28 [PATCH v2 0/3] x86/emul: MSR emulation improvements Andrew Cooper
2017-02-20 10:28 ` [PATCH v2 1/3] x86/hvm: Don't raise #GP behind the emulator's back for MSR accesses Andrew Cooper
2017-02-20 10:34   ` Paul Durrant
2017-02-21 13:46   ` Boris Ostrovsky
2017-02-21 13:50     ` Andrew Cooper
2017-02-20 10:28 ` [PATCH v2 2/3] x86/emul: Introduce common msr_val for emulation Andrew Cooper
2017-02-20 10:55   ` Jan Beulich
2017-02-20 10:28 ` [PATCH v2 3/3] x86/emul: Support CPUID faulting via a speculative MSR read Andrew Cooper
2017-02-20 10:32   ` Paul Durrant
2017-02-20 10:59   ` Jan Beulich
2017-02-20 11:04     ` Andrew Cooper
2017-02-20 11:13       ` Jan Beulich
