* [PATCH] xen/x86: ensure copying to L1 guest in update_runstate_area()
@ 2017-02-21  2:11 Haozhong Zhang
From: Haozhong Zhang @ 2017-02-21  2:11 UTC
  To: xen-devel; +Cc: Andrew Cooper, Jan Beulich, Haozhong Zhang

For an HVM domain, if a vcpu is in nested guest mode, the
__raw_copy_to_guest() and __copy_to_guest() calls in
update_runstate_area() copy data to the L2 guest rather than to the L1
guest.
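
The root cause is the nested-mode dispatch in paging_gva_to_gfn(). A
condensed sketch of that dispatch (paraphrased from
xen/include/asm-x86/paging.h of this era; the subsequent
L2-gfn-to-L1-gfn walk via nestedhap_walk_L1_p2m() and all error
handling are omitted):

    static inline unsigned long paging_gva_to_gfn(struct vcpu *v,
                                                  unsigned long va,
                                                  uint32_t *pfec)
    {
        /* While 'v' is in nested guest mode, 'va' is treated as an L2
         * gva and translated by the nested paging mode (the walk down
         * to an L1 gfn is elided in this sketch). */
        if ( is_hvm_vcpu(v) && paging_mode_hap(v->domain) &&
             nestedhvm_is_n2(v) )
            return paging_get_nestedmode(v)->gva_to_gfn(
                v, p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v)), va, pfec);

        /* Otherwise 'va' is translated with the host (L1) paging mode. */
        return paging_get_hostmode(v)->gva_to_gfn(
            v, p2m_get_hostp2m(v->domain), va, pfec);
    }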

Besides copying to the wrong address, this bug also causes a crash
along the following code path:
    context_switch(prev, next)
      _update_runstate_area(next)
        update_runstate_area(next)
          __raw_copy_to_guest(...)
            ...
              __hvm_copy(...)
                paging_gva_to_gfn(...)
                  nestedhap_walk_L1_p2m(...)
                    nvmx_hap_walk_L1_p2m(...)
                      vmx_vmcs_enter(v)        [ v = next ]
                        vmx_vmcs_try_enter(v)  [ v = next ]
                          if ( likely(v == current) )
                              return v->arch.hvm_vmx.vmcs_pa == this_cpu(current_vmcs);
If vcpu 'next' is in nested guest mode and is being scheduled to
another CPU, vmx_vmcs_try_enter() fails and triggers the assertion in
vmx_vmcs_enter().
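
For reference, a condensed sketch of the assertion that fires
(paraphrased from xen/arch/x86/hvm/vmx/vmcs.c; the foreign-VMCS path
taken when v != current is omitted):

    void vmx_vmcs_enter(struct vcpu *v)
    {
        bool_t okay = vmx_vmcs_try_enter(v);

        /* The check quoted in the call path above is the v == current
         * fast path of vmx_vmcs_try_enter(): it returns 0 when v's VMCS
         * is not the one loaded on this CPU, as is the case for a
         * nested-mode vcpu being scheduled to another CPU, and the
         * assertion below then fires. */
        ASSERT(okay);
    }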

This commit temporarily clears the nested guest flag before the
__raw_copy_to_guest() and __copy_to_guest() calls in
update_runstate_area(), and restores the flag after those guest copy
operations.

Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
---
 xen/arch/x86/domain.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 7d3071e..5f0444c 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -50,6 +50,7 @@
 #include <asm/mpspec.h>
 #include <asm/ldt.h>
 #include <asm/hvm/hvm.h>
+#include <asm/hvm/nestedhvm.h>
 #include <asm/hvm/support.h>
 #include <asm/hvm/viridian.h>
 #include <asm/debugreg.h>
@@ -1931,10 +1932,29 @@ bool_t update_runstate_area(struct vcpu *v)
     bool_t rc;
     smap_check_policy_t smap_policy;
     void __user *guest_handle = NULL;
+    bool nested_guest_mode = false;
 
     if ( guest_handle_is_null(runstate_guest(v)) )
         return 1;
 
+    /*
+     * Must come before all following __raw_copy_to_guest() and __copy_to_guest().
+     *
+     * Otherwise, if 'v' is in nested guest mode, paging_gva_to_gfn() called
+     * from __raw_copy_to_guest() and __copy_to_guest() will treat the target
+     * address as an L2 gva, and those copies will consequently land in the
+     * L2 guest rather than in the L1 guest.
+     *
+     * Therefore, clear the nested guest flag before __raw_copy_to_guest()
+     * and __copy_to_guest(), and restore it after all guest copy operations.
+     */
+    if ( is_hvm_vcpu(v) && paging_mode_hap(v->domain) )
+    {
+        nested_guest_mode = nestedhvm_is_n2(v);
+        if ( nested_guest_mode )
+            nestedhvm_vcpu_exit_guestmode(v);
+    }
+
     smap_policy = smap_policy_change(v, SMAP_CHECK_ENABLED);
 
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
@@ -1971,6 +1991,9 @@ bool_t update_runstate_area(struct vcpu *v)
 
     smap_policy_change(v, smap_policy);
 
+    if ( nested_guest_mode )
+        nestedhvm_vcpu_enter_guestmode(v);
+
     return rc;
 }
 
-- 
2.10.1



Thread overview: 8+ messages
2017-02-21  2:11 [PATCH] xen/x86: ensure copying to L1 guest in update_runstate_area() Haozhong Zhang
2017-02-21  9:15 ` Jan Beulich
2017-02-22  1:28   ` Haozhong Zhang
2017-02-22  2:20     ` Haozhong Zhang
2017-02-22  3:15       ` Tian, Kevin
2017-02-22  7:46       ` Jan Beulich
2017-02-23  8:00         ` Haozhong Zhang
2017-02-23 13:08           ` Andrew Cooper
