* [PATCH v4 0/2] x86/HVM: Properly handle SMAP check in certain cases
@ 2014-07-30  1:35 Feng Wu
  2014-07-30  1:35 ` [PATCH v4 1/2] x86/hvm: Always do SMAP check when updating runstate_guest(v) Feng Wu
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Feng Wu @ 2014-07-30  1:35 UTC (permalink / raw)
  To: xen-devel; +Cc: tim, Feng Wu, keir, jbeulich, linux

This patch set fixes an issue found by Sander Eikelenboom. Here is the log
when the issue occurs:

(d2)  Booting from Hard Disk...
(d2)  Booting from 0000:7c00
(XEN) irq.c:380: Dom1 callback via changed to Direct Vector 0xf3
(XEN) irq.c:380: Dom2 callback via changed to Direct Vector 0xf3
(XEN) Segment register inaccessible for d1v0
(XEN) (If you see this outside of debugging activity, please report to xen-devel@lists.xenproject.org)

And here is the Xen call trace:
(XEN) [<ffff82d0801dc9c5>] vmx_get_segment_register+0x4d/0x422
(XEN) [<ffff82d0801f4415>] guest_walk_tables_3_levels+0x189/0x520
(XEN) [<ffff82d0802204a8>] hap_p2m_ga_to_gfn_3_levels+0x158/0x2c2
(XEN) [<ffff82d08022062e>] hap_gva_to_gfn_3_levels+0x1c/0x1e
(XEN) [<ffff82d0801ec215>] paging_gva_to_gfn+0xb8/0xce
(XEN) [<ffff82d0801ba88d>] __hvm_copy+0x87/0x354
(XEN) [<ffff82d0801bac7c>] hvm_copy_to_guest_virt_nofault+0x1e/0x20
(XEN) [<ffff82d0801bace5>] copy_to_user_hvm+0x67/0x87
(XEN) [<ffff82d08016237c>] update_runstate_area+0x98/0xfb
(XEN) [<ffff82d0801623f0>] _update_runstate_area+0x11/0x39
(XEN) [<ffff82d0801634db>] context_switch+0x10c3/0x10fa
(XEN) [<ffff82d080126a19>] schedule+0x5a8/0x5da
(XEN) [<ffff82d0801297f9>] __do_softirq+0x81/0x8c
(XEN) [<ffff82d080129852>] do_softirq+0x13/0x15
(XEN) [<ffff82d08015f70a>] idle_loop+0x67/0x77

We need to get the guest's SS register via hvm_get_segment_register()
to do the SMAP check; however, in these two cases we cannot do it that
way, because they happen between setting 'current' and reloading the
VMCS context for it. As an alternative, we treat these accesses as
implicit supervisor-mode accesses, so the SMAP check is always
performed.
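
As a rough sketch of the pattern (using the smap_policy_change() helper and
SMAP_CHECK_ENABLED policy introduced by the patches below), the affected
paths force the check on around the copy to guest memory and then restore
the previous policy:

    smap_check_policy_t old = smap_policy_change(v, SMAP_CHECK_ENABLED);

    /* ... __copy_to_guest() of the runstate area / secondary system time ... */

    smap_policy_change(v, old);    /* restore the previous policy */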

V2:
- Remove the 'VCPUOP_enable_smap_check_vcpu_time_memory_area' hypercall,
and always do the SMAP check for the secondary system time.

V3:
- Add smap_policy_change() to change the SMAP policy; it returns the
old value.
- Use an enum to define the SMAP policy.
- Drop 'case SMAP_CHECK_DISABLED' in guest_walk_tables(), and add
'ASSERT(v->arch.smap_check_policy == SMAP_CHECK_DISABLED)' in the
default case instead.

V4:
- Adjust the branch handling in update_runstate_area().
- Remove the pointless initial value of smap_check_policy_t.
- Use the __packed__ attribute for smap_check_policy_t to reduce
its size to one byte.
- Adjust the position of 'smap_check_policy' in struct arch_vcpu
to use the existing padding in this structure.
- Coding style fixes.

Feng Wu (2):
  x86/hvm: Always do SMAP check when updating runstate_guest(v)
  x86/hvm: Always do SMAP check when updating secondary system time for
    guest

 xen/arch/x86/domain.c        | 27 +++++++++++++++++++++++----
 xen/arch/x86/mm/guest_walk.c | 39 ++++++++++++++++++++++++++-------------
 xen/arch/x86/time.c          | 10 +++++++++-
 xen/include/asm-x86/domain.h | 19 +++++++++++++++++--
 4 files changed, 75 insertions(+), 20 deletions(-)

-- 
1.8.3.1


* [PATCH v4 1/2] x86/hvm: Always do SMAP check when updating runstate_guest(v)
  2014-07-30  1:35 [PATCH v4 0/2] x86/HVM: Properly handle SMAP check in certain cases Feng Wu
@ 2014-07-30  1:35 ` Feng Wu
  2014-07-30  8:15   ` Jan Beulich
  2014-07-30  1:35 ` [PATCH v4 2/2] x86/hvm: Always do SMAP check when updating secondary system time for guest Feng Wu
  2014-07-30  8:24 ` [PATCH v4 0/2] x86/HVM: Properly handle SMAP check in certain cases Jan Beulich
  2 siblings, 1 reply; 8+ messages in thread
From: Feng Wu @ 2014-07-30  1:35 UTC (permalink / raw)
  To: xen-devel; +Cc: tim, Feng Wu, keir, jbeulich, linux

In the current implementation, we honor the guest's CPL and AC
to determine whether to do the SMAP check for runstate_guest(v).
However, this doesn't work: the VMCS field is invalid when we try
to get the guest's SS via hvm_get_segment_register(), since the
right VMCS has not been loaded for the current vCPU.

In this patch, we always do the SMAP check when updating
runstate_guest(v) if the guest has SMAP enabled.

Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: Feng Wu <feng.wu@intel.com>
---
 xen/arch/x86/domain.c        | 27 +++++++++++++++++++++++----
 xen/arch/x86/mm/guest_walk.c | 39 ++++++++++++++++++++++++++-------------
 xen/include/asm-x86/domain.h | 17 ++++++++++++++++-
 3 files changed, 65 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index e896210..ad42e4c 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -180,6 +180,14 @@ void dump_pageframe_info(struct domain *d)
     spin_unlock(&d->page_alloc_lock);
 }
 
+smap_check_policy_t smap_policy_change(struct vcpu *v,
+    smap_check_policy_t new_policy)
+{
+    smap_check_policy_t old_policy = v->arch.smap_check_policy;
+    v->arch.smap_check_policy = new_policy;
+    return old_policy;
+}
+
 /*
  * The hole may be at or above the 44-bit boundary, so we need to determine
  * the total bit count until reaching 32 significant (not squashed out) bits
@@ -1349,22 +1357,33 @@ static void paravirt_ctxt_switch_to(struct vcpu *v)
 }
 
 /* Update per-VCPU guest runstate shared memory area (if registered). */
-bool_t update_runstate_area(const struct vcpu *v)
+bool_t update_runstate_area(struct vcpu *v)
 {
+    bool_t rc;
+    smap_check_policy_t smap_policy;
+
     if ( guest_handle_is_null(runstate_guest(v)) )
         return 1;
 
+    smap_policy = smap_policy_change(v, SMAP_CHECK_ENABLED);
+
     if ( has_32bit_shinfo(v->domain) )
     {
         struct compat_vcpu_runstate_info info;
 
         XLAT_vcpu_runstate_info(&info, &v->runstate);
         __copy_to_guest(v->runstate_guest.compat, &info, 1);
-        return 1;
+        smap_policy_change(v, smap_policy);
+        rc = 1;
+    }
+    else
+    {
+        rc = __copy_to_guest(runstate_guest(v), &v->runstate, 1) !=
+             sizeof(v->runstate);
+        smap_policy_change(v, smap_policy);
     }
 
-    return __copy_to_guest(runstate_guest(v), &v->runstate, 1) !=
-           sizeof(v->runstate);
+    return rc;
 }
 
 static void _update_runstate_area(struct vcpu *v)
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index bb38fda..1b26175 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -164,25 +164,38 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
         struct segment_register seg;
         const struct cpu_user_regs *regs = guest_cpu_user_regs();
 
-        hvm_get_segment_register(v, x86_seg_ss, &seg);
-
         /* SMEP: kernel-mode instruction fetches from user-mode mappings
          * should fault.  Unlike NX or invalid bits, we're looking for _all_
          * entries in the walk to have _PAGE_USER set, so we need to do the
          * whole walk as if it were a user-mode one and then invert the answer. */
         smep =  hvm_smep_enabled(v) && (pfec & PFEC_insn_fetch);
 
-        /*
-         * SMAP: kernel-mode data accesses from user-mode mappings should fault
-         * A fault is considered as a SMAP violation if the following
-         * conditions come true:
-         *   - X86_CR4_SMAP is set in CR4
-         *   - A user page is accessed
-         *   - CPL = 3 or X86_EFLAGS_AC is clear
-         *   - Page fault in kernel mode
-         */
-        smap = hvm_smap_enabled(v) &&
-               ((seg.attr.fields.dpl == 3) || !(regs->eflags & X86_EFLAGS_AC));
+        switch ( v->arch.smap_check_policy )
+        {
+        case SMAP_CHECK_HONOR_CPL_AC:
+            hvm_get_segment_register(v, x86_seg_ss, &seg);
+
+            /*
+             * SMAP: kernel-mode data accesses from user-mode mappings
+             * should fault.
+             * A fault is considered as a SMAP violation if the following
+             * conditions come true:
+             *   - X86_CR4_SMAP is set in CR4
+             *   - A user page is accessed
+             *   - CPL = 3 or X86_EFLAGS_AC is clear
+             *   - Page fault in kernel mode
+             */
+            smap = hvm_smap_enabled(v) &&
+                   ((seg.attr.fields.dpl == 3) ||
+                    !(regs->eflags & X86_EFLAGS_AC));
+            break;
+        case SMAP_CHECK_ENABLED:
+            smap = hvm_smap_enabled(v);
+            break;
+        default:
+            ASSERT(v->arch.smap_check_policy == SMAP_CHECK_DISABLED);
+            break;
+        }
     }
 
     if ( smep || smap )
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index abf55fb..112d0b1 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -382,6 +382,12 @@ struct pv_vcpu
     struct vcpu_time_info pending_system_time;
 };
 
+typedef enum __packed {
+    SMAP_CHECK_HONOR_CPL_AC,    /* honor the guest's CPL and AC */
+    SMAP_CHECK_ENABLED,         /* enable the check */
+    SMAP_CHECK_DISABLED,        /* disable the check */
+} smap_check_policy_t;
+
 struct arch_vcpu
 {
     /*
@@ -438,6 +444,12 @@ struct arch_vcpu
      * and thus should be saved/restored. */
     bool_t nonlazy_xstate_used;
 
+    /*
+     * The SMAP check policy when updating runstate_guest(v) and the
+     * secondary system time.
+     */
+    smap_check_policy_t smap_check_policy;
+
     struct vmce vmce;
 
     struct paging_vcpu paging;
@@ -448,11 +460,14 @@ struct arch_vcpu
     XEN_GUEST_HANDLE(vcpu_time_info_t) time_info_guest;
 } __cacheline_aligned;
 
+smap_check_policy_t smap_policy_change(struct vcpu *v,
+                                       smap_check_policy_t new_policy);
+
 /* Shorthands to improve code legibility. */
 #define hvm_vmx         hvm_vcpu.u.vmx
 #define hvm_svm         hvm_vcpu.u.svm
 
-bool_t update_runstate_area(const struct vcpu *);
+bool_t update_runstate_area(struct vcpu *);
 bool_t update_secondary_system_time(const struct vcpu *,
                                     struct vcpu_time_info *);
 
-- 
1.8.3.1


* [PATCH v4 2/2] x86/hvm: Always do SMAP check when updating secondary system time for guest
  2014-07-30  1:35 [PATCH v4 0/2] x86/HVM: Properly handle SMAP check in certain cases Feng Wu
  2014-07-30  1:35 ` [PATCH v4 1/2] x86/hvm: Always do SMAP check when updating runstate_guest(v) Feng Wu
@ 2014-07-30  1:35 ` Feng Wu
  2014-07-30  8:24 ` [PATCH v4 0/2] x86/HVM: Properly handle SMAP check in certain cases Jan Beulich
  2 siblings, 0 replies; 8+ messages in thread
From: Feng Wu @ 2014-07-30  1:35 UTC (permalink / raw)
  To: xen-devel; +Cc: tim, Feng Wu, keir, jbeulich, linux

In this patch, we always do the SMAP check when updating the secondary
system time for the guest if the guest has SMAP enabled.

Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: Feng Wu <feng.wu@intel.com>
---
 xen/arch/x86/time.c          | 10 +++++++++-
 xen/include/asm-x86/domain.h |  2 +-
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index a4e1656..e4627f3 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -821,17 +821,23 @@ static void __update_vcpu_system_time(struct vcpu *v, int force)
         v->arch.pv_vcpu.pending_system_time = _u;
 }
 
-bool_t update_secondary_system_time(const struct vcpu *v,
+bool_t update_secondary_system_time(struct vcpu *v,
                                     struct vcpu_time_info *u)
 {
     XEN_GUEST_HANDLE(vcpu_time_info_t) user_u = v->arch.time_info_guest;
+    smap_check_policy_t saved_policy;
 
     if ( guest_handle_is_null(user_u) )
         return 1;
 
+    saved_policy = smap_policy_change(v, SMAP_CHECK_ENABLED);
+
     /* 1. Update userspace version. */
     if ( __copy_field_to_guest(user_u, u, version) == sizeof(u->version) )
+    {
+        smap_policy_change(v, saved_policy);
         return 0;
+    }
     wmb();
     /* 2. Update all other userspace fields. */
     __copy_to_guest(user_u, u, 1);
@@ -840,6 +846,8 @@ bool_t update_secondary_system_time(const struct vcpu *v,
     u->version = version_update_end(u->version);
     __copy_field_to_guest(user_u, u, version);
 
+    smap_policy_change(v, saved_policy);
+
     return 1;
 }
 
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 112d0b1..83329ed 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -468,7 +468,7 @@ smap_check_policy_t smap_policy_change(struct vcpu *v,
 #define hvm_svm         hvm_vcpu.u.svm
 
 bool_t update_runstate_area(struct vcpu *);
-bool_t update_secondary_system_time(const struct vcpu *,
+bool_t update_secondary_system_time(struct vcpu *,
                                     struct vcpu_time_info *);
 
 void vcpu_show_execution_state(struct vcpu *);
-- 
1.8.3.1


* Re: [PATCH v4 1/2] x86/hvm: Always do SMAP check when updating runstate_guest(v)
  2014-07-30  1:35 ` [PATCH v4 1/2] x86/hvm: Always do SMAP check when updating runstate_guest(v) Feng Wu
@ 2014-07-30  8:15   ` Jan Beulich
  2014-07-30  8:53     ` Wu, Feng
  0 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2014-07-30  8:15 UTC (permalink / raw)
  To: Feng Wu; +Cc: linux, tim, keir, xen-devel

>>> On 30.07.14 at 03:35, <feng.wu@intel.com> wrote:
> @@ -1349,22 +1357,33 @@ static void paravirt_ctxt_switch_to(struct vcpu *v)
>  }
>  
>  /* Update per-VCPU guest runstate shared memory area (if registered). */
> -bool_t update_runstate_area(const struct vcpu *v)
> +bool_t update_runstate_area(struct vcpu *v)
>  {
> +    bool_t rc;
> +    smap_check_policy_t smap_policy;
> +
>      if ( guest_handle_is_null(runstate_guest(v)) )
>          return 1;
>  
> +    smap_policy = smap_policy_change(v, SMAP_CHECK_ENABLED);
> +
>      if ( has_32bit_shinfo(v->domain) )
>      {
>          struct compat_vcpu_runstate_info info;
>  
>          XLAT_vcpu_runstate_info(&info, &v->runstate);
>          __copy_to_guest(v->runstate_guest.compat, &info, 1);
> -        return 1;
> +        smap_policy_change(v, smap_policy);
> +        rc = 1;
> +    }
> +    else
> +    {
> +        rc = __copy_to_guest(runstate_guest(v), &v->runstate, 1) !=
> +             sizeof(v->runstate);
> +        smap_policy_change(v, smap_policy);
>      }

The common smap_policy_change(v, smap_policy) should have been
pulled out of the if and else parts. I'll try to remember doing this when
committing (assuming that no other issue with the patch gets spotted
by someone else).
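
I.e. roughly (sketch only, untested):

    smap_policy = smap_policy_change(v, SMAP_CHECK_ENABLED);

    if ( has_32bit_shinfo(v->domain) )
    {
        struct compat_vcpu_runstate_info info;

        XLAT_vcpu_runstate_info(&info, &v->runstate);
        __copy_to_guest(v->runstate_guest.compat, &info, 1);
        rc = 1;
    }
    else
        rc = __copy_to_guest(runstate_guest(v), &v->runstate, 1) !=
             sizeof(v->runstate);

    smap_policy_change(v, smap_policy);

    return rc;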

Jan


* Re: [PATCH v4 0/2] x86/HVM: Properly handle SMAP check in certain cases
  2014-07-30  1:35 [PATCH v4 0/2] x86/HVM: Properly handle SMAP check in certain cases Feng Wu
  2014-07-30  1:35 ` [PATCH v4 1/2] x86/hvm: Always do SMAP check when updating runstate_guest(v) Feng Wu
  2014-07-30  1:35 ` [PATCH v4 2/2] x86/hvm: Always do SMAP check when updating secondary system time for guest Feng Wu
@ 2014-07-30  8:24 ` Jan Beulich
  2014-07-30 13:14   ` Sander Eikelenboom
  2 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2014-07-30  8:24 UTC (permalink / raw)
  To: linux; +Cc: keir, tim, Feng Wu, xen-devel

>>> On 30.07.14 at 03:35, <feng.wu@intel.com> wrote:
> V4:
> - Adjust the branch handling in update_runstate_area().
> - Remove the pointless initial value of smap_check_policy_t.
> - Use the __packed__ attribute for smap_check_policy_t to reduce
> its size to one byte.
> - Adjust the position of 'smap_check_policy' in struct arch_vcpu
> to use the existing padding in this structure.
> - Coding style fixes.
> 
> Feng Wu (2):
>   x86/hvm: Always do SMAP check when updating runstate_guest(v)
>   x86/hvm: Always do SMAP check when updating secondary system time for
>     guest
> 
>  xen/arch/x86/domain.c        | 27 +++++++++++++++++++++++----
>  xen/arch/x86/mm/guest_walk.c | 39 ++++++++++++++++++++++++++-------------
>  xen/arch/x86/time.c          | 10 +++++++++-
>  xen/include/asm-x86/domain.h | 19 +++++++++++++++++--
>  4 files changed, 75 insertions(+), 20 deletions(-)

Sander,

can you confirm this addresses the issue you had?

Jan


* Re: [PATCH v4 1/2] x86/hvm: Always do SMAP check when updating runstate_guest(v)
  2014-07-30  8:15   ` Jan Beulich
@ 2014-07-30  8:53     ` Wu, Feng
  0 siblings, 0 replies; 8+ messages in thread
From: Wu, Feng @ 2014-07-30  8:53 UTC (permalink / raw)
  To: Jan Beulich
  Cc: linux@eikelenboom.it, tim@xen.org, keir@xen.org,
	xen-devel@lists.xen.org



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Wednesday, July 30, 2014 4:15 PM
> To: Wu, Feng
> Cc: linux@eikelenboom.it; xen-devel@lists.xen.org; keir@xen.org;
> tim@xen.org
> Subject: Re: [PATCH v4 1/2] x86/hvm: Always do SMAP check when updating
> runstate_guest(v)
> 
> >>> On 30.07.14 at 03:35, <feng.wu@intel.com> wrote:
> > @@ -1349,22 +1357,33 @@ static void paravirt_ctxt_switch_to(struct vcpu *v)
> >  }
> >
> >  /* Update per-VCPU guest runstate shared memory area (if registered). */
> > -bool_t update_runstate_area(const struct vcpu *v)
> > +bool_t update_runstate_area(struct vcpu *v)
> >  {
> > +    bool_t rc;
> > +    smap_check_policy_t smap_policy;
> > +
> >      if ( guest_handle_is_null(runstate_guest(v)) )
> >          return 1;
> >
> > +    smap_policy = smap_policy_change(v, SMAP_CHECK_ENABLED);
> > +
> >      if ( has_32bit_shinfo(v->domain) )
> >      {
> >          struct compat_vcpu_runstate_info info;
> >
> >          XLAT_vcpu_runstate_info(&info, &v->runstate);
> >          __copy_to_guest(v->runstate_guest.compat, &info, 1);
> > -        return 1;
> > +        smap_policy_change(v, smap_policy);
> > +        rc = 1;
> > +    }
> > +    else
> > +    {
> > +        rc = __copy_to_guest(runstate_guest(v), &v->runstate, 1) !=
> > +             sizeof(v->runstate);
> > +        smap_policy_change(v, smap_policy);
> >      }
> 
> The common smap_policy_change(v, smap_policy) should have been
> pulled out of the if and else parts. I'll try to remember doing this when
> committing (assuming that no other issue with the patch gets spotted
> by someone else).
> 
> Jan

Thanks a lot. I forgot that one.

Thanks,
Feng


* Re: [PATCH v4 0/2] x86/HVM: Properly handle SMAP check in certain cases
  2014-07-30  8:24 ` [PATCH v4 0/2] x86/HVM: Properly handle SMAP check in certain cases Jan Beulich
@ 2014-07-30 13:14   ` Sander Eikelenboom
  2014-07-30 13:20     ` Wu, Feng
  0 siblings, 1 reply; 8+ messages in thread
From: Sander Eikelenboom @ 2014-07-30 13:14 UTC (permalink / raw)
  To: Jan Beulich; +Cc: keir, tim, Feng Wu, xen-devel


Wednesday, July 30, 2014, 10:24:39 AM, you wrote:

>>>> On 30.07.14 at 03:35, <feng.wu@intel.com> wrote:
>> V4:
>> - Adjust the branch handling in update_runstate_area().
>> - Remove the pointless initial value of smap_check_policy_t.
>> - Use the __packed__ attribute for smap_check_policy_t to reduce
>> its size to one byte.
>> - Adjust the position of 'smap_check_policy' in struct arch_vcpu
>> to use the existing padding in this structure.
>> - Coding style fixes.
>> 
>> Feng Wu (2):
>>   x86/hvm: Always do SMAP check when updating runstate_guest(v)
>>   x86/hvm: Always do SMAP check when updating secondary system time for
>>     guest
>> 
>>  xen/arch/x86/domain.c        | 27 +++++++++++++++++++++++----
>>  xen/arch/x86/mm/guest_walk.c | 39 ++++++++++++++++++++++++++-------------
>>  xen/arch/x86/time.c          | 10 +++++++++-
>>  xen/include/asm-x86/domain.h | 19 +++++++++++++++++--
>>  4 files changed, 75 insertions(+), 20 deletions(-)

> Sander,

> can you confirm this addresses the issue you had?

> Jan

Hi Jan / Feng,

It wasn't so much of an issue in the sense that I only got this warning (I didn't notice any other adverse effects myself):

(XEN) [2014-07-30 11:41:02] Segment register inaccessible for d1v0
(XEN) [2014-07-30 11:41:02] (If you see this outside of debugging activity, please report to xen-devel@lists.xenproject.org)

But I just tested the v4 patches and the warning is gone!

Thanks,

Sander


* Re: [PATCH v4 0/2] x86/HVM: Properly handle SMAP check in certain cases
  2014-07-30 13:14   ` Sander Eikelenboom
@ 2014-07-30 13:20     ` Wu, Feng
  0 siblings, 0 replies; 8+ messages in thread
From: Wu, Feng @ 2014-07-30 13:20 UTC (permalink / raw)
  To: Sander Eikelenboom, Jan Beulich
  Cc: tim@xen.org, keir@xen.org, xen-devel@lists.xen.org



> -----Original Message-----
> From: Sander Eikelenboom [mailto:linux@eikelenboom.it]
> Sent: Wednesday, July 30, 2014 9:14 PM
> To: Jan Beulich
> Cc: Wu, Feng; xen-devel@lists.xen.org; keir@xen.org; tim@xen.org
> Subject: Re: [PATCH v4 0/2] x86/HVM: Properly handle SMAP check in certain
> cases
> 
> 
> Wednesday, July 30, 2014, 10:24:39 AM, you wrote:
> 
> >>>> On 30.07.14 at 03:35, <feng.wu@intel.com> wrote:
> >> V4:
> >> - Adjust the branch handling in update_runstate_area().
> >> - Remove the pointless initial value of smap_check_policy_t.
> >> - Use the __packed__ attribute for smap_check_policy_t to reduce
> >> its size to one byte.
> >> - Adjust the position of 'smap_check_policy' in struct arch_vcpu
> >> to use the existing padding in this structure.
> >> - Coding style fixes.
> >>
> >> Feng Wu (2):
> >>   x86/hvm: Always do SMAP check when updating runstate_guest(v)
> >>   x86/hvm: Always do SMAP check when updating secondary system time for
> >>     guest
> >>
> >>  xen/arch/x86/domain.c        | 27 +++++++++++++++++++++++----
> >>  xen/arch/x86/mm/guest_walk.c | 39 ++++++++++++++++++++++++++-------------
> >>  xen/arch/x86/time.c          | 10 +++++++++-
> >>  xen/include/asm-x86/domain.h | 19 +++++++++++++++++--
> >>  4 files changed, 75 insertions(+), 20 deletions(-)
> 
> > Sander,
> 
> > can you confirm this addresses the issue you had?
> 
> > Jan
> 
> Hi Jan / Feng,
> 
> It wasn't so much of an issue in the sense that I only got this warning (I didn't
> notice any other adverse effects myself):
> 
> (XEN) [2014-07-30 11:41:02] Segment register inaccessible for d1v0
> (XEN) [2014-07-30 11:41:02] (If you see this outside of debugging activity, please
> report to xen-devel@lists.xenproject.org)

Printing this message means there is a real issue, since the function returns
from that point directly without executing the remaining code, which is incorrect.
> 
> But I just tested the v4 patches and the warning is gone!

Thanks a lot for the testing!

Thanks,
Feng

> 
> Thanks,
> 
> Sander
