From: Feng Wu <feng.wu@intel.com>
Subject: [PATCH v3] x86/vmx: correct the SMEP logic for HVM_CR4_GUEST_RESERVED_BITS
Date: Tue, 6 May 2014 15:14:54 +0800
Message-ID: <1399360494-19490-1-git-send-email-feng.wu@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: xen-devel@lists.xen.org
Cc: kevin.tian@intel.com, Feng Wu <feng.wu@intel.com>, JBeulich@suse.com,
    andrew.cooper3@citrix.com, eddie.dong@intel.com, jun.nakajima@intel.com
List-Id: xen-devel@lists.xenproject.org

When checking the SMEP feature for HVM guests, we should check the VCPU
instead of the host CPU.

Signed-off-by: Feng Wu <feng.wu@intel.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/include/asm-x86/hvm/hvm.h | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index dcc3483..a1f639c 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -351,6 +351,19 @@ static inline int hvm_event_pending(struct vcpu *v)
     return hvm_funcs.event_pending(v);
 }
 
+static inline bool_t hvm_vcpu_has_smep(void)
+{
+    unsigned int eax, ebx;
+
+    hvm_cpuid(0x0, &eax, NULL, NULL, NULL);
+
+    if ( eax < 0x7 )
+        return 0;
+
+    hvm_cpuid(0x7, NULL, &ebx, NULL, NULL);
+    return !!(ebx & cpufeat_mask(X86_FEATURE_SMEP));
+}
+
 /* These reserved bits in lower 32 remain 0 after any load of CR0 */
 #define HVM_CR0_GUEST_RESERVED_BITS             \
     (~((unsigned long)                          \
@@ -370,7 +383,7 @@ static inline int hvm_event_pending(struct vcpu *v)
          X86_CR4_DE | X86_CR4_PSE | X86_CR4_PAE |       \
          X86_CR4_MCE | X86_CR4_PGE | X86_CR4_PCE |      \
          X86_CR4_OSFXSR | X86_CR4_OSXMMEXCPT |          \
-         (cpu_has_smep ? X86_CR4_SMEP : 0) |            \
+         (hvm_vcpu_has_smep() ? X86_CR4_SMEP : 0) |     \
          (cpu_has_fsgsbase ? X86_CR4_FSGSBASE : 0) |    \
          ((nestedhvm_enabled((_v)->domain) && cpu_has_vmx)\
              ? X86_CR4_VMXE : 0) |                      \
-- 
1.8.3.1
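
For illustration only, the check performed by the new helper can be sketched
as a standalone program outside the Xen tree. This is not Xen code:
guest_cpuid() and its toy CPUID table are hypothetical stand-ins for
hvm_cpuid() and the guest's real CPUID view; only the shape of the logic
(confirm that leaf 7 is enumerated, then test EBX bit 7) mirrors
hvm_vcpu_has_smep() above.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SMEP_BIT 7   /* CPUID.(EAX=7,ECX=0):EBX bit 7 advertises SMEP */

struct cpuid_leaf { uint32_t eax, ebx, ecx, edx; };

/* Hypothetical stand-in for hvm_cpuid(): returns what the *guest* sees. */
static void guest_cpuid(uint32_t leaf, struct cpuid_leaf *res)
{
    /* Toy policy: max basic leaf is 7, and leaf 7 advertises SMEP. */
    static const struct cpuid_leaf policy[] = {
        [0x0] = { .eax = 0x7 },
        [0x7] = { .ebx = 1u << SMEP_BIT },
    };

    *res = (leaf < sizeof(policy) / sizeof(policy[0]))
           ? policy[leaf] : (struct cpuid_leaf){ 0 };
}

/* Mirrors the structure of the new hvm_vcpu_has_smep() helper. */
static bool vcpu_has_smep(void)
{
    struct cpuid_leaf l;

    guest_cpuid(0x0, &l);
    if ( l.eax < 0x7 )          /* leaf 7 not enumerated for this guest */
        return false;

    guest_cpuid(0x7, &l);
    return l.ebx & (1u << SMEP_BIT);
}

int main(void)
{
    printf("guest advertises SMEP: %s\n", vcpu_has_smep() ? "yes" : "no");
    return 0;
}

The point of the change is visible in vcpu_has_smep(): the decision depends
only on the CPUID values exposed to the vCPU, so a guest whose CPUID policy
does not advertise SMEP cannot set CR4.SMEP even when the host CPU supports
the feature.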