* Re: [PATCH] x86/vmx: correct the SMEP logic for HVM_CR0_GUEST_RESERVED_BITS
2014-04-23 14:32 [PATCH] x86/vmx: correct the SMEP logic for HVM_CR0_GUEST_RESERVED_BITS Feng Wu
@ 2014-04-23 9:43 ` Andrew Cooper
2014-04-23 10:05 ` Jan Beulich
1 sibling, 0 replies; 3+ messages in thread
From: Andrew Cooper @ 2014-04-23 9:43 UTC (permalink / raw)
To: Feng Wu
Cc: kevin.tian, ian.campbell, eddie.dong, xen-devel, JBeulich,
jun.nakajima
On 23/04/14 15:32, Feng Wu wrote:
> When checking the SMEP feature for HVM guests, we should check the
> VCPU instead of the host CPU.
>
> Signed-off-by: Feng Wu <feng.wu@intel.com>
> ---
> xen/include/asm-x86/hvm/hvm.h | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index dcc3483..74a09ef 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -351,6 +351,15 @@ static inline int hvm_event_pending(struct vcpu *v)
> return hvm_funcs.event_pending(v);
> }
>
> +static inline bool_t hvm_vcpu_has_smep(void)
> +{
> + unsigned int ebx = 0, leaf = 0x7;
You need to check the max leaf first: query hvm_cpuid leaf 0 and confirm
its EAX covers leaf 7 before reading leaf 7 at all.
~Andrew
> +
> + hvm_cpuid(leaf, NULL, &ebx, NULL, NULL);
> +
> + return !!(ebx & cpufeat_mask(X86_FEATURE_SMEP));
> +}
> +
> /* These reserved bits in lower 32 remain 0 after any load of CR0 */
> #define HVM_CR0_GUEST_RESERVED_BITS \
> (~((unsigned long) \
> @@ -370,7 +379,7 @@ static inline int hvm_event_pending(struct vcpu *v)
> X86_CR4_DE | X86_CR4_PSE | X86_CR4_PAE | \
> X86_CR4_MCE | X86_CR4_PGE | X86_CR4_PCE | \
> X86_CR4_OSFXSR | X86_CR4_OSXMMEXCPT | \
> - (cpu_has_smep ? X86_CR4_SMEP : 0) | \
> + (hvm_vcpu_has_smep() ? X86_CR4_SMEP : 0) | \
> (cpu_has_fsgsbase ? X86_CR4_FSGSBASE : 0) | \
> ((nestedhvm_enabled((_v)->domain) && cpu_has_vmx)\
> ? X86_CR4_VMXE : 0) | \
* Re: [PATCH] x86/vmx: correct the SMEP logic for HVM_CR0_GUEST_RESERVED_BITS
2014-04-23 14:32 [PATCH] x86/vmx: correct the SMEP logic for HVM_CR0_GUEST_RESERVED_BITS Feng Wu
2014-04-23 9:43 ` Andrew Cooper
@ 2014-04-23 10:05 ` Jan Beulich
1 sibling, 0 replies; 3+ messages in thread
From: Jan Beulich @ 2014-04-23 10:05 UTC (permalink / raw)
To: Feng Wu
Cc: kevin.tian, ian.campbell, andrew.cooper3, eddie.dong, xen-devel,
jun.nakajima
>>> On 23.04.14 at 16:32, <feng.wu@intel.com> wrote:
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -351,6 +351,15 @@ static inline int hvm_event_pending(struct vcpu *v)
> return hvm_funcs.event_pending(v);
> }
>
> +static inline bool_t hvm_vcpu_has_smep(void)
> +{
> + unsigned int ebx = 0, leaf = 0x7;
Pointless initializer for ebx, and pointless variable leaf.
> + hvm_cpuid(leaf, NULL, &ebx, NULL, NULL);
And, as I just saw Andrew already pointed out, this needs gating on
there being a (visible) leaf 7 in the first place.
Jan
* [PATCH] x86/vmx: correct the SMEP logic for HVM_CR0_GUEST_RESERVED_BITS
@ 2014-04-23 14:32 Feng Wu
2014-04-23 9:43 ` Andrew Cooper
2014-04-23 10:05 ` Jan Beulich
0 siblings, 2 replies; 3+ messages in thread
From: Feng Wu @ 2014-04-23 14:32 UTC (permalink / raw)
To: xen-devel
Cc: kevin.tian, Feng Wu, JBeulich, andrew.cooper3, eddie.dong,
jun.nakajima, ian.campbell
When checking the SMEP feature for HVM guests, we should check the
VCPU instead of the host CPU.
Signed-off-by: Feng Wu <feng.wu@intel.com>
---
xen/include/asm-x86/hvm/hvm.h | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index dcc3483..74a09ef 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -351,6 +351,15 @@ static inline int hvm_event_pending(struct vcpu *v)
return hvm_funcs.event_pending(v);
}
+static inline bool_t hvm_vcpu_has_smep(void)
+{
+ unsigned int ebx = 0, leaf = 0x7;
+
+ hvm_cpuid(leaf, NULL, &ebx, NULL, NULL);
+
+ return !!(ebx & cpufeat_mask(X86_FEATURE_SMEP));
+}
+
/* These reserved bits in lower 32 remain 0 after any load of CR0 */
#define HVM_CR0_GUEST_RESERVED_BITS \
(~((unsigned long) \
@@ -370,7 +379,7 @@ static inline int hvm_event_pending(struct vcpu *v)
X86_CR4_DE | X86_CR4_PSE | X86_CR4_PAE | \
X86_CR4_MCE | X86_CR4_PGE | X86_CR4_PCE | \
X86_CR4_OSFXSR | X86_CR4_OSXMMEXCPT | \
- (cpu_has_smep ? X86_CR4_SMEP : 0) | \
+ (hvm_vcpu_has_smep() ? X86_CR4_SMEP : 0) | \
(cpu_has_fsgsbase ? X86_CR4_FSGSBASE : 0) | \
((nestedhvm_enabled((_v)->domain) && cpu_has_vmx)\
? X86_CR4_VMXE : 0) | \
--
1.8.3.1