From mboxrd@z Thu Jan  1 00:00:00 1970
From: Avi Kivity
Subject: Re: [Patch v3] Enable CPU SMEP feature for KVM
Date: Fri, 27 May 2011 11:45:29 +0300
Message-ID: <4DDF64A9.1000001@redhat.com>
References: <5D8008F58939784290FAB48F54975198419FBE82D2@shsmsx502.ccr.corp.intel.com>
In-Reply-To: <5D8008F58939784290FAB48F54975198419FBE82D2@shsmsx502.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
To: "Yang, Wei Y"
Cc: "kvm@vger.kernel.org"

On 05/26/2011 04:28 PM, Yang, Wei Y wrote:
> This patchset enables a new CPU feature, SMEP (Supervisor Mode Execution
> Protection), in KVM. SMEP prevents the kernel from executing code in
> application (user-mode) pages. The updated Intel SDM describes this CPU
> feature; the document will be published soon.
>
> This patchset is based on Fenghua's SMEP patch series, as referred by:
> https://lkml.org/lkml/2011/5/17/523
>
>
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index 6c4dc01..7e0b2f8 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -120,7 +120,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
>  				    struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
>  				    gva_t addr, u32 access)
>  {
> -	pt_element_t pte;
> +	pt_element_t pte, pte_smep;
>  	pt_element_t __user *ptep_user;
>  	gfn_t table_gfn;
>  	unsigned index, pt_access, uninitialized_var(pte_access);
> @@ -150,7 +150,11 @@ walk:
>  		}
>  		--walker->level;
>  	}
> +	pte_smep = ~0ULL;
> +#else
> +	pte_smep = ~0U;
>  #endif

Use pte_smep = ~(pt_element_t)0; instead. But I don't understand why you 
aren't using pt_access here.

> +	if (unlikely(fetch_fault && !user_fault))

Why are kernel fetch faults unlikely? Rather, a SMEP fault is unlikely, 
so annotate that instead.

> +		if ((vcpu->arch.cr4 & X86_CR4_SMEP)
> +		    && (pte_smep & PT_USER_MASK))
> +			eperm = true;

Use kvm_read_cr4_bits() here instead of reading vcpu->arch.cr4 directly.

>  	gfn = gpte_to_gfn_lvl(pte, lvl);
>  	gfn += (addr & PT_LVL_OFFSET_MASK(lvl)) >> PAGE_SHIFT;
>
> @@ -305,7 +316,7 @@ error:
>
>  	walker->fault.error_code |= write_fault | user_fault;
>
> -	if (fetch_fault && mmu->nx)
> +	if (fetch_fault && (mmu->nx || (vcpu->arch.cr4 & X86_CR4_SMEP)))

Here, too.

> @@ -4507,6 +4507,15 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
>  			}
>  		}
>  	}
> +
> +	best = kvm_find_cpuid_entry(vcpu, 7, 0);
> +	if (best && (best->ebx & bit(X86_FEATURE_SMEP))) {
> +		if (boot_cpu_has(X86_FEATURE_SMEP))
> +			vcpu->arch.cr4_reserved_bits &=
> +				~((unsigned long)X86_CR4_SMEP);

This fails if cpuid is updated again to include SMEP. But I suggest 
dropping cr4_reserved_bits completely and just checking cpuid directly 
in kvm_set_cr4() when cr4.smep changes, which should be very rare. See 
how XSAVE is supported, for example.

> +		else
> +			best->ebx &= ~(bit(X86_FEATURE_SMEP));

Not needed - x86.c already masks unsupported features.

> +	}
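For the kvm_set_cr4() route, something along these lines ought to work 
(a rough, untested sketch; guest_cpuid_has_smep() is a hypothetical 
helper modeled on the existing xsave check):

	/* Hypothetical helper, following the xsave pattern: look up
	 * cpuid leaf 7, subleaf 0 and test the SMEP bit in ebx. */
	static bool guest_cpuid_has_smep(struct kvm_vcpu *vcpu)
	{
		struct kvm_cpuid_entry2 *best;

		best = kvm_find_cpuid_entry(vcpu, 7, 0);
		return best && (best->ebx & bit(X86_FEATURE_SMEP));
	}

	int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
	{
		/* ... existing checks ... */

		/* Reject cr4.smep unless the guest cpuid advertises
		 * SMEP; this makes cr4_reserved_bits unnecessary. */
		if ((cr4 & X86_CR4_SMEP) && !guest_cpuid_has_smep(vcpu))
			return 1;

		/* ... */
	}

With that, a write of an unsupported cr4.smep simply #GPs like any 
other reserved bit, and there is no per-vcpu reserved-bits state to 
keep in sync across cpuid updates.

-- 
I have a truly marvellous patch that fixes the bug which this signature 
is too narrow to contain.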