From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xiao Guangrong
Subject: Re: [PATCH] KVM: x86: fix bogus warning about reserved bits
Date: Thu, 24 Sep 2015 11:23:08 +0800
Message-ID: <56036C9C.9040709@linux.intel.com>
References: <1442910329-3357-1-git-send-email-pbonzini@redhat.com>
 <20150922175647.GC3568@pd.tnic> <5601C266.4060601@redhat.com>
 <20150923075635.GA3564@pd.tnic> <560272AF.40802@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
To: Paolo Bonzini, Borislav Petkov
Return-path:
Received: from mga03.intel.com ([134.134.136.65]:42326 "EHLO mga03.intel.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
 id S932279AbbIXD3W (ORCPT ); Wed, 23 Sep 2015 23:29:22 -0400
In-Reply-To: <560272AF.40802@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 09/23/2015 05:36 PM, Paolo Bonzini wrote:
>
>
> On 23/09/2015 09:56, Borislav Petkov wrote:
>> On Tue, Sep 22, 2015 at 11:04:38PM +0200, Paolo Bonzini wrote:
>>> Let's add more debugging output:
>>
>> Here you go:
>>
>> [ 50.474002] walk_shadow_page_get_mmio_spte: detect reserved bits on spte, addr 0xb8000 (level 4, 0xf0000000000f8)
>> [ 50.484249] walk_shadow_page_get_mmio_spte: detect reserved bits on spte, addr 0xb8000 (level 3, 0xf000000000078)
>> [ 50.494492] walk_shadow_page_get_mmio_spte: detect reserved bits on spte, addr 0xb8000 (level 2, 0xf000000000078)
>
> And another patch, which both cranks up the debugging a bit and
> tries another fix:
>
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index dd05b9cef6ae..b2f49bb15ba1 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -105,8 +105,15 @@ static inline bool guest_cpuid_has_x2apic(struct kvm_vcpu *vcpu)
>  static inline bool guest_cpuid_is_amd(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_cpuid_entry2 *best;
> +	static bool first = true;
>  
>  	best = kvm_find_cpuid_entry(vcpu, 0, 0);
> +	if (first && best) {
> +		printk("cpuid(0).ebx = %x\n", best->ebx);
> +		first = false;
> +	} else if (first)
> +		printk_ratelimited("cpuid(0) not initialized yet\n");
> +
>  	return best && best->ebx == X86EMUL_CPUID_VENDOR_AuthenticAMD_ebx;
>  }
>  
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index bf1122e9c7bf..f50b280ffee1 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3625,7 +3625,7 @@ static void
>  __reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
>  			struct rsvd_bits_validate *rsvd_check,
>  			int maxphyaddr, int level, bool nx, bool gbpages,
> -			bool pse)
> +			bool pse, bool amd)
>  {
>  	u64 exb_bit_rsvd = 0;
>  	u64 gbpages_bit_rsvd = 0;
> @@ -3642,7 +3642,7 @@ __reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
>  	 * Non-leaf PML4Es and PDPEs reserve bit 8 (which would be the G bit for
>  	 * leaf entries) on AMD CPUs only.
>  	 */
> -	if (guest_cpuid_is_amd(vcpu))
> +	if (amd)
>  		nonleaf_bit8_rsvd = rsvd_bits(8, 8);
>  
>  	switch (level) {
> @@ -3710,7 +3710,7 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
>  	__reset_rsvds_bits_mask(vcpu, &context->guest_rsvd_check,
>  				cpuid_maxphyaddr(vcpu), context->root_level,
>  				context->nx, guest_cpuid_has_gbpages(vcpu),
> -				is_pse(vcpu));
> +				is_pse(vcpu), guest_cpuid_is_amd(vcpu));
>  }
>  
>  static void
> @@ -3760,13 +3760,25 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
>  void
>  reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
>  {
> +	/*
> +	 * Passing "true" to the last argument is okay; it adds a check
> +	 * on bit 8 of the SPTEs which KVM doesn't use anyway.
> +	 */
>  	__reset_rsvds_bits_mask(vcpu, &context->shadow_zero_check,
>  				boot_cpu_data.x86_phys_bits,
>  				context->shadow_root_level, context->nx,
> -				guest_cpuid_has_gbpages(vcpu), is_pse(vcpu));
> +				guest_cpuid_has_gbpages(vcpu), is_pse(vcpu),
> +				true);
>  }
>  EXPORT_SYMBOL_GPL(reset_shadow_zero_bits_mask);
>  
> +static inline bool
> +boot_cpu_is_amd(void)
> +{
> +	WARN_ON_ONCE(!tdp_enabled);
> +	return shadow_x_mask != 0;

shadow_x_mask != 0 means the CPU is Intel. Borislav, could you please check shadow_x_mask == 0 instead and test it again?

Furthermore, using guest_cpuid_is_amd() to detect the hardware CPU vendor is wrong, as userspace can fool KVM. We should test the host CPUID, or introduce an Intel/AMD callback, instead.