From: Avi Kivity
Subject: Re: [PATCH 4/4] VMX: x86: Only reset MMU when necessary
Date: Wed, 12 May 2010 09:59:14 +0300
Message-ID: <4BEA51C2.5000708@redhat.com>
To: Sheng Yang
Cc: Marcelo Tosatti, kvm@vger.kernel.org
In-Reply-To: <1273645986-21526-4-git-send-email-sheng@linux.intel.com>
References: <4BEA4B2A.9060907@redhat.com> <1273645986-21526-4-git-send-email-sheng@linux.intel.com>

On 05/12/2010 09:33 AM, Sheng Yang wrote:
> Only modifying certain bits of CR0/CR4 requires a paging mode switch.
>
> Add update_rsvd_bits_mask() to handle the effect of EFER.NX updates on
> the reserved bits masks.
>
> @@ -2335,6 +2335,19 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
>  	}
>  }
>
> +void update_rsvd_bits_mask(struct kvm_vcpu *vcpu)
> +{
> +	if (!is_paging(vcpu))
> +		return;
> +	if (is_long_mode(vcpu))
> +		reset_rsvds_bits_mask(vcpu, PT64_ROOT_LEVEL);
> +	else if (is_pae(vcpu))
> +		reset_rsvds_bits_mask(vcpu, PT32E_ROOT_LEVEL);
> +	else
> +		reset_rsvds_bits_mask(vcpu, PT32_ROOT_LEVEL);
> +}
> +EXPORT_SYMBOL_GPL(update_rsvd_bits_mask);

Needs a kvm_ prefix if it's made a global symbol.  But isn't nx switching
rare enough that we can just reload the mmu completely?

> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index b59fc67..971a295 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -416,6 +416,10 @@ out:
>
>  static int __kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
>  {
> +	unsigned long old_cr0 = kvm_read_cr0(vcpu);
> +	unsigned long update_bits = X86_CR0_PG | X86_CR0_PE |
> +				    X86_CR0_CD | X86_CR0_NW;

PE doesn't affect paging, and CD and NW don't either, do they?  What
about WP?

> +
>  	cr0 |= X86_CR0_ET;
>
>  #ifdef CONFIG_X86_64
> @@ -449,7 +453,8 @@ static int __kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
>
>  	kvm_x86_ops->set_cr0(vcpu, cr0);
>
> -	kvm_mmu_reset_context(vcpu);
> +	if ((cr0 ^ old_cr0) & update_bits)
> +		kvm_mmu_reset_context(vcpu);
>  	return 0;
>  }
>
> @@ -692,6 +698,8 @@ static u32 emulated_msrs[] = {
>
>  static int set_efer(struct kvm_vcpu *vcpu, u64 efer)
>  {
> +	u64 old_efer = vcpu->arch.efer;
> +
>  	if (efer & efer_reserved_bits)
>  		return 1;
>
> @@ -722,6 +730,9 @@ static int set_efer(struct kvm_vcpu *vcpu, u64 efer)
>
>  	vcpu->arch.mmu.base_role.nxe = (efer & EFER_NX) && !tdp_enabled;
>
> +	if ((efer ^ old_efer) & EFER_NX)
> +		update_rsvd_bits_mask(vcpu);
> +
>  	return 0;
>  }

I think it's fine to reset the entire mmu context here; most guests won't
toggle nx all the time.  But this check needs to go into patch 3,
otherwise we have a regression between patches 3 and 4.
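That is, something like the following in set_efer() -- an untested sketch,
with the unrelated code elided:

	static int set_efer(struct kvm_vcpu *vcpu, u64 efer)
	{
		u64 old_efer = vcpu->arch.efer;
		...
		vcpu->arch.mmu.base_role.nxe = (efer & EFER_NX) && !tdp_enabled;

		/* nx toggling is rare, so a full mmu reload is cheap enough */
		if ((efer ^ old_efer) & EFER_NX)
			kvm_mmu_reset_context(vcpu);

		return 0;
	}

which also removes the need for update_rsvd_bits_mask() and its export.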
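And for __kvm_set_cr0(), my guess (untested, and the exact bit set is the
question above) is that the mask wants to be just

	unsigned long update_bits = X86_CR0_PG | X86_CR0_WP;

PG because it turns paging on and off, WP because the shadow mmu has to
honor it when building shadow page tables.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick
to panic.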