From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xiao Guangrong
Subject: Re: [PATCH v2] KVM: MMU: optimize pte write path if don't have protected sp
Date: Wed, 11 May 2011 20:01:31 +0800
Message-ID: <4DCA7A9B.6060305@cn.fujitsu.com>
References: <4DC9F803.3050602@cn.fujitsu.com> <4DCA72C3.6050300@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Cc: Marcelo Tosatti , LKML , KVM
To: Avi Kivity
Return-path:
In-Reply-To: <4DCA72C3.6050300@redhat.com>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On 05/11/2011 07:28 PM, Avi Kivity wrote:
> On 05/11/2011 05:44 AM, Xiao Guangrong wrote:
>> Simply return from the kvm_mmu_pte_write path if no shadow page is
>> write-protected; then we can avoid walking all shadow pages and
>> holding the mmu-lock.
>>
>> @@ -1038,8 +1038,10 @@ static void kvm_mmu_free_page(struct kvm *kvm, struct kvm_mmu_page *sp)
>>          hlist_del(&sp->hash_link);
>>          list_del(&sp->link);
>>          free_page((unsigned long)sp->spt);
>> -        if (!sp->role.direct)
>> +        if (!sp->role.direct) {
>>                  free_page((unsigned long)sp->gfns);
>> +                atomic_dec(&kvm->arch.indirect_shadow_pages);
>> +        }
>>          kmem_cache_free(mmu_page_header_cache, sp);
>>          kvm_mod_used_mmu_pages(kvm, -1);
>>  }
>> @@ -1536,6 +1538,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>>                  kvm_sync_pages(vcpu, gfn);
>>
>>                  account_shadowed(vcpu->kvm, gfn);
>> +                atomic_inc(&vcpu->kvm->arch.indirect_shadow_pages);
>>          }
>
> Better in account_shadowed()/unaccount_shadowed(), no?
>

Yes, will fix. Thanks for your reminder!