From mboxrd@z Thu Jan 1 00:00:00 1970
From: Avi Kivity
Subject: Re: [PATCH 1/2] KVM: MMU: allow more page become unsync at gfn mapping time
Date: Sun, 23 May 2010 16:51:25 +0300
Message-ID: <4BF932DD.5070900@redhat.com>
References: <4BF91C34.6020904@cn.fujitsu.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Marcelo Tosatti, LKML, KVM list
To: Xiao Guangrong
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:46389 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1754308Ab0EWNvb
	(ORCPT ); Sun, 23 May 2010 09:51:31 -0400
In-Reply-To: <4BF91C34.6020904@cn.fujitsu.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 05/23/2010 03:14 PM, Xiao Guangrong wrote:
> In the current code, a shadow page can become unsync only if it is
> the sole shadow page for its gfn; this rule is too strict. In fact,
> we can let every last-level mapping page (i.e. the pte page) become
> unsync, and sync them at invlpg or TLB flush time.
>
> This patch allows more pages to become unsync at gfn mapping time.
>
>
> +
> +static void kvm_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn)
>  {
> -	unsigned index;
>  	struct hlist_head *bucket;
>  	struct kvm_mmu_page *s;
>  	struct hlist_node *node, *n;
> +	unsigned index;
>
> -	index = kvm_page_table_hashfn(sp->gfn);
> +	index = kvm_page_table_hashfn(gfn);
>  	bucket = &vcpu->kvm->arch.mmu_page_hash[index];
> -	/* don't unsync if pagetable is shadowed with multiple roles */
> +
>  	hlist_for_each_entry_safe(s, node, n, bucket, hash_link) {
> -		if (s->gfn != sp->gfn || s->role.direct)
> +		if (s->gfn != gfn || s->role.direct || s->unsync)
>  			continue;

role.invalid?
> -		if (s->role.word != sp->role.word)
> -			return 1;
> +		WARN_ON(s->role.level != PT_PAGE_TABLE_LEVEL);
> +		__kvm_unsync_page(vcpu, s);
>  	}
> -	trace_kvm_mmu_unsync_page(sp);
> -	++vcpu->kvm->stat.mmu_unsync;
> -	sp->unsync = 1;
> -
> -	kvm_mmu_mark_parents_unsync(sp);
> -
> -	mmu_convert_notrap(sp);
> -	return 0;
>  }
>
>  static int mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
>  				  bool can_unsync)
>  {
> -	struct kvm_mmu_page *shadow;
> +	unsigned index;
> +	struct hlist_head *bucket;
> +	struct kvm_mmu_page *s;
> +	struct hlist_node *node, *n;
> +	bool need_unsync = false;
> +
> +	index = kvm_page_table_hashfn(gfn);
> +	bucket = &vcpu->kvm->arch.mmu_page_hash[index];
> +	hlist_for_each_entry_safe(s, node, n, bucket, hash_link) {
> +		if (s->gfn != gfn || s->role.direct)
> +			continue;
>
> -	shadow = kvm_mmu_lookup_page(vcpu->kvm, gfn);
> -	if (shadow) {
> -		if (shadow->role.level != PT_PAGE_TABLE_LEVEL)
> +		if (s->role.level != PT_PAGE_TABLE_LEVEL)
>  			return 1;

role.invalid?

> -		if (shadow->unsync)
> -			return 0;
> -		if (can_unsync && oos_shadow)
> -			return kvm_unsync_page(vcpu, shadow);
> -		return 1;
> +
> +		if (!need_unsync && !s->unsync) {
> +			if (!can_unsync || !oos_shadow)
> +				return 1;
> +			need_unsync = true;
> +		}
>  	}
> +	if (need_unsync)
> +		kvm_unsync_pages(vcpu, gfn);
>  	return 0;
>  }
>

Looks good; I'm just uncertain about the role.invalid handling. What's the
reasoning here?

-- 
error compiling committee.c: too many arguments to function