From: Takuya Yoshikawa
Subject: Re: [PATCH 09/12] KVM: MMU: coalesce zapping page after mmu_sync_children
Date: Thu, 25 Feb 2016 11:15:45 +0900
Message-ID: <56CE63D1.40009@lab.ntt.co.jp>
References: <1456319873-34182-1-git-send-email-pbonzini@redhat.com> <1456319873-34182-10-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1456319873-34182-10-git-send-email-pbonzini@redhat.com>
To: Paolo Bonzini, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: guangrong.xiao@linux.intel.com, mtosatti@redhat.com
List-Id: kvm.vger.kernel.org

On 2016/02/24 22:17, Paolo Bonzini wrote:
> Move the call to kvm_mmu_flush_or_zap outside the loop.
>
> Signed-off-by: Paolo Bonzini
> ---
>  arch/x86/kvm/mmu.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 725316df32ec..6d47b5c43246 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2029,24 +2029,27 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
>  	struct mmu_page_path parents;
>  	struct kvm_mmu_pages pages;
>  	LIST_HEAD(invalid_list);
> +	bool flush = false;
>
>  	while (mmu_unsync_walk(parent, &pages)) {
>  		bool protected = false;
> -		bool flush = false;
>
>  		for_each_sp(pages, sp, parents, i)
>  			protected |= rmap_write_protect(vcpu, sp->gfn);
>
> -		if (protected)
> +		if (protected) {
>  			kvm_flush_remote_tlbs(vcpu->kvm);
> +			flush = false;
> +		}
>
>  		for_each_sp(pages, sp, parents, i) {
>  			flush |= kvm_sync_page(vcpu, sp, &invalid_list);
>  			mmu_pages_clear_parents(&parents);
>  		}
> -		kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
>  		cond_resched_lock(&vcpu->kvm->mmu_lock);

This may release mmu_lock before the zapping has been committed, since
invalid_list is now only processed after the loop.  Is that safe?  If so,
it would be good to see the reason explained in the changelog.

	Takuya

> 	}
> +
> +	kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
>  }
>
>  static void __clear_sp_write_flooding_count(struct kvm_mmu_page *sp)
>