From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1760074AbcBYIq5 (ORCPT); Thu, 25 Feb 2016 03:46:57 -0500
Received: from mx1.redhat.com ([209.132.183.28]:42368 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1758272AbcBYIqz (ORCPT); Thu, 25 Feb 2016 03:46:55 -0500
Subject: Re: [PATCH 09/12] KVM: MMU: coalesce zapping page after
 mmu_sync_children
To: Takuya Yoshikawa, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
References: <1456319873-34182-1-git-send-email-pbonzini@redhat.com>
 <1456319873-34182-10-git-send-email-pbonzini@redhat.com>
 <56CE63D1.40009@lab.ntt.co.jp>
Cc: guangrong.xiao@linux.intel.com, mtosatti@redhat.com
From: Paolo Bonzini <pbonzini@redhat.com>
X-Enigmail-Draft-Status: N1110
Message-ID: <56CEBF7B.5030502@redhat.com>
Date: Thu, 25 Feb 2016 09:46:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101
 Thunderbird/38.5.0
MIME-Version: 1.0
In-Reply-To: <56CE63D1.40009@lab.ntt.co.jp>
Content-Type: text/plain; charset=iso-2022-jp
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 25/02/2016 03:15, Takuya Yoshikawa wrote:
> On 2016/02/24 22:17, Paolo Bonzini wrote:
>> Move the call to kvm_mmu_flush_or_zap outside the loop.
>>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>>  arch/x86/kvm/mmu.c | 9 ++++++---
>>  1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 725316df32ec..6d47b5c43246 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -2029,24 +2029,27 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
>>  	struct mmu_page_path parents;
>>  	struct kvm_mmu_pages pages;
>>  	LIST_HEAD(invalid_list);
>> +	bool flush = false;
>>
>>  	while (mmu_unsync_walk(parent, &pages)) {
>>  		bool protected = false;
>> -		bool flush = false;
>>
>>  		for_each_sp(pages, sp, parents, i)
>>  			protected |= rmap_write_protect(vcpu, sp->gfn);
>>
>> -		if (protected)
>> +		if (protected) {
>>  			kvm_flush_remote_tlbs(vcpu->kvm);
>> +			flush = false;
>> +		}
>>
>>  		for_each_sp(pages, sp, parents, i) {
>>  			flush |= kvm_sync_page(vcpu, sp, &invalid_list);
>>  			mmu_pages_clear_parents(&parents);
>>  		}
>> -		kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
>>  		cond_resched_lock(&vcpu->kvm->mmu_lock);
>
> This may release the mmu_lock before committing the zapping.
> Is it safe?  If so, we may want to see the reason in the changelog.

It should be safe; the page is already marked as invalid, and hence the
role will not match in kvm_mmu_get_page.  The idea is simply that
committing the zap is expensive (for example, it requires a remote TLB
flush), so you want to do it as rarely as possible.  I'll note this in
the commit message.

Paolo

> Takuya
>
>>  	}
>> +
>> +	kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
>>  }
>>
>>  static void __clear_sp_write_flooding_count(struct kvm_mmu_page *sp)
>>
>
>
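
For context, the check Paolo refers to is the role comparison in
kvm_mmu_get_page()'s lookup loop.  Below is a minimal sketch of the
mechanism, assuming the arch/x86/kvm/mmu.c of this era; the role
layout is abbreviated and the loop body is paraphrased, not verbatim
kernel source, so details may differ between kernel versions.

	/* Sketch only, not verbatim kernel code. */
	union kvm_mmu_page_role {
		unsigned word;
		struct {
			unsigned level:4;
			unsigned invalid:1;  /* set by kvm_mmu_prepare_zap_page() */
			/* ... remaining role bits ... */
		};
	};

	/*
	 * In kvm_mmu_get_page(), an existing shadow page is reused only
	 * if its role matches bit-for-bit.  A page already queued on
	 * invalid_list has role.invalid = 1, while the role being looked
	 * up always has invalid = 0, so the comparison rejects it even
	 * though the zap has not been committed yet:
	 */
	for_each_gfn_sp(vcpu->kvm, sp, gfn) {
		if (sp->role.word != role.word)
			continue;  /* zapped-but-uncommitted pages fail here */
		/* ... found a matching, still-valid page: reuse it ... */
	}

Because role.invalid is set by kvm_mmu_prepare_zap_page() before
mmu_lock is ever dropped, deferring kvm_mmu_flush_or_zap() to after the
loop only batches the expensive commit (the remote TLB flush and the
freeing); it does not create a window in which a queued page could be
picked up again.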