Subject: Re: [PATCH 09/12] KVM: MMU: coalesce zapping page after mmu_sync_children
From: Xiao Guangrong
To: Takuya Yoshikawa, Paolo Bonzini, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: mtosatti@redhat.com
Date: Thu, 25 Feb 2016 15:35:03 +0800
Message-ID: <56CEAEA7.8080702@linux.intel.com>
In-Reply-To: <56CE63D1.40009@lab.ntt.co.jp>
References: <1456319873-34182-1-git-send-email-pbonzini@redhat.com> <1456319873-34182-10-git-send-email-pbonzini@redhat.com> <56CE63D1.40009@lab.ntt.co.jp>

On 02/25/2016 10:15 AM, Takuya Yoshikawa wrote:
> On 2016/02/24 22:17, Paolo Bonzini wrote:
>> Move the call to kvm_mmu_flush_or_zap outside the loop.
>>
>> Signed-off-by: Paolo Bonzini
>> ---
>>  arch/x86/kvm/mmu.c | 9 ++++++---
>>  1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 725316df32ec..6d47b5c43246 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -2029,24 +2029,27 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
>>  	struct mmu_page_path parents;
>>  	struct kvm_mmu_pages pages;
>>  	LIST_HEAD(invalid_list);
>> +	bool flush = false;
>>
>>  	while (mmu_unsync_walk(parent, &pages)) {
>>  		bool protected = false;
>> -		bool flush = false;
>>
>>  		for_each_sp(pages, sp, parents, i)
>>  			protected |= rmap_write_protect(vcpu, sp->gfn);
>>
>> -		if (protected)
>> +		if (protected) {
>>  			kvm_flush_remote_tlbs(vcpu->kvm);
>> +			flush = false;
>> +		}
>>
>>  		for_each_sp(pages, sp, parents, i) {
>>  			flush |= kvm_sync_page(vcpu, sp, &invalid_list);
>>  			mmu_pages_clear_parents(&parents);
>>  		}
>> -		kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
>>  		cond_resched_lock(&vcpu->kvm->mmu_lock);
>
> This may release the mmu_lock before committing the zapping.
> Is it safe?  If so, we may want to see the reason in the changelog.

It is unsafe indeed, please do not do it.