From: Xiao Guangrong
Subject: Re: [PATCH 2/2] KVM: MMU: Flush TLBs only once in invlpg() before releasing mmu_lock
Date: Fri, 10 Feb 2012 15:42:14 +0800
Message-ID: <4F34CA56.1080202@linux.vnet.ibm.com>
References: <20120210152831.6ac3ac87.yoshikawa.takuya@oss.ntt.co.jp>
 <20120210152950.b54969be.yoshikawa.takuya@oss.ntt.co.jp>
 <4F34BF7E.4050009@linux.vnet.ibm.com>
 <4F34C591.1030909@oss.ntt.co.jp>
In-Reply-To: <4F34C591.1030909@oss.ntt.co.jp>
To: Takuya Yoshikawa
Cc: avi@redhat.com, mtosatti@redhat.com, kvm@vger.kernel.org, aarcange@redhat.com

On 02/10/2012 03:21 PM, Takuya Yoshikawa wrote:
> (2012/02/10 15:55), Xiao Guangrong wrote:
>> On 02/10/2012 02:29 PM, Takuya Yoshikawa wrote:
>>
>>> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
>>> index 1561028..69d06f5 100644
>>> --- a/arch/x86/kvm/paging_tmpl.h
>>> +++ b/arch/x86/kvm/paging_tmpl.h
>>> @@ -682,6 +682,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>>>  	mmu_topup_memory_caches(vcpu);
>>>
>>>  	spin_lock(&vcpu->kvm->mmu_lock);
>>> +
>>>  	for_each_shadow_entry(vcpu, gva, iterator) {
>>>  		level = iterator.level;
>>>  		sptep = iterator.sptep;
>>> @@ -697,8 +698,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>>>  			pte_gpa = FNAME(get_level1_sp_gpa)(sp);
>>>  			pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
>>>
>>> -			if (mmu_page_zap_pte(vcpu->kvm, sp, sptep))
>>> -				kvm_flush_remote_tlbs(vcpu->kvm);
>>> +			mmu_page_zap_pte(vcpu->kvm, sp, sptep);
>>>
>>>  			if (!rmap_can_add(vcpu))
>>>  				break;
>>> @@ -713,6 +713,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>>>  		if (!is_shadow_present_pte(*sptep) || !sp->unsync_children)
>>>  			break;
>>>  	}
>>> +
>>> +	kvm_flush_remote_tlbs(vcpu->kvm);
>>>  	spin_unlock(&vcpu->kvm->mmu_lock);
>>
>>
>> It is obvious wrong, i do not think all tlbs always need be flushed...
>>
>
> What do you mean by "obvious wrong" ?

In the current code, TLBs are flushed only when a spte is zapped, but after
your change, they are always flushed.

> Even before this patch, we were always flushing TLBs from the caller.

Oh, could you please tell me where TLBs can be flushed in this path other
than when a spte is zapped?

> I have a question: your patches apparently changed the timing of TLB flush
> but all I could see from the changelogs were:
>
>
> KVM: MMU: cleanup FNAME(invlpg)
>
> Directly Use mmu_page_zap_pte to zap spte in FNAME(invlpg), also remove the
> same code between FNAME(invlpg) and FNAME(sync_page)

This patch does not change the logic, and the point at which the TLB is
flushed is also unchanged; it just calls kvm_flush_remote_tlbs directly when
a spte is zapped.

> KVM: MMU: fast prefetch spte on invlpg path
>
> Fast prefetch spte for the unsync shadow page on invlpg path

This patch did not change where kvm_flush_remote_tlbs is called. What is
causing your confusion?