From: Hollis Blanchard
Subject: Re: [patch 2/2] KVM: switch to get_user_pages_fast
Date: Thu, 11 Sep 2008 10:04:29 -0500
Message-ID: <1221145469.1316.14.camel@localhost.localdomain>
References: <20080911134340.714935482@localhost.localdomain>
	<20080911134425.012616301@localhost.localdomain>
In-Reply-To: <20080911134425.012616301@localhost.localdomain>
Mime-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 7bit
To: Marcelo Tosatti
Cc: Avi Kivity, kvm@vger.kernel.org

On Thu, 2008-09-11 at 10:43 -0300, Marcelo Tosatti wrote:
> plain text document attachment (kvm-use-fast-gup)
> Convert gfn_to_pfn to use get_user_pages_fast, which can do lockless
> pagetable lookups on x86. Kernel compilation on 4-way guest is 3.7%
> faster on VMX.
>
> Hollis, can you fix kvmppc_mmu_map? gfn_to_page must not be called with
> mmap_sem held.
>
> Looks tricky:
>
>	/* Must be called with mmap_sem locked for writing. */
>	static void kvmppc_44x_shadow_release(struct kvm_vcpu *vcpu,

Actually the comment is wrong, so it's not that tricky. ;) Marcelo, after
Avi applies the following patch, could you respin and remove the locking
around PPC's gfn_to_pfn() too? Thanks!

kvm: ppc: kvmppc_44x_shadow_release() does not require mmap_sem to be locked

Signed-off-by: Hollis Blanchard

diff --git a/arch/powerpc/kvm/44x_tlb.c b/arch/powerpc/kvm/44x_tlb.c
--- a/arch/powerpc/kvm/44x_tlb.c
+++ b/arch/powerpc/kvm/44x_tlb.c
@@ -110,7 +110,6 @@ static int kvmppc_44x_tlbe_is_writable(s
 	return tlbe->word2 & (PPC44x_TLB_SW|PPC44x_TLB_UW);
 }
 
-/* Must be called with mmap_sem locked for writing. */
 static void kvmppc_44x_shadow_release(struct kvm_vcpu *vcpu,
                                       unsigned int index)
 {
@@ -150,17 +149,16 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcp
 
 	/* Get reference to new page. */
 	down_read(&current->mm->mmap_sem);
 	new_page = gfn_to_page(vcpu->kvm, gfn);
+	up_read(&current->mm->mmap_sem);
 	if (is_error_page(new_page)) {
 		printk(KERN_ERR "Couldn't get guest page for gfn %lx!\n", gfn);
 		kvm_release_page_clean(new_page);
-		up_read(&current->mm->mmap_sem);
 		return;
 	}
 	hpaddr = page_to_phys(new_page);
 
 	/* Drop reference to old page. */
 	kvmppc_44x_shadow_release(vcpu, victim);
-	up_read(&current->mm->mmap_sem);
 
 	vcpu->arch.shadow_pages[victim] = new_page;
 
@@ -194,7 +192,6 @@ void kvmppc_mmu_invalidate(struct kvm_vc
 	int i;
 
 	/* XXX Replace loop with fancy data structures. */
-	down_write(&current->mm->mmap_sem);
 	for (i = 0; i <= tlb_44x_hwater; i++) {
 		struct tlbe *stlbe = &vcpu->arch.shadow_tlb[i];
 		unsigned int tid;
@@ -219,7 +216,6 @@ void kvmppc_mmu_invalidate(struct kvm_vc
 			            stlbe->tid, stlbe->word0, stlbe->word1,
 			            stlbe->word2, handler);
 	}
-	up_write(&current->mm->mmap_sem);
 }
 
 /* Invalidate all mappings on the privilege switch after PID has been changed.
@@ -231,7 +227,6 @@ void kvmppc_mmu_priv_switch(struct kvm_v
 
 	if (vcpu->arch.swap_pid) {
 		/* XXX Replace loop with fancy data structures. */
-		down_write(&current->mm->mmap_sem);
 		for (i = 0; i <= tlb_44x_hwater; i++) {
 			struct tlbe *stlbe = &vcpu->arch.shadow_tlb[i];
 
@@ -243,7 +238,6 @@ void kvmppc_mmu_priv_switch(struct kvm_v
 			            stlbe->tid, stlbe->word0, stlbe->word1,
 			            stlbe->word2, handler);
 		}
-		up_write(&current->mm->mmap_sem);
 
 		vcpu->arch.swap_pid = 0;
 	}

-- 
Hollis Blanchard
IBM Linux Technology Center
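
For context on the conversion Marcelo describes above: on x86, get_user_pages_fast()
attempts a lockless pagetable walk first and only falls back to the mmap_sem-protected
path when that fails; since the fallback takes mmap_sem itself, callers such as
gfn_to_page()/gfn_to_pfn() must not already hold it. Below is a minimal sketch of that
pattern using the 2.6.27-era signatures; the helper name hva_to_page_sketch() is
illustrative only and is not code from this series.

#include <linux/mm.h>
#include <linux/sched.h>

/*
 * Hypothetical helper (not from the series): pin one page backing a
 * userspace address, preferring the lockless fast path.
 */
static struct page *hva_to_page_sketch(unsigned long addr)
{
	struct page *page[1];
	int npages;

	/* Lockless lookup; returns the number of pages pinned. */
	npages = get_user_pages_fast(addr, 1, 1, page);
	if (npages == 1)
		return page[0];

	/* Fallback: the classic lookup under mmap_sem. */
	down_read(&current->mm->mmap_sem);
	npages = get_user_pages(current, current->mm, addr, 1, 1, 0,
				page, NULL);
	up_read(&current->mm->mmap_sem);

	return npages == 1 ? page[0] : NULL;
}

When the fast path succeeds, mmap_sem is never taken at all, which is presumably where
the kernel-compile speedup quoted at the top of the thread comes from; the remaining
down_read()/up_read() around gfn_to_page() in kvmppc_mmu_map() is what Hollis asks
Marcelo to drop in the respin.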