From mboxrd@z Thu Jan  1 00:00:00 1970
From: Marcelo Tosatti
Subject: Re: [PATCH 3/4] KVM: MMU: Avoid calling gfn_to_page() in mmu_set_spte()
Date: Mon, 31 Dec 2007 09:50:23 -0500
Message-ID: <20071231145023.GA15378@dmt>
References: <1199013439-2047-1-git-send-email-avi@qumranet.com>
 <1199013439-2047-4-git-send-email-avi@qumranet.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
To: Avi Kivity
Content-Disposition: inline
In-Reply-To: <1199013439-2047-4-git-send-email-avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
Sender: kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
Errors-To: kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
List-Id: kvm.vger.kernel.org

On Sun, Dec 30, 2007 at 01:17:18PM +0200, Avi Kivity wrote:
> Since gfn_to_page() is a sleeping function, and we want to make the core mmu
> spinlocked, we need to pass the page from the walker context (which can sleep)
> to the shadow context (which cannot).
> 
> Signed-off-by: Avi Kivity
> ---
>  arch/x86/kvm/mmu.c         |   58 ++++++++++++++++++++++++++++++++++++++++----
>  arch/x86/kvm/paging_tmpl.h |   25 +++++++++++++++----
>  include/asm-x86/kvm_host.h |    5 ++++
>  3 files changed, 78 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 3b91227..1b68f07 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> +static void mmu_guess_page_from_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
> +					  const u8 *new, int bytes)
> +{
> +	gfn_t gfn;
> +	int r;
> +	u64 gpte = 0;
> +
> +	if (bytes != 4 && bytes != 8)
> +		return;
> +
> +	down_read(&current->mm->mmap_sem);

kvm_mmu_pte_write() already holds mmap_sem in read mode, so this is not
required (and is actually deadlock prone). Other than that the patchset
looks good, thanks.
--- mmu.c.orig	2007-12-31 09:34:50.000000000 -0500
+++ mmu.c	2007-12-31 09:34:52.000000000 -0500
@@ -1360,7 +1360,6 @@ static void mmu_guess_page_from_pte_writ
 	if (bytes != 4 && bytes != 8)
 		return;
 
-	down_read(&current->mm->mmap_sem);
 	/*
 	 * Assume that the pte write on a page table of the same type
 	 * as the current vcpu paging mode.  This is nearly always true
@@ -1372,7 +1371,7 @@ static void mmu_guess_page_from_pte_writ
 	if ((bytes == 4) && (gpa % 4 == 0)) {
 		r = kvm_read_guest(vcpu->kvm, gpa & ~(u64)7, &gpte, 8);
 		if (r)
-			goto out;
+			return;
 		memcpy((void *)&gpte + (gpa % 8), new, 4);
 	} else if ((bytes == 8) && (gpa % 8 == 0)) {
 		memcpy((void *)&gpte, new, 8);
@@ -1382,12 +1381,10 @@ static void mmu_guess_page_from_pte_writ
 		memcpy((void *)&gpte, new, 4);
 	}
 	if (!is_present_pte(gpte))
-		goto out;
+		return;
 	gfn = (gpte & PT64_BASE_ADDR_MASK) >> PAGE_SHIFT;
 	vcpu->arch.update_pte.gfn = gfn;
 	vcpu->arch.update_pte.page = gfn_to_page(vcpu->kvm, gfn);
-out:
-	up_read(&current->mm->mmap_sem);
 }
 
 void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,