From: Marcelo Tosatti <mtosatti@redhat.com>
Subject: Re: [patch 07/10] KVM: MMU: mmu_parent_walk
Date: Mon, 22 Sep 2008 19:04:10 -0300
Message-ID: <20080922220410.GC27744@dmt.cnet>
References: <20080918212749.800177179@localhost.localdomain> <20080918213336.976429470@localhost.localdomain> <48D44A4E.3070400@redhat.com> <20080921004428.GB10120@dmt.cnet> <48D8007B.10405@redhat.com>
In-Reply-To: <48D8007B.10405@redhat.com>
To: Avi Kivity
Cc: kvm@vger.kernel.org, "David S. Ahern"

On Mon, Sep 22, 2008 at 11:30:51PM +0300, Avi Kivity wrote:
> Marcelo Tosatti wrote:
>> On Fri, Sep 19, 2008 at 05:56:46PM -0700, Avi Kivity wrote:
>>
>>>> +	} while (level > start_level-1);
>>>> +}
>>>> +
>>>
>>> Could be much simplified with recursion, no? As the depth is limited
>>> to 4, there's no stack overflow problem.
>>>
>>
>> The early version was recursive, but since it's a generic helper I
>> preferred a non-recursive function.
>>
>
> Let's start with a super-simple recursive version. When the code has
> seen some debugging, we can add complexity. But for the initial phase,
> simpler is better.
>
> The non-recursive version has the advantage that it can be converted to
> a kvm_for_each_parent() later, but still, we can do that later.

OK, this is the earlier version. I'll resend the patchset with it
instead. Anything else? (Hoping you're OK with 32-bit nonpae being done
as an optimization later.)
Index: kvm.oos2/arch/x86/kvm/mmu.c
===================================================================
--- kvm.oos2.orig/arch/x86/kvm/mmu.c
+++ kvm.oos2/arch/x86/kvm/mmu.c
@@ -922,6 +922,31 @@ static void mmu_page_remove_parent_pte(s
 	BUG();
 }
 
+static void mmu_parent_walk(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+			    int (*fn) (struct kvm_vcpu *vcpu,
+				       struct kvm_mmu_page *sp))
+{
+	struct kvm_pte_chain *pte_chain;
+	struct hlist_node *node;
+	struct kvm_mmu_page *parent_sp;
+	int i;
+
+	fn(vcpu, sp);
+
+	if (!sp->multimapped && sp->parent_pte) {
+		parent_sp = page_header(__pa(sp->parent_pte));
+		mmu_parent_walk(vcpu, parent_sp, fn);
+		return;
+	}
+	hlist_for_each_entry(pte_chain, node, &sp->parent_ptes, link)
+		for (i = 0; i < NR_PTE_CHAIN_ENTRIES; ++i) {
+			if (!pte_chain->parent_ptes[i])
+				break;
+			parent_sp = page_header(__pa(pte_chain->parent_ptes[i]));
+			mmu_parent_walk(vcpu, parent_sp, fn);
+		}
+}
+
 static void nonpaging_prefetch_page(struct kvm_vcpu *vcpu,
 				    struct kvm_mmu_page *sp)
 {