From: Scott Wood
Subject: Re: [PATCH 3/3] KVM: PPC: e500: Implement TLB1-in-TLB0 mapping
Date: Thu, 17 Jan 2013 18:31:07 -0600
Message-ID: <1358469067.13978.19@snotra>
In-Reply-To: <1358463041-25922-4-git-send-email-agraf@suse.de>
To: Alexander Graf
Sender: kvm-owner@vger.kernel.org

On 01/17/2013 04:50:41 PM, Alexander Graf wrote:
> When a host mapping fault happens in a guest TLB1 entry today, we
> map the translated guest entry into the host's TLB1.
>
> This isn't particularly clever when the guest is mapped by normal 4k
> pages, since these would be a lot better to put into TLB0 instead.
>
> This patch adds the required logic to map 4k TLB1 shadow maps into
> the host's TLB0.
>
> Signed-off-by: Alexander Graf
> ---
>  arch/powerpc/kvm/e500.h          |    1 +
>  arch/powerpc/kvm/e500_mmu_host.c |   58 +++++++++++++++++++++++++++++--------
>  2 files changed, 46 insertions(+), 13 deletions(-)
>
> diff --git a/arch/powerpc/kvm/e500.h b/arch/powerpc/kvm/e500.h
> index 00f96d8..d32e6a8 100644
> --- a/arch/powerpc/kvm/e500.h
> +++ b/arch/powerpc/kvm/e500.h
> @@ -28,6 +28,7 @@
>
>  #define E500_TLB_VALID 1
>  #define E500_TLB_BITMAP 2
> +#define E500_TLB_TLB0 (1 << 2)
>
>  struct tlbe_ref {
>  	pfn_t pfn;
> diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
> index 3bb2154..cbb6cf8 100644
> --- a/arch/powerpc/kvm/e500_mmu_host.c
> +++ b/arch/powerpc/kvm/e500_mmu_host.c
> @@ -198,6 +198,11 @@ void inval_gtlbe_on_host(struct kvmppc_vcpu_e500 *vcpu_e500, int tlbsel,
>  		local_irq_restore(flags);
>
>  		return;
> +	} else if (tlbsel == 1 &&
> +		   vcpu_e500->gtlb_priv[1][esel].ref.flags & E500_TLB_TLB0) {
> +		/* This is a slow path, so just invalidate everything */
> +		kvmppc_e500_tlbil_all(vcpu_e500);
> +		vcpu_e500->gtlb_priv[1][esel].ref.flags &= ~E500_TLB_TLB0;
>  	}

What if the guest TLB1 entry is backed by a mix of TLB0 and TLB1
entries on the host?  I don't see checks elsewhere that would prevent
this situation.

> @@ -529,9 +556,14 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 eaddr, gpa_t gpaddr,
>  	case 1: {
>  		gfn_t gfn = gpaddr >> PAGE_SHIFT;
>
> -		stlbsel = 1;
>  		sesel = kvmppc_e500_tlb1_map(vcpu_e500, eaddr, gfn,
>  					     gtlbe, &stlbe, esel);
> +		if (sesel < 0) {
> +			/* TLB0 mapping */
> +			sesel = 0;
> +			stlbsel = 0;
> +		} else
> +			stlbsel = 1;
>  		break;
>  	}

Maybe push the call to write_tlbe() into the tlb0/1_map functions,
getting rid of the need to pass sesel/stlbsel/stlbe back?

-Scott