From mboxrd@z Thu Jan 1 00:00:00 1970
From: Avi Kivity
Subject: Re: [PATCH] kvm mmu: reduce 50% memory usage
Date: Thu, 29 Apr 2010 21:43:40 +0300
Message-ID: <4BD9D35C.5090403@redhat.com>
References: <4BD8228D.7090708@cn.fujitsu.com> <20100429180955.GA17909@amt.cnet>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Lai Jiangshan, LKML, kvm@vger.kernel.org
To: Marcelo Tosatti
In-Reply-To: <20100429180955.GA17909@amt.cnet>

On 04/29/2010 09:09 PM, Marcelo Tosatti wrote:
>
> You missed quadrant on 4MB large page emulation with shadow (see
> updated patch below).

Good catch.

> Also, for some reason I can't understand, the assumption does not
> hold for large sptes with TDP, so reverted for now.

It's unrelated to TDP; the same issue exists with shadow. I think the
calculation is correct. For example, the 4th spte of a level=2 page
will yield gfn = 4 * 512.

> @@ -393,6 +393,27 @@ static void mmu_free_rmap_desc(struct kvm_rmap_desc *rd)
>  	kfree(rd);
>  }
>
> +static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
> +{
> +	gfn_t gfn;
> +
> +	if (!sp->role.direct)
> +		return sp->gfns[index];
> +
> +	gfn = sp->gfn + index * (1 << (sp->role.level - 1) * PT64_LEVEL_BITS);
> +	gfn += sp->role.quadrant << PT64_LEVEL_BITS;

The quadrant shift here should be PT64_LEVEL_BITS * level.

> +
> +	return gfn;
> +}
> +

--
I have a truly marvellous patch that fixes the bug which this signature is too narrow to contain.
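
To make the arithmetic concrete, here is a minimal standalone sketch of
the recomputation under discussion. This is an illustration, not the
kernel code: the struct, its field set, and the driver are hypothetical
simplifications, it assumes PT64_LEVEL_BITS == 9, and the quadrant term
follows the shift suggested in the review above.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t gfn_t;

#define PT64_LEVEL_BITS 9   /* 512 entries per table on x86-64 */

/* Simplified stand-in for struct kvm_mmu_page; only the fields used here. */
struct mmu_page {
	gfn_t gfn;       /* first gfn covered by this direct page */
	int   level;     /* 1 = PTE page, 2 = PDE page, ... */
	int   quadrant;  /* which piece of a split guest table */
};

/*
 * The point of the patch: for a direct page, the gfn of each entry is
 * a pure function of (gfn, level, quadrant, index), so no gfns[] array
 * needs to be stored.  An entry at level N covers 512^(N-1) small pages.
 */
static gfn_t page_get_gfn(const struct mmu_page *sp, int index)
{
	gfn_t gfn = sp->gfn +
		((gfn_t)index << ((sp->level - 1) * PT64_LEVEL_BITS));

	/* quadrant shifted by PT64_LEVEL_BITS * level, per the review */
	gfn += (gfn_t)sp->quadrant << (PT64_LEVEL_BITS * sp->level);
	return gfn;
}

int main(void)
{
	/* the example above: 4th spte of a level=2 page -> gfn = 4 * 512 */
	struct mmu_page sp = { .gfn = 0, .level = 2, .quadrant = 0 };
	printf("%llu\n", (unsigned long long)page_get_gfn(&sp, 4));
	return 0;
}

Compiled and run, this prints 2048, matching the 4th-spte/level=2
example above. Recomputing gfns this way is what lets direct pages skip
allocating the gfns array that indirect pages still index, which is the
memory saving the patch subject refers to.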