From: Xiao Guangrong
Subject: Re: KVM: MMU: improve n_max_mmu_pages calculation with TDP
Date: Thu, 21 Mar 2013 13:41:59 +0800
Message-ID: <514A9DA7.10702@linux.vnet.ibm.com>
References: <20130320201420.GA17347@amt.cnet>
In-Reply-To: <20130320201420.GA17347@amt.cnet>
To: Marcelo Tosatti
Cc: kvm, Ulrich Obergfell, Takuya Yoshikawa, Avi Kivity

On 03/21/2013 04:14 AM, Marcelo Tosatti wrote:
>
> kvm_mmu_calculate_mmu_pages numbers,
>
> maximum number of shadow pages = 2% of mapped guest pages
>
> Does not make sense for TDP guests where mapping all of guest
> memory with 4k pages cannot exceed "mapped guest pages / 512"
> (not counting root pages).
>
> Allow that maximum for TDP, forcing the guest to recycle otherwise.
>
> Signed-off-by: Marcelo Tosatti
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 956ca35..a9694a8d7 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4293,7 +4293,7 @@ nomem:
>  unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
>  {
>  	unsigned int nr_mmu_pages;
> -	unsigned int nr_pages = 0;
> +	unsigned int i, nr_pages = 0;
>  	struct kvm_memslots *slots;
>  	struct kvm_memory_slot *memslot;
>
> @@ -4302,7 +4302,19 @@ unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
>  	kvm_for_each_memslot(memslot, slots)
>  		nr_pages += memslot->npages;
>
> -	nr_mmu_pages = nr_pages * KVM_PERMILLE_MMU_PAGES / 1000;
> +	if (tdp_enabled) {
> +		/* one root page */
> +		nr_mmu_pages = 1;
> +		/* nr_pages / (512^i) per level, due to
> +		 * guest RAM map being linear */
> +		for (i = 1; i < 4; i++) {
> +			int nr_pages_round = nr_pages + (1 << (9*i));
> +			nr_mmu_pages += nr_pages_round >> (9*i);
> +		}

Marcelo,

Can this still work when a nested guest is used? Did you see a problem in
practice (a direct guest using more memory than your calculation allows)?
Also, MMIO can build some page tables too, which does not look like it is
considered in this patch.