From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alexander Graf
Subject: Re: [PATCH] KVM: PPC: BOOK3S: HV: Don't try to allocate from kernel page allocator for hash page table.
Date: Mon, 05 May 2014 13:26:16 +0200
Message-ID: <53677558.50900@suse.de>
References: <1399224322-22028-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: benh@kernel.crashing.org, paulus@samba.org, linuxppc-dev@lists.ozlabs.org, kvm-ppc@vger.kernel.org, kvm@vger.kernel.org
To: "Aneesh Kumar K.V"
Return-path:
Received: from cantor2.suse.de ([195.135.220.15]:39370 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1753525AbaEEL0R
	(ORCPT ); Mon, 5 May 2014 07:26:17 -0400
In-Reply-To: <1399224322-22028-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 05/04/2014 07:25 PM, Aneesh Kumar K.V wrote:
> We reserve 5% of total ram for CMA allocation and not using that can
> result in us running out of numa node memory with specific
> configuration. One caveat is we may not have node local hpt with pinned
> vcpu configuration. But currently libvirt also pins the vcpu to cpuset
> after creating hash page table.

I don't understand the problem. Can you please elaborate?

Alex

>
> Signed-off-by: Aneesh Kumar K.V
> ---
>  arch/powerpc/kvm/book3s_64_mmu_hv.c | 23 ++++++-----------------
>  1 file changed, 6 insertions(+), 17 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> index fb25ebc0af0c..f32896ffd784 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> @@ -52,7 +52,7 @@ static void kvmppc_rmap_reset(struct kvm *kvm);
>
>  long kvmppc_alloc_hpt(struct kvm *kvm, u32 *htab_orderp)
>  {
> -	unsigned long hpt;
> +	unsigned long hpt = 0;
>  	struct revmap_entry *rev;
>  	struct page *page = NULL;
>  	long order = KVM_DEFAULT_HPT_ORDER;
> @@ -64,22 +64,11 @@ long kvmppc_alloc_hpt(struct kvm *kvm, u32 *htab_orderp)
>  	}
>
>  	kvm->arch.hpt_cma_alloc = 0;
> -	/*
> -	 * try first to allocate it from the kernel page allocator.
> -	 * We keep the CMA reserved for failed allocation.
> -	 */
> -	hpt = __get_free_pages(GFP_KERNEL | __GFP_ZERO | __GFP_REPEAT |
> -			       __GFP_NOWARN, order - PAGE_SHIFT);
> -
> -	/* Next try to allocate from the preallocated pool */
> -	if (!hpt) {
> -		VM_BUG_ON(order < KVM_CMA_CHUNK_ORDER);
> -		page = kvm_alloc_hpt(1 << (order - PAGE_SHIFT));
> -		if (page) {
> -			hpt = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
> -			kvm->arch.hpt_cma_alloc = 1;
> -		} else
> -			--order;
> +	VM_BUG_ON(order < KVM_CMA_CHUNK_ORDER);
> +	page = kvm_alloc_hpt(1 << (order - PAGE_SHIFT));
> +	if (page) {
> +		hpt = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
> +		kvm->arch.hpt_cma_alloc = 1;
>  	}
>
>  	/* Lastly try successively smaller sizes from the page allocator */