From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Aneesh Kumar K.V"
Subject: Re: [PATCH v2] PC, KVM, CMA: Fix regression caused by wrong get_order() use
Date: Thu, 14 Aug 2014 10:43:52 +0530
Message-ID: <87egwj64bz.fsf@linux.vnet.ibm.com>
References: <1407992587-9164-1-git-send-email-aik@ozlabs.ru>
Mime-Version: 1.0
Content-Type: text/plain
Cc: Alexey Kardashevskiy, Benjamin Herrenschmidt, Paul Mackerras,
	Alexander Graf, Gleb Natapov, Paolo Bonzini, Michael Ellerman,
	kvm-ppc@vger.kernel.org, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, Joonsoo Kim
To: Alexey Kardashevskiy, linuxppc-dev@lists.ozlabs.org
Return-path:
In-Reply-To: <1407992587-9164-1-git-send-email-aik@ozlabs.ru>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

Alexey Kardashevskiy writes:

> fc95ca7284bc54953165cba76c3228bd2cdb9591 claims that there is no
> functional change, but this is not true: it calls get_order() (which
> takes bytes) where it should have called ilog2(), and the kernel stops
> on VM_BUG_ON().
>
> This replaces get_order() with order_base_2() (the round-up version of
> ilog2()).
>
> Suggested-by: Paul Mackerras
> Cc: Alexander Graf
> Cc: Aneesh Kumar K.V
> Cc: Joonsoo Kim
> Cc: Benjamin Herrenschmidt
> Signed-off-by: Alexey Kardashevskiy

Reviewed-by: Aneesh Kumar K.V

> ---
>
> Changes:
> v2:
> * s/ilog2/order_base_2/
> * removed cc: as I got wrong impression that v3.16 is
> broken
>
> ---
>  arch/powerpc/kvm/book3s_hv_builtin.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
> index 329d7fd..b9615ba 100644
> --- a/arch/powerpc/kvm/book3s_hv_builtin.c
> +++ b/arch/powerpc/kvm/book3s_hv_builtin.c
> @@ -101,7 +101,7 @@ struct kvm_rma_info *kvm_alloc_rma()
>  	ri = kmalloc(sizeof(struct kvm_rma_info), GFP_KERNEL);
>  	if (!ri)
>  		return NULL;
> -	page = cma_alloc(kvm_cma, kvm_rma_pages, get_order(kvm_rma_pages));
> +	page = cma_alloc(kvm_cma, kvm_rma_pages, order_base_2(kvm_rma_pages));
>  	if (!page)
>  		goto err_out;
>  	atomic_set(&ri->use_count, 1);
> @@ -135,12 +135,12 @@ struct page *kvm_alloc_hpt(unsigned long nr_pages)
>  {
>  	unsigned long align_pages = HPT_ALIGN_PAGES;
>
> -	VM_BUG_ON(get_order(nr_pages) < KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
> +	VM_BUG_ON(order_base_2(nr_pages) < KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
>
>  	/* Old CPUs require HPT aligned on a multiple of its size */
>  	if (!cpu_has_feature(CPU_FTR_ARCH_206))
>  		align_pages = nr_pages;
> -	return cma_alloc(kvm_cma, nr_pages, get_order(align_pages));
> +	return cma_alloc(kvm_cma, nr_pages, order_base_2(align_pages));
>  }
>  EXPORT_SYMBOL_GPL(kvm_alloc_hpt);
>
> --
> 2.0.0