From: Daniel Axtens
Subject: Re: [PATCH v2 01/17] KVM: PPC: Book3S HV: simplify kvm_cma_reserve()
In-Reply-To: <20200802163601.8189-2-rppt@kernel.org>
References: <20200802163601.8189-1-rppt@kernel.org>
 <20200802163601.8189-2-rppt@kernel.org>
Date: Tue, 04 Aug 2020 23:53:15 +1000
Message-ID: <87tuxio6us.fsf@dja-thinkpad.axtens.net>
To: Mike Rapoport, Andrew Morton
Cc: Andy Lutomirski, Baoquan He, Benjamin Herrenschmidt, Borislav Petkov,
 Catalin Marinas, Christoph Hellwig, Dave Hansen, Emil Renner Berthing,
 Ingo Molnar, Hari Bathini, Marek Szyprowski, Max Filippov,
 Michael Ellerman, Michal Simek, Mike Rapoport, Palmer Dabbelt,
 Paul Mackerras, Paul Walmsley, Peter Zijlstra, Russell King,
 Stafford Horne, Thomas Gleixner, Will Deacon, Yoshinori Sato,
 clang-built-linux@googlegroups.com, iommu@lists.linux-foundation.org,
 linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-c6x-dev@linux-c6x.org, linux-kernel@vger.kernel.org,
 linux-mips@vger.kernel.org, linux-mm@kvack.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org, linux-xtensa@linux-xtensa.org,
 linuxppc-dev@lists.ozlabs.org, openrisc@lists.librecores.org,
 sparclinux@vger.kernel.org, uclinux-h8-devel@lists.sourceforge.jp,
 x86@kernel.org

Hi Mike,

> The memory size calculation in kvm_cma_reserve() traverses memblock.memory
> rather than simply calling memblock_phys_mem_size(). The comment in that
> function suggests that at some point there should have been a call to
> memblock_analyze() before memblock_phys_mem_size() could be used.
> As of now, there is no memblock_analyze() at all and
> memblock_phys_mem_size() can be used as soon as cold-plug memory is
> registered with memblock.
>
> Replace the loop over memblock.memory with a call to memblock_phys_mem_size().
>
> Signed-off-by: Mike Rapoport
> ---
>  arch/powerpc/kvm/book3s_hv_builtin.c | 11 ++---------
>  1 file changed, 2 insertions(+), 9 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
> index 7cd3cf3d366b..56ab0d28de2a 100644
> --- a/arch/powerpc/kvm/book3s_hv_builtin.c
> +++ b/arch/powerpc/kvm/book3s_hv_builtin.c
> @@ -95,22 +95,15 @@ EXPORT_SYMBOL_GPL(kvm_free_hpt_cma);
>  void __init kvm_cma_reserve(void)
>  {
>  	unsigned long align_size;
> -	struct memblock_region *reg;
> -	phys_addr_t selected_size = 0;
> +	phys_addr_t selected_size;
>  
>  	/*
>  	 * We need CMA reservation only when we are in HV mode
>  	 */
>  	if (!cpu_has_feature(CPU_FTR_HVMODE))
>  		return;
> -	/*
> -	 * We cannot use memblock_phys_mem_size() here, because
> -	 * memblock_analyze() has not been called yet.
> -	 */
> -	for_each_memblock(memory, reg)
> -		selected_size += memblock_region_memory_end_pfn(reg) -
> -				 memblock_region_memory_base_pfn(reg);
> 
> +	selected_size = PHYS_PFN(memblock_phys_mem_size());
>  	selected_size = (selected_size * kvm_cma_resv_ratio / 100) << PAGE_SHIFT;

I think this is correct, but PHYS_PFN() does x >> PAGE_SHIFT and the very
next line does x << PAGE_SHIFT, so I think we could combine those two
lines as:

	selected_size = PAGE_ALIGN(memblock_phys_mem_size() *
				   kvm_cma_resv_ratio / 100);

(I think that might technically change it from aligning down to aligning
up, but I don't think one page matters here; a quick standalone sketch
below the quoted hunk illustrates the difference.)

Kind regards,
Daniel

>  	if (selected_size) {
>  		pr_debug("%s: reserving %ld MiB for global area\n", __func__,
> -- 
> 2.26.2
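
Here is that sketch, for anyone curious. It is a throwaway userspace
program, not kernel code: PAGE_SHIFT, PHYS_PFN() and PAGE_ALIGN() are
re-stated from their usual kernel definitions just so it compiles on its
own, and the 16 GiB memory size, 64K page size and 5% ratio are made-up
inputs, not values taken from the patch:

	/*
	 * Standalone illustration only: compares the existing two-step
	 * "PHYS_PFN() then << PAGE_SHIFT" rounding with the suggested
	 * PAGE_ALIGN() form.  Assumes 64K pages (PAGE_SHIFT == 16).
	 */
	#include <stdio.h>

	#define PAGE_SHIFT	16
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)
	#define PHYS_PFN(x)	((x) >> PAGE_SHIFT)
	#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

	int main(void)
	{
		unsigned long mem_size = 16UL << 30;	/* pretend memblock_phys_mem_size() */
		unsigned long ratio = 5;		/* pretend kvm_cma_resv_ratio */

		/* current code: truncate to whole pages, scale, shift back up */
		unsigned long old = (PHYS_PFN(mem_size) * ratio / 100) << PAGE_SHIFT;

		/* suggested form: scale in bytes, then round up to a page */
		unsigned long new = PAGE_ALIGN(mem_size * ratio / 100);

		printf("old=%lu new=%lu diff=%ld\n", old, new, (long)(new - old));
		return 0;
	}

With those inputs the two results differ by exactly one 64K page (the
old code rounds the reservation down, the PAGE_ALIGN() form rounds it
up), which is the difference mentioned above.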