From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 21 Apr 2020 11:49:35 +0300
From: Mike Rapoport
To: Baoquan He
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Brian Cain,
	Catalin Marinas, "David S. Miller", Geert Uytterhoeven, Greentime Hu,
	Greg Ungerer, Guan Xuetao, Guo Ren, Heiko Carstens, Helge Deller,
	Hoan Tran, "James E.J. Bottomley",
Bottomley" , Jonathan Corbet , Ley Foon Tan , Mark Salter , Matt Turner , Max Filippov , Michael Ellerman , Michal Hocko , Michal Simek , Nick Hu , Paul Walmsley , Richard Weinberger , Rich Felker , Russell King , Stafford Horne , Thomas Bogendoerfer , Tony Luck , Vineet Gupta , x86@kernel.org, Yoshinori Sato , linux-alpha@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-c6x-dev@linux-c6x.org, linux-csky@vger.kernel.org, linux-doc@vger.kernel.org, linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-um@lists.infradead.org, linux-xtensa@linux-xtensa.org, openrisc@lists.librecores.org, sparclinux@vger.kernel.org, uclinux-h8-devel@lists.sourceforge.jp, Mike Rapoport Subject: Re: [PATCH 02/21] mm: make early_pfn_to_nid() and related defintions close to each other Message-ID: <20200421084935.GB14260@kernel.org> References: <20200412194859.12663-1-rppt@kernel.org> <20200412194859.12663-3-rppt@kernel.org> <20200421022435.GP4247@MiWiFi-R3L-srv> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20200421022435.GP4247@MiWiFi-R3L-srv> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Tue, Apr 21, 2020 at 10:24:35AM +0800, Baoquan He wrote: > On 04/12/20 at 10:48pm, Mike Rapoport wrote: > > From: Mike Rapoport > > > > The early_pfn_to_nid() and it's helper __early_pfn_to_nid() are spread > > around include/linux/mm.h, include/linux/mmzone.h and mm/page_alloc.c. > > > > Drop unused stub for __early_pfn_to_nid() and move its actual generic > > implementation close to its users. > > > > Signed-off-by: Mike Rapoport > > --- > > include/linux/mm.h | 4 ++-- > > include/linux/mmzone.h | 9 -------- > > mm/page_alloc.c | 51 +++++++++++++++++++++--------------------- > > 3 files changed, 27 insertions(+), 37 deletions(-) > > > > diff --git a/include/linux/mm.h b/include/linux/mm.h > > index 5a323422d783..a404026d14d4 100644 > > --- a/include/linux/mm.h > > +++ b/include/linux/mm.h > > @@ -2388,9 +2388,9 @@ extern void sparse_memory_present_with_active_regions(int nid); > > > > #if !defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP) && \ > > !defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) > > -static inline int __early_pfn_to_nid(unsigned long pfn, > > - struct mminit_pfnnid_cache *state) > > +static inline int early_pfn_to_nid(unsigned long pfn) > > { > > + BUILD_BUG_ON(IS_ENABLED(CONFIG_NUMA)); > > return 0; > > } > > It's better to make a separate patch to drop __early_pfn_to_nid() here. Not sure it's really worth it. This patch anyway only moves the code around without any actual changes. 
> >  #else
> > 
> > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > index 1b9de7d220fb..7b5b6eba402f 100644
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -1078,15 +1078,6 @@ static inline struct zoneref *first_zones_zonelist(struct zonelist *zonelist,
> >  #include
> >  #endif
> >  
> > -#if !defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) && \
> > -	!defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP)
> > -static inline unsigned long early_pfn_to_nid(unsigned long pfn)
> > -{
> > -	BUILD_BUG_ON(IS_ENABLED(CONFIG_NUMA));
> > -	return 0;
> > -}
> > -#endif
> > -
> >  #ifdef CONFIG_FLATMEM
> >  #define pfn_to_nid(pfn)		(0)
> >  #endif
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 0d012eda1694..1ac775bfc9cf 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1504,6 +1504,31 @@ void __free_pages_core(struct page *page, unsigned int order)
> >  #if defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) || \
> >  	defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP)
> 
> This is the upper layer of ifdeffery scope.
> 
> >  
> >  static struct mminit_pfnnid_cache early_pfnnid_cache __meminitdata;
> >  
> > +#ifndef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
> 
> Moving __early_pfn_to_nid() here makes the upper layer of ifdeffery
> scope a little weird. But seems no better way to optimize it.

It gets a bit better after patch 3 :)

> Otherwise, this patch looks good to me.
> 
> Reviewed-by: Baoquan He

Thanks!

> > +
> > +/*
> > + * Required by SPARSEMEM. Given a PFN, return what node the PFN is on.
> > + */
> > +int __meminit __early_pfn_to_nid(unsigned long pfn,
> > +					struct mminit_pfnnid_cache *state)
> > +{
> > +	unsigned long start_pfn, end_pfn;
> > +	int nid;
> > +
> > +	if (state->last_start <= pfn && pfn < state->last_end)
> > +		return state->last_nid;
> > +
> > +	nid = memblock_search_pfn_nid(pfn, &start_pfn, &end_pfn);
> > +	if (nid != NUMA_NO_NODE) {
> > +		state->last_start = start_pfn;
> > +		state->last_end = end_pfn;
> > +		state->last_nid = nid;
> > +	}
> > +
> > +	return nid;
> > +}
> > +#endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */
> > +
> >  int __meminit early_pfn_to_nid(unsigned long pfn)
> >  {
> >  	static DEFINE_SPINLOCK(early_pfn_lock);
> > @@ -6298,32 +6323,6 @@ void __meminit init_currently_empty_zone(struct zone *zone,
> >  	zone->initialized = 1;
> >  }
> >  
> > -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
> > -#ifndef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
> > -
> > -/*
> > - * Required by SPARSEMEM. Given a PFN, return what node the PFN is on.
> > - */
> > -int __meminit __early_pfn_to_nid(unsigned long pfn,
> > -			struct mminit_pfnnid_cache *state)
> > -{
> > -	unsigned long start_pfn, end_pfn;
> > -	int nid;
> > -
> > -	if (state->last_start <= pfn && pfn < state->last_end)
> > -		return state->last_nid;
> > -
> > -	nid = memblock_search_pfn_nid(pfn, &start_pfn, &end_pfn);
> > -	if (nid != NUMA_NO_NODE) {
> > -		state->last_start = start_pfn;
> > -		state->last_end = end_pfn;
> > -		state->last_nid = nid;
> > -	}
> > -
> > -	return nid;
> > -}
> > -#endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */
> > -
> >  /**
> >   * free_bootmem_with_active_regions - Call memblock_free_early_nid for each active range
> >   * @nid: The node to free memory on. If MAX_NUMNODES, all nodes are freed.
> > -- 
> > 2.25.1
> > 
> 

-- 
Sincerely yours,
Mike.
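For readers following the thread, the caching scheme that the moved
__early_pfn_to_nid() implements can be exercised on its own: the cache
remembers the last [start_pfn, end_pfn) range and its node, so lookups of
nearby PFNs skip the memblock search entirely. In the sketch below the
region table and the search_pfn_nid() helper are made-up stand-ins for
memblock data; only the cache logic mirrors the quoted kernel code.

/*
 * Standalone sketch of the node-id cache kept in struct mminit_pfnnid_cache.
 * The memblock search is replaced by a lookup over a hardcoded region table;
 * names and values are illustrative, not the kernel's.
 */
#include <stdio.h>

#define NUMA_NO_NODE (-1)

struct pfnnid_cache {		/* stand-in for struct mminit_pfnnid_cache */
	unsigned long last_start;
	unsigned long last_end;
	int last_nid;
};

/* Toy "memblock": [start_pfn, end_pfn) ranges and the node owning them. */
static const struct { unsigned long start, end; int nid; } regions[] = {
	{ 0x00000, 0x40000, 0 },
	{ 0x40000, 0x80000, 1 },
};

/* Stand-in for memblock_search_pfn_nid(): linear search over the regions. */
static int search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
			  unsigned long *end_pfn)
{
	for (unsigned int i = 0; i < sizeof(regions) / sizeof(regions[0]); i++) {
		if (regions[i].start <= pfn && pfn < regions[i].end) {
			*start_pfn = regions[i].start;
			*end_pfn = regions[i].end;
			return regions[i].nid;
		}
	}
	return NUMA_NO_NODE;
}

/* Same shape as __early_pfn_to_nid(): hit the cache first, search on miss. */
static int early_pfn_to_nid_cached(unsigned long pfn, struct pfnnid_cache *state)
{
	unsigned long start_pfn, end_pfn;
	int nid;

	if (state->last_start <= pfn && pfn < state->last_end)
		return state->last_nid;	/* cache hit: no search needed */

	nid = search_pfn_nid(pfn, &start_pfn, &end_pfn);
	if (nid != NUMA_NO_NODE) {
		state->last_start = start_pfn;
		state->last_end = end_pfn;
		state->last_nid = nid;
	}
	return nid;
}

int main(void)
{
	/* Start with an empty range so the first lookup misses the cache. */
	struct pfnnid_cache cache = { .last_start = 1, .last_end = 0 };
	unsigned long pfns[] = { 0x1000, 0x1001, 0x50000 };

	for (unsigned int i = 0; i < 3; i++)
		printf("pfn %#lx -> node %d\n", pfns[i],
		       early_pfn_to_nid_cached(pfns[i], &cache));
	return 0;
}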