From: Jia He
Subject: [PATCH v2 5/5] mm: page_alloc: reduce unnecessary binary search in early_pfn_valid()
Date: Sat, 24 Mar 2018 05:24:42 -0700
Message-Id: <1521894282-6454-6-git-send-email-hejianet@gmail.com>
In-Reply-To: <1521894282-6454-1-git-send-email-hejianet@gmail.com>
References: <1521894282-6454-1-git-send-email-hejianet@gmail.com>
To: Andrew Morton, Michal Hocko, Catalin Marinas, Mel Gorman, Will Deacon, Mark Rutland, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin"
Cc: Pavel Tatashin, Daniel Jordan, AKASHI Takahiro, Gioh Kim, Steven Sistare, Daniel Vacek, Eugeniu Rosca, Vlastimil Babka, linux-kernel@vger.kernel.org, linux-mm@kvack.org, James Morse, Ard Biesheuvel, Steve Capper, x86@kernel.org, Greg Kroah-Hartman, Kate Stewart, Philippe Ombredanne, Johannes Weiner, Kemi Wang, Petr Tesarik, YASUAKI ISHIMATSU, Andrey Ryabinin, Nikolay Borisov, Jia He

Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
where possible") optimized the loop in memmap_init_zone(), but there is
still room for improvement. In early_pfn_valid(), if pfn and pfn+1 fall
in the same memblock region, we can record the index of the region
returned by the last lookup and check whether the incremented pfn is
still covered by it before repeating the binary search. Currently this
only improves performance on arm64 and has no impact on other
architectures.
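The diff below only rewires the early_pfn_valid() callers and fallback
macros; the cached-index helper itself, pfn_valid_region(), is
introduced earlier in this series. As a rough sketch of the caching
scheme (illustration only, not the committed code; the exact body and
the name memblock_search_pfn_regions() for the binary-search helper
this series adds are assumptions here):

/*
 * Illustrative sketch only -- not this patch's code.
 * *last_idx caches the memblock region index from the previous
 * lookup, so consecutive pfns in the same region take the fast
 * path and skip the binary search over memblock.memory.regions[].
 */
static int pfn_valid_region(unsigned long pfn, int *last_idx)
{
	unsigned long start_pfn, end_pfn;
	struct memblock_region *regions = memblock.memory.regions;

	/* Fast path: pfn falls in the same region as the last hit. */
	if (*last_idx != -1) {
		start_pfn = PFN_DOWN(regions[*last_idx].base);
		end_pfn = PFN_DOWN(regions[*last_idx].base +
				   regions[*last_idx].size);
		if (pfn >= start_pfn && pfn < end_pfn)
			return !memblock_is_nomap(&regions[*last_idx]);
	}

	/* Slow path: binary-search the region table, cache the index. */
	*last_idx = memblock_search_pfn_regions(pfn);
	if (*last_idx == -1)
		return 0;

	return !memblock_is_nomap(&regions[*last_idx]);
}

In the memmap_init_zone() hunk below, idx is the caller-held cache slot
passed as &idx, presumably initialized to -1 before the pfn loop so the
first call falls through to the search.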
Signed-off-by: Jia He
---
 arch/x86/include/asm/mmzone_32.h |  2 +-
 include/linux/mmzone.h           | 12 +++++++++---
 mm/page_alloc.c                  |  2 +-
 3 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/mmzone_32.h b/arch/x86/include/asm/mmzone_32.h
index 73d8dd1..329d3ba 100644
--- a/arch/x86/include/asm/mmzone_32.h
+++ b/arch/x86/include/asm/mmzone_32.h
@@ -49,7 +49,7 @@ static inline int pfn_valid(int pfn)
 	return 0;
 }
 
-#define early_pfn_valid(pfn)	pfn_valid((pfn))
+#define early_pfn_valid(pfn, last_region_idx)	pfn_valid((pfn))
 
 #endif /* CONFIG_DISCONTIGMEM */
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index d797716..3a686af 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1267,9 +1267,15 @@ static inline int pfn_present(unsigned long pfn)
 })
 #else
 #define pfn_to_nid(pfn)		(0)
-#endif
+#endif /*CONFIG_NUMA*/
+
+#ifdef CONFIG_HAVE_ARCH_PFN_VALID
+#define early_pfn_valid(pfn, last_region_idx) \
+	pfn_valid_region(pfn, last_region_idx)
+#else
+#define early_pfn_valid(pfn, last_region_idx)	pfn_valid(pfn)
+#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
 
-#define early_pfn_valid(pfn)	pfn_valid(pfn)
 void sparse_init(void);
 #else
 #define sparse_init()	do {} while (0)
@@ -1288,7 +1294,7 @@ struct mminit_pfnnid_cache {
 };
 
 #ifndef early_pfn_valid
-#define early_pfn_valid(pfn)	(1)
+#define early_pfn_valid(pfn, last_region_idx)	(1)
 #endif
 
 void memory_present(int nid, unsigned long start, unsigned long end);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0bb0274..68aef71 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5484,8 +5484,8 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		if (context != MEMMAP_EARLY)
 			goto not_early;
 
-		if (!early_pfn_valid(pfn)) {
 #if (defined CONFIG_HAVE_MEMBLOCK) && (defined CONFIG_HAVE_ARCH_PFN_VALID)
+		if (!early_pfn_valid(pfn, &idx)) {
 			/*
 			 * Skip to the pfn preceding the next valid one (or
 			 * end_pfn), such that we hit a valid pfn (or end_pfn)
-- 
2.7.4