From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mike Rapoport
To: Andrew Morton
Cc: Alexey Dobriyan, Catalin Marinas, Geert Uytterhoeven, Greg Ungerer,
	John Paul Adrian Glaubitz, Jonathan Corbet, Matt Turner, Meelis Roos,
	Michael Schmitz, Mike Rapoport, Russell King, Tony Luck,
	Vineet Gupta, Will Deacon, linux-alpha@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-ia64@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mm@kvack.org, linux-snps-arc@lists.infradead.org
Subject: [PATCH v2 09/13] arm, arm64: move free_unused_memmap() to generic mm
Date: Sun, 1 Nov 2020 19:04:50 +0200
Message-Id: <20201101170454.9567-10-rppt@kernel.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201101170454.9567-1-rppt@kernel.org>
References: <20201101170454.9567-1-rppt@kernel.org>
MIME-Version: 1.0

From: Mike Rapoport

ARM and ARM64 free unused parts of the memory map just before the
initialization of the page allocator. To allow holes in the memory map,
both architectures overload pfn_valid() and define HAVE_ARCH_PFN_VALID.

Allowing holes in the memory map for FLATMEM may be useful for small
machines, such as ARC and m68k, and will enable those architectures to
cease using DISCONTIGMEM while still supporting more than one memory
bank.

Move the functions that free the unused memory map to generic mm and
enable them when HAVE_ARCH_PFN_VALID=y.
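To illustrate what the moved code computes: between two banks, the part
of the memmap that can be returned to memblock is whatever is left after
rounding the previous bank's end up, and the next bank's start down, to
the granularity the VM subsystem insists on. Below is a minimal
standalone model of the FLATMEM path; the bank layout and the
MAX_ORDER_NR_PAGES value are assumptions made up for the example, not
values taken from any real machine.

/*
 * Standalone model of the FLATMEM hole computation; not kernel code.
 * MAX_ORDER_NR_PAGES and the bank layout are illustrative assumptions.
 */
#include <stdio.h>

#define MAX_ORDER_NR_PAGES	1024UL	/* assumed: 4MiB of 4KiB pages */

static unsigned long round_down_pfn(unsigned long pfn, unsigned long gran)
{
	return pfn & ~(gran - 1);
}

static unsigned long round_up_pfn(unsigned long pfn, unsigned long gran)
{
	return (pfn + gran - 1) & ~(gran - 1);
}

int main(void)
{
	/* two banks sorted by address, with a hole between them */
	unsigned long bank[2][2] = {
		{ 0x00000, 0x20000 },	/* pfn range of bank 0 */
		{ 0x40000, 0x60000 },	/* pfn range of bank 1 */
	};
	unsigned long prev_end = 0;
	int i;

	for (i = 0; i < 2; i++) {
		/* memmap must stay valid down to MAX_ORDER granularity */
		unsigned long start = round_down_pfn(bank[i][0],
						     MAX_ORDER_NR_PAGES);

		if (prev_end && prev_end < start)
			printf("free memmap for pfns [%#lx-%#lx)\n",
			       prev_end, start);

		/* ... and up to MAX_ORDER granularity past the bank end */
		prev_end = round_up_pfn(bank[i][1], MAX_ORDER_NR_PAGES);
	}
	return 0;
}

With these assumed numbers the model frees the memmap for pfns
[0x20000, 0x40000); with SPARSEMEM the same walk runs at
PAGES_PER_SECTION granularity instead, as in the code below.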
Signed-off-by: Mike Rapoport
---
 arch/Kconfig         |  3 ++
 arch/arm/Kconfig     |  4 +--
 arch/arm/mm/init.c   | 78 ------------------------------------------
 arch/arm64/Kconfig   |  4 +--
 arch/arm64/mm/init.c | 68 -------------------------------------
 mm/memblock.c        | 80 ++++++++++++++++++++++++++++++++++++++++++++
 6 files changed, 85 insertions(+), 152 deletions(-)
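A note on the freeing helper itself: free_memmap() returns only whole
pages of the memmap array, rounding the start of the unused range
upwards and its end downwards to a page boundary, so a page that still
holds live memmap entries for a neighbouring bank is never freed. A
standalone sketch of that calculation follows, with pfn_to_page()
modelled as plain array indexing, and sizeof(struct page) and PAGE_SIZE
as assumed example values.

/*
 * Standalone model of free_memmap(); not kernel code. pfn_to_page()
 * is modelled as plain array indexing, and SIZEOF_STRUCT_PAGE and
 * PAGE_SIZE are assumed example values.
 */
#include <stdio.h>

#define PAGE_SIZE		4096UL
#define PAGE_MASK		(~(PAGE_SIZE - 1))
#define PAGE_ALIGN(addr)	(((addr) + PAGE_SIZE - 1) & PAGE_MASK)
#define SIZEOF_STRUCT_PAGE	64UL	/* assumed */

int main(void)
{
	/* memmap entries for pfns [0x20000, 0x40000) are unused */
	unsigned long start_pfn = 0x20000, end_pfn = 0x40000;

	/* byte offsets of those entries within the memmap array */
	unsigned long pg = PAGE_ALIGN(start_pfn * SIZEOF_STRUCT_PAGE);
	unsigned long pgend = (end_pfn * SIZEOF_STRUCT_PAGE) & PAGE_MASK;

	/* only whole pages in between can go back to memblock */
	if (pg < pgend)
		printf("memblock_free(memmap + %#lx, %#lx)\n",
		       pg, pgend - pg);
	return 0;
}

Rounding in opposite directions is what makes the helper safe to call
with arbitrary, unaligned pfn ranges.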
diff --git a/arch/Kconfig b/arch/Kconfig
index 56b6ccc0e32d..d715da18a8a9 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1028,6 +1028,9 @@ config HAVE_STATIC_CALL_INLINE
 	bool
 	depends on HAVE_STATIC_CALL
 
+config HAVE_ARCH_PFN_VALID
+	bool
+
 source "kernel/gcov/Kconfig"
 
 source "scripts/gcc-plugins/Kconfig"
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 83adc46c1e67..495d42c5ecec 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -68,6 +68,7 @@ config ARM
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
+	select HAVE_ARCH_PFN_VALID
 	select HAVE_ARCH_SECCOMP
 	select HAVE_ARCH_SECCOMP_FILTER if AEABI && !OABI_COMPAT
 	select HAVE_ARCH_THREAD_STRUCT_WHITELIST
@@ -1488,9 +1489,6 @@ config ARCH_SPARSEMEM_ENABLE
 	bool
 	select SPARSEMEM_STATIC if SPARSEMEM
 
-config HAVE_ARCH_PFN_VALID
-	def_bool y
-
 config HIGHMEM
 	bool "High Memory Support"
 	depends on MMU
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index d57112a276f5..a04ac5ea7641 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -267,83 +267,6 @@ static inline void poison_init_mem(void *s, size_t count)
 		*p++ = 0xe7fddef0;
 }
 
-static inline void __init
-free_memmap(unsigned long start_pfn, unsigned long end_pfn)
-{
-	struct page *start_pg, *end_pg;
-	phys_addr_t pg, pgend;
-
-	/*
-	 * Convert start_pfn/end_pfn to a struct page pointer.
-	 */
-	start_pg = pfn_to_page(start_pfn - 1) + 1;
-	end_pg = pfn_to_page(end_pfn - 1) + 1;
-
-	/*
-	 * Convert to physical addresses, and
-	 * round start upwards and end downwards.
-	 */
-	pg = PAGE_ALIGN(__pa(start_pg));
-	pgend = __pa(end_pg) & PAGE_MASK;
-
-	/*
-	 * If there are free pages between these,
-	 * free the section of the memmap array.
-	 */
-	if (pg < pgend)
-		memblock_free_early(pg, pgend - pg);
-}
-
-/*
- * The mem_map array can get very big. Free the unused area of the memory map.
- */
-static void __init free_unused_memmap(void)
-{
-	unsigned long start, end, prev_end = 0;
-	int i;
-
-	/*
-	 * This relies on each bank being in address order.
-	 * The banks are sorted previously in bootmem_init().
-	 */
-	for_each_mem_pfn_range(i, MAX_NUMNODES, &start, &end, NULL) {
-#ifdef CONFIG_SPARSEMEM
-		/*
-		 * Take care not to free memmap entries that don't exist
-		 * due to SPARSEMEM sections which aren't present.
-		 */
-		start = min(start,
-				 ALIGN(prev_end, PAGES_PER_SECTION));
-#else
-		/*
-		 * Align down here since the VM subsystem insists that the
-		 * memmap entries are valid from the bank start aligned to
-		 * MAX_ORDER_NR_PAGES.
-		 */
-		start = round_down(start, MAX_ORDER_NR_PAGES);
-#endif
-		/*
-		 * If we had a previous bank, and there is a space
-		 * between the current bank and the previous, free it.
-		 */
-		if (prev_end && prev_end < start)
-			free_memmap(prev_end, start);
-
-		/*
-		 * Align up here since the VM subsystem insists that the
-		 * memmap entries are valid from the bank end aligned to
-		 * MAX_ORDER_NR_PAGES.
-		 */
-		prev_end = ALIGN(end, MAX_ORDER_NR_PAGES);
-	}
-
-#ifdef CONFIG_SPARSEMEM
-	if (!IS_ALIGNED(prev_end, PAGES_PER_SECTION))
-		free_memmap(prev_end,
-			    ALIGN(prev_end, PAGES_PER_SECTION));
-#endif
-}
-
 static void __init free_highpages(void)
 {
 #ifdef CONFIG_HIGHMEM
@@ -385,7 +308,6 @@ void __init mem_init(void)
 	set_max_mapnr(pfn_to_page(max_pfn) - mem_map);
 
 	/* this will put all unused low memory onto the freelists */
-	free_unused_memmap();
 	memblock_free_all();
 
 #ifdef CONFIG_SA1111
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f858c352f72a..b7e6a0c09d12 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -138,6 +138,7 @@ config ARM64
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
+	select HAVE_ARCH_PFN_VALID
 	select HAVE_ARCH_PREL32_RELOCATIONS
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_STACKLEAK
@@ -1021,9 +1022,6 @@ config ARCH_SELECT_MEMORY_MODEL
 config ARCH_FLATMEM_ENABLE
 	def_bool !NUMA
 
-config HAVE_ARCH_PFN_VALID
-	def_bool y
-
 config HW_PERF_EVENTS
 	def_bool y
 	depends on ARM_PMU
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 095540667f0f..3d8328277bec 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -430,71 +430,6 @@ void __init bootmem_init(void)
 	memblock_dump_all();
 }
 
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
-{
-	struct page *start_pg, *end_pg;
-	unsigned long pg, pgend;
-
-	/*
-	 * Convert start_pfn/end_pfn to a struct page pointer.
-	 */
-	start_pg = pfn_to_page(start_pfn - 1) + 1;
-	end_pg = pfn_to_page(end_pfn - 1) + 1;
-
-	/*
-	 * Convert to physical addresses, and round start upwards and end
-	 * downwards.
-	 */
-	pg = (unsigned long)PAGE_ALIGN(__pa(start_pg));
-	pgend = (unsigned long)__pa(end_pg) & PAGE_MASK;
-
-	/*
-	 * If there are free pages between these, free the section of the
-	 * memmap array.
-	 */
-	if (pg < pgend)
-		memblock_free(pg, pgend - pg);
-}
-
-/*
- * The mem_map array can get very big. Free the unused area of the memory map.
- */
-static void __init free_unused_memmap(void)
-{
-	unsigned long start, end, prev_end = 0;
-	int i;
-
-	for_each_mem_pfn_range(i, MAX_NUMNODES, &start, &end, NULL) {
-#ifdef CONFIG_SPARSEMEM
-		/*
-		 * Take care not to free memmap entries that don't exist due
-		 * to SPARSEMEM sections which aren't present.
-		 */
-		start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
-#endif
-		/*
-		 * If we had a previous bank, and there is a space between the
-		 * current bank and the previous, free it.
-		 */
-		if (prev_end && prev_end < start)
-			free_memmap(prev_end, start);
-
-		/*
-		 * Align up here since the VM subsystem insists that the
-		 * memmap entries are valid from the bank end aligned to
-		 * MAX_ORDER_NR_PAGES.
-		 */
-		prev_end = ALIGN(end, MAX_ORDER_NR_PAGES);
-	}
-
-#ifdef CONFIG_SPARSEMEM
-	if (!IS_ALIGNED(prev_end, PAGES_PER_SECTION))
-		free_memmap(prev_end, ALIGN(prev_end, PAGES_PER_SECTION));
-#endif
-}
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
-
 /*
  * mem_init() marks the free areas in the mem_map and tells us how much memory
  * is free.  This is done after various parts of the system have claimed their
@@ -510,9 +445,6 @@ void __init mem_init(void)
 
 	set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
 
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-	free_unused_memmap();
-#endif
 	/* this will put all unused low memory onto the freelists */
 	memblock_free_all();
 
diff --git a/mm/memblock.c b/mm/memblock.c
index b68ee86788af..049df4163a97 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1926,6 +1926,85 @@ static int __init early_memblock(char *p)
 }
 early_param("memblock", early_memblock);
 
+static void __init free_memmap(unsigned long start_pfn, unsigned long end_pfn)
+{
+	struct page *start_pg, *end_pg;
+	phys_addr_t pg, pgend;
+
+	/*
+	 * Convert start_pfn/end_pfn to a struct page pointer.
+	 */
+	start_pg = pfn_to_page(start_pfn - 1) + 1;
+	end_pg = pfn_to_page(end_pfn - 1) + 1;
+
+	/*
+	 * Convert to physical addresses, and round start upwards and end
+	 * downwards.
+	 */
+	pg = PAGE_ALIGN(__pa(start_pg));
+	pgend = __pa(end_pg) & PAGE_MASK;
+
+	/*
+	 * If there are free pages between these, free the section of the
+	 * memmap array.
+	 */
+	if (pg < pgend)
+		memblock_free(pg, pgend - pg);
+}
+
+/*
+ * The mem_map array can get very big. Free the unused area of the memory map.
+ */
+static void __init free_unused_memmap(void)
+{
+	unsigned long start, end, prev_end = 0;
+	int i;
+
+	if (!IS_ENABLED(CONFIG_HAVE_ARCH_PFN_VALID) ||
+	    IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
+		return;
+
+	/*
+	 * This relies on each bank being in address order.
+	 * The banks are sorted previously in bootmem_init().
+	 */
+	for_each_mem_pfn_range(i, MAX_NUMNODES, &start, &end, NULL) {
+#ifdef CONFIG_SPARSEMEM
+		/*
+		 * Take care not to free memmap entries that don't exist
+		 * due to SPARSEMEM sections which aren't present.
+		 */
+		start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
+#else
+		/*
+		 * Align down here since the VM subsystem insists that the
+		 * memmap entries are valid from the bank start aligned to
+		 * MAX_ORDER_NR_PAGES.
+		 */
+		start = round_down(start, MAX_ORDER_NR_PAGES);
+#endif
+
+		/*
+		 * If we had a previous bank, and there is a space
+		 * between the current bank and the previous, free it.
+		 */
+		if (prev_end && prev_end < start)
+			free_memmap(prev_end, start);
+
+		/*
+		 * Align up here since the VM subsystem insists that the
+		 * memmap entries are valid from the bank end aligned to
+		 * MAX_ORDER_NR_PAGES.
+		 */
+		prev_end = ALIGN(end, MAX_ORDER_NR_PAGES);
+	}
+
+#ifdef CONFIG_SPARSEMEM
+	if (!IS_ALIGNED(prev_end, PAGES_PER_SECTION))
+		free_memmap(prev_end, ALIGN(prev_end, PAGES_PER_SECTION));
+#endif
+}
+
 static void __init __free_pages_memory(unsigned long start, unsigned long end)
 {
 	int order;
@@ -2012,6 +2091,7 @@ unsigned long __init memblock_free_all(void)
 {
 	unsigned long pages;
 
+	free_unused_memmap();
 	reset_all_zones_managed_pages();
 
 	pages = free_low_memory_core_early();
-- 
2.28.0