From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mike Rapoport
To: Andrew Morton
Cc: Alexander Potapenko, Alexander Viro, Andreas Larsson, Ard Biesheuvel,
	Borislav Petkov, Brendan Jackman, "Christophe Leroy (CS GROUP)",
	Catalin Marinas, Christian Brauner, "David S. Miller", Dave Hansen,
	David Hildenbrand, Dmitry Vyukov, Ilias Apalodimas, Ingo Molnar,
	Jan Kara, Johannes Weiner, "Liam R. Howlett", Lorenzo Stoakes,
	Madhavan Srinivasan, Marco Elver, Marek Szyprowski, Masami Hiramatsu,
	Michael Ellerman, Michal Hocko, Mike Rapoport, Nicholas Piggin,
	"H. Peter Anvin", Rob Herring, Robin Murphy, Saravana Kannan,
	Suren Baghdasaryan, Thomas Gleixner, Vlastimil Babka, Will Deacon,
	Zi Yan, devicetree@vger.kernel.org, iommu@lists.linux.dev,
	kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
	linux-efi@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v2 4/9] mm: move free_reserved_area() to mm/memblock.c
Date: Mon, 23 Mar 2026 09:48:31 +0200
Message-ID: <20260323074836.3653702-5-rppt@kernel.org>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260323074836.3653702-1-rppt@kernel.org>
References: <20260323074836.3653702-1-rppt@kernel.org>
X-Mailing-List: linux-trace-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: "Mike Rapoport (Microsoft)"

free_reserved_area() is related to memblock as it frees reserved memory
back to the buddy allocator, similar to what memblock_free_late() does.

Move free_reserved_area() to mm/memblock.c to prepare for further
consolidation of the functions that free reserved memory.

No functional changes.
Signed-off-by: Mike Rapoport (Microsoft)
---
 mm/memblock.c                     | 37 ++++++++++++++++++++++++++++++-
 mm/page_alloc.c                   | 36 ------------------------------
 tools/include/linux/mm.h          |  1 +
 tools/testing/memblock/internal.h | 34 +++++++++++++++++++++++++---
 4 files changed, 68 insertions(+), 40 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index d4a02f1750e9..c0896efbee97 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -893,6 +893,42 @@ int __init_memblock memblock_remove(phys_addr_t base, phys_addr_t size)
 	return memblock_remove_range(&memblock.memory, base, size);
 }
 
+unsigned long free_reserved_area(void *start, void *end, int poison, const char *s)
+{
+	void *pos;
+	unsigned long pages = 0;
+
+	start = (void *)PAGE_ALIGN((unsigned long)start);
+	end = (void *)((unsigned long)end & PAGE_MASK);
+	for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
+		struct page *page = virt_to_page(pos);
+		void *direct_map_addr;
+
+		/*
+		 * 'direct_map_addr' might be different from 'pos'
+		 * because some architectures' virt_to_page()
+		 * work with aliases.  Getting the direct map
+		 * address ensures that we get a _writeable_
+		 * alias for the memset().
+		 */
+		direct_map_addr = page_address(page);
+		/*
+		 * Perform a kasan-unchecked memset() since this memory
+		 * has not been initialized.
+		 */
+		direct_map_addr = kasan_reset_tag(direct_map_addr);
+		if ((unsigned int)poison <= 0xFF)
+			memset(direct_map_addr, poison, PAGE_SIZE);
+
+		free_reserved_page(page);
+	}
+
+	if (pages && s)
+		pr_info("Freeing %s memory: %ldK\n", s, K(pages));
+
+	return pages;
+}
+
 /**
  * memblock_free - free boot memory allocation
  * @ptr: starting address of the boot memory allocation
@@ -1776,7 +1812,6 @@ void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
 			totalram_pages_inc();
 		}
 	}
-
 /*
  * Remaining API functions
  */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2d4b6f1a554e..df3d61253001 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6234,42 +6234,6 @@ void adjust_managed_page_count(struct page *page, long count)
 }
 EXPORT_SYMBOL(adjust_managed_page_count);
 
-unsigned long free_reserved_area(void *start, void *end, int poison, const char *s)
-{
-	void *pos;
-	unsigned long pages = 0;
-
-	start = (void *)PAGE_ALIGN((unsigned long)start);
-	end = (void *)((unsigned long)end & PAGE_MASK);
-	for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
-		struct page *page = virt_to_page(pos);
-		void *direct_map_addr;
-
-		/*
-		 * 'direct_map_addr' might be different from 'pos'
-		 * because some architectures' virt_to_page()
-		 * work with aliases.  Getting the direct map
-		 * address ensures that we get a _writeable_
-		 * alias for the memset().
-		 */
-		direct_map_addr = page_address(page);
-		/*
-		 * Perform a kasan-unchecked memset() since this memory
-		 * has not been initialized.
-		 */
-		direct_map_addr = kasan_reset_tag(direct_map_addr);
-		if ((unsigned int)poison <= 0xFF)
-			memset(direct_map_addr, poison, PAGE_SIZE);
-
-		free_reserved_page(page);
-	}
-
-	if (pages && s)
-		pr_info("Freeing %s memory: %ldK\n", s, K(pages));
-
-	return pages;
-}
-
 void free_reserved_page(struct page *page)
 {
 	clear_page_tag_ref(page);
diff --git a/tools/include/linux/mm.h b/tools/include/linux/mm.h
index 028f3faf46e7..4407d8396108 100644
--- a/tools/include/linux/mm.h
+++ b/tools/include/linux/mm.h
@@ -17,6 +17,7 @@
 
 #define __va(x) ((void *)((unsigned long)(x)))
 #define __pa(x) ((unsigned long)(x))
+#define __pa_symbol(x) ((unsigned long)(x))
 
 #define pfn_to_page(pfn) ((void *)((pfn) * PAGE_SIZE))
 
diff --git a/tools/testing/memblock/internal.h b/tools/testing/memblock/internal.h
index 009b97bbdd22..b72be2968104 100644
--- a/tools/testing/memblock/internal.h
+++ b/tools/testing/memblock/internal.h
@@ -11,9 +11,22 @@ static int memblock_debug = 1;
 
 #define pr_warn_ratelimited(fmt, ...)	printf(fmt, ##__VA_ARGS__)
 
+#define K(x) ((x) << (PAGE_SHIFT-10))
+
 bool mirrored_kernelcore = false;
 
 struct page {};
 
+static inline void *page_address(struct page *page)
+{
+	BUG();
+	return page;
+}
+
+static inline struct page *virt_to_page(void *virt)
+{
+	BUG();
+	return virt;
+}
 
 void memblock_free_pages(unsigned long pfn, unsigned int order)
 {
@@ -23,10 +36,25 @@ static inline void accept_memory(phys_addr_t start, unsigned long size)
 {
 }
 
-static inline unsigned long free_reserved_area(void *start, void *end,
-					       int poison, const char *s)
+unsigned long free_reserved_area(void *start, void *end, int poison, const char *s);
+void free_reserved_page(struct page *page);
+
+static inline bool deferred_pages_enabled(void)
+{
+	return false;
+}
+
+#define for_each_valid_pfn(pfn, start_pfn, end_pfn) \
+	for ((pfn) = (start_pfn); (pfn) < (end_pfn); (pfn)++)
+
+static inline void *kasan_reset_tag(const void *addr)
+{
+	return (void *)addr;
+}
+
+static inline bool __is_kernel(unsigned long addr)
 {
-	return 0;
+	return false;
 }
 
 #endif
-- 
2.53.0