From: Mike Rapoport
To: Andrew Morton
Cc: Alexander Potapenko, Alexander Viro, Andreas Larsson, Ard Biesheuvel,
	Borislav Petkov, Brendan Jackman, "Christophe Leroy (CS GROUP)",
	Catalin Marinas, Christian Brauner, "David S. Miller", Dave Hansen,
	David Hildenbrand, Dmitry Vyukov, Ilias Apalodimas, Ingo Molnar,
	Jan Kara, Johannes Weiner, "Liam R. Howlett", Lorenzo Stoakes,
	Madhavan Srinivasan, Marco Elver, Marek Szyprowski, Masami Hiramatsu,
	Michael Ellerman, Michal Hocko, Mike Rapoport, Nicholas Piggin,
	"H. Peter Anvin", Rob Herring, Robin Murphy, Saravana Kannan,
	Suren Baghdasaryan, Thomas Gleixner, Vlastimil Babka, Will Deacon,
	Zi Yan, devicetree@vger.kernel.org, iommu@lists.linux.dev,
	kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
	linux-efi@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v2 9/9] memblock: warn when freeing reserved memory before memory map is initialized
Date: Mon, 23 Mar 2026 09:48:36 +0200
Message-ID: <20260323074836.3653702-10-rppt@kernel.org>
In-Reply-To: <20260323074836.3653702-1-rppt@kernel.org>
References: <20260323074836.3653702-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

When CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, freeing reserved memory
before the memory map is fully initialized in deferred_init_memmap()
causes access to uninitialized struct pages and may crash when
dereferencing spurious list pointers, as was recently discovered during a
discussion about memory leaks in the x86 EFI code [1].

The trace below is from an attempt to call free_reserved_page() before
page_alloc_init_late():

[    0.076840] BUG: unable to handle page fault for address: ffffce1a005a0788
[    0.078226] #PF: supervisor read access in kernel mode
[    0.078226] #PF: error_code(0x0000) - not-present page
[    0.078226] PGD 0 P4D 0
[    0.078226] Oops: Oops: 0000 [#1] PREEMPT SMP NOPTI
[    0.078226] CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.12.68-92.123.amzn2023.x86_64 #1
[    0.078226] Hardware name: Amazon EC2 t3a.nano/, BIOS 1.0 10/16/2017
[    0.078226] RIP: 0010:__list_del_entry_valid_or_report+0x32/0xb0
...
[    0.078226]  __free_one_page+0x170/0x520
[    0.078226]  free_pcppages_bulk+0x151/0x1e0
[    0.078226]  free_unref_page_commit+0x263/0x320
[    0.078226]  free_unref_page+0x2c8/0x5b0
[    0.078226]  ? srso_return_thunk+0x5/0x5f
[    0.078226]  free_reserved_page+0x1c/0x30
[    0.078226]  memblock_free_late+0x6c/0xc0

Currently there are not many callers of free_reserved_area() and they all
appear to run at the right time. Still, to protect against problematic
code moves or the addition of new callers, add a warning that reports
that reserved pages cannot be freed until the memory map is fully
initialized.
[1] https://lore.kernel.org/all/e5d5a1105d90ee1e7fe7eafaed2ed03bbad0c46b.camel@kernel.crashing.org/

Signed-off-by: Mike Rapoport (Microsoft)
---
 mm/internal.h   | 10 ++++++++++
 mm/memblock.c   |  5 +++++
 mm/page_alloc.c | 10 ----------
 3 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index cb0af847d7d9..f60c1edb2e02 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1233,7 +1233,17 @@ static inline void vunmap_range_noflush(unsigned long start, unsigned long end)
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 DECLARE_STATIC_KEY_TRUE(deferred_pages);
 
+static inline bool deferred_pages_enabled(void)
+{
+	return static_branch_unlikely(&deferred_pages);
+}
+
 bool __init deferred_grow_zone(struct zone *zone, unsigned int order);
+#else
+static inline bool deferred_pages_enabled(void)
+{
+	return false;
+}
 #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
 
 void init_deferred_page(unsigned long pfn, int nid);
diff --git a/mm/memblock.c b/mm/memblock.c
index dc8811861c11..ab8f35c3bd41 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -899,6 +899,11 @@ static unsigned long __free_reserved_area(phys_addr_t start, phys_addr_t end,
 {
 	unsigned long pages = 0, pfn;
 
+	if (deferred_pages_enabled()) {
+		WARN(1, "Cannot free reserved memory because of deferred initialization of the memory map");
+		return 0;
+	}
+
 	for_each_valid_pfn(pfn, PFN_UP(start), PFN_DOWN(end)) {
 		struct page *page = pfn_to_page(pfn);
 		void *direct_map_addr;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index df3d61253001..9ac47bab2ea7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -331,11 +331,6 @@ int page_group_by_mobility_disabled __read_mostly;
  */
 DEFINE_STATIC_KEY_TRUE(deferred_pages);
 
-static inline bool deferred_pages_enabled(void)
-{
-	return static_branch_unlikely(&deferred_pages);
-}
-
 /*
  * deferred_grow_zone() is __init, but it is called from
  * get_page_from_freelist() during early boot until deferred_pages permanently
@@ -348,11 +343,6 @@ _deferred_grow_zone(struct zone *zone, unsigned int order)
 	return deferred_grow_zone(zone, order);
 }
 #else
-static inline bool deferred_pages_enabled(void)
-{
-	return false;
-}
-
 static inline bool _deferred_grow_zone(struct zone *zone, unsigned int order)
 {
 	return false;
-- 
2.53.0
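
Editor's note (illustration only, not part of the patch): below is a minimal
sketch of the calling convention the new warning enforces. With
CONFIG_DEFERRED_STRUCT_PAGE_INIT enabled, reserved memory may only be handed
back to the page allocator once page_alloc_init_late() has waited for
deferred_init_memmap() to finish, e.g. from a regular or late initcall. The
example_* identifiers are hypothetical.

/*
 * Illustration only -- not part of the patch above.
 *
 * A hypothetical platform reserved a scratch region early in boot
 * (e.g. via memblock_reserve() in its setup code) and wants to hand
 * it back to the page allocator.
 */
#include <linux/init.h>
#include <linux/memblock.h>

/* Hypothetical region recorded by early platform code */
static phys_addr_t example_scratch_base __initdata;
static phys_addr_t example_scratch_size __initdata;

static int __init example_free_scratch(void)
{
	/*
	 * Safe: regular initcalls run after page_alloc_init_late(), which
	 * waits for deferred_init_memmap() to finish, so every struct page
	 * in the range is initialized by now.
	 *
	 * Doing this from early setup code instead, before
	 * page_alloc_init_late(), would trip the WARN this patch adds to
	 * __free_reserved_area() when CONFIG_DEFERRED_STRUCT_PAGE_INIT is
	 * enabled.
	 */
	if (example_scratch_size)
		memblock_free_late(example_scratch_base, example_scratch_size);

	return 0;
}
late_initcall(example_free_scratch);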