From: Mike Rapoport
To: Andrew Morton
Cc: Alexander Potapenko, Alexander Viro, Andreas Larsson, Ard Biesheuvel,
	Borislav Petkov, Brendan Jackman, "Christophe Leroy (CS GROUP)",
	Catalin Marinas, Christian Brauner, "David S. Miller", Dave Hansen,
	David Hildenbrand, Dmitry Vyukov, Ilias Apalodimas, Ingo Molnar,
	Jan Kara, Johannes Weiner, "Liam R. Howlett", Lorenzo Stoakes,
	Madhavan Srinivasan, Marco Elver, Marek Szyprowski, Masami Hiramatsu,
	Michael Ellerman, Michal Hocko, Mike Rapoport, Nicholas Piggin,
	"H. Peter Anvin", Rob Herring, Robin Murphy, Saravana Kannan,
	Suren Baghdasaryan, Thomas Gleixner, Vlastimil Babka, Will Deacon,
	Zi Yan, devicetree@vger.kernel.org, iommu@lists.linux.dev,
	kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
	linux-efi@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH 4/8] memblock: make free_reserved_area() more robust
Date: Wed, 18 Mar 2026 12:58:23 +0200
Message-ID: <20260318105827.1358927-5-rppt@kernel.org>
In-Reply-To: <20260318105827.1358927-1-rppt@kernel.org>
References: <20260318105827.1358927-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

There are two potential problems in free_reserved_area():

* it may free a page with a non-existent buddy page
* it may be passed a virtual address from an alias mapping that won't be
  properly translated by virt_to_page(), for example a symbol address on
  arm64

While the first issue is quite theoretical and the second one does not
manifest itself because all the callers do the right thing, it is easy to
make free_reserved_area() robust enough to avoid these potential issues.

Replace the loop over virtual addresses with a loop over pfns that uses
for_each_valid_pfn(), and use __pa() or __pa_symbol(), depending on the
virtual mapping alias, to correctly determine the loop boundaries.
Signed-off-by: Mike Rapoport (Microsoft)
---
 mm/memblock.c | 34 +++++++++++++++++++++++-----------
 1 file changed, 23 insertions(+), 11 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 8f3010dddc58..27d4c9889b59 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -895,21 +895,32 @@ int __init_memblock memblock_remove(phys_addr_t base, phys_addr_t size)
 unsigned long free_reserved_area(void *start, void *end, int poison, const char *s)
 {
-	void *pos;
-	unsigned long pages = 0;
+	phys_addr_t start_pa, end_pa;
+	unsigned long pages = 0, pfn;
 
-	start = (void *)PAGE_ALIGN((unsigned long)start);
-	end = (void *)((unsigned long)end & PAGE_MASK);
-	for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
-		struct page *page = virt_to_page(pos);
+	/*
+	 * end is the first address past the region and it may be beyond what
+	 * __pa() or __pa_symbol() can handle.
+	 * Use the last address included in the range for the conversion and
+	 * add back 1 afterwards.
+	 */
+	if (__is_kernel((unsigned long)start)) {
+		start_pa = __pa_symbol(start);
+		end_pa = __pa_symbol(end - 1) + 1;
+	} else {
+		start_pa = __pa(start);
+		end_pa = __pa(end - 1) + 1;
+	}
+
+	for_each_valid_pfn(pfn, PFN_UP(start_pa), PFN_DOWN(end_pa)) {
+		struct page *page = pfn_to_page(pfn);
 		void *direct_map_addr;
 
 		/*
-		 * 'direct_map_addr' might be different from 'pos'
-		 * because some architectures' virt_to_page()
-		 * work with aliases. Getting the direct map
-		 * address ensures that we get a _writeable_
-		 * alias for the memset().
+		 * 'direct_map_addr' might be different from the kernel virtual
+		 * address because some architectures use aliases.
+		 * Going via physical address, pfn_to_page() and page_address()
+		 * ensures that we get a _writeable_ alias for the memset().
 		 */
 		direct_map_addr = page_address(page);
 		/*
@@ -921,6 +932,7 @@ unsigned long free_reserved_area(void *start, void *end, int poison, const char
 		memset(direct_map_addr, poison, PAGE_SIZE);
 		free_reserved_page(page);
+		pages++;
 	}
 
 	if (pages && s)
-- 
2.51.0