From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman,
	patches@lists.linux.dev,
	David Hildenbrand,
	Sachin P Bappalige,
	Hari Bathini,
	Madhavan Srinivasan,
	"Ritesh Harjani (IBM)",
	Michael Ellerman,
	Sasha Levin
Subject: [PATCH 6.12 378/826] powerpc/fadump: Move fadump_cma_init to setup_arch() after initmem_init()
Date: Tue, 3 Dec 2024 15:41:45 +0100
Message-ID: <20241203144758.509474988@linuxfoundation.org>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20241203144743.428732212@linuxfoundation.org>
References: <20241203144743.428732212@linuxfoundation.org>
User-Agent: quilt/0.67
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.12-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Ritesh Harjani (IBM)

[ Upstream commit 05b94cae1c47f94588c3e7096963c1007c4d9c1d ]

During early init, CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE, since
pageblock_order is still zero: it only gets initialized later, during
initmem_init(), e.g.

  setup_arch() -> initmem_init() -> sparse_init() -> set_pageblock_order()

One call path where this causes an issue is:

  early_setup() -> early_init_devtree() -> fadump_reserve_mem() -> fadump_cma_init()

This causes the CMA memory alignment check to be bypassed in
cma_init_reserved_mem(). Later, cma_activate_area() can then hit a
VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory area
was not pageblock_order aligned.
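To make the failure mode concrete, below is a minimal userspace sketch of
the alignment check that cma_init_reserved_mem() performs via
CMA_MIN_ALIGNMENT_BYTES. It is an illustration only: the 64K page size,
the post-init pageblock_order value and the example reservation are
assumptions, not values taken from the report.

	#include <stdbool.h>
	#include <stdio.h>

	#define PAGE_SIZE 65536UL	/* assumed: 64K pages, common on powerpc */

	static unsigned long pageblock_order;	/* zero until set_pageblock_order() */

	/* Models CMA_MIN_ALIGNMENT_BYTES = PAGE_SIZE * pageblock_nr_pages. */
	static unsigned long cma_min_alignment_bytes(void)
	{
		return PAGE_SIZE * (1UL << pageblock_order);
	}

	/* Models the IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES) check. */
	static bool cma_alignment_ok(unsigned long base, unsigned long size)
	{
		return ((base | size) & (cma_min_alignment_bytes() - 1)) == 0;
	}

	int main(void)
	{
		/* Page-aligned but not pageblock-aligned reservation (made up). */
		unsigned long base = 13UL * PAGE_SIZE;
		unsigned long size = 256UL << 20;

		/* Early boot: pageblock_order is still 0, so the check
		 * degenerates to PAGE_SIZE alignment and the bad
		 * reservation slips through. */
		printf("early check passes: %d\n", cma_alignment_ok(base, size));

		/* After initmem_init() -> set_pageblock_order(); 5 is an
		 * assumed value, for illustration only. */
		pageblock_order = 5;
		printf("late check passes:  %d\n", cma_alignment_ok(base, size));
		return 0;
	}

The early call accepts a region that the properly initialised check would
reject, which is why the VM_BUG_ON_PAGE below fires only later, once
cma_activate_area() hands the area to the page allocator.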
Fix it by moving fadump_cma_init() after initmem_init(), where other such
CMA reservations also get called (a sketch of the resulting ordering
follows the diff below).

==============
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10010
flags: 0x13ffff800000000(node=1|zone=0|lastcpupid=0x7ffff) CMA
raw: 013ffff800000000 5deadbeef0000100 5deadbeef0000122 0000000000000000
raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: VM_BUG_ON_PAGE(pfn & ((1 << order) - 1))
------------[ cut here ]------------
kernel BUG at mm/page_alloc.c:778!

Call Trace:
__free_one_page+0x57c/0x7b0 (unreliable)
free_pcppages_bulk+0x1a8/0x2c8
free_unref_page_commit+0x3d4/0x4e4
free_unref_page+0x458/0x6d0
init_cma_reserved_pageblock+0x114/0x198
cma_init_reserved_areas+0x270/0x3e0
do_one_initcall+0x80/0x2f8
kernel_init_freeable+0x33c/0x530
kernel_init+0x34/0x26c
ret_from_kernel_user_thread+0x14/0x1c

Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
Suggested-by: David Hildenbrand
Reported-by: Sachin P Bappalige
Acked-by: Hari Bathini
Reviewed-by: Madhavan Srinivasan
Signed-off-by: Ritesh Harjani (IBM)
Signed-off-by: Michael Ellerman
Link: https://patch.msgid.link/3ae208e48c0d9cefe53d2dc4f593388067405b7d.1729146153.git.ritesh.list@gmail.com
Signed-off-by: Sasha Levin
---
 arch/powerpc/include/asm/fadump.h  | 7 +++++++
 arch/powerpc/kernel/fadump.c       | 6 +-----
 arch/powerpc/kernel/setup-common.c | 6 ++++--
 3 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h
index ef40c9b6972a6..3638f04447f59 100644
--- a/arch/powerpc/include/asm/fadump.h
+++ b/arch/powerpc/include/asm/fadump.h
@@ -34,4 +34,11 @@ extern int early_init_dt_scan_fw_dump(unsigned long node, const char *uname,
 				      int depth, void *data);
 extern int fadump_reserve_mem(void);
 #endif
+
+#if defined(CONFIG_FA_DUMP) && defined(CONFIG_CMA)
+void fadump_cma_init(void);
+#else
+static inline void fadump_cma_init(void) { }
+#endif
+
 #endif /* _ASM_POWERPC_FADUMP_H */
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index 162327d66982e..ac7b4e1645e55 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -78,7 +78,7 @@ static struct cma *fadump_cma;
  * But for some reason even if it fails we still have the memory reservation
  * with us and we can still continue doing fadump.
  */
-static void __init fadump_cma_init(void)
+void __init fadump_cma_init(void)
 {
 	unsigned long long base, size;
 	int rc;
@@ -122,8 +122,6 @@ static void __init fadump_cma_init(void)
 		(unsigned long)cma_get_base(fadump_cma) >> 20,
 		fw_dump.reserve_dump_area_size);
 }
-#else
-static void __init fadump_cma_init(void) { }
 #endif /* CONFIG_CMA */
 
 /*
@@ -632,8 +630,6 @@ int __init fadump_reserve_mem(void)
 
 		pr_info("Reserved %lldMB of memory at %#016llx (System RAM: %lldMB)\n",
 			(size >> 20), base, (memblock_phys_mem_size() >> 20));
-
-		fadump_cma_init();
 	}
 
 	return ret;
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 943430077375a..b6b01502e5047 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -997,9 +997,11 @@ void __init setup_arch(char **cmdline_p)
 	initmem_init();
 
 	/*
-	 * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
-	 * be called after initmem_init(), so that pageblock_order is initialised.
+	 * Reserve large chunks of memory for use by CMA for fadump, KVM and
+	 * hugetlb. These must be called after initmem_init(), so that
+	 * pageblock_order is initialised.
 	 */
+	fadump_cma_init();
 	kvm_cma_reserve();
 	gigantic_hugetlb_cma_reserve();
 
-- 
2.43.0
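For reference, below is a minimal sketch of the ordering the patch
establishes in setup_arch(). All functions are simplified stand-ins for
the kernel ones and the pageblock_order value is assumed; the point is
only that every CMA reservation, fadump included, now runs after
pageblock_order has been initialised.

	#include <stdio.h>

	static unsigned long pageblock_order;	/* zero until initmem_init() runs */

	static void initmem_init(void)
	{
		/* In the kernel: sparse_init() -> set_pageblock_order(). */
		pageblock_order = 5;	/* assumed value, for illustration */
	}

	/* Stand-ins for the CMA reservations made in setup_arch(). */
	static void fadump_cma_init(void)
	{
		printf("fadump CMA alignment: %lu pages\n", 1UL << pageblock_order);
	}
	static void kvm_cma_reserve(void) { }
	static void gigantic_hugetlb_cma_reserve(void) { }

	/* Stand-in for the tail of setup_arch(). */
	int main(void)
	{
		initmem_init();

		/* All three reservations now see the real pageblock_order. */
		fadump_cma_init();
		kvm_cma_reserve();
		gigantic_hugetlb_cma_reserve();
		return 0;
	}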