Date: Mon, 23 Feb 2026 13:07:41 +0200
From: Mike Rapoport
To: Michal Clapinski
Cc: Evangelos Petrongonas, Pasha Tatashin, Pratyush Yadav,
	Alexander Graf, kexec@lists.infradead.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Andrew Morton
Subject: Re: [PATCH v4 1/2] kho: fix deferred init of kho scratch
References: <20260220165203.3213375-1-mclapinski@google.com>
 <20260220165203.3213375-2-mclapinski@google.com>
In-Reply-To: <20260220165203.3213375-2-mclapinski@google.com>

On Fri, Feb 20, 2026 at 05:52:02PM +0100, Michal Clapinski wrote:
> Currently, mm_core_init calls kho_memory_init, which calls
> kho_release_scratch.
>
> If DEFERRED is enabled, kho_release_scratch will first initialize the
> struct pages of kho scratch. This is not needed. We can just let
> page_alloc_init_late init it.
>
> Next, kho_release_scratch will mark scratch as MIGRATE_CMA. If DEFERRED
> is enabled, this will be overwritten later in deferred_free_pages.
>
> To fix this, I removed the whole kho_release_scratch.
> Marking the pageblocks as MIGRATE_CMA now happens in kho_init, which
> runs after deferred_free_pages.
>
> Signed-off-by: Michal Clapinski

Reviewed-by: Mike Rapoport (Microsoft)

> ---
>  include/linux/memblock.h           |  2 --
>  kernel/liveupdate/kexec_handover.c | 43 ++++++++----------------
>  mm/memblock.c                      | 22 ---------------
>  3 files changed, 11 insertions(+), 56 deletions(-)
>
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index 221118b5a16e..35d9cf6bbf7a 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -614,11 +614,9 @@ static inline void memtest_report_meminfo(struct seq_file *m) { }
>  #ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
>  void memblock_set_kho_scratch_only(void);
>  void memblock_clear_kho_scratch_only(void);
> -void memmap_init_kho_scratch_pages(void);
>  #else
>  static inline void memblock_set_kho_scratch_only(void) { }
>  static inline void memblock_clear_kho_scratch_only(void) { }
> -static inline void memmap_init_kho_scratch_pages(void) {}
>  #endif
>
>  #endif /* _LINUX_MEMBLOCK_H */
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index b851b09a8e99..de167bfa2c8d 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -1377,11 +1377,6 @@ static __init int kho_init(void)
>  	if (err)
>  		goto err_free_fdt;
>
> -	if (fdt) {
> -		kho_in_debugfs_init(&kho_in.dbg, fdt);
> -		return 0;
> -	}
> -
>  	for (int i = 0; i < kho_scratch_cnt; i++) {
>  		unsigned long base_pfn = PHYS_PFN(kho_scratch[i].addr);
>  		unsigned long count = kho_scratch[i].size >> PAGE_SHIFT;
> @@ -1397,8 +1392,17 @@ static __init int kho_init(void)
>  		 */
>  		kmemleak_ignore_phys(kho_scratch[i].addr);
>  		for (pfn = base_pfn; pfn < base_pfn + count;
> -		     pfn += pageblock_nr_pages)
> -			init_cma_reserved_pageblock(pfn_to_page(pfn));
> +		     pfn += pageblock_nr_pages) {
> +			if (fdt)
> +				init_cma_pageblock(pfn_to_page(pfn));
> +			else
> +				init_cma_reserved_pageblock(pfn_to_page(pfn));
> +		}
>  	}
> +
> +	if (fdt) {
> +		kho_in_debugfs_init(&kho_in.dbg, fdt);
> +		return 0;
> +	}
>
>  	WARN_ON_ONCE(kho_debugfs_fdt_add(&kho_out.dbg, "fdt",
> @@ -1421,35 +1425,10 @@
>  }
>  fs_initcall(kho_init);
>
> -static void __init kho_release_scratch(void)
> -{
> -	phys_addr_t start, end;
> -	u64 i;
> -
> -	memmap_init_kho_scratch_pages();
> -
> -	/*
> -	 * Mark scratch mem as CMA before we return it. That way we
> -	 * ensure that no kernel allocations happen on it. That means
> -	 * we can reuse it as scratch memory again later.
> -	 */
> -	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
> -			     MEMBLOCK_KHO_SCRATCH, &start, &end, NULL) {
> -		ulong start_pfn = pageblock_start_pfn(PFN_DOWN(start));
> -		ulong end_pfn = pageblock_align(PFN_UP(end));
> -		ulong pfn;
> -
> -		for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages)
> -			init_pageblock_migratetype(pfn_to_page(pfn),
> -						   MIGRATE_CMA, false);
> -	}
> -}
> -
>  void __init kho_memory_init(void)
>  {
>  	if (kho_in.mem_map_phys) {
>  		kho_scratch = phys_to_virt(kho_in.scratch_phys);
> -		kho_release_scratch();
>  		kho_mem_deserialize(phys_to_virt(kho_in.mem_map_phys));
>  	} else {
>  		kho_reserve_scratch();
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 6cff515d82f4..3eff19124fc0 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -959,28 +959,6 @@ __init void memblock_clear_kho_scratch_only(void)
>  {
>  	kho_scratch_only = false;
>  }
> -
> -__init void memmap_init_kho_scratch_pages(void)
> -{
> -	phys_addr_t start, end;
> -	unsigned long pfn;
> -	int nid;
> -	u64 i;
> -
> -	if (!IS_ENABLED(CONFIG_DEFERRED_STRUCT_PAGE_INIT))
> -		return;
> -
> -	/*
> -	 * Initialize struct pages for free scratch memory.
> -	 * The struct pages for reserved scratch memory will be set up in
> -	 * reserve_bootmem_region()
> -	 */
> -	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
> -			     MEMBLOCK_KHO_SCRATCH, &start, &end, &nid) {
> -		for (pfn = PFN_UP(start); pfn < PFN_DOWN(end); pfn++)
> -			init_deferred_page(pfn, nid);
> -	}
> -}
>  #endif
>
>  /**
> --
> 2.53.0.345.g96ddfc5eaa-goog
>

-- 
Sincerely yours,
Mike.