Date: Mon, 23 Feb 2026 13:07:41 +0200
From: Mike Rapoport
To: Michal Clapinski
Cc: Evangelos Petrongonas, Pasha Tatashin, Pratyush Yadav, Alexander Graf,
	kexec@lists.infradead.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Andrew Morton
Subject: Re: [PATCH v4 1/2] kho: fix deferred init of kho scratch
References: <20260220165203.3213375-1-mclapinski@google.com>
 <20260220165203.3213375-2-mclapinski@google.com>
In-Reply-To: <20260220165203.3213375-2-mclapinski@google.com>

On Fri, Feb 20, 2026 at 05:52:02PM +0100, Michal Clapinski wrote:
> Currently, mm_core_init calls kho_memory_init, which calls
> kho_release_scratch.
> 
> If DEFERRED (CONFIG_DEFERRED_STRUCT_PAGE_INIT) is enabled,
> kho_release_scratch will first initialize the struct pages of the kho
> scratch areas. This is not needed: we can just let page_alloc_init_late
> initialize them.
> 
> Next, kho_release_scratch will mark scratch as MIGRATE_CMA. If DEFERRED
> is enabled, this will be overwritten later in deferred_free_pages.
> 
> To fix this, remove kho_release_scratch entirely. Marking the
> pageblocks as MIGRATE_CMA now happens in kho_init, which runs after
> deferred_free_pages.
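
Just to spell out the resulting boot ordering for the archives (a
condensed sketch of how I read the patch, not the literal call chain):

	mm_core_init()
	  kho_memory_init()	/* no longer touches the scratch memmap */
	...
	page_alloc_init_late()	/* deferred init covers scratch struct pages */
	...
	kho_init()		/* fs_initcall: marks scratch MIGRATE_CMA */

With kho_init running last, the MIGRATE_CMA marking can no longer be
overwritten by deferred_free_pages.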
> 
> Signed-off-by: Michal Clapinski

Reviewed-by: Mike Rapoport (Microsoft)

> ---
>  include/linux/memblock.h           |  2 --
>  kernel/liveupdate/kexec_handover.c | 43 ++++++++----------------------
>  mm/memblock.c                      | 22 ---------------
>  3 files changed, 11 insertions(+), 56 deletions(-)
> 
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index 221118b5a16e..35d9cf6bbf7a 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -614,11 +614,9 @@ static inline void memtest_report_meminfo(struct seq_file *m) { }
>  #ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
>  void memblock_set_kho_scratch_only(void);
>  void memblock_clear_kho_scratch_only(void);
> -void memmap_init_kho_scratch_pages(void);
>  #else
>  static inline void memblock_set_kho_scratch_only(void) { }
>  static inline void memblock_clear_kho_scratch_only(void) { }
> -static inline void memmap_init_kho_scratch_pages(void) {}
>  #endif
> 
>  #endif /* _LINUX_MEMBLOCK_H */
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index b851b09a8e99..de167bfa2c8d 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -1377,11 +1377,6 @@ static __init int kho_init(void)
>  	if (err)
>  		goto err_free_fdt;
> 
> -	if (fdt) {
> -		kho_in_debugfs_init(&kho_in.dbg, fdt);
> -		return 0;
> -	}
> -
>  	for (int i = 0; i < kho_scratch_cnt; i++) {
>  		unsigned long base_pfn = PHYS_PFN(kho_scratch[i].addr);
>  		unsigned long count = kho_scratch[i].size >> PAGE_SHIFT;
> @@ -1397,8 +1392,17 @@ static __init int kho_init(void)
>  		 */
>  		kmemleak_ignore_phys(kho_scratch[i].addr);
>  		for (pfn = base_pfn; pfn < base_pfn + count;
> -		     pfn += pageblock_nr_pages)
> -			init_cma_reserved_pageblock(pfn_to_page(pfn));
> +		     pfn += pageblock_nr_pages) {
> +			if (fdt)
> +				init_cma_pageblock(pfn_to_page(pfn));
> +			else
> +				init_cma_reserved_pageblock(pfn_to_page(pfn));
> +		}
> +	}
> +
> +	if (fdt) {
> +		kho_in_debugfs_init(&kho_in.dbg, fdt);
> +		return 0;
>  	}
> 
>  	WARN_ON_ONCE(kho_debugfs_fdt_add(&kho_out.dbg, "fdt",
> @@ -1421,35 +1425,10 @@ static __init int kho_init(void)
>  }
>  fs_initcall(kho_init);
> 
> -static void __init kho_release_scratch(void)
> -{
> -	phys_addr_t start, end;
> -	u64 i;
> -
> -	memmap_init_kho_scratch_pages();
> -
> -	/*
> -	 * Mark scratch mem as CMA before we return it. That way we
> -	 * ensure that no kernel allocations happen on it. That means
> -	 * we can reuse it as scratch memory again later.
> -	 */
> -	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
> -			     MEMBLOCK_KHO_SCRATCH, &start, &end, NULL) {
> -		ulong start_pfn = pageblock_start_pfn(PFN_DOWN(start));
> -		ulong end_pfn = pageblock_align(PFN_UP(end));
> -		ulong pfn;
> -
> -		for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages)
> -			init_pageblock_migratetype(pfn_to_page(pfn),
> -						   MIGRATE_CMA, false);
> -	}
> -}
> -
>  void __init kho_memory_init(void)
>  {
>  	if (kho_in.mem_map_phys) {
>  		kho_scratch = phys_to_virt(kho_in.scratch_phys);
> -		kho_release_scratch();
>  		kho_mem_deserialize(phys_to_virt(kho_in.mem_map_phys));
>  	} else {
>  		kho_reserve_scratch();
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 6cff515d82f4..3eff19124fc0 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -959,28 +959,6 @@ __init void memblock_clear_kho_scratch_only(void)
>  {
>  	kho_scratch_only = false;
>  }
> -
> -__init void memmap_init_kho_scratch_pages(void)
> -{
> -	phys_addr_t start, end;
> -	unsigned long pfn;
> -	int nid;
> -	u64 i;
> -
> -	if (!IS_ENABLED(CONFIG_DEFERRED_STRUCT_PAGE_INIT))
> -		return;
> -
> -	/*
> -	 * Initialize struct pages for free scratch memory.
> -	 * The struct pages for reserved scratch memory will be set up in
> -	 * reserve_bootmem_region()
> -	 */
> -	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
> -			     MEMBLOCK_KHO_SCRATCH, &start, &end, &nid) {
> -		for (pfn = PFN_UP(start); pfn < PFN_DOWN(end); pfn++)
> -			init_deferred_page(pfn, nid);
> -	}
> -}
>  #endif
> 
>  /**
> -- 
> 2.53.0.345.g96ddfc5eaa-goog
> 

-- 
Sincerely yours,
Mike.