From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 18 Mar 2026 11:33:04 +0200
From: Mike Rapoport
To: Michal Clapinski
Cc: Evangelos Petrongonas, Pasha Tatashin, Pratyush Yadav, Alexander Graf,
	Samiullah Khawaja, kexec@lists.infradead.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Andrew Morton
Subject: Re: [PATCH v7 2/3] kho: fix deferred init of kho scratch
References: <20260317141534.815634-1-mclapinski@google.com>
 <20260317141534.815634-3-mclapinski@google.com>
In-Reply-To: <20260317141534.815634-3-mclapinski@google.com>

Hi Michal,

On Tue, Mar 17, 2026 at 03:15:33PM +0100, Michal Clapinski wrote:
> Currently, if DEFERRED is enabled, kho_release_scratch will initialize

Please spell out CONFIG_DEFERRED_STRUCT_PAGE_INIT

> the struct pages and set migratetype of kho scratch. Unless the whole
> scratch fit below first_deferred_pfn, some of that will be overwritten
> either by deferred_init_pages or memmap_init_reserved_pages.

Usually we put brackets after function names to make them more visible.

> To fix it, I modified kho_release_scratch to only set the migratetype

Prefer an imperative mood please, e.g. "To fix it, modify kho_release_scratch() ..."

> on already initialized pages. Then, modified init_pageblock_migratetype
> to set the migratetype to CMA if the page is located inside scratch.
> 
> Signed-off-by: Michal Clapinski
> ---
>  include/linux/memblock.h           |  2 --
>  kernel/liveupdate/kexec_handover.c | 10 ++++++----
>  mm/memblock.c                      | 22 ----------------------
>  mm/page_alloc.c                    |  7 +++++++
>  4 files changed, 13 insertions(+), 28 deletions(-)
> 
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index 6ec5e9ac0699..3e217414e12d 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -614,11 +614,9 @@ static inline void memtest_report_meminfo(struct seq_file *m) { }
>  #ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
>  void memblock_set_kho_scratch_only(void);
>  void memblock_clear_kho_scratch_only(void);
> -void memmap_init_kho_scratch_pages(void);
>  #else
>  static inline void memblock_set_kho_scratch_only(void) { }
>  static inline void memblock_clear_kho_scratch_only(void) { }
> -static inline void memmap_init_kho_scratch_pages(void) {}
>  #endif
> 
>  #endif /* _LINUX_MEMBLOCK_H */
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index c9b982372d6e..e511a50fab9c 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -1477,8 +1477,7 @@ static void __init kho_release_scratch(void)
>  {
>  	phys_addr_t start, end;
>  	u64 i;
> -
> -	memmap_init_kho_scratch_pages();
> +	int nid;
> 
>  	/*
>  	 * Mark scratch mem as CMA before we return it. That way we
> @@ -1486,10 +1485,13 @@ static void __init kho_release_scratch(void)
>  	 * we can reuse it as scratch memory again later.
>  	 */
>  	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
> -			     MEMBLOCK_KHO_SCRATCH, &start, &end, NULL) {
> +			     MEMBLOCK_KHO_SCRATCH, &start, &end, &nid) {
>  		ulong start_pfn = pageblock_start_pfn(PFN_DOWN(start));
>  		ulong end_pfn = pageblock_align(PFN_UP(end));
>  		ulong pfn;
> +#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
> +		end_pfn = min(end_pfn, NODE_DATA(nid)->first_deferred_pfn);
> +#endif

A helper that returns first_deferred_pfn or ULONG_MAX might be better looking.
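Something along these lines, maybe (a completely untested sketch, and the
helper name is made up just for illustration):

#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
/* pages at or above this pfn are not initialized yet */
static unsigned long __init kho_deferred_init_limit(int nid)
{
	return NODE_DATA(nid)->first_deferred_pfn;
}
#else
/* all struct pages are already initialized, no limit */
static unsigned long __init kho_deferred_init_limit(int nid)
{
	return ULONG_MAX;
}
#endif

and then the loop setup stays unconditional:

	end_pfn = min(end_pfn, kho_deferred_init_limit(nid));

That would keep the #ifdef out of kho_release_scratch() itself.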
> 
>  		for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages)
>  			init_pageblock_migratetype(pfn_to_page(pfn),
> @@ -1500,8 +1502,8 @@ static void __init kho_release_scratch(void)
>  void __init kho_memory_init(void)
>  {
>  	if (kho_in.scratch_phys) {
> -		kho_scratch = phys_to_virt(kho_in.scratch_phys);
>  		kho_release_scratch();
> +		kho_scratch = phys_to_virt(kho_in.scratch_phys);

Why is this change needed?

> 
>  		if (kho_mem_retrieve(kho_get_fdt()))
>  			kho_in.fdt_phys = 0;
> diff --git a/mm/memblock.c b/mm/memblock.c
> index b3ddfdec7a80..ae6a5af46bd7 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -959,28 +959,6 @@ __init void memblock_clear_kho_scratch_only(void)
>  {
>  	kho_scratch_only = false;
>  }
> -
> -__init void memmap_init_kho_scratch_pages(void)
> -{
> -	phys_addr_t start, end;
> -	unsigned long pfn;
> -	int nid;
> -	u64 i;
> -
> -	if (!IS_ENABLED(CONFIG_DEFERRED_STRUCT_PAGE_INIT))
> -		return;
> -
> -	/*
> -	 * Initialize struct pages for free scratch memory.
> -	 * The struct pages for reserved scratch memory will be set up in
> -	 * reserve_bootmem_region()
> -	 */
> -	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
> -			     MEMBLOCK_KHO_SCRATCH, &start, &end, &nid) {
> -		for (pfn = PFN_UP(start); pfn < PFN_DOWN(end); pfn++)
> -			init_deferred_page(pfn, nid);
> -	}
> -}
>  #endif
> 
>  /**
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ee81f5c67c18..5ca078dde61d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -55,6 +55,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include "internal.h"
>  #include "shuffle.h"
> @@ -549,6 +550,12 @@ void __meminit init_pageblock_migratetype(struct page *page,
>  		     migratetype < MIGRATE_PCPTYPES))
>  		migratetype = MIGRATE_UNMOVABLE;
> 
> +	/*
> +	 * Mark KHO scratch as CMA so no unmovable allocations are made there.
> +	 */
> +	if (unlikely(kho_scratch_overlap(page_to_phys(page), PAGE_SIZE)))
> +		migratetype = MIGRATE_CMA;
> +

Please pick SJ's fixup for the next respin :)

>  	flags = migratetype;
> 
>  #ifdef CONFIG_MEMORY_ISOLATION
> -- 
> 2.53.0.851.ga537e3e6e9-goog
> 

-- 
Sincerely yours,
Mike.