From: Pratyush Yadav
To: Mike Rapoport
Cc: Michał Cłapiński, Evangelos Petrongonas, Pasha Tatashin,
	Pratyush Yadav, Alexander Graf, Samiullah Khawaja,
	kexec@lists.infradead.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Andrew Morton, Vlastimil Babka,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
Subject: Re: [PATCH v8 1/2] kho: fix deferred initialization of scratch areas
In-Reply-To: (Mike Rapoport's message of "Wed, 22 Apr 2026 11:24:39 +0300")
References: <20260416110654.247398-1-mclapinski@google.com>
	<20260416110654.247398-2-mclapinski@google.com>
Date: Thu, 23 Apr 2026 10:41:11 +0200
Message-ID: <2vxzqzo65as8.fsf@kernel.org>

On Wed, Apr 22 2026, Mike Rapoport wrote:

> On Tue, Apr 21, 2026 at 12:20:27PM +0200, Michał Cłapiński wrote:
>> On Tue, Apr 21, 2026 at 8:08 AM Mike Rapoport wrote:
>> >
>> > On Mon, Apr 20, 2026 at 03:11:03PM +0200, Michał Cłapiński wrote:
>> > > On Thu, Apr 16, 2026 at 6:13 PM Mike Rapoport wrote:
>> > > >
>> > > > On Thu, Apr 16, 2026 at 05:06:10PM +0200, Michał Cłapiński wrote:
>> > > > > On Thu, Apr 16, 2026 at 4:45 PM Mike Rapoport wrote:
>> > > > > >
>> > > > > > Hi Michal,
>> > > > > >
>> > > > > > On Thu, Apr 16, 2026 at 01:06:53PM +0200, Michal Clapinski wrote:
>> > > > > > > @@ -2262,6 +2253,12 @@ static void __init memmap_init_reserved_range(phys_addr_t start,
>> > > > > > >  		 * access it yet.
>> > > > > > >  		 */
>> > > > > > >  		__SetPageReserved(page);
>> > > > > > > +
>> > > > > > > +#ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
>> > > > > >
>> > > > > > No need for #ifdef here, there's a stub returning false for
>> > > > > > the CONFIG_MEMBLOCK_KHO_SCRATCH=n case.
>> > > > >
>> > > > > In all 3 places the #ifdef is there because MIGRATE_CMA might be
>> > > > > undefined. I already broke the mm-new branch in the past because of that.
>> > > >
>> > > > Hmm, that hurts :/
>> > > >
>> > > > The best I can think of is to add a static inline in memblock.h and
>> > > > ifdefs around it.
>> > >
>> > > Sorry, I don't understand what you mean. What would that static inline
>> > > contain?
>> >
>> > Something like this:
>> >
>> > #ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
>> > static inline enum migratetype kho_scratch_migratetype(unsigned long pfn,
>> > 						       enum migratetype mt)
>> > {
>> > 	if (memblock_is_kho_scratch_memory(PFN_PHYS(pfn)))
>> > 		return MIGRATE_CMA;
>> > 	return mt;
>> > }
>> > #else
>> > static inline enum migratetype kho_scratch_migratetype(unsigned long pfn,
>> > 						       enum migratetype mt)
>> > {
>> > 	return mt;
>> > }
>> > #endif
>> 
>> How would I use it for this code?
>> 
>> +#ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
>> +	if (memblock_is_kho_scratch_memory(PFN_PHYS(pfn)) &&
>> +	    pageblock_aligned(pfn))
>> +		init_pageblock_migratetype(page, MIGRATE_CMA, false);
>> +#endif
>
> Something like this
>
> 	enum migratetype mt = kho_scratch_migratetype(pfn, MIGRATE_MOVABLE);
>
> 	...
>
> 	if (memblock_is_kho_scratch_memory(PFN_PHYS(pfn)) &&
> 	    pageblock_aligned(pfn))
> 		init_pageblock_migratetype(page, mt, false);

You can optimize this a tiny bit by doing something like this:

	if (pageblock_aligned(pfn)) {
		enum migratetype mt = kho_scratch_migratetype(pfn, MIGRATE_MOVABLE);

		init_pageblock_migratetype(page, mt, false);
	}

The kho_scratch_migratetype() you suggested already calls
memblock_is_kho_scratch_memory(), so there is no need to duplicate the
search.
Plus, this way it only executes once per pageblock instead of once per
page.

> seems better to me than the ifdef, even though MIGRATE_MOVABLE is bogus here.
>
> And since we are anyway changing the init_pageblock_migratetype() callers
> in mm_init.c, the entire block in memmap_init_reserved_range() can be
> dropped and the change can be done in __init_page_from_nid().

>> 
>> It doesn't invoke init_pageblock_migratetype() unless pfn is kho scratch.
>> 
>> > Can't say I'm happy about the name, but could not think of anything
>> > better.
>> >
>> > > > > > > +	if (memblock_is_kho_scratch_memory(PFN_PHYS(pfn)) &&
>> > > > > > > +	    pageblock_aligned(pfn))
>> > > > > > > +		init_pageblock_migratetype(page, MIGRATE_CMA, false);
>> > > > > > > +#endif
>> > > > > > >  	}
>> > > > > > >  }
>> > > >
>> > > > --
>> > > > Sincerely yours,
>> > > > Mike.
>> >
>> > --
>> > Sincerely yours,
>> > Mike.

-- 
Regards,
Pratyush Yadav