From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 10 Jun 2025 19:41:25 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Pasha Tatashin
Cc: Pratyush Yadav, Alexander Graf, Changyuan Lyu, Andrew Morton,
 Baoquan He, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: Re: [PATCH] kho: initialize tail pages for higher order folios properly
References:
 <20250605171143.76963-1-pratyush@kernel.org>

On Tue, Jun 10, 2025 at 07:20:23AM -0400, Pasha Tatashin wrote:
> On Tue, Jun 10, 2025 at 1:44 AM Mike Rapoport wrote:
> >
> > On Mon, Jun 09, 2025 at 04:07:50PM -0400, Pasha Tatashin wrote:
> > > On Mon, Jun 9, 2025 at 3:36 PM Mike Rapoport wrote:
> > > >
> > > > Hi Pratyush,
> > > >
> > > > On Fri, Jun 06, 2025 at 06:23:06PM +0200, Pratyush Yadav wrote:
> > > > > Hi Mike,
> > > > >
> > > > > On Fri, Jun 06 2025, Mike Rapoport wrote:
> > > > >
> > > > > > On Thu, Jun 05, 2025 at 07:11:41PM +0200, Pratyush Yadav wrote:
> > > > > >> From: Pratyush Yadav
> > > > > >>
> > > > > >> --- a/kernel/kexec_handover.c
> > > > > >> +++ b/kernel/kexec_handover.c
> > > > > >> @@ -157,11 +157,21 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
> > > > > >>  }
> > > > > >>
> > > > > >>  /* almost as free_reserved_page(), just don't free the page */
> > > > > >> -static void kho_restore_page(struct page *page)
> > > > > >> +static void kho_restore_page(struct page *page, unsigned int order)
> > > > > >>  {
> > > > > >> -	ClearPageReserved(page);
> > > > > >
> > > > > > So now we don't clear PG_Reserved even on order-0 pages? ;-)
> > > > >
> > > > > We don't need to. As I mentioned in the commit message as well,
> > > > > PG_Reserved is never set for KHO pages since they are reserved with
> > > > > MEMBLOCK_RSRV_NOINIT, so memmap_init_reserved_pages() skips over
> > > > > them.
> > > >
> > > > You are right, I missed it.
> > > >
> > > > > That said, while reading through some of the code, I noticed
> > > > > another bug: because KHO reserves the preserved pages as NOINIT,
> > > > > with CONFIG_DEFERRED_STRUCT_PAGE_INIT == n, all the pages get
> > > > > initialized when memmap_init_range() is called from setup_arch()
> > > > > (paging_init() on x86). This happens before kho_memory_init(), so
> > > > > the KHO-preserved pages are not yet marked as reserved to
> > > > > memblock.
> > > > >
> > > > > With deferred page init, some pages might not get initialized
> > > > > early and instead get initialized after kho_memory_init(), by
> > > > > which time the KHO-preserved pages are marked as reserved. So
> > > > > deferred_init_maxorder() will skip over those pages and leave them
> > > > > uninitialized.
> > > > >
> > > > > So we need to either also call init_deferred_page(), or remove
> > > > > the memblock_reserved_mark_noinit() call in deserialize_bitmap().
> > > > > And TBH, I am not sure why KHO pages even need to be marked noinit
> > > > > in the first place. Probably the only benefit is that if a large
> > > > > chunk of memory is KHO-preserved, the pages can be initialized
> > > > > later on demand, reducing boot time a bit.
> > > >
> > > > One benefit is indeed performance, because in the non-deferred case
> > > > the initialization of reserved pages in
> > > > memmap_init_reserved_pages() is really excessive.
> > > >
> > > > But more importantly, if we remove memblock_reserved_mark_noinit(),
> > > > with CONFIG_DEFERRED_STRUCT_PAGE_INIT we'd lose page->private
> > > > because the struct page will be cleared after kho_mem_deserialize().
> > > >
> > > > > What do you think? Should we drop noinit or call
> > > > > init_deferred_page()? FWIW, my preference is to drop noinit, since
> > > > > init_deferred_page() is __meminit and we would have to make sure
> > > > > it doesn't go away after boot.
> > > >
> > > > We can't drop noinit, and calling init_deferred_page() after boot
> > > > just won't work because it uses memblock to find the page's node,
> > > > and memblock is gone after init.
> > > >
> > > > The simplest short-term solution is to disable KHO when
> > > > CONFIG_DEFERRED_STRUCT_PAGE_INIT is set and then find an efficient
> > > > way to make it all work together.
> > >
> > > This is what I've done in LUOv3 WIP:
> > > https://github.com/soleen/linux/commit/3059f38ac0a39a397873759fb429bd5d1f8ea681
> >
> > I think it should be the other way around, KHO should depend on
> > !DEFERRED_STRUCT_PAGE_INIT.
>
> Agreed, and this is what I first tried, but that does not work, there
> is some circular dependency breaking the build. If you feel
> adventurous you can try that :-)

Hmm, weird, worked for me :/

> > > We will need to teach KHO to work with deferred struct page init. I
> > > suspect we could init preserved struct pages and then skip over them
> > > during deferred init.
> >
> > We could, but that would mean we'll run this before SMP, and that's
> > not desirable. Also, init_deferred_page() for a random page requires
>
> We already run KHO init before smp_init():
> start_kernel() -> mm_core_init() -> kho_memory_init() ->
> kho_restore_folio() -> struct pages must already be initialized here!
>
> While deferred struct pages are initialized via:
> start_kernel() -> rest_init() -> kernel_init() ->
> kernel_init_freeable() -> page_alloc_init_late() ->
> deferred_init_memmap()
>
> If the number of preserved pages needed during early boot is
> relatively small, it should not be an issue to pre-initialize struct
> pages for them before the deferred struct pages are initialized. We
> already pre-initialize some struct pages that are needed during early
> boot before the rest are initialized; see deferred_grow_zone().

deferred_grow_zone() takes a chunk at the beginning of an uninitialized
range; with KHO we are talking about some random pages. If we pre-init
them early, deferred_init_memmap() will overwrite them.

Anyway, I'm going to look into it, hopefully I'll have something Really
Soon (tm).

> Pasha

-- 
Sincerely yours,
Mike.