From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 17 Mar 2026 11:28:34 +0000
From: Kiryl Shutsemau
To: "David Hildenbrand (Arm)"
Cc: Andrew Morton , Muchun Song , Matthew Wilcox , Usama Arif , Frank van der Linden , Oscar
Salvador , Mike Rapoport , Vlastimil Babka , Lorenzo Stoakes , Zi Yan , Baoquan He , Michal Hocko , Johannes Weiner , Jonathan Corbet , Huacai Chen , WANG Xuerui , Palmer Dabbelt , Paul Walmsley , Albert Ou , Alexandre Ghiti , kernel-team@meta.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, loongarch@lists.linux.dev, linux-riscv@lists.infradead.org
Subject: Re: [PATCHv7 09/18] mm/hugetlb: Defer vmemmap population for bootmem hugepages
Message-ID:
References: <20260227194302.274384-1-kas@kernel.org> <20260227194302.274384-10-kas@kernel.org> <4e52f70d-e0c3-471f-8073-68c0e9bc94ca@kernel.org>
In-Reply-To: <4e52f70d-e0c3-471f-8073-68c0e9bc94ca@kernel.org>

On Mon, Mar 16, 2026 at 05:48:24PM +0100, David Hildenbrand (Arm) wrote:
> On 2/27/26 20:42, Kiryl Shutsemau (Meta) wrote:
> > Currently, the vmemmap for bootmem-allocated gigantic pages is populated
> > early in hugetlb_vmemmap_init_early(). However, the zone information is
> > only available after zones are initialized. If it is later discovered
> > that a page spans multiple zones, the HVO mapping must be undone and
> > replaced with a normal mapping using vmemmap_undo_hvo().
> >
> > Defer the actual vmemmap population to hugetlb_vmemmap_init_late(). At
> > this stage, zones are already initialized, so it can be checked whether
> > the page is valid for HVO before deciding how to populate the vmemmap.
> >
> > This allows us to remove vmemmap_undo_hvo() and the complex logic
> > required to roll back HVO mappings.
> >
> > In hugetlb_vmemmap_init_late(), if HVO population fails or if the zones
> > are invalid, fall back to a normal vmemmap population.
> >
> > Postponing population until hugetlb_vmemmap_init_late() also makes zone
> > information available from within vmemmap_populate_hvo().
>
> So we'll keep marking the sections as SECTION_IS_VMEMMAP_PREINIT such
> that sparse_init_nid() will still properly skip it and leave population
> to hugetlb_vmemmap_init_late().
>
> Should we clear SECTION_IS_VMEMMAP_PREINIT in case we run into the
> hugetlb_bootmem_page_zones_valid() scenario?
>
> I suspect we don't care about SECTION_IS_VMEMMAP_PREINIT after boot and
> can just leave the flag set. (maybe we want to add a comment in the code?
> above the vmemmap_populate()?)

I think keeping the flag set is the right thing to do.
SECTION_IS_VMEMMAP_PREINIT indicates to the core sparse code that the
section should not be populated there and will be initialized elsewhere.
Even in the !hugetlb_bootmem_page_zones_valid() case we take care of it
in hugetlb_vmemmap_init_late(). And, as you mentioned, nobody looks at
the flag after boot.

> Nothing else jumped out at me.
>
> Acked-by: David Hildenbrand (Arm)
>
> --
> Cheers,
>
> David

--
Kiryl Shutsemau / Kirill A. Shutemov