From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Kiryl Shutsemau (Meta)"
To: Andrew Morton, Muchun Song, David Hildenbrand, Matthew Wilcox,
	Usama Arif, Frank van der Linden
Cc: Oscar Salvador, Mike Rapoport, Vlastimil Babka, Lorenzo Stoakes,
	Zi Yan, Baoquan He, Michal Hocko, Johannes Weiner, Jonathan Corbet,
	Huacai Chen, WANG Xuerui, Palmer Dabbelt, Paul Walmsley, Albert Ou,
	Alexandre Ghiti, kernel-team@meta.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	loongarch@lists.linux.dev, linux-riscv@lists.infradead.org,
	Kiryl Shutsemau, "David Hildenbrand (Arm)"
Subject: [PATCHv7 15/18] mm/hugetlb: Remove hugetlb_optimize_vmemmap_key static key
Date: Fri, 27 Feb 2026 19:30:16 +0000
Message-ID: <20260227193030.272078-15-kas@kernel.org>
X-Mailer: git-send-email 2.51.2
In-Reply-To: <20260202155634.650837-1-kas@kernel.org>
References: <20260202155634.650837-1-kas@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Kiryl Shutsemau

The hugetlb_optimize_vmemmap_key static key was used to guard fake head
detection in compound_head() and related functions. It allowed skipping
the fake head checks entirely when HVO was not in use.

With fake heads eliminated and the detection code removed, the static
key serves no purpose. Remove its definition and all increment/decrement
calls.
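For context, the pattern being removed can be modeled in userspace as a
reference-counted guard: in the kernel this is DEFINE_STATIC_KEY_FALSE()
plus static_branch_inc()/static_branch_dec()/static_branch_unlikely(),
which patch the branch in and out at runtime. The sketch below stands in
for that with a plain counter; the names key_inc(), key_dec(), and
needs_fake_head_check() are hypothetical, for illustration only:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Userspace model of hugetlb_optimize_vmemmap_key: a counter that is
 * bumped once per HVO-optimized folio. While it is zero, the guarded
 * fake-head check is skipped entirely (in the kernel, the static key
 * makes the skipped branch a patched-out no-op, not a load + compare).
 */
static int hvo_key_count;

static void key_inc(void)	/* models static_branch_inc() */
{
	hvo_key_count++;
}

static void key_dec(void)	/* models static_branch_dec() */
{
	hvo_key_count--;
}

static bool key_enabled(void)	/* models static_branch_unlikely() */
{
	return hvo_key_count > 0;
}

/*
 * Models the old compound_head() fast path: the fake-head check only
 * ran while at least one folio had been optimized.
 */
static bool needs_fake_head_check(void)
{
	return key_enabled();
}
```

With the fake-head detection gone, nothing reads the key, so the
inc/dec bookkeeping in the optimize/restore paths becomes dead weight.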
Signed-off-by: Kiryl Shutsemau
Reviewed-by: Muchun Song
Acked-by: David Hildenbrand (Arm)
---
 include/linux/page-flags.h |  2 --
 mm/hugetlb_vmemmap.c       | 14 ++------------
 2 files changed, 2 insertions(+), 14 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 9f22e4f4ef9f..8716a27ef347 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -221,8 +221,6 @@ static __always_inline bool compound_info_has_mask(void)
 	return is_power_of_2(sizeof(struct page));
 }
 
-DECLARE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
-
 static __always_inline unsigned long _compound_head(const struct page *page)
 {
 	unsigned long info = READ_ONCE(page->compound_info);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index fd1d5d5d12b4..4a077d231d3a 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -385,9 +385,6 @@ static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
 	return vmemmap_remap_range(start, end, &walk);
 }
 
-DEFINE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
-EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
-
 static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
 
 static int __init hugetlb_vmemmap_optimize_param(char *buf)
 {
@@ -419,10 +416,8 @@ static int __hugetlb_vmemmap_restore_folio(const struct hstate *h,
 	 * discarded vmemmap pages must be allocated and remapping.
 	 */
 	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, flags);
-	if (!ret) {
+	if (!ret)
 		folio_clear_hugetlb_vmemmap_optimized(folio);
-		static_branch_dec(&hugetlb_optimize_vmemmap_key);
-	}
 
 	return ret;
 }
@@ -544,8 +539,6 @@ static int __hugetlb_vmemmap_optimize_folio(const struct hstate *h,
 	if (!vmemmap_tail)
 		return -ENOMEM;
 
-	static_branch_inc(&hugetlb_optimize_vmemmap_key);
-
 	/*
 	 * Very Subtle
 	 * If VMEMMAP_REMAP_NO_TLB_FLUSH is set, TLB flushing is not performed
@@ -581,10 +574,8 @@ static int __hugetlb_vmemmap_optimize_folio(const struct hstate *h,
 			       vmemmap_head, vmemmap_tail, vmemmap_pages, flags);
 
 out:
-	if (ret) {
-		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	if (ret)
 		folio_clear_hugetlb_vmemmap_optimized(folio);
-	}
 
 	return ret;
 }
@@ -650,7 +641,6 @@ static void __hugetlb_vmemmap_optimize_folios(struct hstate *h,
 			register_page_bootmem_memmap(pfn_to_section_nr(spfn),
 					&folio->page, HUGETLB_VMEMMAP_RESERVE_SIZE);
-			static_branch_inc(&hugetlb_optimize_vmemmap_key);
 			continue;
 		}
-- 
2.51.2