From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kiryl Shutsemau
To: Andrew Morton, Muchun Song, David Hildenbrand, Matthew Wilcox,
	Usama Arif, Frank van der Linden
Cc: Oscar Salvador, Mike Rapoport, Vlastimil Babka, Lorenzo Stoakes,
	Zi Yan, Baoquan He, Michal Hocko, Johannes Weiner, Jonathan Corbet,
	Huacai Chen, WANG Xuerui, Palmer Dabbelt, Paul Walmsley, Albert Ou,
	Alexandre Ghiti, kernel-team@meta.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	loongarch@lists.linux.dev, linux-riscv@lists.infradead.org,
	Kiryl Shutsemau
Subject: [PATCHv6 11/17] mm/hugetlb: Remove fake head pages
Date: Mon, 2 Feb 2026 15:56:27 +0000
Message-ID: <20260202155634.650837-12-kas@kernel.org>
X-Mailer: git-send-email 2.51.2
In-Reply-To: <20260202155634.650837-1-kas@kernel.org>
References:
 <20260202155634.650837-1-kas@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

HugeTLB Vmemmap Optimization (HVO) reduces memory usage by freeing most
vmemmap pages for huge pages and remapping the freed range to a single
page containing the struct page metadata.

With the new mask-based compound_info encoding (for power-of-2 struct
page sizes), all tail pages of the same order are now identical,
regardless of which compound page they belong to. This means the tail
pages can be truly shared, without fake heads.

Allocate a single page of initialized tail struct pages per NUMA node
per order, in the vmemmap_tails[] array in pglist_data. All huge pages
of that order on the node share this tail page, mapped read-only into
their vmemmap. The head page remains unique per huge page.

Redefine MAX_FOLIO_ORDER using ilog2(). The define has to produce a
compile-time constant, as it is used to specify the size of the
vmemmap_tails[] array. For some reason, the compiler is not able to
resolve get_order() at compile time, but ilog2() works. Avoid PUD_ORDER
when defining MAX_FOLIO_ORDER, as it adds a header dependency which
generates a hard-to-break include loop.

This eliminates fake heads while maintaining the same memory savings,
and simplifies compound_head() by removing fake head detection.
Signed-off-by: Kiryl Shutsemau
---
 include/linux/mmzone.h | 19 +++++++++++++++++--
 mm/hugetlb_vmemmap.c   | 34 +++++++++++++++++++++++++++++++--
 mm/sparse-vmemmap.c    | 43 ++++++++++++++++++++++++++++++++++--------
 3 files changed, 84 insertions(+), 12 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 192143b5cdc0..c01f8235743b 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -81,13 +81,17 @@
  * currently expect (see CONFIG_HAVE_GIGANTIC_FOLIOS): with hugetlb, we expect
  * no folios larger than 16 GiB on 64bit and 1 GiB on 32bit.
  */
-#define MAX_FOLIO_ORDER	get_order(IS_ENABLED(CONFIG_64BIT) ? SZ_16G : SZ_1G)
+#ifdef CONFIG_64BIT
+#define MAX_FOLIO_ORDER	(ilog2(SZ_16G) - PAGE_SHIFT)
+#else
+#define MAX_FOLIO_ORDER	(ilog2(SZ_1G) - PAGE_SHIFT)
+#endif
 #else
 /*
  * Without hugetlb, gigantic folios that are bigger than a single PUD are
  * currently impossible.
  */
-#define MAX_FOLIO_ORDER	PUD_ORDER
+#define MAX_FOLIO_ORDER	(PUD_SHIFT - PAGE_SHIFT)
 #endif
 
 #define MAX_FOLIO_NR_PAGES	(1UL << MAX_FOLIO_ORDER)
@@ -1402,6 +1406,14 @@ struct memory_failure_stats {
 };
 #endif
 
+/*
+ * vmemmap optimization (like HVO) is only possible for page orders that fill
+ * two or more pages with struct pages.
+ */
+#define VMEMMAP_TAIL_MIN_ORDER	(ilog2(2 * PAGE_SIZE / sizeof(struct page)))
+#define __NR_VMEMMAP_TAILS	(MAX_FOLIO_ORDER - VMEMMAP_TAIL_MIN_ORDER + 1)
+#define NR_VMEMMAP_TAILS	(__NR_VMEMMAP_TAILS > 0 ? __NR_VMEMMAP_TAILS : 0)
+
 /*
  * On NUMA machines, each NUMA node would have a pg_data_t to describe
  * it's memory layout. On UMA machines there is a single pglist_data which
@@ -1550,6 +1562,9 @@ typedef struct pglist_data {
 #ifdef CONFIG_MEMORY_FAILURE
 	struct memory_failure_stats mf_stats;
 #endif
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+	struct page *vmemmap_tails[NR_VMEMMAP_TAILS];
+#endif
 } pg_data_t;
 
 #define node_present_pages(nid)	(NODE_DATA(nid)->node_present_pages)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index a39a301e08b9..688764c52c72 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -19,6 +19,7 @@
 #include
 
 #include "hugetlb_vmemmap.h"
+#include "internal.h"
 
 /**
  * struct vmemmap_remap_walk - walk vmemmap page table
@@ -505,6 +506,32 @@ static bool vmemmap_should_optimize_folio(const struct hstate *h, struct folio *
 	return true;
 }
 
+static struct page *vmemmap_get_tail(unsigned int order, int node)
+{
+	struct page *tail, *p;
+	unsigned int idx;
+
+	idx = order - VMEMMAP_TAIL_MIN_ORDER;
+	tail = READ_ONCE(NODE_DATA(node)->vmemmap_tails[idx]);
+	if (tail)
+		return tail;
+
+	tail = alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, 0);
+	if (!tail)
+		return NULL;
+
+	p = page_to_virt(tail);
+	for (int i = 0; i < PAGE_SIZE / sizeof(struct page); i++)
+		prep_compound_tail(p + i, NULL, order);
+
+	if (cmpxchg(&NODE_DATA(node)->vmemmap_tails[idx], NULL, tail)) {
+		__free_page(tail);
+		tail = READ_ONCE(NODE_DATA(node)->vmemmap_tails[idx]);
+	}
+
+	return tail;
+}
+
 static int __hugetlb_vmemmap_optimize_folio(const struct hstate *h,
 					    struct folio *folio,
 					    struct list_head *vmemmap_pages,
@@ -520,6 +547,11 @@ static int __hugetlb_vmemmap_optimize_folio(const struct hstate *h,
 	if (!vmemmap_should_optimize_folio(h, folio))
 		return ret;
 
+	nid = folio_nid(folio);
+	vmemmap_tail = vmemmap_get_tail(h->order, nid);
+	if (!vmemmap_tail)
+		return -ENOMEM;
+
 	static_branch_inc(&hugetlb_optimize_vmemmap_key);
 
 	if (flags & VMEMMAP_SYNCHRONIZE_RCU)
@@ -537,7 +569,6 @@ static int __hugetlb_vmemmap_optimize_folio(const struct hstate *h,
 	 */
 	folio_set_hugetlb_vmemmap_optimized(folio);
 
-	nid = folio_nid(folio);
 	vmemmap_head = alloc_pages_node(nid, GFP_KERNEL, 0);
 	if (!vmemmap_head) {
 		ret = -ENOMEM;
@@ -548,7 +579,6 @@ static int __hugetlb_vmemmap_optimize_folio(const struct hstate *h,
 	list_add(&vmemmap_head->lru, vmemmap_pages);
 	memmap_pages_add(1);
 
-	vmemmap_tail = vmemmap_head;
 	vmemmap_start = (unsigned long)&folio->page;
 	vmemmap_end = vmemmap_start + hugetlb_vmemmap_size(h);
 
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 37522d6cb398..13bcf5562f1b 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -378,16 +378,44 @@ void vmemmap_wrprotect_hvo(unsigned long addr, unsigned long end,
 	}
 }
 
-/*
- * Populate vmemmap pages HVO-style. The first page contains the head
- * page and needed tail pages, the other ones are mirrors of the first
- * page.
- */
+static __meminit unsigned long vmemmap_get_tail(unsigned int order, int node)
+{
+	struct page *p, *tail;
+	unsigned int idx;
+
+	BUG_ON(order < VMEMMAP_TAIL_MIN_ORDER);
+	BUG_ON(order > MAX_FOLIO_ORDER);
+
+	idx = order - VMEMMAP_TAIL_MIN_ORDER;
+	tail = NODE_DATA(node)->vmemmap_tails[idx];
+	if (tail)
+		return page_to_pfn(tail);
+
+	p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
+	if (!p)
+		return 0;
+
+	for (int i = 0; i < PAGE_SIZE / sizeof(struct page); i++)
+		prep_compound_tail(p + i, NULL, order);
+
+	tail = virt_to_page(p);
+	NODE_DATA(node)->vmemmap_tails[idx] = tail;
+
+	return page_to_pfn(tail);
+}
+
 int __meminit vmemmap_populate_hvo(unsigned long addr, unsigned long end, int node,
 				   unsigned long headsize)
 {
+	unsigned long maddr, len, tail_pfn;
+	unsigned int order;
 	pte_t *pte;
-	unsigned long maddr;
+
+	len = end - addr;
+	order = ilog2(len * sizeof(struct page) / PAGE_SIZE);
+	tail_pfn = vmemmap_get_tail(order, node);
+	if (!tail_pfn)
+		return -ENOMEM;
 
 	for (maddr = addr; maddr < addr + headsize; maddr += PAGE_SIZE) {
 		pte = vmemmap_populate_address(maddr, node, NULL, -1, 0);
@@ -398,8 +426,7 @@ int __meminit vmemmap_populate_hvo(unsigned long addr, unsigned long end,
 	/*
 	 * Reuse the last page struct page mapped above for the rest.
 	 */
-	return vmemmap_populate_range(maddr, end, node, NULL,
-				      pte_pfn(ptep_get(pte)), 0);
+	return vmemmap_populate_range(maddr, end, node, NULL, tail_pfn, 0);
 }
 
 void __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
-- 
2.51.2

_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv