Date: Thu, 19 Feb 2026 15:54:56 +0000
From: Kiryl Shutsemau
To: "David Hildenbrand (Arm)"
Cc: lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
 x86@kernel.org, linux-kernel@vger.kernel.org, Andrew Morton,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 Lorenzo Stoakes, "Liam R. Howlett", Mike Rapoport, Matthew Wilcox,
 Johannes Weiner, Usama Arif
Subject: Re: [LSF/MM/BPF TOPIC] 64k (or 16k) base page size on x86
References: <915aafb3-d1ff-4ae9-8751-f78e333a1f5f@kernel.org>
In-Reply-To: <915aafb3-d1ff-4ae9-8751-f78e333a1f5f@kernel.org>

On Thu, Feb 19, 2026 at 04:39:34PM +0100, David Hildenbrand (Arm) wrote:
> On 2/19/26 16:08, Kiryl Shutsemau wrote:
> > No, there's no new hardware (that I know of). I want to explore what
> > page size means.
> >
> > The kernel uses the same value - PAGE_SIZE - for two things:
> >
> > - the order-0 buddy allocation size;
> >
> > - the granularity of virtual address space mapping;
> >
> > I think we can benefit from separating these two meanings and
> > allowing order-0 allocations to be larger than the virtual address
> > space covered by a PTE entry.
> >
> > The main motivation is scalability. Managing memory on
> > multi-terabyte machines in 4k is suboptimal, to say the least.
> >
> > Potential benefits of the approach (assuming 64k pages):
> >
> > - The order-0 page size cuts struct page overhead by a factor of 16,
> >   from ~1.6% of RAM to ~0.1%;
> >
> > - TLB wins on machines with TLB coalescing, as long as the mapping
> >   is naturally aligned;
> >
> > - An order-5 allocation is 2M, resulting in less pressure on the
> >   zone lock;
> >
> > - 1G pages are within reach for the buddy allocator - an order-14
> >   allocation. It can open the road to 1G THPs;
> >
> > - As with THP, fewer pages mean less pressure on the LRU lock;
> >
> > - ...
> >
> > The trade-off is memory waste (similar to what we have on
> > architectures with native 64k pages today) and complexity, mostly in
> > the core-MM code.
> >
> > == Design considerations ==
> >
> > I want to split PAGE_SIZE into two distinct values:
> >
> > - PTE_SIZE defines the virtual address space granularity;
> >
> > - PG_SIZE defines the size of the order-0 buddy allocation;
> >
> > PAGE_SIZE is only defined if PTE_SIZE == PG_SIZE. It will flag which
> > code requires conversion and keep existing code working while the
> > conversion is in progress.
> >
> > The same split happens for other page-related macros: mask, shift,
> > alignment helpers, etc.
> >
> > PFNs are in PTE_SIZE units.
> >
> > The buddy allocator and page cache (as well as all I/O) operate in
> > PG_SIZE units.
> >
> > Userspace mappings are maintained with PTE_SIZE granularity. No ABI
> > changes for userspace. But we might want to communicate PG_SIZE to
> > userspace to get optimal results for userspace that cares.
> >
> > PTE_SIZE granularity requires a substantial rework of page fault and
> > VMA handling:
> >
> > - A struct page pointer and pgprot_t are not enough to create a PTE
> >   entry. We also need the offset within the page we are creating the
> >   PTE for.
> >
> > - Since the VMA start can be aligned arbitrarily with respect to the
> >   underlying page, vma->vm_pgoff has to be changed to
> >   vma->vm_pteoff, which is in PTE_SIZE units.
> >
> > - The page fault handler needs to handle PTE_SIZE < PG_SIZE,
> >   including misaligned cases;
> >
> > Page faults into file mappings are relatively simple to handle, as
> > we always have the page cache to refer to: you can map only the part
> > of the page that fits in the page table, similarly to fault-around.
> >
> > Anonymous and file-CoW faults should also be simple as long as the
> > VMA is aligned to PG_SIZE both in the virtual address space and with
> > respect to vm_pgoff.
> > We might waste some memory on the ends of the VMA, but it is
> > tolerable.
> >
> > Misaligned anonymous and file-CoW faults are a pain. Specifically,
> > mapping pages across a page table boundary. In the worst case, a
> > page is mapped across a PGD entry boundary and the PTEs for the page
> > have to be put in two separate subtrees of page tables.
> >
> > A naive implementation would map different pages on the two sides of
> > a page table boundary and accept the waste of one page per page
> > table crossing. The hope is that misaligned mappings are rare, but
> > this is suboptimal.
> >
> > mremap(2) is the ultimate stress test for the design.
> >
> > On x86, page tables are allocated from the buddy allocator, and if
> > PG_SIZE is greater than 4 KB, we need a way to pack multiple page
> > tables into a single page. We could use the slab allocator for this,
> > but it would require relocating the page-table metadata out of
> > struct page.
>
> When discussing per-process page sizes with Ryan and Dev, I mentioned
> that having a larger emulated page size could be interesting for other
> architectures as well.
>
> That is, we would emulate a 64K page size on Intel for user space as
> well, but let the OS work with 4K pages.
>
> We'd only allocate+map large folios into user space + pagecache, but
> still allow for page tables etc. to not waste memory.
>
> So "most" of your allocations in the system would actually be at least
> 64k, reducing zone lock contention etc.

I am not convinced emulation would help zone lock contention. I expect
contention to be higher if the page allocator sees a mix of 4k and 64k
requests. It sounds like constant split/merge under the lock.

> It doesn't solve all the problems you wanted to tackle on your list
> (e.g., "struct page" overhead, which will be sorted out by memdescs).

I don't think we can serve 1G pages out of the buddy allocator with a
4k order-0. And without that, I don't see how to get to viable 1G THPs.

-- 
Kiryl Shutsemau / Kirill A. Shutemov