From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 20 Feb 2026 12:10:41 +0000
From: Kiryl Shutsemau
To: Kalesh Singh
Cc: "David
Hildenbrand (Arm)", lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
	x86@kernel.org, linux-kernel@vger.kernel.org, Andrew Morton,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Lorenzo Stoakes, "Liam R. Howlett", Mike Rapoport, Matthew Wilcox,
	Johannes Weiner, Usama Arif, android-mm, Adrian Barnaś,
	Mateusz Maćkowski, Steven Moreland
Subject: Re: [LSF/MM/BPF TOPIC] 64k (or 16k) base page size on x86
Message-ID:
References: <915aafb3-d1ff-4ae9-8751-f78e333a1f5f@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
In-Reply-To:

On Thu, Feb 19, 2026 at 03:24:37PM -0800, Kalesh Singh wrote:
> On Thu, Feb 19, 2026 at 7:39 AM David Hildenbrand (Arm) wrote:
> >
> > On 2/19/26 16:08, Kiryl Shutsemau wrote:
> > > No, there's no new hardware (that I know of). I want to explore
> > > what page size means.
> > >
> > > The kernel uses the same value - PAGE_SIZE - for two things:
> > >
> > > - the order-0 buddy allocation size;
> > >
> > > - the granularity of virtual address space mapping.
> > >
> > > I think we can benefit from separating these two meanings and
> > > allowing order-0 allocations to be larger than the virtual address
> > > space covered by a PTE entry.
> > >
> > > The main motivation is scalability. Managing memory on
> > > multi-terabyte machines in 4k is suboptimal, to say the least.
> > >
> > > Potential benefits of the approach (assuming 64k pages):
> > >
> > > - The order-0 page size cuts struct page overhead by a factor of 16.
> > > From ~1.6% of RAM to ~0.1%;
> > >
> > > - TLB wins on machines with TLB coalescing, as long as the mapping
> > > is naturally aligned;
> > >
> > > - An order-5 allocation is 2M, resulting in less pressure on the
> > > zone lock;
> > >
> > > - 1G pages are within reach for the buddy allocator as an order-14
> > > allocation. This can open the road to 1G THPs;
> > >
> > > - As with THP, fewer pages means less pressure on the LRU lock;
> > >
> > > - ...
> > >
> > > The trade-off is memory waste (similar to what we have on
> > > architectures with native 64k pages today) and complexity, mostly
> > > in the core-MM code.
> > >
> > > == Design considerations ==
> > >
> > > I want to split PAGE_SIZE into two distinct values:
> > >
> > > - PTE_SIZE defines the virtual address space granularity;
> > >
> > > - PG_SIZE defines the size of the order-0 buddy allocation.
> > >
> > > PAGE_SIZE is only defined if PTE_SIZE == PG_SIZE. This flags which
> > > code requires conversion and keeps existing code working while the
> > > conversion is in progress.
> > >
> > > The same split happens for other page-related macros: mask, shift,
> > > alignment helpers, etc.
> > >
> > > PFNs are in PTE_SIZE units.
> > >
> > > The buddy allocator and page cache (as well as all I/O) operate in
> > > PG_SIZE units.
> > >
> > > Userspace mappings are maintained with PTE_SIZE granularity. No ABI
> > > changes for userspace. But we might want to communicate PG_SIZE to
> > > userspace so that userspace that cares can get optimal results.
> > >
> > > PTE_SIZE granularity requires a substantial rework of page fault
> > > and VMA handling:
> > >
> > > - A struct page pointer and pgprot_t are not enough to create a PTE
> > > entry. We also need the offset within the page we are creating the
> > > PTE for.
> > >
> > > - Since the VMA start can be aligned arbitrarily with respect to
> > > the underlying page, vma->vm_pgoff has to be changed to
> > > vma->vm_pteoff, which is in PTE_SIZE units.
> > >
> > > - The page fault handler needs to handle PTE_SIZE < PG_SIZE,
> > > including misaligned cases.
> > >
> > > Page faults into file mappings are relatively simple to handle, as
> > > we always have the page cache to refer to. We can map only the part
> > > of the page that fits in the page table, similarly to fault-around.
> > >
> > > Anonymous and file-CoW faults should also be simple as long as the
> > > VMA is aligned to PG_SIZE both in the virtual address space and
> > > with respect to vm_pgoff. We might waste some memory at the ends of
> > > the VMA, but that is tolerable.
> > >
> > > Misaligned anonymous and file-CoW faults are a pain. Specifically,
> > > mapping pages across a page table boundary. In the worst case, a
> > > page is mapped across a PGD entry boundary and the PTEs for the
> > > page have to be put in two separate subtrees of page tables.
> > >
> > > A naive implementation would map different pages on different sides
> > > of a page table boundary and accept the waste of one page per page
> > > table crossing. The hope is that misaligned mappings are rare, but
> > > this is suboptimal.
> > >
> > > mremap(2) is the ultimate stress test for the design.
> > >
> > > On x86, page tables are allocated from the buddy allocator, and if
> > > PG_SIZE is greater than 4 KB, we need a way to pack multiple page
> > > tables into a single page. We could use the slab allocator for
> > > this, but it would require relocating the page-table metadata out
> > > of struct page.
> >
> > When discussing per-process page sizes with Ryan and Dev, I
> > mentioned that having a larger emulated page size could be
> > interesting for other architectures as well.
> >
> > That is, we would emulate a 64K page size on Intel for user space as
> > well, but let the OS work with 4K pages.
> >
> > We'd only allocate+map large folios into user space + pagecache, but
> > still allow for page tables etc. to not waste memory.
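[The PTE_SIZE/PG_SIZE split and the "map only what fits in the page
table" idea above could be sketched in plain userspace C roughly as
follows. All helper names, the shift values, and the 2 MB page-table
span are illustrative assumptions; only PTE_SIZE and PG_SIZE come from
the proposal itself.]

```c
#include <assert.h>

#define PTE_SHIFT   12                      /* 4 KB mapping granularity */
#define PG_SHIFT    16                      /* 64 KB order-0 allocation */
#define PTE_SIZE    (1UL << PTE_SHIFT)
#define PG_SIZE     (1UL << PG_SHIFT)
#define PTES_PER_PG (PG_SIZE / PTE_SIZE)    /* 16 PTEs cover one page */
#define PMD_SIZE    (2UL * 1024 * 1024)     /* one page table maps 2 MB */

/*
 * The extra datum a PTE now needs besides the struct page pointer and
 * pgprot_t: which PTE_SIZE slice of a page starting at @pg_start the
 * faulting address @addr falls into.
 */
static unsigned long pte_index_in_pg(unsigned long pg_start,
				     unsigned long addr)
{
	return (addr - pg_start) >> PTE_SHIFT;
}

/*
 * Fault-around-style clamp: how many of the page's PTEs can be
 * installed starting at @map_start without crossing the page-table
 * boundary (the misaligned case described as a pain above).
 */
static unsigned long ptes_to_map(unsigned long map_start)
{
	unsigned long pmd_end = (map_start + PMD_SIZE) & ~(PMD_SIZE - 1);
	unsigned long map_end = map_start + PG_SIZE;

	if (map_end > pmd_end)
		map_end = pmd_end;	/* rest goes in the next table */
	return (map_end - map_start) / PTE_SIZE;
}
```

[For an aligned mapping, ptes_to_map() returns all 16 PTEs; for a
mapping starting 16 KB before a 2 MB boundary it returns 4, with the
remaining 12 slices belonging to the next page table's subtree.]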
> >
> > So "most" of your allocations in the system would actually be at
> > least 64k, reducing zone lock contention etc.
> >
> > It doesn't solve all the problems you wanted to tackle on your list
> > (e.g., "struct page" overhead, which will be sorted out by memdescs).
>
> Hi Kiryl,
>
> I'd be interested to discuss this at LSFMM.
>
> On Android, we have a separate but related use case: we emulate the
> userspace page size on x86, primarily to enable app developers to
> conduct compatibility testing of their apps for 16KB Android devices.
> [1]
>
> It mainly works by enforcing a larger granularity on the VMAs to
> emulate a userspace page size, somewhat similar to what David
> mentioned, while the underlying kernel still operates on a 4KB
> granularity. [2]
>
> IIUC, the current design would not enforce the larger granularity /
> alignment for VMAs, to avoid breaking ABI. However, I'd be interested
> to discuss whether it can be extended to cover this use case as well.

I don't want to break ABI, but I might add a knob (maybe via
personality(2)?) to enforce the larger granularity and see what breaks.

In general, I would prefer to advertise a new value to userspace that
means "preferred virtual address space granularity".

> [1] https://developer.android.com/guide/practices/page-sizes#16kb-emulator
> [2] https://source.android.com/docs/core/architecture/16kb-page-size/getting-started-cf-x86-64-pgagnostic
>
> Thanks,
> Kalesh
>
> > --
> > Cheers,
> >
> > David

-- 
Kiryl Shutsemau / Kirill A. Shutemov
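[The emulated-granularity use case Kalesh describes boils down to
rounding every VMA boundary up to the emulated page size before the real
4 KB kernel sees it. A minimal sketch, where EMU_PAGE_SIZE and the
helper are hypothetical and not an existing kernel interface:]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical emulated page size (16 KB, as in the Android setup). */
#define EMU_PAGE_SIZE (16UL * 1024)

/*
 * Round a requested mapping length up to the emulated granularity, so
 * every VMA end lands where a kernel with a native 16 KB page size
 * would have placed it.
 */
static size_t emu_round_len(size_t len)
{
	return (len + EMU_PAGE_SIZE - 1) & ~(EMU_PAGE_SIZE - 1);
}
```

[A length of 1 byte becomes 16384, and 20000 becomes 32768; apps that
silently assume 4 KB granularity break visibly under such rounding,
which is exactly what the compatibility testing is after.]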