From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 20 Feb 2026 12:07:32 +0000
From: Kiryl Shutsemau
To: "David Hildenbrand (Arm)"
Cc: lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
 x86@kernel.org, linux-kernel@vger.kernel.org, Andrew Morton,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 Lorenzo Stoakes, "Liam R. Howlett", Mike Rapoport, Matthew Wilcox,
 Johannes Weiner, Usama Arif
Subject: Re: [LSF/MM/BPF TOPIC] 64k (or 16k) base page size on x86
References: <915aafb3-d1ff-4ae9-8751-f78e333a1f5f@kernel.org>
 <17c5708d-3859-49a5-814e-bc3564bc3ac6@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <17c5708d-3859-49a5-814e-bc3564bc3ac6@kernel.org>

On Fri, Feb 20, 2026 at 11:24:37AM +0100, David Hildenbrand (Arm) wrote:
> > > When discussing per-process page sizes with Ryan and Dev, I
> > > mentioned that having a larger emulated page size could be
> > > interesting for other architectures as well.
> > >
> > > That is, we would emulate a 64K page size on Intel for user space
> > > as well, but let the OS work with 4K pages.
> >
> > Just to clarify, do you want it to be enforced on the userspace ABI?
> > Like, all mappings are 64k-aligned?
>
> Right, see the proposal from Dev on the list.
>
> From the user-space POV, the page size would be 64K for these emulated
> processes. That is, VMAs must be suitably aligned etc.

Well, it will drastically limit adoption. We have too much legacy stuff
on x86.

> > > We'd only allocate+map large folios into user space + pagecache,
> > > but still allow for page tables etc. to not waste memory.
> >
> > Waste of memory for page tables is solvable and pretty
> > straightforward. Most such cases can be solved mechanically by
> > switching to slab.
>
> Well, yes, like Willy says, there are already similar custom solutions
> for s390x and ppc.
>
> Pasha talked recently about the memory waste of 16k kernel stacks and
> how we would want to reduce that to 4k.
> In your proposal, it would be 64k, unless you somehow manage to
> allocate multiple kernel stacks from the same 64k page. My head hurts
> thinking about whether that could work, maybe it could (no idea about
> guard pages in there, though).

Kernel stacks are allocated from vmalloc. I think mapping them with
sub-page granularity should be doable.

BTW, do you see any reason why a slab-allocated stack wouldn't work for
large base page sizes? There's no requirement for it to be aligned to a
page or PTE, right?

> Let's take a look at the history of page size usage on Arm (people can
> feel free to correct me):
>
> (1) Most distros were using 64k on Arm.
>
> (2) People realized that 64k was suboptimal for many use cases (memory
>     waste for stacks, pagecache, etc) and started to switch to 4k. I
>     remember that mostly HPC-centric users stuck to 64k, but there was
>     also demand from others to be able to stay on 64k.
>
> (3) Arm improved performance on a 4k kernel by adding cont-pte
>     support, trying to get closer to 64k native performance.
>
> (4) Achieving 64k native performance is hard, which is why per-process
>     page sizes are being explored to get the best out of both worlds
>     (use 64k page size only where it really matters for performance).
>
> Arm clearly has the added benefit of actually benefiting from hardware
> support for 64k.
>
> IIUC, what you are proposing feels a bit like traveling back in time
> when it comes to the memory waste problem that Arm users encountered.
>
> Where do you see the big difference to 64k on Arm in your proposal?
> Would you currently also be running 64k Arm in production, with the
> memory waste etc being acceptable?

That's the point. I don't see a big difference to 64k Arm. I want to
bring this option to x86: at some machine size it makes sense to trade
memory consumption for scalability. I am targeting machines with over
2TiB of RAM.

BTW, we do run 64k Arm in our fleet.
There are some growing pains, but it looks good in general. We have no
plans to switch to 4k (or 16k) at the moment. 512M THPs also look good
on some workloads.

-- 
Kiryl Shutsemau / Kirill A. Shutemov