From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Patch "arm64: mm: Don't remap pgtables for allocate vs populate" has been added to the 6.6-stable tree
To: Jim.Perrin@microsoft.com, ardb@kernel.org, catalin.marinas@arm.com, echanude@redhat.com, itaru.kitayama@fujitsu.com, jaboutboul@microsoft.com, linux-arm-kernel@lists.infradead.org, mark.rutland@arm.com, nmeyerhans@microsoft.com, ryan.roberts@arm.com, sgeorgejohn@microsoft.com, will@kernel.org
Cc:
From:
Date: Thu, 19 Mar 2026 12:12:51 +0100
In-Reply-To: <20260217133411.2881311-4-ryan.roberts@arm.com>
Message-ID: <2026031951-tropical-unsocial-d94b@gregkh>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit

This is a note to let you know that I've just added the patch titled

    arm64: mm: Don't remap pgtables for allocate vs populate

to the 6.6-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-mm-don-t-remap-pgtables-for-allocate-vs-populate.patch
and it can be found in the queue-6.6 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let us know about it.


>From stable+bounces-216827-greg=kroah.com@vger.kernel.org Tue Feb 17 14:34:55 2026
From: Ryan Roberts
Date: Tue, 17 Feb 2026 13:34:08 +0000
Subject: arm64: mm: Don't remap pgtables for allocate vs populate
To: stable@vger.kernel.org
Cc: Ryan Roberts, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Jack Aboutboul, Sharath George John, Noah Meyerhans, Jim Perrin, Mark Rutland, Itaru Kitayama, Eric Chanudet, Ard Biesheuvel
Message-ID: <20260217133411.2881311-4-ryan.roberts@arm.com>

From: Ryan Roberts

[ Upstream commit 0e9df1c905d8293d333ace86c13d147382f5caf9 ]

During linear map pgtable creation, each pgtable is fixmapped /
fixunmapped twice; once during allocation to zero the memory, and again
during population to write the entries. This means each table has 2 TLB
invalidations issued against it. Let's fix this so that each table is
only fixmapped/fixunmapped once, halving the number of TLBIs, and
improving performance.

Achieve this by separating allocation and initialization (zeroing) of
the page. The allocated page is now fixmapped directly by the walker and
initialized, before being populated and finally fixunmapped.

This approach keeps the change small, but has the side effect that late
allocations (using __get_free_page()) must also go through the generic
memory clearing routine. So let's tell __get_free_page() not to zero the
memory to avoid duplication.

Additionally this approach means that fixmap/fixunmap is still used for
late pgtable modifications. That's not technically needed since the
memory is all mapped in the linear map by that point. That's left as a
possible future optimization if found to be needed.
Execution time of map_mem(), which creates the kernel linear map page
tables, was measured on different machines with different RAM configs:

               | Apple M2 VM | Ampere Altra| Ampere Altra| Ampere Altra
               | VM, 16G     | VM, 64G     | VM, 256G    | Metal, 512G
---------------|-------------|-------------|-------------|-------------
               |   ms    (%) |   ms    (%) |   ms    (%) |    ms    (%)
---------------|-------------|-------------|-------------|-------------
before         |   11   (0%) |  161   (0%) |  656   (0%) |  1654   (0%)
after          |   10 (-11%) |  104 (-35%) |  438 (-33%) |  1223 (-26%)

Signed-off-by: Ryan Roberts
Suggested-by: Mark Rutland
Tested-by: Itaru Kitayama
Tested-by: Eric Chanudet
Reviewed-by: Mark Rutland
Reviewed-by: Ard Biesheuvel
Link: https://lore.kernel.org/r/20240412131908.433043-4-ryan.roberts@arm.com
Signed-off-by: Will Deacon
[ Ryan: Trivial backport ]
Signed-off-by: Ryan Roberts
Signed-off-by: Greg Kroah-Hartman
---
 arch/arm64/mm/mmu.c |   58 ++++++++++++++++++++++++++--------------------------
 1 file changed, 29 insertions(+), 29 deletions(-)

--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -106,28 +106,12 @@ EXPORT_SYMBOL(phys_mem_access_prot);
 static phys_addr_t __init early_pgtable_alloc(int shift)
 {
 	phys_addr_t phys;
-	void *ptr;
 
 	phys = memblock_phys_alloc_range(PAGE_SIZE, PAGE_SIZE, 0,
 					 MEMBLOCK_ALLOC_NOLEAKTRACE);
 	if (!phys)
 		panic("Failed to allocate page table page\n");
 
-	/*
-	 * The FIX_{PGD,PUD,PMD} slots may be in active use, but the FIX_PTE
-	 * slot will be free, so we can (ab)use the FIX_PTE slot to initialise
-	 * any level of table.
-	 */
-	ptr = pte_set_fixmap(phys);
-
-	memset(ptr, 0, PAGE_SIZE);
-
-	/*
-	 * Implicit barriers also ensure the zeroed page is visible to the page
-	 * table walker
-	 */
-	pte_clear_fixmap();
-
 	return phys;
 }
 
@@ -169,6 +153,14 @@ bool pgattr_change_is_safe(u64 old, u64 
 	return ((old ^ new) & ~mask) == 0;
 }
 
+static void init_clear_pgtable(void *table)
+{
+	clear_page(table);
+
+	/* Ensure the zeroing is observed by page table walks. */
+	dsb(ishst);
+}
+
 static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
 		     phys_addr_t phys, pgprot_t prot)
 {
@@ -211,12 +203,15 @@ static void alloc_init_cont_pte(pmd_t *p
 			pmdval |= PMD_TABLE_PXN;
 		BUG_ON(!pgtable_alloc);
 		pte_phys = pgtable_alloc(PAGE_SHIFT);
+		ptep = pte_set_fixmap(pte_phys);
+		init_clear_pgtable(ptep);
+		ptep += pte_index(addr);
 		__pmd_populate(pmdp, pte_phys, pmdval);
-		pmd = READ_ONCE(*pmdp);
+	} else {
+		BUG_ON(pmd_bad(pmd));
+		ptep = pte_set_fixmap_offset(pmdp, addr);
 	}
-	BUG_ON(pmd_bad(pmd));
 
-	ptep = pte_set_fixmap_offset(pmdp, addr);
 	do {
 		pgprot_t __prot = prot;
 
@@ -295,12 +290,15 @@ static void alloc_init_cont_pmd(pud_t *p
 			pudval |= PUD_TABLE_PXN;
 		BUG_ON(!pgtable_alloc);
 		pmd_phys = pgtable_alloc(PMD_SHIFT);
+		pmdp = pmd_set_fixmap(pmd_phys);
+		init_clear_pgtable(pmdp);
+		pmdp += pmd_index(addr);
 		__pud_populate(pudp, pmd_phys, pudval);
-		pud = READ_ONCE(*pudp);
+	} else {
+		BUG_ON(pud_bad(pud));
+		pmdp = pmd_set_fixmap_offset(pudp, addr);
 	}
-	BUG_ON(pud_bad(pud));
 
-	pmdp = pmd_set_fixmap_offset(pudp, addr);
 	do {
 		pgprot_t __prot = prot;
 
@@ -338,12 +336,15 @@ static void alloc_init_pud(pgd_t *pgdp, 
 			p4dval |= P4D_TABLE_PXN;
 		BUG_ON(!pgtable_alloc);
 		pud_phys = pgtable_alloc(PUD_SHIFT);
+		pudp = pud_set_fixmap(pud_phys);
+		init_clear_pgtable(pudp);
+		pudp += pud_index(addr);
 		__p4d_populate(p4dp, pud_phys, p4dval);
-		p4d = READ_ONCE(*p4dp);
+	} else {
+		BUG_ON(p4d_bad(p4d));
+		pudp = pud_set_fixmap_offset(p4dp, addr);
 	}
-	BUG_ON(p4d_bad(p4d));
 
-	pudp = pud_set_fixmap_offset(p4dp, addr);
 	do {
 		pud_t old_pud = READ_ONCE(*pudp);
 
@@ -425,11 +426,10 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdi
 
 static phys_addr_t __pgd_pgtable_alloc(int shift)
 {
-	void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL);
-	BUG_ON(!ptr);
+	/* Page is zeroed by init_clear_pgtable() so don't duplicate effort. */
+	void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL & ~__GFP_ZERO);
 
-	/* Ensure the zeroed page is visible to the page table walker */
-	dsb(ishst);
+	BUG_ON(!ptr);
 	return __pa(ptr);
 }
 

Patches currently in stable-queue which might be from ryan.roberts@arm.com are

queue-6.6/arm64-mm-don-t-remap-pgtables-per-cont-pte-pmd-block.patch
queue-6.6/arm64-mm-don-t-remap-pgtables-for-allocate-vs-populate.patch
queue-6.6/arm64-mm-batch-dsb-and-isb-when-populating-pgtables.patch