* [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
@ 2025-05-31  2:41 Yang Shi
  2025-05-31  2:41 ` [PATCH 1/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
                   ` (4 more replies)
  0 siblings, 5 replies; 34+ messages in thread
From: Yang Shi @ 2025-05-31  2:41 UTC (permalink / raw)
  To: ryan.roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: yang, linux-arm-kernel, linux-kernel


Changelog
=========
v4:
  * Rebased to v6.15-rc4.
  * Based on Miko's latest BBML2 cpufeature patch (https://lore.kernel.org/linux-arm-kernel/20250428153514.55772-4-miko.lenczewski@arm.com/).
  * Keep block mappings rather than splitting to PTEs if it is fully contained
    per Ryan.
  * Return -EINVAL if page table allocation failed instead of BUG_ON per Ryan.
  * When page table allocation failed, return -1 instead of 0 per Ryan.
  * Allocate page table with GFP_ATOMIC for repainting per Ryan.
  * Use idmap to wait until repainting is done per Ryan.
  * Some minor fixes per the discussion for v3.
  * Some clean up to reduce redundant code.

v3:
  * Rebased to v6.14-rc4.
  * Based on Miko's BBML2 cpufeature patch (https://lore.kernel.org/linux-arm-kernel/20250228182403.6269-3-miko.lenczewski@arm.com/).
    Also included in this series in order to have the complete patchset.
  * Enhanced __create_pgd_mapping() to handle split as well per Ryan.
  * Supported CONT mappings per Ryan.
  * Supported asymmetric system by splitting kernel linear mapping if such
    system is detected per Ryan. I don't have such system to test, so the
    testing is done by hacking kernel to call linear mapping repainting
    unconditionally. The linear mapping doesn't have any block and cont
    mappings after booting.

RFC v2:
  * Used allowlist to advertise BBM lv2 on the CPUs which can handle TLB
    conflict gracefully per Will Deacon
  * Rebased onto v6.13-rc5
  * https://lore.kernel.org/linux-arm-kernel/20250103011822.1257189-1-yang@os.amperecomputing.com/

v3: https://lore.kernel.org/linux-arm-kernel/20250304222018.615808-1-yang@os.amperecomputing.com/
RFC v2: https://lore.kernel.org/linux-arm-kernel/20250103011822.1257189-1-yang@os.amperecomputing.com/
RFC v1: https://lore.kernel.org/lkml/20241118181711.962576-1-yang@os.amperecomputing.com/

Description
===========
When rodata=full, the kernel linear mapping is mapped at PTE level due to
arm64's break-before-make rule.

Using PTE entries for the kernel linear map causes a number of problems:
  - performance degradation
  - more TLB pressure
  - memory waste for kernel page table

These issues can be avoided by specifying rodata=on on the kernel command
line, but this disables the alias checks on page table permissions and
therefore somewhat compromises security.

With FEAT_BBM level 2 support it is no longer necessary to invalidate the
page table entry when changing page sizes.  This allows the kernel to
split large mappings after boot is complete.

This series adds support for splitting large mappings when FEAT_BBM level 2
is available and rodata=full is used. This functionality will be used
when modifying page permissions for individual page frames.

Without FEAT_BBM level 2 we will keep the kernel linear map using PTEs
only.

If the system is asymmetric, the kernel linear mapping may be repainted once
the BBML2 capability is finalized on all CPUs.  See patch #4 for more details.

We saw significant performance increases in some benchmarks with
rodata=full without compromising the security features of the kernel.

Testing
=======
The tests were done on an AmpereOne machine (192 cores, 1P) with 256GB memory,
4K page size and 48-bit VA.

Function test (4K/16K/64K page size)
  - Kernel boot.  The kernel needs to change linear mapping permissions at
    boot; if the patches didn't work, the kernel typically failed to boot.
  - Module stress test from stress-ng.  Loading kernel modules changes
    permissions for the linear mapping.
  - A test kernel module which allocates 80% of total memory via vmalloc(),
    changes the vmalloc area permission to RO (this also changes the linear
    mapping permission to RO), then changes it back before vfree().  Then
    launch a VM which consumes almost all physical memory.  A minimal sketch
    of such a module is shown after this list.
  - VM with the patchset applied in the guest kernel too.
  - Kernel build in a VM whose guest kernel has this series applied.
  - rodata=on.  Make sure the other rodata modes are not broken.
  - Boot on a machine which doesn't support BBML2.
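
A minimal sketch of such a test module (hypothetical and simplified to a
single page rather than 80% of memory; set_memory_ro()/set_memory_rw() also
change the linear map alias when rodata=full):

#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/set_memory.h>

static void *buf;

static int __init perm_test_init(void)
{
	/* One page stands in for the large vmalloc() allocation. */
	buf = vmalloc(PAGE_SIZE);
	if (!buf)
		return -ENOMEM;

	/*
	 * Make the vmalloc alias RO; with rodata=full the linear map alias
	 * of the backing page is made RO as well, which exercises the split.
	 */
	set_memory_ro((unsigned long)buf, 1);
	return 0;
}

static void __exit perm_test_exit(void)
{
	/* Restore RW before freeing, as in the test described above. */
	set_memory_rw((unsigned long)buf, 1);
	vfree(buf);
}

module_init(perm_test_init);
module_exit(perm_test_exit);
MODULE_LICENSE("GPL");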

Performance
===========
Memory consumption
Before:
MemTotal:       258988984 kB
MemFree:        254821700 kB

After:
MemTotal:       259505132 kB
MemFree:        255410264 kB

Around 500MB more memory is free to use.  The larger the machine, the
more memory is saved.

Performance benchmarking
* Memcached
We saw performance degradation when running the Memcached benchmark with
rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
With this patchset, ops/sec increased by around 3.5% and P99 latency
dropped by around 9.6%.
The gain mainly came from reduced kernel TLB misses; the kernel TLB MPKI
dropped by 28.5%.

The benchmark data is now on par with rodata=on too.

* Disk encryption (dm-crypt) benchmark
Ran the fio benchmark with the command below on a 128G ramdisk (ext4) with
disk encryption (dm-crypt with no read/write workqueues).
fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
    --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
    --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
    --name=iops-test-job --eta-newline=1 --size 100G

IOPS increased by 90% - 150% (the variance is high, but the worst result
with the patches is around 90% higher than the best result without them).
Bandwidth increased and the average completion latency (clat) dropped
proportionally.

* Sequential file read
Read a 100G file sequentially on XFS (xfs_io read with the page cache
populated).  Bandwidth increased by 150%.


Yang Shi (4):
      arm64: cpufeature: add AmpereOne to BBML2 allow list
      arm64: mm: make __create_pgd_mapping() and helpers non-void
      arm64: mm: support large block mapping when rodata=full
      arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs

 arch/arm64/include/asm/cpufeature.h |  26 +++++++
 arch/arm64/include/asm/mmu.h        |   4 +
 arch/arm64/include/asm/pgtable.h    |  12 ++-
 arch/arm64/kernel/cpufeature.c      |  30 ++++++--
 arch/arm64/mm/mmu.c                 | 505 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------
 arch/arm64/mm/pageattr.c            |  37 +++++++--
 arch/arm64/mm/proc.S                |  41 ++++++++++
 7 files changed, 585 insertions(+), 70 deletions(-)




* [PATCH 1/4] arm64: cpufeature: add AmpereOne to BBML2 allow list
  2025-05-31  2:41 [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
@ 2025-05-31  2:41 ` Yang Shi
  2025-05-31  2:41 ` [PATCH 2/4] arm64: mm: make __create_pgd_mapping() and helpers non-void Yang Shi
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 34+ messages in thread
From: Yang Shi @ 2025-05-31  2:41 UTC (permalink / raw)
  To: ryan.roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: yang, linux-arm-kernel, linux-kernel

AmpereOne supports BBML2 without conflict abort, so add it to the allow list.

Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
 arch/arm64/kernel/cpufeature.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 327eeabbb449..25e1fbfab6a3 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2224,6 +2224,8 @@ static bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
 	static const struct midr_range supports_bbml2_noabort_list[] = {
 		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
 		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
+		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
+		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
 		{}
 	};
 
-- 
2.48.1




* [PATCH 2/4] arm64: mm: make __create_pgd_mapping() and helpers non-void
  2025-05-31  2:41 [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
  2025-05-31  2:41 ` [PATCH 1/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
@ 2025-05-31  2:41 ` Yang Shi
  2025-06-16 10:04   ` Ryan Roberts
  2025-05-31  2:41 ` [PATCH 3/4] arm64: mm: support large block mapping when rodata=full Yang Shi
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 34+ messages in thread
From: Yang Shi @ 2025-05-31  2:41 UTC (permalink / raw)
  To: ryan.roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: yang, linux-arm-kernel, linux-kernel

A later patch will enhance __create_pgd_mapping() and related helpers to
split the kernel linear mapping, which requires a return value.  So make
__create_pgd_mapping() and its helpers non-void functions.

Also move the BUG_ON() out of the page table alloc helper since failing to
split the kernel linear mapping is not fatal and can be handled by the
callers in the later patch.  Keep a BUG_ON() after
__create_pgd_mapping_locked() returns so the behavior of the current
callers stays intact.

Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
 arch/arm64/kernel/cpufeature.c |  10 ++-
 arch/arm64/mm/mmu.c            | 130 +++++++++++++++++++++++----------
 2 files changed, 99 insertions(+), 41 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 25e1fbfab6a3..e879bfcf853b 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1933,9 +1933,9 @@ static bool has_pmuv3(const struct arm64_cpu_capabilities *entry, int scope)
 #define KPTI_NG_TEMP_VA		(-(1UL << PMD_SHIFT))
 
 extern
-void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
-			     phys_addr_t size, pgprot_t prot,
-			     phys_addr_t (*pgtable_alloc)(int), int flags);
+int create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
+			    phys_addr_t size, pgprot_t prot,
+			    phys_addr_t (*pgtable_alloc)(int), int flags);
 
 static phys_addr_t __initdata kpti_ng_temp_alloc;
 
@@ -1957,6 +1957,7 @@ static int __init __kpti_install_ng_mappings(void *__unused)
 	u64 kpti_ng_temp_pgd_pa = 0;
 	pgd_t *kpti_ng_temp_pgd;
 	u64 alloc = 0;
+	int err;
 
 	if (levels == 5 && !pgtable_l5_enabled())
 		levels = 4;
@@ -1986,9 +1987,10 @@ static int __init __kpti_install_ng_mappings(void *__unused)
 		// covers the PTE[] page itself, the remaining entries are free
 		// to be used as a ad-hoc fixmap.
 		//
-		create_kpti_ng_temp_pgd(kpti_ng_temp_pgd, __pa(alloc),
+		err = create_kpti_ng_temp_pgd(kpti_ng_temp_pgd, __pa(alloc),
 					KPTI_NG_TEMP_VA, PAGE_SIZE, PAGE_KERNEL,
 					kpti_ng_pgd_alloc, 0);
+		BUG_ON(err);
 	}
 
 	cpu_install_idmap();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ea6695d53fb9..775c0536b194 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -189,15 +189,16 @@ static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
 	} while (ptep++, addr += PAGE_SIZE, addr != end);
 }
 
-static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
-				unsigned long end, phys_addr_t phys,
-				pgprot_t prot,
-				phys_addr_t (*pgtable_alloc)(int),
-				int flags)
+static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
+			       unsigned long end, phys_addr_t phys,
+			       pgprot_t prot,
+			       phys_addr_t (*pgtable_alloc)(int),
+			       int flags)
 {
 	unsigned long next;
 	pmd_t pmd = READ_ONCE(*pmdp);
 	pte_t *ptep;
+	int ret = 0;
 
 	BUG_ON(pmd_sect(pmd));
 	if (pmd_none(pmd)) {
@@ -208,6 +209,10 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 			pmdval |= PMD_TABLE_PXN;
 		BUG_ON(!pgtable_alloc);
 		pte_phys = pgtable_alloc(PAGE_SHIFT);
+		if (pte_phys == -1) {
+			ret = -ENOMEM;
+			goto out;
+		}
 		ptep = pte_set_fixmap(pte_phys);
 		init_clear_pgtable(ptep);
 		ptep += pte_index(addr);
@@ -239,13 +244,17 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 	 * walker.
 	 */
 	pte_clear_fixmap();
+
+out:
+	return ret;
 }
 
-static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
-		     phys_addr_t phys, pgprot_t prot,
-		     phys_addr_t (*pgtable_alloc)(int), int flags)
+static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
+		    phys_addr_t phys, pgprot_t prot,
+		    phys_addr_t (*pgtable_alloc)(int), int flags)
 {
 	unsigned long next;
+	int ret = 0;
 
 	do {
 		pmd_t old_pmd = READ_ONCE(*pmdp);
@@ -264,22 +273,27 @@ static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
 			BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
 						      READ_ONCE(pmd_val(*pmdp))));
 		} else {
-			alloc_init_cont_pte(pmdp, addr, next, phys, prot,
+			ret = alloc_init_cont_pte(pmdp, addr, next, phys, prot,
 					    pgtable_alloc, flags);
+			if (ret)
+				break;
 
 			BUG_ON(pmd_val(old_pmd) != 0 &&
 			       pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
 		}
 		phys += next - addr;
 	} while (pmdp++, addr = next, addr != end);
+
+	return ret;
 }
 
-static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
-				unsigned long end, phys_addr_t phys,
-				pgprot_t prot,
-				phys_addr_t (*pgtable_alloc)(int), int flags)
+static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
+			       unsigned long end, phys_addr_t phys,
+			       pgprot_t prot,
+			       phys_addr_t (*pgtable_alloc)(int), int flags)
 {
 	unsigned long next;
+	int ret = 0;
 	pud_t pud = READ_ONCE(*pudp);
 	pmd_t *pmdp;
 
@@ -295,6 +309,10 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 			pudval |= PUD_TABLE_PXN;
 		BUG_ON(!pgtable_alloc);
 		pmd_phys = pgtable_alloc(PMD_SHIFT);
+		if (pmd_phys == -1) {
+			ret = -ENOMEM;
+			goto out;
+		}
 		pmdp = pmd_set_fixmap(pmd_phys);
 		init_clear_pgtable(pmdp);
 		pmdp += pmd_index(addr);
@@ -314,21 +332,27 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 		    (flags & NO_CONT_MAPPINGS) == 0)
 			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
 
-		init_pmd(pmdp, addr, next, phys, __prot, pgtable_alloc, flags);
+		ret = init_pmd(pmdp, addr, next, phys, __prot, pgtable_alloc, flags);
+		if (ret)
+			break;
 
 		pmdp += pmd_index(next) - pmd_index(addr);
 		phys += next - addr;
 	} while (addr = next, addr != end);
 
 	pmd_clear_fixmap();
+
+out:
+	return ret;
 }
 
-static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
-			   phys_addr_t phys, pgprot_t prot,
-			   phys_addr_t (*pgtable_alloc)(int),
-			   int flags)
+static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
+			  phys_addr_t phys, pgprot_t prot,
+			  phys_addr_t (*pgtable_alloc)(int),
+			  int flags)
 {
 	unsigned long next;
+	int ret = 0;
 	p4d_t p4d = READ_ONCE(*p4dp);
 	pud_t *pudp;
 
@@ -340,6 +364,10 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
 			p4dval |= P4D_TABLE_PXN;
 		BUG_ON(!pgtable_alloc);
 		pud_phys = pgtable_alloc(PUD_SHIFT);
+		if (pud_phys == -1) {
+			ret = -ENOMEM;
+			goto out;
+		}
 		pudp = pud_set_fixmap(pud_phys);
 		init_clear_pgtable(pudp);
 		pudp += pud_index(addr);
@@ -369,8 +397,10 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
 			BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
 						      READ_ONCE(pud_val(*pudp))));
 		} else {
-			alloc_init_cont_pmd(pudp, addr, next, phys, prot,
+			ret = alloc_init_cont_pmd(pudp, addr, next, phys, prot,
 					    pgtable_alloc, flags);
+			if (ret)
+				break;
 
 			BUG_ON(pud_val(old_pud) != 0 &&
 			       pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
@@ -379,14 +409,18 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
 	} while (pudp++, addr = next, addr != end);
 
 	pud_clear_fixmap();
+
+out:
+	return ret;
 }
 
-static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
-			   phys_addr_t phys, pgprot_t prot,
-			   phys_addr_t (*pgtable_alloc)(int),
-			   int flags)
+static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
+			  phys_addr_t phys, pgprot_t prot,
+			  phys_addr_t (*pgtable_alloc)(int),
+			  int flags)
 {
 	unsigned long next;
+	int ret = 0;
 	pgd_t pgd = READ_ONCE(*pgdp);
 	p4d_t *p4dp;
 
@@ -398,6 +432,10 @@ static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
 			pgdval |= PGD_TABLE_PXN;
 		BUG_ON(!pgtable_alloc);
 		p4d_phys = pgtable_alloc(P4D_SHIFT);
+		if (p4d_phys == -1) {
+			ret = -ENOMEM;
+			goto out;
+		}
 		p4dp = p4d_set_fixmap(p4d_phys);
 		init_clear_pgtable(p4dp);
 		p4dp += p4d_index(addr);
@@ -412,8 +450,10 @@ static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
 
 		next = p4d_addr_end(addr, end);
 
-		alloc_init_pud(p4dp, addr, next, phys, prot,
+		ret = alloc_init_pud(p4dp, addr, next, phys, prot,
 			       pgtable_alloc, flags);
+		if (ret)
+			break;
 
 		BUG_ON(p4d_val(old_p4d) != 0 &&
 		       p4d_val(old_p4d) != READ_ONCE(p4d_val(*p4dp)));
@@ -422,23 +462,27 @@ static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
 	} while (p4dp++, addr = next, addr != end);
 
 	p4d_clear_fixmap();
+
+out:
+	return ret;
 }
 
-static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
-					unsigned long virt, phys_addr_t size,
-					pgprot_t prot,
-					phys_addr_t (*pgtable_alloc)(int),
-					int flags)
+static int __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
+				       unsigned long virt, phys_addr_t size,
+				       pgprot_t prot,
+				       phys_addr_t (*pgtable_alloc)(int),
+				       int flags)
 {
 	unsigned long addr, end, next;
 	pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
+	int ret = 0;
 
 	/*
 	 * If the virtual and physical address don't have the same offset
 	 * within a page, we cannot map the region as the caller expects.
 	 */
 	if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
-		return;
+		return -EINVAL;
 
 	phys &= PAGE_MASK;
 	addr = virt & PAGE_MASK;
@@ -446,10 +490,14 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
 
 	do {
 		next = pgd_addr_end(addr, end);
-		alloc_init_p4d(pgdp, addr, next, phys, prot, pgtable_alloc,
+		ret = alloc_init_p4d(pgdp, addr, next, phys, prot, pgtable_alloc,
 			       flags);
+		if (ret)
+			break;
 		phys += next - addr;
 	} while (pgdp++, addr = next, addr != end);
+
+	return ret;
 }
 
 static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
@@ -458,17 +506,20 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
 				 phys_addr_t (*pgtable_alloc)(int),
 				 int flags)
 {
+	int err;
+
 	mutex_lock(&fixmap_lock);
-	__create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
-				    pgtable_alloc, flags);
+	err = __create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
+					  pgtable_alloc, flags);
+	BUG_ON(err);
 	mutex_unlock(&fixmap_lock);
 }
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 extern __alias(__create_pgd_mapping_locked)
-void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
-			     phys_addr_t size, pgprot_t prot,
-			     phys_addr_t (*pgtable_alloc)(int), int flags);
+int create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
+			    phys_addr_t size, pgprot_t prot,
+			    phys_addr_t (*pgtable_alloc)(int), int flags);
 #endif
 
 static phys_addr_t __pgd_pgtable_alloc(int shift)
@@ -476,13 +527,17 @@ static phys_addr_t __pgd_pgtable_alloc(int shift)
 	/* Page is zeroed by init_clear_pgtable() so don't duplicate effort. */
 	void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL & ~__GFP_ZERO);
 
-	BUG_ON(!ptr);
+	if (!ptr)
+		return -1;
+
 	return __pa(ptr);
 }
 
 static phys_addr_t pgd_pgtable_alloc(int shift)
 {
 	phys_addr_t pa = __pgd_pgtable_alloc(shift);
+	if (pa == -1)
+		goto out;
 	struct ptdesc *ptdesc = page_ptdesc(phys_to_page(pa));
 
 	/*
@@ -498,6 +553,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
 	else if (shift == PMD_SHIFT)
 		BUG_ON(!pagetable_pmd_ctor(ptdesc));
 
+out:
 	return pa;
 }
 
-- 
2.48.1




* [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-05-31  2:41 [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
  2025-05-31  2:41 ` [PATCH 1/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
  2025-05-31  2:41 ` [PATCH 2/4] arm64: mm: make __create_pgd_mapping() and helpers non-void Yang Shi
@ 2025-05-31  2:41 ` Yang Shi
  2025-06-16 11:58   ` Ryan Roberts
  2025-06-16 16:24   ` Ryan Roberts
  2025-05-31  2:41 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
  2025-06-13 17:21 ` [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
  4 siblings, 2 replies; 34+ messages in thread
From: Yang Shi @ 2025-05-31  2:41 UTC (permalink / raw)
  To: ryan.roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: yang, linux-arm-kernel, linux-kernel

When rodata=full is specified, the kernel linear mapping has to be mapped
at PTE level since large block mappings can't be split due to the
break-before-make rule on ARM64.

This results in a couple of problems:
  - performance degradation
  - more TLB pressure
  - memory waste for kernel page table

With FEAT_BBM level 2 support, splitting a large block mapping into
smaller ones no longer requires making the page table entry invalid.
This allows the kernel to split large block mappings on the fly.

Add kernel page table split support and use large block mappings by
default when FEAT_BBM level 2 is supported and rodata=full.  When
changing permissions for the kernel linear mapping, the page table will
be split into smaller mappings.

Machines without FEAT_BBM level 2 will fall back to a PTE-mapped kernel
linear mapping when rodata=full.

With this we saw a significant performance boost in some benchmarks and
much less memory consumption on my AmpereOne machine (192 cores, 1P) with
256GB memory.

* Memory use after boot
Before:
MemTotal:       258988984 kB
MemFree:        254821700 kB

After:
MemTotal:       259505132 kB
MemFree:        255410264 kB

Around 500MB more memory is free to use.  The larger the machine, the
more memory is saved.

* Memcached
We saw performance degradation when running the Memcached benchmark with
rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
With this patchset, ops/sec increased by around 3.5% and P99 latency
dropped by around 9.6%.
The gain mainly came from reduced kernel TLB misses; the kernel TLB MPKI
dropped by 28.5%.

The benchmark data is now on par with rodata=on too.

* Disk encryption (dm-crypt) benchmark
Ran the fio benchmark with the command below on a 128G ramdisk (ext4) with
disk encryption (dm-crypt).
fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
    --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
    --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
    --name=iops-test-job --eta-newline=1 --size 100G

IOPS increased by 90% - 150% (the variance is high, but the worst result
with the patches is around 90% higher than the best result without them).
Bandwidth increased and the average completion latency (clat) dropped
proportionally.

* Sequential file read
Read a 100G file sequentially on XFS (xfs_io read with the page cache
populated).  Bandwidth increased by 150%.

Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
 arch/arm64/include/asm/cpufeature.h |  26 +++
 arch/arm64/include/asm/mmu.h        |   1 +
 arch/arm64/include/asm/pgtable.h    |  12 +-
 arch/arm64/kernel/cpufeature.c      |   2 +-
 arch/arm64/mm/mmu.c                 | 269 +++++++++++++++++++++++++---
 arch/arm64/mm/pageattr.c            |  37 +++-
 6 files changed, 319 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 8f36ffa16b73..a95806980298 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -1053,6 +1053,32 @@ static inline bool cpu_has_lpa2(void)
 #endif
 }
 
+bool cpu_has_bbml2_noabort(unsigned int cpu_midr);
+
+static inline bool has_nobbml2_override(void)
+{
+	u64 mmfr2;
+	unsigned int bbm;
+
+	mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
+	mmfr2 &= ~id_aa64mmfr2_override.mask;
+	mmfr2 |= id_aa64mmfr2_override.val;
+	bbm = cpuid_feature_extract_unsigned_field(mmfr2,
+						   ID_AA64MMFR2_EL1_BBM_SHIFT);
+	return bbm == 0;
+}
+
+/*
+ * Called at early boot stage on boot CPU before cpu info and cpu feature
+ * are ready.
+ */
+static inline bool bbml2_noabort_available(void)
+{
+	return IS_ENABLED(CONFIG_ARM64_BBML2_NOABORT) &&
+	       cpu_has_bbml2_noabort(read_cpuid_id()) &&
+	       !has_nobbml2_override();
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 6e8aa8e72601..2693d63bf837 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 			       pgprot_t prot, bool page_mappings_only);
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
 extern void mark_linear_text_alias_ro(void);
+extern int split_linear_mapping(unsigned long start, unsigned long end);
 
 /*
  * This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d3b538be1500..bf3cef31d243 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -293,6 +293,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
 	return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
 }
 
+static inline pmd_t pmd_mknoncont(pmd_t pmd)
+{
+	return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
+}
+
 static inline pte_t pte_mkdevmap(pte_t pte)
 {
 	return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
@@ -769,7 +774,7 @@ static inline bool in_swapper_pgdir(void *addr)
 	        ((unsigned long)swapper_pg_dir & PAGE_MASK);
 }
 
-static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
+static inline void __set_pmd_nosync(pmd_t *pmdp, pmd_t pmd)
 {
 #ifdef __PAGETABLE_PMD_FOLDED
 	if (in_swapper_pgdir(pmdp)) {
@@ -779,6 +784,11 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 #endif /* __PAGETABLE_PMD_FOLDED */
 
 	WRITE_ONCE(*pmdp, pmd);
+}
+
+static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
+{
+	__set_pmd_nosync(pmdp, pmd);
 
 	if (pmd_valid(pmd)) {
 		dsb(ishst);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e879bfcf853b..5fc2a4a804de 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2209,7 +2209,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
 	return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
 }
 
-static bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
+bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
 {
 	/*
 	 * We want to allow usage of bbml2 in as wide a range of kernel contexts
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 775c0536b194..4c5d3aa35d62 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -45,6 +45,7 @@
 #define NO_BLOCK_MAPPINGS	BIT(0)
 #define NO_CONT_MAPPINGS	BIT(1)
 #define NO_EXEC_MAPPINGS	BIT(2)	/* assumes FEAT_HPDS is not used */
+#define SPLIT_MAPPINGS		BIT(3)
 
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
@@ -166,12 +167,91 @@ static void init_clear_pgtable(void *table)
 	dsb(ishst);
 }
 
+static void split_cont_pte(pte_t *ptep)
+{
+	pte_t *_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
+	pte_t _pte;
+
+	for (int i = 0; i < CONT_PTES; i++, _ptep++) {
+		_pte = READ_ONCE(*_ptep);
+		_pte = pte_mknoncont(_pte);
+		__set_pte_nosync(_ptep, _pte);
+	}
+
+	dsb(ishst);
+	isb();
+}
+
+static void split_cont_pmd(pmd_t *pmdp)
+{
+	pmd_t *_pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
+	pmd_t _pmd;
+
+	for (int i = 0; i < CONT_PMDS; i++, _pmdp++) {
+		_pmd = READ_ONCE(*_pmdp);
+		_pmd = pmd_mknoncont(_pmd);
+		set_pmd(_pmdp, _pmd);
+	}
+}
+
+static void split_pmd(pmd_t pmd, phys_addr_t pte_phys, int flags)
+{
+	pte_t *ptep;
+	unsigned long pfn;
+	pgprot_t prot;
+
+	pfn = pmd_pfn(pmd);
+	prot = pmd_pgprot(pmd);
+	prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PTE_TYPE_PAGE);
+
+	ptep = (pte_t *)phys_to_virt(pte_phys);
+
+	/* It must be naturally aligned if PMD is leaf */
+	if ((flags & NO_CONT_MAPPINGS) == 0)
+		prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+	for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
+		__set_pte_nosync(ptep, pfn_pte(pfn, prot));
+
+	dsb(ishst);
+}
+
+static void split_pud(pud_t pud, phys_addr_t pmd_phys, int flags)
+{
+	pmd_t *pmdp;
+	unsigned long pfn;
+	pgprot_t prot;
+	unsigned int step = PMD_SIZE >> PAGE_SHIFT;
+
+	pfn = pud_pfn(pud);
+	prot = pud_pgprot(pud);
+	pmdp = (pmd_t *)phys_to_virt(pmd_phys);
+
+	/* It must be naturally aligned if PUD is leaf */
+	if ((flags & NO_CONT_MAPPINGS) == 0)
+		prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+	for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
+		__set_pmd_nosync(pmdp, pfn_pmd(pfn, prot));
+		pfn += step;
+	}
+
+	dsb(ishst);
+}
+
 static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
-		     phys_addr_t phys, pgprot_t prot)
+		     phys_addr_t phys, pgprot_t prot, int flags)
 {
 	do {
 		pte_t old_pte = __ptep_get(ptep);
 
+		if (flags & SPLIT_MAPPINGS) {
+			if (pte_cont(old_pte))
+				split_cont_pte(ptep);
+
+			continue;
+		}
+
 		/*
 		 * Required barriers to make this visible to the table walker
 		 * are deferred to the end of alloc_init_cont_pte().
@@ -199,11 +279,20 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 	pmd_t pmd = READ_ONCE(*pmdp);
 	pte_t *ptep;
 	int ret = 0;
+	bool split = flags & SPLIT_MAPPINGS;
+	pmdval_t pmdval;
+	phys_addr_t pte_phys;
 
-	BUG_ON(pmd_sect(pmd));
-	if (pmd_none(pmd)) {
-		pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
-		phys_addr_t pte_phys;
+	if (!split)
+		BUG_ON(pmd_sect(pmd));
+
+	if (pmd_none(pmd) && split) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (pmd_none(pmd) || (split && pmd_leaf(pmd))) {
+		pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
 
 		if (flags & NO_EXEC_MAPPINGS)
 			pmdval |= PMD_TABLE_PXN;
@@ -213,6 +302,18 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 			ret = -ENOMEM;
 			goto out;
 		}
+	}
+
+	if (split) {
+		if (pmd_leaf(pmd)) {
+			split_pmd(pmd, pte_phys, flags);
+			__pmd_populate(pmdp, pte_phys, pmdval);
+		}
+		ptep = pte_offset_kernel(pmdp, addr);
+		goto split_pgtable;
+	}
+
+	if (pmd_none(pmd)) {
 		ptep = pte_set_fixmap(pte_phys);
 		init_clear_pgtable(ptep);
 		ptep += pte_index(addr);
@@ -222,17 +323,28 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 		ptep = pte_set_fixmap_offset(pmdp, addr);
 	}
 
+split_pgtable:
 	do {
 		pgprot_t __prot = prot;
 
 		next = pte_cont_addr_end(addr, end);
 
+		if (split) {
+			pte_t pteval = READ_ONCE(*ptep);
+			bool cont = pte_cont(pteval);
+
+			if (cont &&
+			    ((addr | next) & ~CONT_PTE_MASK) == 0 &&
+			    (flags & NO_CONT_MAPPINGS) == 0)
+				continue;
+		}
+
 		/* use a contiguous mapping if the range is suitably aligned */
 		if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) &&
 		    (flags & NO_CONT_MAPPINGS) == 0)
 			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
 
-		init_pte(ptep, addr, next, phys, __prot);
+		init_pte(ptep, addr, next, phys, __prot, flags);
 
 		ptep += pte_index(next) - pte_index(addr);
 		phys += next - addr;
@@ -243,7 +355,8 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 	 * ensure that all previous pgtable writes are visible to the table
 	 * walker.
 	 */
-	pte_clear_fixmap();
+	if (!split)
+		pte_clear_fixmap();
 
 out:
 	return ret;
@@ -255,15 +368,29 @@ static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
 {
 	unsigned long next;
 	int ret = 0;
+	bool split = flags & SPLIT_MAPPINGS;
+	bool cont;
 
 	do {
 		pmd_t old_pmd = READ_ONCE(*pmdp);
 
 		next = pmd_addr_end(addr, end);
 
+		if (split && pmd_leaf(old_pmd)) {
+			cont = pgprot_val(pmd_pgprot(old_pmd)) & PTE_CONT;
+			if (cont)
+				split_cont_pmd(pmdp);
+
+			/* The PMD is fully contained in the range */
+			if (((addr | next) & ~PMD_MASK) == 0 &&
+			    (flags & NO_BLOCK_MAPPINGS) == 0)
+				continue;
+		}
+
 		/* try section mapping first */
 		if (((addr | next | phys) & ~PMD_MASK) == 0 &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+		    (flags & NO_BLOCK_MAPPINGS) == 0 &&
+		    (flags & SPLIT_MAPPINGS) == 0) {
 			pmd_set_huge(pmdp, phys, prot);
 
 			/*
@@ -278,7 +405,7 @@ static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
 			if (ret)
 				break;
 
-			BUG_ON(pmd_val(old_pmd) != 0 &&
+			BUG_ON(!split && pmd_val(old_pmd) != 0 &&
 			       pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
 		}
 		phys += next - addr;
@@ -296,14 +423,23 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 	int ret = 0;
 	pud_t pud = READ_ONCE(*pudp);
 	pmd_t *pmdp;
+	bool split = flags & SPLIT_MAPPINGS;
+	pudval_t pudval;
+	phys_addr_t pmd_phys;
 
 	/*
 	 * Check for initial section mappings in the pgd/pud.
 	 */
-	BUG_ON(pud_sect(pud));
-	if (pud_none(pud)) {
-		pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
-		phys_addr_t pmd_phys;
+	if (!split)
+		BUG_ON(pud_sect(pud));
+
+	if (pud_none(pud) && split) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (pud_none(pud) || (split && pud_leaf(pud))) {
+		pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
 
 		if (flags & NO_EXEC_MAPPINGS)
 			pudval |= PUD_TABLE_PXN;
@@ -313,6 +449,18 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 			ret = -ENOMEM;
 			goto out;
 		}
+	}
+
+	if (split) {
+		if (pud_leaf(pud)) {
+			split_pud(pud, pmd_phys, flags);
+			__pud_populate(pudp, pmd_phys, pudval);
+		}
+		pmdp = pmd_offset(pudp, addr);
+		goto split_pgtable;
+	}
+
+	if (pud_none(pud)) {
 		pmdp = pmd_set_fixmap(pmd_phys);
 		init_clear_pgtable(pmdp);
 		pmdp += pmd_index(addr);
@@ -322,11 +470,22 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 		pmdp = pmd_set_fixmap_offset(pudp, addr);
 	}
 
+split_pgtable:
 	do {
 		pgprot_t __prot = prot;
 
 		next = pmd_cont_addr_end(addr, end);
 
+		if (split) {
+			pmd_t pmdval = READ_ONCE(*pmdp);
+			bool cont = pgprot_val(pmd_pgprot(pmdval)) & PTE_CONT;
+
+			if (cont &&
+			    ((addr | next) & ~CONT_PMD_MASK) == 0 &&
+			    (flags & NO_CONT_MAPPINGS) == 0)
+				continue;
+		}
+
 		/* use a contiguous mapping if the range is suitably aligned */
 		if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) &&
 		    (flags & NO_CONT_MAPPINGS) == 0)
@@ -340,7 +499,8 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 		phys += next - addr;
 	} while (addr = next, addr != end);
 
-	pmd_clear_fixmap();
+	if (!split)
+		pmd_clear_fixmap();
 
 out:
 	return ret;
@@ -355,6 +515,16 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
 	int ret = 0;
 	p4d_t p4d = READ_ONCE(*p4dp);
 	pud_t *pudp;
+	bool split = flags & SPLIT_MAPPINGS;
+
+	if (split) {
+		if (p4d_none(p4d)) {
+			ret = -EINVAL;
+			goto out;
+		}
+		pudp = pud_offset(p4dp, addr);
+		goto split_pgtable;
+	}
 
 	if (p4d_none(p4d)) {
 		p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN | P4D_TABLE_AF;
@@ -377,17 +547,26 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
 		pudp = pud_set_fixmap_offset(p4dp, addr);
 	}
 
+split_pgtable:
 	do {
 		pud_t old_pud = READ_ONCE(*pudp);
 
 		next = pud_addr_end(addr, end);
 
+		if (split && pud_leaf(old_pud)) {
+			/* The PUD is fully contained in the range */
+			if (((addr | next) & ~PUD_MASK) == 0 &&
+			    (flags & NO_BLOCK_MAPPINGS) == 0)
+				continue;
+		}
+
 		/*
 		 * For 4K granule only, attempt to put down a 1GB block
 		 */
 		if (pud_sect_supported() &&
 		   ((addr | next | phys) & ~PUD_MASK) == 0 &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+		    (flags & NO_BLOCK_MAPPINGS) == 0 &&
+		    (flags & SPLIT_MAPPINGS) == 0) {
 			pud_set_huge(pudp, phys, prot);
 
 			/*
@@ -402,13 +581,14 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
 			if (ret)
 				break;
 
-			BUG_ON(pud_val(old_pud) != 0 &&
+			BUG_ON(!split && pud_val(old_pud) != 0 &&
 			       pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
 		}
 		phys += next - addr;
 	} while (pudp++, addr = next, addr != end);
 
-	pud_clear_fixmap();
+	if (!split)
+		pud_clear_fixmap();
 
 out:
 	return ret;
@@ -423,6 +603,16 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
 	int ret = 0;
 	pgd_t pgd = READ_ONCE(*pgdp);
 	p4d_t *p4dp;
+	bool split = flags & SPLIT_MAPPINGS;
+
+	if (split) {
+		if (pgd_none(pgd)) {
+			ret = -EINVAL;
+			goto out;
+		}
+		p4dp = p4d_offset(pgdp, addr);
+		goto split_pgtable;
+	}
 
 	if (pgd_none(pgd)) {
 		pgdval_t pgdval = PGD_TYPE_TABLE | PGD_TABLE_UXN | PGD_TABLE_AF;
@@ -445,6 +635,7 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
 		p4dp = p4d_set_fixmap_offset(pgdp, addr);
 	}
 
+split_pgtable:
 	do {
 		p4d_t old_p4d = READ_ONCE(*p4dp);
 
@@ -461,7 +652,8 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
 		phys += next - addr;
 	} while (p4dp++, addr = next, addr != end);
 
-	p4d_clear_fixmap();
+	if (!split)
+		p4d_clear_fixmap();
 
 out:
 	return ret;
@@ -557,6 +749,25 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
 	return pa;
 }
 
+int split_linear_mapping(unsigned long start, unsigned long end)
+{
+	int ret = 0;
+
+	if (!system_supports_bbml2_noabort())
+		return 0;
+
+	mmap_write_lock(&init_mm);
+	/* NO_EXEC_MAPPINGS is needed when splitting linear map */
+	ret = __create_pgd_mapping_locked(init_mm.pgd, virt_to_phys((void *)start),
+					  start, (end - start), __pgprot(0),
+					  __pgd_pgtable_alloc,
+					  NO_EXEC_MAPPINGS | SPLIT_MAPPINGS);
+	mmap_write_unlock(&init_mm);
+	flush_tlb_kernel_range(start, end);
+
+	return ret;
+}
+
 /*
  * This function can only be used to modify existing table entries,
  * without allocating new levels of table. Note that this permits the
@@ -676,6 +887,24 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
 
 #endif /* CONFIG_KFENCE */
 
+static inline bool force_pte_mapping(void)
+{
+	/*
+	 * Can't use the cpufeature API to determine whether BBML2 is
+	 * supported or not since the cpufeatures have not been finalized yet.
+	 *
+	 * Check the boot CPU only for now.  If the boot CPU has BBML2,
+	 * paint the linear mapping with block mappings.  If it turns out
+	 * the secondary CPUs don't support BBML2 once the cpufeatures are
+	 * finalized, the linear mapping will be repainted with PTE
+	 * mappings.
+	 */
+	return (rodata_full && !bbml2_noabort_available()) ||
+		debug_pagealloc_enabled() ||
+		arm64_kfence_can_set_direct_map() ||
+		is_realm_world();
+}
+
 static void __init map_mem(pgd_t *pgdp)
 {
 	static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
@@ -701,7 +930,7 @@ static void __init map_mem(pgd_t *pgdp)
 
 	early_kfence_pool = arm64_kfence_alloc_pool();
 
-	if (can_set_direct_map())
+	if (force_pte_mapping())
 		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	/*
@@ -1402,7 +1631,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
 
 	VM_BUG_ON(!mhp_range_allowed(start, size, true));
 
-	if (can_set_direct_map())
+	if (force_pte_mapping())
 		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 39fd1f7ff02a..25c068712cb5 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -10,6 +10,7 @@
 #include <linux/vmalloc.h>
 
 #include <asm/cacheflush.h>
+#include <asm/mmu.h>
 #include <asm/pgtable-prot.h>
 #include <asm/set_memory.h>
 #include <asm/tlbflush.h>
@@ -42,6 +43,8 @@ static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
 	struct page_change_data *cdata = data;
 	pte_t pte = __ptep_get(ptep);
 
+	BUG_ON(pte_cont(pte));
+
 	pte = clear_pte_bit(pte, cdata->clear_mask);
 	pte = set_pte_bit(pte, cdata->set_mask);
 
@@ -80,8 +83,9 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long start = addr;
 	unsigned long size = PAGE_SIZE * numpages;
 	unsigned long end = start + size;
+	unsigned long l_start;
 	struct vm_struct *area;
-	int i;
+	int i, ret;
 
 	if (!PAGE_ALIGNED(addr)) {
 		start &= PAGE_MASK;
@@ -118,7 +122,12 @@ static int change_memory_common(unsigned long addr, int numpages,
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
 		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+			l_start = (u64)page_address(area->pages[i]);
+			ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
+			if (WARN_ON_ONCE(ret))
+				return ret;
+
+			__change_memory_common(l_start,
 					       PAGE_SIZE, set_mask, clear_mask);
 		}
 	}
@@ -174,6 +183,9 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
 
 int set_direct_map_invalid_noflush(struct page *page)
 {
+	unsigned long l_start;
+	int ret;
+
 	struct page_change_data data = {
 		.set_mask = __pgprot(0),
 		.clear_mask = __pgprot(PTE_VALID),
@@ -182,13 +194,21 @@ int set_direct_map_invalid_noflush(struct page *page)
 	if (!can_set_direct_map())
 		return 0;
 
+	l_start = (unsigned long)page_address(page);
+	ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
+	if (WARN_ON_ONCE(ret))
+		return ret;
+
 	return apply_to_page_range(&init_mm,
-				   (unsigned long)page_address(page),
-				   PAGE_SIZE, change_page_range, &data);
+				   l_start, PAGE_SIZE, change_page_range,
+				   &data);
 }
 
 int set_direct_map_default_noflush(struct page *page)
 {
+	unsigned long l_start;
+	int ret;
+
 	struct page_change_data data = {
 		.set_mask = __pgprot(PTE_VALID | PTE_WRITE),
 		.clear_mask = __pgprot(PTE_RDONLY),
@@ -197,9 +217,14 @@ int set_direct_map_default_noflush(struct page *page)
 	if (!can_set_direct_map())
 		return 0;
 
+	l_start = (unsigned long)page_address(page);
+	ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
+	if (WARN_ON_ONCE(ret))
+		return ret;
+
 	return apply_to_page_range(&init_mm,
-				   (unsigned long)page_address(page),
-				   PAGE_SIZE, change_page_range, &data);
+				   l_start, PAGE_SIZE, change_page_range,
+				   &data);
 }
 
 static int __set_memory_enc_dec(unsigned long addr,
-- 
2.48.1




* [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
  2025-05-31  2:41 [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
                   ` (2 preceding siblings ...)
  2025-05-31  2:41 ` [PATCH 3/4] arm64: mm: support large block mapping when rodata=full Yang Shi
@ 2025-05-31  2:41 ` Yang Shi
  2025-06-23 12:26   ` Ryan Roberts
  2025-06-13 17:21 ` [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
  4 siblings, 1 reply; 34+ messages in thread
From: Yang Shi @ 2025-05-31  2:41 UTC (permalink / raw)
  To: ryan.roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: yang, linux-arm-kernel, linux-kernel

The kernel linear mapping is painted at a very early stage of system boot,
before the cpufeatures have been finalized.  So the linear mapping is
determined by the capability of the boot CPU: if the boot CPU supports
BBML2, large block mappings will be used for the linear mapping.

But the secondary CPUs may not support BBML2, so once the cpufeatures are
finalized on all CPUs, repaint the linear mapping if large block mappings
are used and the secondary CPUs don't support BBML2.

If the boot CPU doesn't support BBML2, or the secondary CPUs have the same
BBML2 capability as the boot CPU, repainting the linear mapping is not
needed.

Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
 arch/arm64/include/asm/mmu.h   |   3 +
 arch/arm64/kernel/cpufeature.c |  16 +++++
 arch/arm64/mm/mmu.c            | 108 ++++++++++++++++++++++++++++++++-
 arch/arm64/mm/proc.S           |  41 +++++++++++++
 4 files changed, 166 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 2693d63bf837..ad38135d1aa1 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -56,6 +56,8 @@ typedef struct {
  */
 #define ASID(mm)	(atomic64_read(&(mm)->context.id) & 0xffff)
 
+extern bool block_mapping;
+
 static inline bool arm64_kernel_unmapped_at_el0(void)
 {
 	return alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0);
@@ -72,6 +74,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
 extern void mark_linear_text_alias_ro(void);
 extern int split_linear_mapping(unsigned long start, unsigned long end);
+extern int __repaint_linear_mappings(void *__unused);
 
 /*
  * This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 5fc2a4a804de..5151c101fbaf 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -85,6 +85,7 @@
 #include <asm/insn.h>
 #include <asm/kvm_host.h>
 #include <asm/mmu_context.h>
+#include <asm/mmu.h>
 #include <asm/mte.h>
 #include <asm/hypervisor.h>
 #include <asm/processor.h>
@@ -2005,6 +2006,20 @@ static int __init __kpti_install_ng_mappings(void *__unused)
 	return 0;
 }
 
+static void __init repaint_linear_mappings(void)
+{
+	if (!block_mapping)
+		return;
+
+	if (!rodata_full)
+		return;
+
+	if (system_supports_bbml2_noabort())
+		return;
+
+	stop_machine(__repaint_linear_mappings, NULL, cpu_online_mask);
+}
+
 static void __init kpti_install_ng_mappings(void)
 {
 	/* Check whether KPTI is going to be used */
@@ -3868,6 +3883,7 @@ void __init setup_system_features(void)
 {
 	setup_system_capabilities();
 
+	repaint_linear_mappings();
 	kpti_install_ng_mappings();
 
 	sve_setup();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 4c5d3aa35d62..3922af89abbb 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -209,6 +209,8 @@ static void split_pmd(pmd_t pmd, phys_addr_t pte_phys, int flags)
 	/* It must be naturally aligned if PMD is leaf */
 	if ((flags & NO_CONT_MAPPINGS) == 0)
 		prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+	else
+		prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
 
 	for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
 		__set_pte_nosync(ptep, pfn_pte(pfn, prot));
@@ -230,6 +232,8 @@ static void split_pud(pud_t pud, phys_addr_t pmd_phys, int flags)
 	/* It must be naturally aligned if PUD is leaf */
 	if ((flags & NO_CONT_MAPPINGS) == 0)
 		prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+	else
+		prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
 
 	for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
 		__set_pmd_nosync(pmdp, pfn_pmd(pfn, prot));
@@ -833,6 +837,86 @@ void __init mark_linear_text_alias_ro(void)
 			    PAGE_KERNEL_RO);
 }
 
+static phys_addr_t repaint_pgtable_alloc(int shift)
+{
+	void *ptr;
+
+	ptr = (void *)__get_free_page(GFP_ATOMIC);
+	if (!ptr)
+		return -1;
+
+	return __pa(ptr);
+}
+
+extern u32 repaint_done;
+
+int __init __repaint_linear_mappings(void *__unused)
+{
+	typedef void (repaint_wait_fn)(void);
+	extern repaint_wait_fn bbml2_wait_for_repainting;
+	repaint_wait_fn *wait_fn;
+
+	phys_addr_t kernel_start = __pa_symbol(_stext);
+	phys_addr_t kernel_end = __pa_symbol(__init_begin);
+	phys_addr_t start, end;
+	unsigned long vstart, vend;
+	u64 i;
+	int ret;
+	int flags = NO_EXEC_MAPPINGS | NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS |
+		    SPLIT_MAPPINGS;
+	int cpu = smp_processor_id();
+
+	wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
+
+	/*
+	 * Repainting can only run on CPU 0 because CPU 0 is the only CPU
+	 * that we can be sure supports BBML2.
+	 */
+	if (!cpu) {
+		/*
+		 * Wait for all secondary CPUs to get prepared for repainting
+		 * the linear mapping.
+		 */
+wait_for_secondary:
+		if (READ_ONCE(repaint_done) != num_online_cpus())
+			goto wait_for_secondary;
+
+		memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
+		/* Split the whole linear mapping */
+		for_each_mem_range(i, &start, &end) {
+			if (start >= end)
+				return -EINVAL;
+
+			vstart = __phys_to_virt(start);
+			vend = __phys_to_virt(end);
+			ret = __create_pgd_mapping_locked(init_mm.pgd, start,
+					vstart, (end - start), __pgprot(0),
+					repaint_pgtable_alloc, flags);
+			if (ret)
+				panic("Failed to split linear mappings\n");
+
+			flush_tlb_kernel_range(vstart, vend);
+		}
+		memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
+
+		WRITE_ONCE(repaint_done, 0);
+	} else {
+		/*
+		 * The secondary CPUs can't run in the same address space
+		 * with CPU 0 because accessing the linear mapping address
+		 * when CPU 0 is repainting it is not safe.
+		 *
+		 * Let the secondary CPUs run busy loop in idmap address
+		 * space when repainting is ongoing.
+		 */
+		cpu_install_idmap();
+		wait_fn();
+		cpu_uninstall_idmap();
+	}
+
+	return 0;
+}
+
 #ifdef CONFIG_KFENCE
 
 bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
@@ -887,6 +971,8 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
 
 #endif /* CONFIG_KFENCE */
 
+bool block_mapping;
+
 static inline bool force_pte_mapping(void)
 {
 	/*
@@ -915,6 +1001,8 @@ static void __init map_mem(pgd_t *pgdp)
 	int flags = NO_EXEC_MAPPINGS;
 	u64 i;
 
+	block_mapping = true;
+
 	/*
 	 * Setting hierarchical PXNTable attributes on table entries covering
 	 * the linear region is only possible if it is guaranteed that no table
@@ -930,8 +1018,10 @@ static void __init map_mem(pgd_t *pgdp)
 
 	early_kfence_pool = arm64_kfence_alloc_pool();
 
-	if (force_pte_mapping())
+	if (force_pte_mapping()) {
+		block_mapping = false;
 		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
+	}
 
 	/*
 	 * Take care not to create a writable alias for the
@@ -1063,7 +1153,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
 		    int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
 
 static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
-	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
+	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
+	  bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
 
 static void __init create_idmap(void)
 {
@@ -1088,6 +1179,19 @@ static void __init create_idmap(void)
 			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
 			       __phys_to_virt(ptep) - ptep);
 	}
+
+	/*
+	 * Setup idmap mapping for repaint_done flag.  It will be used if
+	 * repainting the linear mapping is needed later.
+	 */
+	if (block_mapping) {
+		u64 pa = __pa_symbol(&repaint_done);
+		ptep = __pa_symbol(bbml2_ptes);
+
+		__pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
+			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
+			       __phys_to_virt(ptep) - ptep);
+	}
 }
 
 void __init paging_init(void)
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index fb30c8804f87..c40e6126c093 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -440,6 +440,47 @@ SYM_FUNC_END(idmap_kpti_install_ng_mappings)
 	.popsection
 #endif
 
+/*
+ * Wait for repainting to be done. Runs on secondary CPUs
+ * only.
+ */
+	.pushsection	".data", "aw", %progbits
+SYM_DATA(repaint_done, .long 1)
+	.popsection
+
+	.pushsection ".idmap.text", "a"
+SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
+	swapper_ttb	.req	x0
+	flag_ptr	.req	x1
+
+	mrs	swapper_ttb, ttbr1_el1
+	adr_l	flag_ptr, repaint_done
+
+	/* Uninstall swapper before surgery begins */
+	__idmap_cpu_set_reserved_ttbr1 x16, x17
+
+	/* Increment the flag to let the boot CPU know we're ready */
+1:	ldxr	w16, [flag_ptr]
+	add	w16, w16, #1
+	stxr	w17, w16, [flag_ptr]
+	cbnz	w17, 1b
+
+	/* Wait for the boot CPU to finish repainting */
+	sevl
+1:	wfe
+	ldxr	w16, [flag_ptr]
+	cbnz	w16, 1b
+
+	/* All done, act like nothing happened */
+	msr	ttbr1_el1, swapper_ttb
+	isb
+	ret
+
+	.unreq	swapper_ttb
+	.unreq	flag_ptr
+SYM_FUNC_END(bbml2_wait_for_repainting)
+	.popsection
+
 /*
  *	__cpu_setup
  *
-- 
2.48.1




* Re: [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
  2025-05-31  2:41 [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
                   ` (3 preceding siblings ...)
  2025-05-31  2:41 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
@ 2025-06-13 17:21 ` Yang Shi
  2025-06-16  9:09   ` Ryan Roberts
  4 siblings, 1 reply; 34+ messages in thread
From: Yang Shi @ 2025-06-13 17:21 UTC (permalink / raw)
  To: ryan.roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel

Hi Ryan,

Gently ping... any comments for this version?

It looks like Dev's series is getting stable except for some nits. I went
through his patches and all the call sites for changing page permissions.
They are:
   1. change_memory_common(): called by set_memory_{ro|rw|x|nx}. It
iterates over every single page mapped in the vm area, then changes
permissions on a per-page basis. Whether we can change permissions on a
block mapping depends on whether the vm area is block mapped or not.
   2. set_memory_valid(): it looks like it assumes the [addr, addr + size)
range is mapped contiguously, but it depends on the callers passing in a
block size (nr > 1). There are two sub cases:
      2.a kfence and debugalloc just work on PTE mappings, so they pass
in a single page.
      2.b execmem passes in a large page on x86; arm64 has not supported
the huge execmem cache yet, so it should still pass in a single page for
the time being. But my series + Dev's series can handle both single page
mappings and block mappings well for this case. So changing permissions
for block mappings can be supported automatically once arm64 supports the
huge execmem cache.
   3. set_direct_map_{invalid|default}_noflush(): it looks like they work
on a per-page basis. So Dev's series makes no change to them.
   4. realm: if I remember correctly, realm forces PTE mapping for the
linear address space all the time, so no impact.

So it looks like just #1 may need some extra work, but it seems simple. I
should just need to advance the address range in a (1 << vm's order)
stride, something like the sketch below. So there should be just some
minor changes when I rebase my patches on top of Dev's, mainly context
changes. It has no impact on the split primitive or repainting of the
linear mapping.
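
A rough sketch of that idea for #1 (assuming Dev's series exposes the vm
area's mapping order in some form; "order" and the helper name below are
placeholders, while __change_memory_common() is the existing static helper
in arch/arm64/mm/pageattr.c):

/*
 * Hypothetical reshape of the rodata_full loop in change_memory_common():
 * walk area->pages in (1 << order) strides so block-mapped vm areas keep
 * block-sized linear map aliases instead of being split page by page.
 */
static void change_linear_alias(struct vm_struct *area, unsigned int order,
				pgprot_t set_mask, pgprot_t clear_mask)
{
	unsigned int i;

	for (i = 0; i < area->nr_pages; i += 1U << order) {
		u64 lm_addr = (u64)page_address(area->pages[i]);

		__change_memory_common(lm_addr, PAGE_SIZE << order,
				       set_mask, clear_mask);
	}
}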

Thanks,
Yang


On 5/30/25 7:41 PM, Yang Shi wrote:
> Changelog
> =========
> v4:
>    * Rebased to v6.15-rc4.
>    * Based on Miko's latest BBML2 cpufeature patch (https://lore.kernel.org/linux-arm-kernel/20250428153514.55772-4-miko.lenczewski@arm.com/).
>    * Keep block mappings rather than splitting to PTEs if it is fully contained
>      per Ryan.
>    * Return -EINVAL if page table allocation failed instead of BUG_ON per Ryan.
>    * When page table allocation failed, return -1 instead of 0 per Ryan.
>    * Allocate page table with GFP_ATOMIC for repainting per Ryan.
>    * Use idmap to wait for repainting is done per Ryan.
>    * Some minor fixes per the discussion for v3.
>    * Some clean up to reduce redundant code.
>
> v3:
>    * Rebased to v6.14-rc4.
>    * Based on Miko's BBML2 cpufeature patch (https://lore.kernel.org/linux-arm-kernel/20250228182403.6269-3-miko.lenczewski@arm.com/).
>      Also included in this series in order to have the complete patchset.
>    * Enhanced __create_pgd_mapping() to handle split as well per Ryan.
>    * Supported CONT mappings per Ryan.
>    * Supported asymmetric system by splitting kernel linear mapping if such
>      system is detected per Ryan. I don't have such system to test, so the
>      testing is done by hacking kernel to call linear mapping repainting
>      unconditionally. The linear mapping doesn't have any block and cont
>      mappings after booting.
>
> RFC v2:
>    * Used allowlist to advertise BBM lv2 on the CPUs which can handle TLB
>      conflict gracefully per Will Deacon
>    * Rebased onto v6.13-rc5
>    * https://lore.kernel.org/linux-arm-kernel/20250103011822.1257189-1-yang@os.amperecomputing.com/
>
> v3: https://lore.kernel.org/linux-arm-kernel/20250304222018.615808-1-yang@os.amperecomputing.com/
> RFC v2: https://lore.kernel.org/linux-arm-kernel/20250103011822.1257189-1-yang@os.amperecomputing.com/
> RFC v1: https://lore.kernel.org/lkml/20241118181711.962576-1-yang@os.amperecomputing.com/
>
> Description
> ===========
> When rodata=full kernel linear mapping is mapped by PTE due to arm's
> break-before-make rule.
>
> A number of performance issues arise when the kernel linear map is using
> PTE entries due to arm's break-before-make rule:
>    - performance degradation
>    - more TLB pressure
>    - memory waste for kernel page table
>
> These issues can be avoided by specifying rodata=on the kernel command
> line but this disables the alias checks on page table permissions and
> therefore compromises security somewhat.
>
> With FEAT_BBM level 2 support it is no longer necessary to invalidate the
> page table entry when changing page sizes.  This allows the kernel to
> split large mappings after boot is complete.
>
> This patch adds support for splitting large mappings when FEAT_BBM level 2
> is available and rodata=full is used. This functionality will be used
> when modifying page permissions for individual page frames.
>
> Without FEAT_BBM level 2 we will keep the kernel linear map using PTEs
> only.
>
> If the system is asymmetric, the kernel linear mapping may be repainted once
> the BBML2 capability is finalized on all CPUs.  See patch #4 for more details.
>
> We saw significant performance increases in some benchmarks with
> rodata=full without compromising the security features of the kernel.
>
> Testing
> =======
> The test was done on AmpereOne machine (192 cores, 1P) with 256GB memory and
> 4K page size + 48 bit VA.
>
> Function test (4K/16K/64K page size)
>    - Kernel boot.  Kernel needs change kernel linear mapping permission at
>      boot stage, if the patch didn't work, kernel typically didn't boot.
>    - Module stress from stress-ng. Kernel module load change permission for
>      linear mapping.
>    - A test kernel module which allocates 80% of total memory via vmalloc(),
>      then change the vmalloc area permission to RO, this also change linear
>      mapping permission to RO, then change it back before vfree(). Then launch
>      a VM which consumes almost all physical memory.
>    - VM with the patchset applied in guest kernel too.
>    - Kernel build in VM with guest kernel which has this series applied.
>    - rodata=on. Make sure other rodata mode is not broken.
>    - Boot on the machine which doesn't support BBML2.
>
> Performance
> ===========
> Memory consumption
> Before:
> MemTotal:       258988984 kB
> MemFree:        254821700 kB
>
> After:
> MemTotal:       259505132 kB
> MemFree:        255410264 kB
>
> Around 500MB more memory are free to use.  The larger the machine, the
> more memory saved.
>
> Performance benchmarking
> * Memcached
> We saw performance degradation when running Memcached benchmark with
> rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
> With this patchset we saw ops/sec is increased by around 3.5%, P99
> latency is reduced by around 9.6%.
> The gain mainly came from reduced kernel TLB misses.  The kernel TLB
> MPKI is reduced by 28.5%.
>
> The benchmark data is now on par with rodata=on too.
>
> * Disk encryption (dm-crypt) benchmark
> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
> encryption (by dm-crypt with no read/write workqueue).
> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>      --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>      --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>      --name=iops-test-job --eta-newline=1 --size 100G
>
> The IOPS is increased by 90% - 150% (the variance is high, but the worst
> number of good case is around 90% more than the best number of bad case).
> The bandwidth is increased and the avg clat is reduced proportionally.
>
> * Sequential file read
> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
> The bandwidth is increased by 150%.
>
>
> Yang Shi (4):
>        arm64: cpufeature: add AmpereOne to BBML2 allow list
>        arm64: mm: make __create_pgd_mapping() and helpers non-void
>        arm64: mm: support large block mapping when rodata=full
>        arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
>
>   arch/arm64/include/asm/cpufeature.h |  26 +++++++
>   arch/arm64/include/asm/mmu.h        |   4 +
>   arch/arm64/include/asm/pgtable.h    |  12 ++-
>   arch/arm64/kernel/cpufeature.c      |  30 ++++++--
>   arch/arm64/mm/mmu.c                 | 505 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------
>   arch/arm64/mm/pageattr.c            |  37 +++++++--
>   arch/arm64/mm/proc.S                |  41 ++++++++++
>   7 files changed, 585 insertions(+), 70 deletions(-)
>



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
  2025-06-13 17:21 ` [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
@ 2025-06-16  9:09   ` Ryan Roberts
  2025-06-17 20:57     ` Yang Shi
  0 siblings, 1 reply; 34+ messages in thread
From: Ryan Roberts @ 2025-06-16  9:09 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, Miko.Lenczewski, dev.jain, scott,
	cl
  Cc: linux-arm-kernel, linux-kernel

On 13/06/2025 18:21, Yang Shi wrote:
> Hi Ryan,
> 
> Gently ping... any comments for this version?

Hi Yang, yes sorry for the slow response - it's been in my queue. I'm going to start
looking at it now and plan to get you some feedback in the next couple of days.

> 
> It looks like Dev's series is getting stable except for some nits. I went through
> his patches and all the call sites for changing page permissions. They are:
>   1. change_memory_common(): called by set_memory_{ro|rw|x|nx}. It iterates over
> every single page mapped in the vm area, then changes permission on a per-page
> basis. Whether we can change permission on a block mapping depends on whether the
> vm area is block mapped or not.
>   2. set_memory_valid(): it appears to assume the [addr, addr + size) range is
> mapped contiguously, but it depends on the callers passing in the block size
> (nr > 1). There are two sub cases:
>      2.a kfence and debug_pagealloc just work on PTE mappings, so they pass in a
> single page.
>      2.b execmem passes in a large page on x86; arm64 doesn't support the huge
> execmem cache yet, so it should still pass in a single page for the time being.
> But my series + Dev's series can handle both single page mappings and block
> mappings for this case, so changing permission for block mappings will be
> supported automatically once arm64 supports the huge execmem cache.
>   3. set_direct_map_{invalid|default}_noflush(): they work on a per-page basis,
> so Dev's series has no change to them.
>   4. realm: if I remember correctly, realm forces PTE mapping for the linear
> address space all the time, so no impact.

Yes for realm, we currently force PTE mapping - that's because we need page
granularity for sharing certain portions back to the host. But with this work I
think we will be able to do the splitting on the fly and map using big blocks
even for realms.

> 
> So it looks like just #1 may need some extra work, but it seems simple: I should
> just need to advance the address range in a (1 << vm's order) stride. So there
> should be just some minor changes when I rebase my patches on top of Dev's,
> mainly context changes. It has no impact on the split primitive or on repainting
> the linear mapping.

I haven't looked at your series yet, but I had assumed that the most convenient
(and only) integration point would be to call your split primitive from dev's
___change_memory_common() (note 3 underscores at beginning). Something like this:

___change_memory_common(unsigned long start, unsigned long size, ...)
{
	// This will need to return error for case where splitting would have
	// been required but system does not support BBML2_NOABORT
	ret = split_mapping_granularity(start, start + size);
	if (ret)
		return ret;

	...
}

> 
> Thanks,
> Yang
> 
> 
> On 5/30/25 7:41 PM, Yang Shi wrote:
>> Changelog
>> =========
>> v4:
>>    * Rebased to v6.15-rc4.
>>    * Based on Miko's latest BBML2 cpufeature patch (https://lore.kernel.org/
>> linux-arm-kernel/20250428153514.55772-4-miko.lenczewski@arm.com/).
>>    * Keep block mappings rather than splitting to PTEs if it is fully contained
>>      per Ryan.
>>    * Return -EINVAL if page table allocation failed instead of BUG_ON per Ryan.
>>    * When page table allocation failed, return -1 instead of 0 per Ryan.
>>    * Allocate page table with GFP_ATOMIC for repainting per Ryan.
>>    * Use idmap to wait for repainting is done per Ryan.
>>    * Some minor fixes per the discussion for v3.
>>    * Some clean up to reduce redundant code.
>>
>> v3:
>>    * Rebased to v6.14-rc4.
>>    * Based on Miko's BBML2 cpufeature patch (https://lore.kernel.org/linux-
>> arm-kernel/20250228182403.6269-3-miko.lenczewski@arm.com/).
>>      Also included in this series in order to have the complete patchset.
>>    * Enhanced __create_pgd_mapping() to handle split as well per Ryan.
>>    * Supported CONT mappings per Ryan.
>>    * Supported asymmetric system by splitting kernel linear mapping if such
>>      system is detected per Ryan. I don't have such system to test, so the
>>      testing is done by hacking kernel to call linear mapping repainting
>>      unconditionally. The linear mapping doesn't have any block and cont
>>      mappings after booting.
>>
>> RFC v2:
>>    * Used allowlist to advertise BBM lv2 on the CPUs which can handle TLB
>>      conflict gracefully per Will Deacon
>>    * Rebased onto v6.13-rc5
>>    * https://lore.kernel.org/linux-arm-kernel/20250103011822.1257189-1-
>> yang@os.amperecomputing.com/
>>
>> v3: https://lore.kernel.org/linux-arm-kernel/20250304222018.615808-1-
>> yang@os.amperecomputing.com/
>> RFC v2: https://lore.kernel.org/linux-arm-kernel/20250103011822.1257189-1-
>> yang@os.amperecomputing.com/
>> RFC v1: https://lore.kernel.org/lkml/20241118181711.962576-1-
>> yang@os.amperecomputing.com/
>>
>> Description
>> ===========
>> When rodata=full kernel linear mapping is mapped by PTE due to arm's
>> break-before-make rule.
>>
>> A number of performance issues arise when the kernel linear map is using
>> PTE entries due to arm's break-before-make rule:
>>    - performance degradation
>>    - more TLB pressure
>>    - memory waste for kernel page table
>>
>> These issues can be avoided by specifying rodata=on the kernel command
>> line but this disables the alias checks on page table permissions and
>> therefore compromises security somewhat.
>>
>> With FEAT_BBM level 2 support it is no longer necessary to invalidate the
>> page table entry when changing page sizes.  This allows the kernel to
>> split large mappings after boot is complete.
>>
>> This patch adds support for splitting large mappings when FEAT_BBM level 2
>> is available and rodata=full is used. This functionality will be used
>> when modifying page permissions for individual page frames.
>>
>> Without FEAT_BBM level 2 we will keep the kernel linear map using PTEs
>> only.
>>
>> If the system is asymmetric, the kernel linear mapping may be repainted once
>> the BBML2 capability is finalized on all CPUs.  See patch #4 for more details.
>>
>> We saw significant performance increases in some benchmarks with
>> rodata=full without compromising the security features of the kernel.
>>
>> Testing
>> =======
>> The test was done on AmpereOne machine (192 cores, 1P) with 256GB memory and
>> 4K page size + 48 bit VA.
>>
>> Function test (4K/16K/64K page size)
>>    - Kernel boot.  Kernel needs change kernel linear mapping permission at
>>      boot stage, if the patch didn't work, kernel typically didn't boot.
>>    - Module stress from stress-ng. Kernel module load change permission for
>>      linear mapping.
>>    - A test kernel module which allocates 80% of total memory via vmalloc(),
>>      then change the vmalloc area permission to RO, this also change linear
>>      mapping permission to RO, then change it back before vfree(). Then launch
>>      a VM which consumes almost all physical memory.
>>    - VM with the patchset applied in guest kernel too.
>>    - Kernel build in VM with guest kernel which has this series applied.
>>    - rodata=on. Make sure other rodata mode is not broken.
>>    - Boot on the machine which doesn't support BBML2.
>>
>> Performance
>> ===========
>> Memory consumption
>> Before:
>> MemTotal:       258988984 kB
>> MemFree:        254821700 kB
>>
>> After:
>> MemTotal:       259505132 kB
>> MemFree:        255410264 kB
>>
>> Around 500MB more memory are free to use.  The larger the machine, the
>> more memory saved.
>>
>> Performance benchmarking
>> * Memcached
>> We saw performance degradation when running Memcached benchmark with
>> rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
>> With this patchset we saw ops/sec is increased by around 3.5%, P99
>> latency is reduced by around 9.6%.
>> The gain mainly came from reduced kernel TLB misses.  The kernel TLB
>> MPKI is reduced by 28.5%.
>>
>> The benchmark data is now on par with rodata=on too.
>>
>> * Disk encryption (dm-crypt) benchmark
>> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
>> encryption (by dm-crypt with no read/write workqueue).
>> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>>      --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>>      --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>>      --name=iops-test-job --eta-newline=1 --size 100G
>>
>> The IOPS is increased by 90% - 150% (the variance is high, but the worst
>> number of good case is around 90% more than the best number of bad case).
>> The bandwidth is increased and the avg clat is reduced proportionally.
>>
>> * Sequential file read
>> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
>> The bandwidth is increased by 150%.
>>
>>
>> Yang Shi (4):
>>        arm64: cpufeature: add AmpereOne to BBML2 allow list
>>        arm64: mm: make __create_pgd_mapping() and helpers non-void
>>        arm64: mm: support large block mapping when rodata=full
>>        arm64: mm: split linear mapping if BBML2 is not supported on secondary
>> CPUs
>>
>>   arch/arm64/include/asm/cpufeature.h |  26 +++++++
>>   arch/arm64/include/asm/mmu.h        |   4 +
>>   arch/arm64/include/asm/pgtable.h    |  12 ++-
>>   arch/arm64/kernel/cpufeature.c      |  30 ++++++--
>>   arch/arm64/mm/mmu.c                 | 505 ++++++++++++++++++++++++++++++++++
>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>> +---------------
>>   arch/arm64/mm/pageattr.c            |  37 +++++++--
>>   arch/arm64/mm/proc.S                |  41 ++++++++++
>>   7 files changed, 585 insertions(+), 70 deletions(-)
>>
> 



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 2/4] arm64: mm: make __create_pgd_mapping() and helpers non-void
  2025-05-31  2:41 ` [PATCH 2/4] arm64: mm: make __create_pgd_mapping() and helpers non-void Yang Shi
@ 2025-06-16 10:04   ` Ryan Roberts
  2025-06-17 21:11     ` Yang Shi
  0 siblings, 1 reply; 34+ messages in thread
From: Ryan Roberts @ 2025-06-16 10:04 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, Miko.Lenczewski, dev.jain, scott,
	cl
  Cc: linux-arm-kernel, linux-kernel, Chaitanya S Prakash

On 31/05/2025 03:41, Yang Shi wrote:
> A later patch will enhance __create_pgd_mapping() and related helpers
> to split the kernel linear mapping, which requires a return value.  So make
> __create_pgd_mapping() and its helpers non-void functions.
> 
> And move the BUG_ON() out of the page table alloc helper since failing to
> split the kernel linear mapping is not fatal and can be handled by the
> callers in the later patch.  Have BUG_ON() after
> __create_pgd_mapping_locked() returns to keep the current callers' behavior
> intact.
> 
> Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>

With the nits below taken care of:

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>

> ---
>  arch/arm64/kernel/cpufeature.c |  10 ++-
>  arch/arm64/mm/mmu.c            | 130 +++++++++++++++++++++++----------
>  2 files changed, 99 insertions(+), 41 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 25e1fbfab6a3..e879bfcf853b 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1933,9 +1933,9 @@ static bool has_pmuv3(const struct arm64_cpu_capabilities *entry, int scope)
>  #define KPTI_NG_TEMP_VA		(-(1UL << PMD_SHIFT))
>  
>  extern
> -void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
> -			     phys_addr_t size, pgprot_t prot,
> -			     phys_addr_t (*pgtable_alloc)(int), int flags);
> +int create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
> +			    phys_addr_t size, pgprot_t prot,
> +			    phys_addr_t (*pgtable_alloc)(int), int flags);
>  
>  static phys_addr_t __initdata kpti_ng_temp_alloc;
>  
> @@ -1957,6 +1957,7 @@ static int __init __kpti_install_ng_mappings(void *__unused)
>  	u64 kpti_ng_temp_pgd_pa = 0;
>  	pgd_t *kpti_ng_temp_pgd;
>  	u64 alloc = 0;
> +	int err;
>  
>  	if (levels == 5 && !pgtable_l5_enabled())
>  		levels = 4;
> @@ -1986,9 +1987,10 @@ static int __init __kpti_install_ng_mappings(void *__unused)
>  		// covers the PTE[] page itself, the remaining entries are free
>  		// to be used as a ad-hoc fixmap.
>  		//
> -		create_kpti_ng_temp_pgd(kpti_ng_temp_pgd, __pa(alloc),
> +		err = create_kpti_ng_temp_pgd(kpti_ng_temp_pgd, __pa(alloc),
>  					KPTI_NG_TEMP_VA, PAGE_SIZE, PAGE_KERNEL,
>  					kpti_ng_pgd_alloc, 0);
> +		BUG_ON(err);
>  	}
>  
>  	cpu_install_idmap();
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index ea6695d53fb9..775c0536b194 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -189,15 +189,16 @@ static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
>  	} while (ptep++, addr += PAGE_SIZE, addr != end);
>  }
>  
> -static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
> -				unsigned long end, phys_addr_t phys,
> -				pgprot_t prot,
> -				phys_addr_t (*pgtable_alloc)(int),
> -				int flags)
> +static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
> +			       unsigned long end, phys_addr_t phys,
> +			       pgprot_t prot,
> +			       phys_addr_t (*pgtable_alloc)(int),
> +			       int flags)
>  {
>  	unsigned long next;
>  	pmd_t pmd = READ_ONCE(*pmdp);
>  	pte_t *ptep;
> +	int ret = 0;
>  
>  	BUG_ON(pmd_sect(pmd));
>  	if (pmd_none(pmd)) {
> @@ -208,6 +209,10 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>  			pmdval |= PMD_TABLE_PXN;
>  		BUG_ON(!pgtable_alloc);
>  		pte_phys = pgtable_alloc(PAGE_SHIFT);
> +		if (pte_phys == -1) {

It would be better to have a macro definition for the invalid PA case instead of
using the magic -1 everywhere. I think it can be local to this file. Perhaps:

#define INVAL_PHYS_ADDR -1
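
i.e. the allocator and the call sites would then read something like
(sketch only):

	/* in __pgd_pgtable_alloc() */
	if (!ptr)
		return INVAL_PHYS_ADDR;

	/* at each call site, e.g. alloc_init_cont_pte() */
	pte_phys = pgtable_alloc(PAGE_SHIFT);
	if (pte_phys == INVAL_PHYS_ADDR) {
		ret = -ENOMEM;
		goto out;
	}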

> +			ret = -ENOMEM;
> +			goto out;
> +		}
>  		ptep = pte_set_fixmap(pte_phys);
>  		init_clear_pgtable(ptep);
>  		ptep += pte_index(addr);
> @@ -239,13 +244,17 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>  	 * walker.
>  	 */
>  	pte_clear_fixmap();
> +
> +out:
> +	return ret;
>  }
>  
> -static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
> -		     phys_addr_t phys, pgprot_t prot,
> -		     phys_addr_t (*pgtable_alloc)(int), int flags)
> +static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
> +		    phys_addr_t phys, pgprot_t prot,
> +		    phys_addr_t (*pgtable_alloc)(int), int flags)
>  {
>  	unsigned long next;
> +	int ret = 0;
>  
>  	do {
>  		pmd_t old_pmd = READ_ONCE(*pmdp);
> @@ -264,22 +273,27 @@ static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
>  			BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
>  						      READ_ONCE(pmd_val(*pmdp))));
>  		} else {
> -			alloc_init_cont_pte(pmdp, addr, next, phys, prot,
> +			ret = alloc_init_cont_pte(pmdp, addr, next, phys, prot,
>  					    pgtable_alloc, flags);
> +			if (ret)
> +				break;
>  
>  			BUG_ON(pmd_val(old_pmd) != 0 &&
>  			       pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
>  		}
>  		phys += next - addr;
>  	} while (pmdp++, addr = next, addr != end);
> +
> +	return ret;
>  }
>  
> -static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
> -				unsigned long end, phys_addr_t phys,
> -				pgprot_t prot,
> -				phys_addr_t (*pgtable_alloc)(int), int flags)
> +static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
> +			       unsigned long end, phys_addr_t phys,
> +			       pgprot_t prot,
> +			       phys_addr_t (*pgtable_alloc)(int), int flags)
>  {
>  	unsigned long next;
> +	int ret = 0;
>  	pud_t pud = READ_ONCE(*pudp);
>  	pmd_t *pmdp;
>  
> @@ -295,6 +309,10 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>  			pudval |= PUD_TABLE_PXN;
>  		BUG_ON(!pgtable_alloc);
>  		pmd_phys = pgtable_alloc(PMD_SHIFT);
> +		if (pmd_phys == -1) {
> +			ret = -ENOMEM;
> +			goto out;
> +		}
>  		pmdp = pmd_set_fixmap(pmd_phys);
>  		init_clear_pgtable(pmdp);
>  		pmdp += pmd_index(addr);
> @@ -314,21 +332,27 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>  		    (flags & NO_CONT_MAPPINGS) == 0)
>  			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>  
> -		init_pmd(pmdp, addr, next, phys, __prot, pgtable_alloc, flags);
> +		ret = init_pmd(pmdp, addr, next, phys, __prot, pgtable_alloc, flags);
> +		if (ret)
> +			break;
>  
>  		pmdp += pmd_index(next) - pmd_index(addr);
>  		phys += next - addr;
>  	} while (addr = next, addr != end);
>  
>  	pmd_clear_fixmap();
> +
> +out:
> +	return ret;
>  }
>  
> -static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
> -			   phys_addr_t phys, pgprot_t prot,
> -			   phys_addr_t (*pgtable_alloc)(int),
> -			   int flags)
> +static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
> +			  phys_addr_t phys, pgprot_t prot,
> +			  phys_addr_t (*pgtable_alloc)(int),
> +			  int flags)
>  {
>  	unsigned long next;
> +	int ret = 0;
>  	p4d_t p4d = READ_ONCE(*p4dp);
>  	pud_t *pudp;
>  
> @@ -340,6 +364,10 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>  			p4dval |= P4D_TABLE_PXN;
>  		BUG_ON(!pgtable_alloc);
>  		pud_phys = pgtable_alloc(PUD_SHIFT);
> +		if (pud_phys == -1) {
> +			ret = -ENOMEM;
> +			goto out;
> +		}
>  		pudp = pud_set_fixmap(pud_phys);
>  		init_clear_pgtable(pudp);
>  		pudp += pud_index(addr);
> @@ -369,8 +397,10 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>  			BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
>  						      READ_ONCE(pud_val(*pudp))));
>  		} else {
> -			alloc_init_cont_pmd(pudp, addr, next, phys, prot,
> +			ret = alloc_init_cont_pmd(pudp, addr, next, phys, prot,
>  					    pgtable_alloc, flags);
> +			if (ret)
> +				break;
>  
>  			BUG_ON(pud_val(old_pud) != 0 &&
>  			       pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
> @@ -379,14 +409,18 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>  	} while (pudp++, addr = next, addr != end);
>  
>  	pud_clear_fixmap();
> +
> +out:
> +	return ret;
>  }
>  
> -static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
> -			   phys_addr_t phys, pgprot_t prot,
> -			   phys_addr_t (*pgtable_alloc)(int),
> -			   int flags)
> +static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
> +			  phys_addr_t phys, pgprot_t prot,
> +			  phys_addr_t (*pgtable_alloc)(int),
> +			  int flags)
>  {
>  	unsigned long next;
> +	int ret = 0;
>  	pgd_t pgd = READ_ONCE(*pgdp);
>  	p4d_t *p4dp;
>  
> @@ -398,6 +432,10 @@ static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>  			pgdval |= PGD_TABLE_PXN;
>  		BUG_ON(!pgtable_alloc);
>  		p4d_phys = pgtable_alloc(P4D_SHIFT);
> +		if (p4d_phys == -1) {
> +			ret = -ENOMEM;
> +			goto out;
> +		}
>  		p4dp = p4d_set_fixmap(p4d_phys);
>  		init_clear_pgtable(p4dp);
>  		p4dp += p4d_index(addr);
> @@ -412,8 +450,10 @@ static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>  
>  		next = p4d_addr_end(addr, end);
>  
> -		alloc_init_pud(p4dp, addr, next, phys, prot,
> +		ret = alloc_init_pud(p4dp, addr, next, phys, prot,
>  			       pgtable_alloc, flags);
> +		if (ret)
> +			break;
>  
>  		BUG_ON(p4d_val(old_p4d) != 0 &&
>  		       p4d_val(old_p4d) != READ_ONCE(p4d_val(*p4dp)));
> @@ -422,23 +462,27 @@ static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>  	} while (p4dp++, addr = next, addr != end);
>  
>  	p4d_clear_fixmap();
> +
> +out:
> +	return ret;
>  }
>  
> -static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
> -					unsigned long virt, phys_addr_t size,
> -					pgprot_t prot,
> -					phys_addr_t (*pgtable_alloc)(int),
> -					int flags)
> +static int __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
> +				       unsigned long virt, phys_addr_t size,
> +				       pgprot_t prot,
> +				       phys_addr_t (*pgtable_alloc)(int),
> +				       int flags)
>  {
>  	unsigned long addr, end, next;
>  	pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
> +	int ret = 0;
>  
>  	/*
>  	 * If the virtual and physical address don't have the same offset
>  	 * within a page, we cannot map the region as the caller expects.
>  	 */
>  	if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
> -		return;
> +		return -EINVAL;
>  
>  	phys &= PAGE_MASK;
>  	addr = virt & PAGE_MASK;
> @@ -446,10 +490,14 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
>  
>  	do {
>  		next = pgd_addr_end(addr, end);
> -		alloc_init_p4d(pgdp, addr, next, phys, prot, pgtable_alloc,
> +		ret = alloc_init_p4d(pgdp, addr, next, phys, prot, pgtable_alloc,
>  			       flags);
> +		if (ret)
> +			break;
>  		phys += next - addr;
>  	} while (pgdp++, addr = next, addr != end);
> +
> +	return ret;
>  }
>  
>  static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
> @@ -458,17 +506,20 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
>  				 phys_addr_t (*pgtable_alloc)(int),
>  				 int flags)
>  {
> +	int err;
> +
>  	mutex_lock(&fixmap_lock);
> -	__create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
> -				    pgtable_alloc, flags);
> +	err = __create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
> +					  pgtable_alloc, flags);
> +	BUG_ON(err);
>  	mutex_unlock(&fixmap_lock);
>  }
>  
>  #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  extern __alias(__create_pgd_mapping_locked)
> -void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
> -			     phys_addr_t size, pgprot_t prot,
> -			     phys_addr_t (*pgtable_alloc)(int), int flags);
> +int create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
> +			    phys_addr_t size, pgprot_t prot,
> +			    phys_addr_t (*pgtable_alloc)(int), int flags);
>  #endif

Personally I would have converted this from an alias to a wrapper:

void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
			     phys_addr_t size, pgprot_t prot,
			     phys_addr_t (*pgtable_alloc)(int), int flags)
{
	int ret;

	ret = __create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
					  pgtable_alloc, flags);
	BUG_ON(ret);
}

Then there is no churn in cpufeature.c. But it's not a strong opinion. If you
prefer it like this then I'm ok with it (We'll need to see what Catalin and Will
prefer ultimately anyway).

Thanks,
Ryan

>  
>  static phys_addr_t __pgd_pgtable_alloc(int shift)
> @@ -476,13 +527,17 @@ static phys_addr_t __pgd_pgtable_alloc(int shift)
>  	/* Page is zeroed by init_clear_pgtable() so don't duplicate effort. */
>  	void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL & ~__GFP_ZERO);
>  
> -	BUG_ON(!ptr);
> +	if (!ptr)
> +		return -1;
> +
>  	return __pa(ptr);
>  }
>  
>  static phys_addr_t pgd_pgtable_alloc(int shift)
>  {
>  	phys_addr_t pa = __pgd_pgtable_alloc(shift);
> +	if (pa == -1)
> +		goto out;
>  	struct ptdesc *ptdesc = page_ptdesc(phys_to_page(pa));
>  
>  	/*
> @@ -498,6 +553,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
>  	else if (shift == PMD_SHIFT)
>  		BUG_ON(!pagetable_pmd_ctor(ptdesc));
>  
> +out:
>  	return pa;
>  }
>  



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-05-31  2:41 ` [PATCH 3/4] arm64: mm: support large block mapping when rodata=full Yang Shi
@ 2025-06-16 11:58   ` Ryan Roberts
  2025-06-16 12:33     ` Ryan Roberts
  2025-06-16 16:24   ` Ryan Roberts
  1 sibling, 1 reply; 34+ messages in thread
From: Ryan Roberts @ 2025-06-16 11:58 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, Miko.Lenczewski, dev.jain, scott,
	cl
  Cc: linux-arm-kernel, linux-kernel

On 31/05/2025 03:41, Yang Shi wrote:
> When rodata=full is specified, kernel linear mapping has to be mapped at
> PTE level since large page table can't be split due to break-before-make
> rule on ARM64.
> 
> This resulted in a couple of problems:
>   - performance degradation
>   - more TLB pressure
>   - memory waste for kernel page table
> 
> With FEAT_BBM level 2 support, splitting a large block page table entry
> into smaller ones no longer requires making the entry invalid.
> This allows the kernel to split large block mappings on the fly.
> 
> Add kernel page table split support and use large block mapping by
> default when FEAT_BBM level 2 is supported for rodata=full.  When
> changing permissions for kernel linear mapping, the page table will be
> split to smaller size.
> 
> Machines without FEAT_BBM level 2 will fall back to a PTE-mapped kernel
> linear mapping when rodata=full.
> 
> With this we saw significant performance boost with some benchmarks and
> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
> 256GB memory.
> 
> * Memory use after boot
> Before:
> MemTotal:       258988984 kB
> MemFree:        254821700 kB
> 
> After:
> MemTotal:       259505132 kB
> MemFree:        255410264 kB
> 
> Around 500MB more memory are free to use.  The larger the machine, the
> more memory saved.
> 
> * Memcached
> We saw performance degradation when running Memcached benchmark with
> rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
> With this patchset we saw ops/sec is increased by around 3.5%, P99
> latency is reduced by around 9.6%.
> The gain mainly came from reduced kernel TLB misses.  The kernel TLB
> MPKI is reduced by 28.5%.
> 
> The benchmark data is now on par with rodata=on too.
> 
> * Disk encryption (dm-crypt) benchmark
> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
> encryption (by dm-crypt).
> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>     --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>     --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>     --name=iops-test-job --eta-newline=1 --size 100G
> 
> The IOPS is increased by 90% - 150% (the variance is high, but the worst
> number of good case is around 90% more than the best number of bad case).
> The bandwidth is increased and the avg clat is reduced proportionally.
> 
> * Sequential file read
> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
> The bandwidth is increased by 150%.
> 
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
>  arch/arm64/include/asm/cpufeature.h |  26 +++
>  arch/arm64/include/asm/mmu.h        |   1 +
>  arch/arm64/include/asm/pgtable.h    |  12 +-
>  arch/arm64/kernel/cpufeature.c      |   2 +-
>  arch/arm64/mm/mmu.c                 | 269 +++++++++++++++++++++++++---
>  arch/arm64/mm/pageattr.c            |  37 +++-
>  6 files changed, 319 insertions(+), 28 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 8f36ffa16b73..a95806980298 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -1053,6 +1053,32 @@ static inline bool cpu_has_lpa2(void)
>  #endif
>  }
>  
> +bool cpu_has_bbml2_noabort(unsigned int cpu_midr);
> +
> +static inline bool has_nobbml2_override(void)
> +{
> +	u64 mmfr2;
> +	unsigned int bbm;
> +
> +	mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
> +	mmfr2 &= ~id_aa64mmfr2_override.mask;
> +	mmfr2 |= id_aa64mmfr2_override.val;
> +	bbm = cpuid_feature_extract_unsigned_field(mmfr2,
> +						   ID_AA64MMFR2_EL1_BBM_SHIFT);
> +	return bbm == 0;
> +}
> +
> +/*
> + * Called at early boot stage on boot CPU before cpu info and cpu feature
> + * are ready.
> + */
> +static inline bool bbml2_noabort_available(void)
> +{
> +	return IS_ENABLED(CONFIG_ARM64_BBML2_NOABORT) &&
> +	       cpu_has_bbml2_noabort(read_cpuid_id()) &&
> +	       !has_nobbml2_override();

Based on Will's feedback, The Kconfig and the cmdline override will both
disappear in Miko's next version and we will only use the MIDR list to decide
BBML2_NOABORT status, so this will significantly simplify. Sorry about the churn
here.

> +}
> +
>  #endif /* __ASSEMBLY__ */
>  
>  #endif
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 6e8aa8e72601..2693d63bf837 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>  			       pgprot_t prot, bool page_mappings_only);
>  extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>  extern void mark_linear_text_alias_ro(void);
> +extern int split_linear_mapping(unsigned long start, unsigned long end);

nit: Perhaps split_leaf_mapping() or split_kernel_pgtable_mapping() or something
similar is more generic which will benefit us in future when using this for
vmalloc too?

>  
>  /*
>   * This check is triggered during the early boot before the cpufeature
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index d3b538be1500..bf3cef31d243 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -293,6 +293,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
>  	return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
>  }
>  
> +static inline pmd_t pmd_mknoncont(pmd_t pmd)
> +{
> +	return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
> +}
> +
>  static inline pte_t pte_mkdevmap(pte_t pte)
>  {
>  	return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
> @@ -769,7 +774,7 @@ static inline bool in_swapper_pgdir(void *addr)
>  	        ((unsigned long)swapper_pg_dir & PAGE_MASK);
>  }
>  
> -static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
> +static inline void __set_pmd_nosync(pmd_t *pmdp, pmd_t pmd)
>  {
>  #ifdef __PAGETABLE_PMD_FOLDED
>  	if (in_swapper_pgdir(pmdp)) {
> @@ -779,6 +784,11 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>  #endif /* __PAGETABLE_PMD_FOLDED */
>  
>  	WRITE_ONCE(*pmdp, pmd);
> +}
> +
> +static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
> +{
> +	__set_pmd_nosync(pmdp, pmd);
>  
>  	if (pmd_valid(pmd)) {
>  		dsb(ishst);
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index e879bfcf853b..5fc2a4a804de 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2209,7 +2209,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
>  	return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
>  }
>  
> -static bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
> +bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
>  {
>  	/*
>  	 * We want to allow usage of bbml2 in as wide a range of kernel contexts


[...] I'll send a separate response for the mmu.c table walker changes.

>  
> +int split_linear_mapping(unsigned long start, unsigned long end)
> +{
> +	int ret = 0;
> +
> +	if (!system_supports_bbml2_noabort())
> +		return 0;

Hmm... I guess the thinking here is that for !BBML2_NOABORT you are expecting
this function should only be called in the first place if we know we are
pte-mapped. So I guess this is ok... it just means that if we are not
pte-mapped, warnings will be emitted while walking the pgtables (as is the case
today). So I think this approach is ok.

> +
> +	mmap_write_lock(&init_mm);

What is the lock protecting? I was originally thinking no locking should be
needed because it's not needed for permission changes today; but I think you are
right here and we do need locking; multiple owners could share a large leaf
mapping, I guess? And in that case you could get concurrent attempts to split
from both owners.

I'm not really a fan of adding the extra locking though; It might introduce a
new bottleneck. I wonder if there is a way we could do this locklessly? i.e.
allocate the new table, then cmpxchg to insert and the loser has to free? That
doesn't work for contiguous mappings though...
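
For the non-cont case, the shape I have in mind is something like this at
the pmd level (very rough sketch; it assumes pte_phys points at a
pre-populated pte table mirroring old_pmd, and it glosses over the table
prot bits, barriers and the contiguous-mapping problem above):

static bool try_install_split_table(pmd_t *pmdp, pmd_t old_pmd,
				    phys_addr_t pte_phys)
{
	pmd_t table = __pmd(__phys_to_pmd_val(pte_phys) | PMD_TYPE_TABLE);

	/* The winner installs the new pte table; the loser frees its copy. */
	if (cmpxchg_relaxed(&pmd_val(*pmdp), pmd_val(old_pmd),
			    pmd_val(table)) != pmd_val(old_pmd)) {
		free_page((unsigned long)__va(pte_phys));
		return false;
	}

	return true;
}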

> +	/* NO_EXEC_MAPPINGS is needed when splitting linear map */
> +	ret = __create_pgd_mapping_locked(init_mm.pgd, virt_to_phys((void *)start),
> +					  start, (end - start), __pgprot(0),
> +					  __pgd_pgtable_alloc,
> +					  NO_EXEC_MAPPINGS | SPLIT_MAPPINGS);
> +	mmap_write_unlock(&init_mm);
> +	flush_tlb_kernel_range(start, end);

I don't believe we should need to flush the TLB when only changing entry sizes
when BBML2 is supported. Miko's series has a massive comment explaining the
reasoning. That applies to user space though. We should consider if this all
works safely for kernel space too, and hopefully remove the flush.

> +
> +	return ret;
> +}
> +
>  /*
>   * This function can only be used to modify existing table entries,
>   * without allocating new levels of table. Note that this permits the
> @@ -676,6 +887,24 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>  
>  #endif /* CONFIG_KFENCE */
>  
> +static inline bool force_pte_mapping(void)
> +{
> +	/*
> +	 * Can't use cpufeature API to determine whether BBML2 supported
> +	 * or not since cpufeature have not been finalized yet.
> +	 *
> +	 * Checking the boot CPU only for now.  If the boot CPU has
> +	 * BBML2, paint linear mapping with block mapping.  If it turns
> +	 * out the secondary CPUs don't support BBML2 once cpufeature is
> +	 * fininalized, the linear mapping will be repainted with PTE
> +	 * mapping.
> +	 */
> +	return (rodata_full && !bbml2_noabort_available()) ||

So this is the case where we don't have BBML2 and need to modify protections at
page granularity - I agree we need to force pte mappings here.

> +		debug_pagealloc_enabled() ||

This is the case where every page is made invalid on free and valid on
allocation, so no point in having block mappings because it will soon degenerate
into page mappings because we will have to split on every allocation. Agree here
too.

> +		arm64_kfence_can_set_direct_map() ||

After looking into how kfence works, I don't agree with this one. It has a
dedicated pool where it allocates from. That pool may be allocated early by the
arch or may be allocated late by the core code. Either way, kfence will only
modify protections within that pool. Your current approach is forcing pte
mappings if the pool allocation is late (i.e. not performed by the arch code
during boot). But I think "late" is the most common case; kfence is compiled
into the kernel but not active at boot. Certainly that's how my Ubuntu kernel is
configured. So I think we should just ignore kfence here. If it's "early" then
we map the pool with page granularity (as an optimization). If it's "late" your
splitter will degenerate the whole kfence pool to page mappings over time as
kfence_protect_page() -> set_memory_valid() is called. But the bulk of the
linear map will remain mapped with large blocks.

> +		is_realm_world();

I think the only reason this requires pte mappings is for
__set_memory_enc_dec(). But that can now deal with block mappings given the
ability to split the mappings as needed. So I think this condition can be
removed too.

> +}

Additionally, for can_set_direct_map(): at minimum its comment should be tidied
up, but really I think it should return true if "BBML2_NOABORT ||
force_pte_mapping()". Because they are the conditions under which we can now
safely modify the linear map.
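
i.e. roughly (sketch only, with the comment tidy-up not shown; it relies
on the force_pte_mapping() helper this patch adds):

bool can_set_direct_map(void)
{
	return system_supports_bbml2_noabort() || force_pte_mapping();
}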

> +
>  static void __init map_mem(pgd_t *pgdp)
>  {
>  	static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
> @@ -701,7 +930,7 @@ static void __init map_mem(pgd_t *pgdp)
>  
>  	early_kfence_pool = arm64_kfence_alloc_pool();
>  
> -	if (can_set_direct_map())
> +	if (force_pte_mapping())
>  		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>  
>  	/*
> @@ -1402,7 +1631,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>  
>  	VM_BUG_ON(!mhp_range_allowed(start, size, true));
>  
> -	if (can_set_direct_map())
> +	if (force_pte_mapping())
>  		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>  
>  	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 39fd1f7ff02a..25c068712cb5 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -10,6 +10,7 @@
>  #include <linux/vmalloc.h>
>  
>  #include <asm/cacheflush.h>
> +#include <asm/mmu.h>
>  #include <asm/pgtable-prot.h>
>  #include <asm/set_memory.h>
>  #include <asm/tlbflush.h>
> @@ -42,6 +43,8 @@ static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
>  	struct page_change_data *cdata = data;
>  	pte_t pte = __ptep_get(ptep);
>  
> +	BUG_ON(pte_cont(pte));

I don't think this is required; We want to enable using contiguous mappings
where it makes sense. As long as we have BBML2, we can update contiguous pte
mappings in place, as long as we update all of the ptes in the contiguous block.
split_linear_map() should either have converted to non-cont mappings if the
contiguous block straddled the split point, or would have left as is (or
downgraded a PMD-block to a contpte block) if fully contained within the split
range.

> +
>  	pte = clear_pte_bit(pte, cdata->clear_mask);
>  	pte = set_pte_bit(pte, cdata->set_mask);
>  
> @@ -80,8 +83,9 @@ static int change_memory_common(unsigned long addr, int numpages,
>  	unsigned long start = addr;
>  	unsigned long size = PAGE_SIZE * numpages;
>  	unsigned long end = start + size;
> +	unsigned long l_start;
>  	struct vm_struct *area;
> -	int i;
> +	int i, ret;
>  
>  	if (!PAGE_ALIGNED(addr)) {
>  		start &= PAGE_MASK;
> @@ -118,7 +122,12 @@ static int change_memory_common(unsigned long addr, int numpages,
>  	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>  			    pgprot_val(clear_mask) == PTE_RDONLY)) {
>  		for (i = 0; i < area->nr_pages; i++) {
> -			__change_memory_common((u64)page_address(area->pages[i]),
> +			l_start = (u64)page_address(area->pages[i]);
> +			ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
> +			if (WARN_ON_ONCE(ret))
> +				return ret;

I don't think this is the right place to integrate; I think the split should be
done inside __change_memory_common(). Then it caters to all possibilities (i.e.
set_memory_valid() and __set_memory_enc_dec()). This means it will run for
vmalloc too, but for now, that will be a nop because everything should already
be split as required on entry and in future we will get that for free.

Once you have integrated Dev's series, the hook becomes
___change_memory_common() (3 underscores)...

> +
> +			__change_memory_common(l_start,
>  					       PAGE_SIZE, set_mask, clear_mask);
>  		}
>  	}
> @@ -174,6 +183,9 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
>  
>  int set_direct_map_invalid_noflush(struct page *page)
>  {
> +	unsigned long l_start;
> +	int ret;
> +
>  	struct page_change_data data = {
>  		.set_mask = __pgprot(0),
>  		.clear_mask = __pgprot(PTE_VALID),
> @@ -182,13 +194,21 @@ int set_direct_map_invalid_noflush(struct page *page)
>  	if (!can_set_direct_map())
>  		return 0;
>  
> +	l_start = (unsigned long)page_address(page);
> +	ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
> +	if (WARN_ON_ONCE(ret))
> +		return ret;
> +
>  	return apply_to_page_range(&init_mm,
> -				   (unsigned long)page_address(page),
> -				   PAGE_SIZE, change_page_range, &data);
> +				   l_start, PAGE_SIZE, change_page_range,
> +				   &data);

...and once integrated with Dev's series you don't need any changes here...

>  }
>  
>  int set_direct_map_default_noflush(struct page *page)
>  {
> +	unsigned long l_start;
> +	int ret;
> +
>  	struct page_change_data data = {
>  		.set_mask = __pgprot(PTE_VALID | PTE_WRITE),
>  		.clear_mask = __pgprot(PTE_RDONLY),
> @@ -197,9 +217,14 @@ int set_direct_map_default_noflush(struct page *page)
>  	if (!can_set_direct_map())
>  		return 0;
>  
> +	l_start = (unsigned long)page_address(page);
> +	ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
> +	if (WARN_ON_ONCE(ret))
> +		return ret;
> +
>  	return apply_to_page_range(&init_mm,
> -				   (unsigned long)page_address(page),
> -				   PAGE_SIZE, change_page_range, &data);
> +				   l_start, PAGE_SIZE, change_page_range,
> +				   &data);

...or here.

Thanks,
Ryan

>  }
>  
>  static int __set_memory_enc_dec(unsigned long addr,



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-06-16 11:58   ` Ryan Roberts
@ 2025-06-16 12:33     ` Ryan Roberts
  2025-06-17 21:01       ` Yang Shi
  0 siblings, 1 reply; 34+ messages in thread
From: Ryan Roberts @ 2025-06-16 12:33 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, Miko.Lenczewski, dev.jain, scott,
	cl
  Cc: linux-arm-kernel, linux-kernel

On 16/06/2025 12:58, Ryan Roberts wrote:
> On 31/05/2025 03:41, Yang Shi wrote:
>> When rodata=full is specified, kernel linear mapping has to be mapped at
>> PTE level since large page table can't be split due to break-before-make
>> rule on ARM64.
>>
>> This resulted in a couple of problems:
>>   - performance degradation
>>   - more TLB pressure
>>   - memory waste for kernel page table
>>
>> With FEAT_BBM level 2 support, splitting a large block page table entry
>> into smaller ones no longer requires making the entry invalid.
>> This allows the kernel to split large block mappings on the fly.
>>
>> Add kernel page table split support and use large block mapping by
>> default when FEAT_BBM level 2 is supported for rodata=full.  When
>> changing permissions for kernel linear mapping, the page table will be
>> split to smaller size.
>>
>> Machines without FEAT_BBM level 2 will fall back to a PTE-mapped kernel
>> linear mapping when rodata=full.
>>
>> With this we saw significant performance boost with some benchmarks and
>> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
>> 256GB memory.
>>
>> * Memory use after boot
>> Before:
>> MemTotal:       258988984 kB
>> MemFree:        254821700 kB
>>
>> After:
>> MemTotal:       259505132 kB
>> MemFree:        255410264 kB
>>
>> Around 500MB more memory are free to use.  The larger the machine, the
>> more memory saved.
>>
>> * Memcached
>> We saw performance degradation when running Memcached benchmark with
>> rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
>> With this patchset we saw ops/sec is increased by around 3.5%, P99
>> latency is reduced by around 9.6%.
>> The gain mainly came from reduced kernel TLB misses.  The kernel TLB
>> MPKI is reduced by 28.5%.
>>
>> The benchmark data is now on par with rodata=on too.
>>
>> * Disk encryption (dm-crypt) benchmark
>> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
>> encryption (by dm-crypt).
>> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>>     --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>>     --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>>     --name=iops-test-job --eta-newline=1 --size 100G
>>
>> The IOPS is increased by 90% - 150% (the variance is high, but the worst
>> number of good case is around 90% more than the best number of bad case).
>> The bandwidth is increased and the avg clat is reduced proportionally.
>>
>> * Sequential file read
>> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
>> The bandwidth is increased by 150%.
>>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>>  arch/arm64/include/asm/cpufeature.h |  26 +++
>>  arch/arm64/include/asm/mmu.h        |   1 +
>>  arch/arm64/include/asm/pgtable.h    |  12 +-
>>  arch/arm64/kernel/cpufeature.c      |   2 +-
>>  arch/arm64/mm/mmu.c                 | 269 +++++++++++++++++++++++++---
>>  arch/arm64/mm/pageattr.c            |  37 +++-
>>  6 files changed, 319 insertions(+), 28 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index 8f36ffa16b73..a95806980298 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -1053,6 +1053,32 @@ static inline bool cpu_has_lpa2(void)
>>  #endif
>>  }
>>  
>> +bool cpu_has_bbml2_noabort(unsigned int cpu_midr);
>> +
>> +static inline bool has_nobbml2_override(void)
>> +{
>> +	u64 mmfr2;
>> +	unsigned int bbm;
>> +
>> +	mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
>> +	mmfr2 &= ~id_aa64mmfr2_override.mask;
>> +	mmfr2 |= id_aa64mmfr2_override.val;
>> +	bbm = cpuid_feature_extract_unsigned_field(mmfr2,
>> +						   ID_AA64MMFR2_EL1_BBM_SHIFT);
>> +	return bbm == 0;
>> +}
>> +
>> +/*
>> + * Called at early boot stage on boot CPU before cpu info and cpu feature
>> + * are ready.
>> + */
>> +static inline bool bbml2_noabort_available(void)
>> +{
>> +	return IS_ENABLED(CONFIG_ARM64_BBML2_NOABORT) &&
>> +	       cpu_has_bbml2_noabort(read_cpuid_id()) &&
>> +	       !has_nobbml2_override();
> 
> Based on Will's feedback, The Kconfig and the cmdline override will both
> disappear in Miko's next version and we will only use the MIDR list to decide
> BBML2_NOABORT status, so this will significantly simplify. Sorry about the churn
> here.
> 
>> +}
>> +
>>  #endif /* __ASSEMBLY__ */
>>  
>>  #endif
>> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
>> index 6e8aa8e72601..2693d63bf837 100644
>> --- a/arch/arm64/include/asm/mmu.h
>> +++ b/arch/arm64/include/asm/mmu.h
>> @@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>>  			       pgprot_t prot, bool page_mappings_only);
>>  extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>>  extern void mark_linear_text_alias_ro(void);
>> +extern int split_linear_mapping(unsigned long start, unsigned long end);
> 
> nit: Perhaps split_leaf_mapping() or split_kernel_pgtable_mapping() or something
> similar is more generic which will benefit us in future when using this for
> vmalloc too?
> 
>>  
>>  /*
>>   * This check is triggered during the early boot before the cpufeature
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index d3b538be1500..bf3cef31d243 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -293,6 +293,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
>>  	return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
>>  }
>>  
>> +static inline pmd_t pmd_mknoncont(pmd_t pmd)
>> +{
>> +	return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
>> +}
>> +
>>  static inline pte_t pte_mkdevmap(pte_t pte)
>>  {
>>  	return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
>> @@ -769,7 +774,7 @@ static inline bool in_swapper_pgdir(void *addr)
>>  	        ((unsigned long)swapper_pg_dir & PAGE_MASK);
>>  }
>>  
>> -static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>> +static inline void __set_pmd_nosync(pmd_t *pmdp, pmd_t pmd)
>>  {
>>  #ifdef __PAGETABLE_PMD_FOLDED
>>  	if (in_swapper_pgdir(pmdp)) {
>> @@ -779,6 +784,11 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>>  #endif /* __PAGETABLE_PMD_FOLDED */
>>  
>>  	WRITE_ONCE(*pmdp, pmd);
>> +}
>> +
>> +static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>> +{
>> +	__set_pmd_nosync(pmdp, pmd);
>>  
>>  	if (pmd_valid(pmd)) {
>>  		dsb(ishst);
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index e879bfcf853b..5fc2a4a804de 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -2209,7 +2209,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
>>  	return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
>>  }
>>  
>> -static bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
>> +bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
>>  {
>>  	/*
>>  	 * We want to allow usage of bbml2 in as wide a range of kernel contexts
> 
> 
> [...] I'll send a separate response for the mmu.c table walker changes.
> 
>>  
>> +int split_linear_mapping(unsigned long start, unsigned long end)
>> +{
>> +	int ret = 0;
>> +
>> +	if (!system_supports_bbml2_noabort())
>> +		return 0;
> 
> Hmm... I guess the thinking here is that for !BBML2_NOABORT you are expecting
> this function should only be called in the first place if we know we are
> pte-mapped. So I guess this is ok... it just means that if we are not
> pte-mapped, warnings will be emitted while walking the pgtables (as is the case
> today). So I think this approach is ok.
> 
>> +
>> +	mmap_write_lock(&init_mm);
> 
> What is the lock protecting? I was originally thinking no locking should be
> needed because it's not needed for permission changes today; but I think you are
> right here and we do need locking; multiple owners could share a large leaf
> mapping, I guess? And in that case you could get concurrent attempts to split
> from both owners.
> 
> I'm not really a fan of adding the extra locking though; It might introduce a
> new bottleneck. I wonder if there is a way we could do this locklessly? i.e.
> allocate the new table, then cmpxchg to insert and the loser has to free? That
> doesn't work for contiguous mappings though...
> 
>> +	/* NO_EXEC_MAPPINGS is needed when splitting linear map */
>> +	ret = __create_pgd_mapping_locked(init_mm.pgd, virt_to_phys((void *)start),
>> +					  start, (end - start), __pgprot(0),
>> +					  __pgd_pgtable_alloc,
>> +					  NO_EXEC_MAPPINGS | SPLIT_MAPPINGS);
>> +	mmap_write_unlock(&init_mm);
>> +	flush_tlb_kernel_range(start, end);
> 
> I don't believe we should need to flush the TLB when only changing entry sizes
> when BBML2 is supported. Miko's series has a massive comment explaining the
> reasoning. That applies to user space though. We should consider if this all
> works safely for kernel space too, and hopefully remove the flush.
> 
>> +
>> +	return ret;
>> +}
>> +
>>  /*
>>   * This function can only be used to modify existing table entries,
>>   * without allocating new levels of table. Note that this permits the
>> @@ -676,6 +887,24 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>>  
>>  #endif /* CONFIG_KFENCE */
>>  
>> +static inline bool force_pte_mapping(void)
>> +{
>> +	/*
>> +	 * Can't use cpufeature API to determine whether BBML2 supported
>> +	 * or not since cpufeature have not been finalized yet.
>> +	 *
>> +	 * Checking the boot CPU only for now.  If the boot CPU has
>> +	 * BBML2, paint linear mapping with block mapping.  If it turns
>> +	 * out the secondary CPUs don't support BBML2 once cpufeature is
>> +	 * fininalized, the linear mapping will be repainted with PTE
>> +	 * mapping.
>> +	 */
>> +	return (rodata_full && !bbml2_noabort_available()) ||
> 
> So this is the case where we don't have BBML2 and need to modify protections at
> page granularity - I agree we need to force pte mappings here.
> 
>> +		debug_pagealloc_enabled() ||
> 
> This is the case where every page is made invalid on free and valid on
> allocation, so no point in having block mappings because it will soon degenerate
> into page mappings because we will have to split on every allocation. Agree here
> too.
> 
>> +		arm64_kfence_can_set_direct_map() ||
> 
> After looking into how kfence works, I don't agree with this one. It has a
> dedicated pool where it allocates from. That pool may be allocated early by the
> arch or may be allocated late by the core code. Either way, kfence will only
> modify protections within that pool. Your current approach is forcing pte
> mappings if the pool allocation is late (i.e. not performed by the arch code
> during boot). But I think "late" is the most common case; kfence is compiled
> into the kernel but not active at boot. Certainly that's how my Ubuntu kernel is
> configured. So I think we should just ignore kfence here. If it's "early" then
> we map the pool with page granularity (as an optimization). If it's "late" your
> splitter will degenerate the whole kfence pool to page mappings over time as
> kfence_protect_page() -> set_memory_valid() is called. But the bulk of the
> linear map will remain mapped with large blocks.
> 
>> +		is_realm_world();
> 
> I think the only reason this requires pte mappings is for
> __set_memory_enc_dec(). But that can now deal with block mappings given the
> ability to split the mappings as needed. So I think this condition can be
> removed too.

To clarify; the latter 2 would still be needed for the !BBML2_NOABORT case. So I
think the expression becomes:

	return (!bbml2_noabort_available() && (rodata_full ||
		arm64_kfence_can_set_direct_map() || is_realm_world())) ||
		debug_pagealloc_enabled();

Thanks,
Ryan

> 
>> +}
> 
> Additionally, for can_set_direct_map(): at minimum its comment should be tidied
> up, but really I think it should return true if "BBML2_NOABORT ||
> force_pte_mapping()". Because they are the conditions under which we can now
> safely modify the linear map.
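
For illustration, that suggestion amounts to something like the following sketch,
where force_pte_mapping() is the helper introduced by this patch:

bool can_set_direct_map(void)
{
	/*
	 * With BBML2_NOABORT the linear map can be split live, so permission
	 * changes are always possible; without it they are only safe if
	 * everything was forced to pte granularity at boot.
	 */
	return system_supports_bbml2_noabort() || force_pte_mapping();
}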
> 
>> +
>>  static void __init map_mem(pgd_t *pgdp)
>>  {
>>  	static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
>> @@ -701,7 +930,7 @@ static void __init map_mem(pgd_t *pgdp)
>>  
>>  	early_kfence_pool = arm64_kfence_alloc_pool();
>>  
>> -	if (can_set_direct_map())
>> +	if (force_pte_mapping())
>>  		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>  
>>  	/*
>> @@ -1402,7 +1631,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>  
>>  	VM_BUG_ON(!mhp_range_allowed(start, size, true));
>>  
>> -	if (can_set_direct_map())
>> +	if (force_pte_mapping())
>>  		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>  
>>  	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 39fd1f7ff02a..25c068712cb5 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -10,6 +10,7 @@
>>  #include <linux/vmalloc.h>
>>  
>>  #include <asm/cacheflush.h>
>> +#include <asm/mmu.h>
>>  #include <asm/pgtable-prot.h>
>>  #include <asm/set_memory.h>
>>  #include <asm/tlbflush.h>
>> @@ -42,6 +43,8 @@ static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
>>  	struct page_change_data *cdata = data;
>>  	pte_t pte = __ptep_get(ptep);
>>  
>> +	BUG_ON(pte_cont(pte));
> 
> I don't think this is required; We want to enable using contiguous mappings
> where it makes sense. As long as we have BBML2, we can update contiguous pte
> mappings in place, as long as we update all of the ptes in the contiguous block.
> split_linear_map() should either have converted to non-cont mappings if the
> contiguous block straddled the split point, or would have left as is (or
> downgraded a PMD-block to a contpte block) if fully contained within the split
> range.
> 
>> +
>>  	pte = clear_pte_bit(pte, cdata->clear_mask);
>>  	pte = set_pte_bit(pte, cdata->set_mask);
>>  
>> @@ -80,8 +83,9 @@ static int change_memory_common(unsigned long addr, int numpages,
>>  	unsigned long start = addr;
>>  	unsigned long size = PAGE_SIZE * numpages;
>>  	unsigned long end = start + size;
>> +	unsigned long l_start;
>>  	struct vm_struct *area;
>> -	int i;
>> +	int i, ret;
>>  
>>  	if (!PAGE_ALIGNED(addr)) {
>>  		start &= PAGE_MASK;
>> @@ -118,7 +122,12 @@ static int change_memory_common(unsigned long addr, int numpages,
>>  	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>>  			    pgprot_val(clear_mask) == PTE_RDONLY)) {
>>  		for (i = 0; i < area->nr_pages; i++) {
>> -			__change_memory_common((u64)page_address(area->pages[i]),
>> +			l_start = (u64)page_address(area->pages[i]);
>> +			ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
>> +			if (WARN_ON_ONCE(ret))
>> +				return ret;
> 
> I don't think this is the right place to integrate; I think the split should be
> done inside __change_memory_common(). Then it caters to all possibilities (i.e.
> set_memory_valid() and __set_memory_enc_dec()). This means it will run for
> vmalloc too, but for now, that will be a nop because everything should already
> be split as required on entry and in future we will get that for free.
> 
> Once you have integrated Dev's series, the hook becomes
> ___change_memory_common() (3 underscores)...
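
As a rough sketch of that integration point (the signature follows the
pseudo-code later in this thread; change_page_range()/page_change_data are the
existing pageattr.c helpers, and the exact shape of Dev's helper may differ):

static int ___change_memory_common(unsigned long start, unsigned long size,
				   pgprot_t set_mask, pgprot_t clear_mask)
{
	struct page_change_data data = {
		.set_mask = set_mask,
		.clear_mask = clear_mask,
	};
	int ret;

	/* Split any block/cont mappings covering [start, start + size) first. */
	ret = split_linear_mapping(start, start + size);
	if (WARN_ON_ONCE(ret))
		return ret;

	return apply_to_page_range(&init_mm, start, size, change_page_range,
				   &data);
}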
> 
>> +
>> +			__change_memory_common(l_start,
>>  					       PAGE_SIZE, set_mask, clear_mask);
>>  		}
>>  	}
>> @@ -174,6 +183,9 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
>>  
>>  int set_direct_map_invalid_noflush(struct page *page)
>>  {
>> +	unsigned long l_start;
>> +	int ret;
>> +
>>  	struct page_change_data data = {
>>  		.set_mask = __pgprot(0),
>>  		.clear_mask = __pgprot(PTE_VALID),
>> @@ -182,13 +194,21 @@ int set_direct_map_invalid_noflush(struct page *page)
>>  	if (!can_set_direct_map())
>>  		return 0;
>>  
>> +	l_start = (unsigned long)page_address(page);
>> +	ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
>> +	if (WARN_ON_ONCE(ret))
>> +		return ret;
>> +
>>  	return apply_to_page_range(&init_mm,
>> -				   (unsigned long)page_address(page),
>> -				   PAGE_SIZE, change_page_range, &data);
>> +				   l_start, PAGE_SIZE, change_page_range,
>> +				   &data);
> 
> ...and once integrated with Dev's series you don't need any changes here...
> 
>>  }
>>  
>>  int set_direct_map_default_noflush(struct page *page)
>>  {
>> +	unsigned long l_start;
>> +	int ret;
>> +
>>  	struct page_change_data data = {
>>  		.set_mask = __pgprot(PTE_VALID | PTE_WRITE),
>>  		.clear_mask = __pgprot(PTE_RDONLY),
>> @@ -197,9 +217,14 @@ int set_direct_map_default_noflush(struct page *page)
>>  	if (!can_set_direct_map())
>>  		return 0;
>>  
>> +	l_start = (unsigned long)page_address(page);
>> +	ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
>> +	if (WARN_ON_ONCE(ret))
>> +		return ret;
>> +
>>  	return apply_to_page_range(&init_mm,
>> -				   (unsigned long)page_address(page),
>> -				   PAGE_SIZE, change_page_range, &data);
>> +				   l_start, PAGE_SIZE, change_page_range,
>> +				   &data);
> 
> ...or here.
> 
> Thanks,
> Ryan
> 
>>  }
>>  
>>  static int __set_memory_enc_dec(unsigned long addr,
> 



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-05-31  2:41 ` [PATCH 3/4] arm64: mm: support large block mapping when rodata=full Yang Shi
  2025-06-16 11:58   ` Ryan Roberts
@ 2025-06-16 16:24   ` Ryan Roberts
  2025-06-17 21:09     ` Yang Shi
  1 sibling, 1 reply; 34+ messages in thread
From: Ryan Roberts @ 2025-06-16 16:24 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, Miko.Lenczewski, dev.jain, scott,
	cl
  Cc: linux-arm-kernel, linux-kernel

On 31/05/2025 03:41, Yang Shi wrote:
> When rodata=full is specified, kernel linear mapping has to be mapped at
> PTE level since large page table can't be split due to break-before-make
> rule on ARM64.
> 
> This resulted in a couple of problems:
>   - performance degradation
>   - more TLB pressure
>   - memory waste for kernel page table
> 
> With FEAT_BBM level 2 support, splitting large block page table to
> smaller ones doesn't need to make the page table entry invalid anymore.
> This allows kernel split large block mapping on the fly.
> 
> Add kernel page table split support and use large block mapping by
> default when FEAT_BBM level 2 is supported for rodata=full.  When
> changing permissions for kernel linear mapping, the page table will be
> split to smaller size.
> 
> The machine without FEAT_BBM level 2 will fallback to have kernel linear
> mapping PTE-mapped when rodata=full.
> 
> With this we saw significant performance boost with some benchmarks and
> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
> 256GB memory.
> 
> * Memory use after boot
> Before:
> MemTotal:       258988984 kB
> MemFree:        254821700 kB
> 
> After:
> MemTotal:       259505132 kB
> MemFree:        255410264 kB
> 
> Around 500MB more memory are free to use.  The larger the machine, the
> more memory saved.
> 
> * Memcached
> We saw performance degradation when running Memcached benchmark with
> rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
> With this patchset we saw ops/sec is increased by around 3.5%, P99
> latency is reduced by around 9.6%.
> The gain mainly came from reduced kernel TLB misses.  The kernel TLB
> MPKI is reduced by 28.5%.
> 
> The benchmark data is now on par with rodata=on too.
> 
> * Disk encryption (dm-crypt) benchmark
> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
> encryption (by dm-crypt).
> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>     --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>     --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>     --name=iops-test-job --eta-newline=1 --size 100G
> 
> The IOPS is increased by 90% - 150% (the variance is high, but the worst
> number of good case is around 90% more than the best number of bad case).
> The bandwidth is increased and the avg clat is reduced proportionally.
> 
> * Sequential file read
> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
> The bandwidth is increased by 150%.
> 
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
>  arch/arm64/include/asm/cpufeature.h |  26 +++
>  arch/arm64/include/asm/mmu.h        |   1 +
>  arch/arm64/include/asm/pgtable.h    |  12 +-
>  arch/arm64/kernel/cpufeature.c      |   2 +-
>  arch/arm64/mm/mmu.c                 | 269 +++++++++++++++++++++++++---
>  arch/arm64/mm/pageattr.c            |  37 +++-
>  6 files changed, 319 insertions(+), 28 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 8f36ffa16b73..a95806980298 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -1053,6 +1053,32 @@ static inline bool cpu_has_lpa2(void)
>  #endif
>  }
>  

[...] (I gave comments on this part in previous reply)

I'm focussing on the table walker in mmu.c here - i.e. the implementation of
split_linear_mapping()...

> index 775c0536b194..4c5d3aa35d62 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -45,6 +45,7 @@
>  #define NO_BLOCK_MAPPINGS	BIT(0)
>  #define NO_CONT_MAPPINGS	BIT(1)
>  #define NO_EXEC_MAPPINGS	BIT(2)	/* assumes FEAT_HPDS is not used */
> +#define SPLIT_MAPPINGS		BIT(3)
>  
>  u64 kimage_voffset __ro_after_init;
>  EXPORT_SYMBOL(kimage_voffset);
> @@ -166,12 +167,91 @@ static void init_clear_pgtable(void *table)
>  	dsb(ishst);
>  }
>  
> +static void split_cont_pte(pte_t *ptep)
> +{
> +	pte_t *_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
> +	pte_t _pte;
> +
> +	for (int i = 0; i < CONT_PTES; i++, _ptep++) {
> +		_pte = READ_ONCE(*_ptep);
> +		_pte = pte_mknoncont(_pte);
> +		__set_pte_nosync(_ptep, _pte);

This is not atomic but I don't think that matters for kernel mappings since we
don't care about HW-modified access/dirty bits.

> +	}
> +
> +	dsb(ishst);
> +	isb();

I think we can use lazy_mmu_mode here to potentially batch the barriers for
multiple levels. This also avoids the need for adding __set_pmd_nosync().
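
For illustration, split_cont_pte() could then be reduced to roughly the
following, relying on lazy MMU mode to defer the dsb/isb (Ryan's fuller version
appears later in this thread):

static void split_cont_pte(pte_t *ptep)
{
	pte_t *_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);

	/* Caller brackets this with arch_enter/leave_lazy_mmu_mode(). */
	for (int i = 0; i < CONT_PTES; i++, _ptep++)
		__set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
}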

> +}
> +
> +static void split_cont_pmd(pmd_t *pmdp)
> +{
> +	pmd_t *_pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
> +	pmd_t _pmd;
> +
> +	for (int i = 0; i < CONT_PMDS; i++, _pmdp++) {
> +		_pmd = READ_ONCE(*_pmdp);
> +		_pmd = pmd_mknoncont(_pmd);
> +		set_pmd(_pmdp, _pmd);

Without lazy_mmu_mode this is issuing barriers per entry. With lazy_mmu_mode
this will defer the barriers until we exit the mode so this will get a bit
faster. (In practice it will be a bit like what you have done for contpte, but
potentially even better because we can batch across levels.)

> +	}
> +}
> +
> +static void split_pmd(pmd_t pmd, phys_addr_t pte_phys, int flags)
> +{
> +	pte_t *ptep;
> +	unsigned long pfn;
> +	pgprot_t prot;
> +
> +	pfn = pmd_pfn(pmd);
> +	prot = pmd_pgprot(pmd);
> +	prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PTE_TYPE_PAGE);
> +
> +	ptep = (pte_t *)phys_to_virt(pte_phys);
> +
> +	/* It must be naturally aligned if PMD is leaf */
> +	if ((flags & NO_CONT_MAPPINGS) == 0)

I'm not sure we have a use case for avoiding CONT mappings? Suggest doing it
unconditionally.

> +		prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +
> +	for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
> +		__set_pte_nosync(ptep, pfn_pte(pfn, prot));
> +
> +	dsb(ishst);
> +}
> +
> +static void split_pud(pud_t pud, phys_addr_t pmd_phys, int flags)
> +{
> +	pmd_t *pmdp;
> +	unsigned long pfn;
> +	pgprot_t prot;
> +	unsigned int step = PMD_SIZE >> PAGE_SHIFT;
> +
> +	pfn = pud_pfn(pud);
> +	prot = pud_pgprot(pud);
> +	pmdp = (pmd_t *)phys_to_virt(pmd_phys);
> +
> +	/* It must be naturally aligned if PUD is leaf */
> +	if ((flags & NO_CONT_MAPPINGS) == 0)
> +		prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +
> +	for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
> +		__set_pmd_nosync(pmdp, pfn_pmd(pfn, prot));
> +		pfn += step;
> +	}
> +
> +	dsb(ishst);
> +}
> +
>  static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
> -		     phys_addr_t phys, pgprot_t prot)
> +		     phys_addr_t phys, pgprot_t prot, int flags)
>  {
>  	do {
>  		pte_t old_pte = __ptep_get(ptep);
>  
> +		if (flags & SPLIT_MAPPINGS) {
> +			if (pte_cont(old_pte))
> +				split_cont_pte(ptep);
> +
> +			continue;
> +		}
> +
>  		/*
>  		 * Required barriers to make this visible to the table walker
>  		 * are deferred to the end of alloc_init_cont_pte().
> @@ -199,11 +279,20 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>  	pmd_t pmd = READ_ONCE(*pmdp);
>  	pte_t *ptep;
>  	int ret = 0;
> +	bool split = flags & SPLIT_MAPPINGS;
> +	pmdval_t pmdval;
> +	phys_addr_t pte_phys;
>  
> -	BUG_ON(pmd_sect(pmd));
> -	if (pmd_none(pmd)) {
> -		pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
> -		phys_addr_t pte_phys;
> +	if (!split)
> +		BUG_ON(pmd_sect(pmd));
> +
> +	if (pmd_none(pmd) && split) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	if (pmd_none(pmd) || (split && pmd_leaf(pmd))) {
> +		pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
>  
>  		if (flags & NO_EXEC_MAPPINGS)
>  			pmdval |= PMD_TABLE_PXN;
> @@ -213,6 +302,18 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>  			ret = -ENOMEM;
>  			goto out;
>  		}
> +	}
> +
> +	if (split) {
> +		if (pmd_leaf(pmd)) {
> +			split_pmd(pmd, pte_phys, flags);
> +			__pmd_populate(pmdp, pte_phys, pmdval);
> +		}
> +		ptep = pte_offset_kernel(pmdp, addr);
> +		goto split_pgtable;
> +	}
> +
> +	if (pmd_none(pmd)) {
>  		ptep = pte_set_fixmap(pte_phys);
>  		init_clear_pgtable(ptep);
>  		ptep += pte_index(addr);
> @@ -222,17 +323,28 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>  		ptep = pte_set_fixmap_offset(pmdp, addr);
>  	}
>  
> +split_pgtable:
>  	do {
>  		pgprot_t __prot = prot;
>  
>  		next = pte_cont_addr_end(addr, end);
>  
> +		if (split) {
> +			pte_t pteval = READ_ONCE(*ptep);
> +			bool cont = pte_cont(pteval);
> +
> +			if (cont &&
> +			    ((addr | next) & ~CONT_PTE_MASK) == 0 &&
> +			    (flags & NO_CONT_MAPPINGS) == 0)
> +				continue;
> +		}
> +
>  		/* use a contiguous mapping if the range is suitably aligned */
>  		if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) &&
>  		    (flags & NO_CONT_MAPPINGS) == 0)
>  			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>  
> -		init_pte(ptep, addr, next, phys, __prot);
> +		init_pte(ptep, addr, next, phys, __prot, flags);
>  
>  		ptep += pte_index(next) - pte_index(addr);
>  		phys += next - addr;
> @@ -243,7 +355,8 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>  	 * ensure that all previous pgtable writes are visible to the table
>  	 * walker.
>  	 */
> -	pte_clear_fixmap();
> +	if (!split)
> +		pte_clear_fixmap();
>  
>  out:
>  	return ret;
> @@ -255,15 +368,29 @@ static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
>  {
>  	unsigned long next;
>  	int ret = 0;
> +	bool split = flags & SPLIT_MAPPINGS;
> +	bool cont;
>  
>  	do {
>  		pmd_t old_pmd = READ_ONCE(*pmdp);
>  
>  		next = pmd_addr_end(addr, end);
>  
> +		if (split && pmd_leaf(old_pmd)) {
> +			cont = pgprot_val(pmd_pgprot(old_pmd)) & PTE_CONT;
> +			if (cont)
> +				split_cont_pmd(pmdp);
> +
> +			/* The PMD is fully contained in the range */
> +			if (((addr | next) & ~PMD_MASK) == 0 &&
> +			    (flags & NO_BLOCK_MAPPINGS) == 0)
> +				continue;
> +		}
> +
>  		/* try section mapping first */
>  		if (((addr | next | phys) & ~PMD_MASK) == 0 &&
> -		    (flags & NO_BLOCK_MAPPINGS) == 0) {
> +		    (flags & NO_BLOCK_MAPPINGS) == 0 &&
> +		    (flags & SPLIT_MAPPINGS) == 0) {
>  			pmd_set_huge(pmdp, phys, prot);
>  
>  			/*
> @@ -278,7 +405,7 @@ static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
>  			if (ret)
>  				break;
>  
> -			BUG_ON(pmd_val(old_pmd) != 0 &&
> +			BUG_ON(!split && pmd_val(old_pmd) != 0 &&
>  			       pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
>  		}
>  		phys += next - addr;
> @@ -296,14 +423,23 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>  	int ret = 0;
>  	pud_t pud = READ_ONCE(*pudp);
>  	pmd_t *pmdp;
> +	bool split = flags & SPLIT_MAPPINGS;
> +	pudval_t pudval;
> +	phys_addr_t pmd_phys;
>  
>  	/*
>  	 * Check for initial section mappings in the pgd/pud.
>  	 */
> -	BUG_ON(pud_sect(pud));
> -	if (pud_none(pud)) {
> -		pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
> -		phys_addr_t pmd_phys;
> +	if (!split)
> +		BUG_ON(pud_sect(pud));
> +
> +	if (pud_none(pud) && split) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	if (pud_none(pud) || (split && pud_leaf(pud))) {
> +		pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
>  
>  		if (flags & NO_EXEC_MAPPINGS)
>  			pudval |= PUD_TABLE_PXN;
> @@ -313,6 +449,18 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>  			ret = -ENOMEM;
>  			goto out;
>  		}
> +	}
> +
> +	if (split) {
> +		if (pud_leaf(pud)) {
> +			split_pud(pud, pmd_phys, flags);
> +			__pud_populate(pudp, pmd_phys, pudval);
> +		}
> +		pmdp = pmd_offset(pudp, addr);
> +		goto split_pgtable;
> +	}
> +
> +	if (pud_none(pud)) {
>  		pmdp = pmd_set_fixmap(pmd_phys);
>  		init_clear_pgtable(pmdp);
>  		pmdp += pmd_index(addr);
> @@ -322,11 +470,22 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>  		pmdp = pmd_set_fixmap_offset(pudp, addr);
>  	}
>  
> +split_pgtable:
>  	do {
>  		pgprot_t __prot = prot;
>  
>  		next = pmd_cont_addr_end(addr, end);
>  
> +		if (split) {
> +			pmd_t pmdval = READ_ONCE(*pmdp);
> +			bool cont = pgprot_val(pmd_pgprot(pmdval)) & PTE_CONT;
> +
> +			if (cont &&
> +			    ((addr | next) & ~CONT_PMD_MASK) == 0 &&
> +			    (flags & NO_CONT_MAPPINGS) == 0)
> +				continue;
> +		}
> +
>  		/* use a contiguous mapping if the range is suitably aligned */
>  		if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) &&
>  		    (flags & NO_CONT_MAPPINGS) == 0)
> @@ -340,7 +499,8 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>  		phys += next - addr;
>  	} while (addr = next, addr != end);
>  
> -	pmd_clear_fixmap();
> +	if (!split)
> +		pmd_clear_fixmap();
>  
>  out:
>  	return ret;
> @@ -355,6 +515,16 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>  	int ret = 0;
>  	p4d_t p4d = READ_ONCE(*p4dp);
>  	pud_t *pudp;
> +	bool split = flags & SPLIT_MAPPINGS;
> +
> +	if (split) {
> +		if (p4d_none(p4d)) {
> +			ret = -EINVAL;
> +			goto out;
> +		}
> +		pudp = pud_offset(p4dp, addr);
> +		goto split_pgtable;
> +	}
>  
>  	if (p4d_none(p4d)) {
>  		p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN | P4D_TABLE_AF;
> @@ -377,17 +547,26 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>  		pudp = pud_set_fixmap_offset(p4dp, addr);
>  	}
>  
> +split_pgtable:
>  	do {
>  		pud_t old_pud = READ_ONCE(*pudp);
>  
>  		next = pud_addr_end(addr, end);
>  
> +		if (split && pud_leaf(old_pud)) {
> +			/* The PUD is fully contained in the range */
> +			if (((addr | next) & ~PUD_MASK) == 0 &&
> +			    (flags & NO_BLOCK_MAPPINGS) == 0)
> +				continue;
> +		}
> +
>  		/*
>  		 * For 4K granule only, attempt to put down a 1GB block
>  		 */
>  		if (pud_sect_supported() &&
>  		   ((addr | next | phys) & ~PUD_MASK) == 0 &&
> -		    (flags & NO_BLOCK_MAPPINGS) == 0) {
> +		    (flags & NO_BLOCK_MAPPINGS) == 0 &&
> +		    (flags & SPLIT_MAPPINGS) == 0) {
>  			pud_set_huge(pudp, phys, prot);
>  
>  			/*
> @@ -402,13 +581,14 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>  			if (ret)
>  				break;
>  
> -			BUG_ON(pud_val(old_pud) != 0 &&
> +			BUG_ON(!split && pud_val(old_pud) != 0 &&
>  			       pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
>  		}
>  		phys += next - addr;
>  	} while (pudp++, addr = next, addr != end);
>  
> -	pud_clear_fixmap();
> +	if (!split)
> +		pud_clear_fixmap();
>  
>  out:
>  	return ret;
> @@ -423,6 +603,16 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>  	int ret = 0;
>  	pgd_t pgd = READ_ONCE(*pgdp);
>  	p4d_t *p4dp;
> +	bool split = flags & SPLIT_MAPPINGS;
> +
> +	if (split) {
> +		if (pgd_none(pgd)) {
> +			ret = -EINVAL;
> +			goto out;
> +		}
> +		p4dp = p4d_offset(pgdp, addr);
> +		goto split_pgtable;
> +	}

I really don't like the way the split logic has been added to the existing table
walker; there are so many conditionals, it's not clear that there is really any
advantage. I know I proposed it originally, but I changed my mind last cycle and
made the case for keeping it separate. That's still my opinion I'm afraid; I'm
proposing a patch below showing how I would prefer to see this implemented.

>  
>  	if (pgd_none(pgd)) {
>  		pgdval_t pgdval = PGD_TYPE_TABLE | PGD_TABLE_UXN | PGD_TABLE_AF;
> @@ -445,6 +635,7 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>  		p4dp = p4d_set_fixmap_offset(pgdp, addr);
>  	}
>  
> +split_pgtable:
>  	do {
>  		p4d_t old_p4d = READ_ONCE(*p4dp);
>  
> @@ -461,7 +652,8 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>  		phys += next - addr;
>  	} while (p4dp++, addr = next, addr != end);
>  
> -	p4d_clear_fixmap();
> +	if (!split)
> +		p4d_clear_fixmap();
>  
>  out:
>  	return ret;
> @@ -557,6 +749,25 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
>  	return pa;
>  }
>  
> +int split_linear_mapping(unsigned long start, unsigned long end)
> +{
> +	int ret = 0;
> +
> +	if (!system_supports_bbml2_noabort())
> +		return 0;
> +
> +	mmap_write_lock(&init_mm);
> +	/* NO_EXEC_MAPPINGS is needed when splitting linear map */
> +	ret = __create_pgd_mapping_locked(init_mm.pgd, virt_to_phys((void *)start),
> +					  start, (end - start), __pgprot(0),
> +					  __pgd_pgtable_alloc,
> +					  NO_EXEC_MAPPINGS | SPLIT_MAPPINGS);

Implementing this on top of __create_pgd_mapping_locked() is problematic because
(I think) it assumes that the virtual range is physically contiguous? That's
fine for the linear map, but I'd like to reuse this primitive for vmalloc too.

> +	mmap_write_unlock(&init_mm);

As already mentioned, I don't like this locking. I think we can make it work
locklessly as long as we are only ever splitting and not collapsing.

> +	flush_tlb_kernel_range(start, end);
> +
> +	return ret;
> +}

I had a go at creating a version to try to illustrate how I have been thinking
about this. What do you think? I've only compile tested it (and it fails because
I don't have pmd_mknoncont() and system_supports_bbml2_noabort() in my tree -
but the rest looks ok). It's on top of v6.16-rc1, where the pgtable allocation
functions have changed a bit. And I don't think you need patch 2 from your
series with this change. I haven't implemented the cmpxchg part that I think
would make it safe to be used locklessly yet, but I've marked the sites up with
TODO. Once implemented, the idea is that concurrent threads trying to split on
addresses that all lie within the same block/contig mapping should be safe.

---8<---
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 8fcf59ba39db..22a09cc7a2aa 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -480,6 +480,9 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys,
unsigned long virt,
 			     int flags);
 #endif

+/* Sentinel used to represent failure to allocate for phys_addr_t type. */
+#define INVALID_PHYS_ADDR -1
+
 static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
 				       enum pgtable_type pgtable_type)
 {
@@ -487,7 +490,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
 	struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
 	phys_addr_t pa;

-	BUG_ON(!ptdesc);
+	if (!ptdesc)
+		return INVALID_PHYS_ADDR;
+
 	pa = page_to_phys(ptdesc_page(ptdesc));

 	switch (pgtable_type) {
@@ -509,15 +514,27 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
 }

 static phys_addr_t __maybe_unused
-pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
 {
 	return __pgd_pgtable_alloc(&init_mm, pgtable_type);
 }

+static phys_addr_t __maybe_unused
+pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+{
+	phys_addr_t pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
+
+	BUG_ON(pa == INVALID_PHYS_ADDR);
+	return pa;
+}
+
 static phys_addr_t
 pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
 {
-	return __pgd_pgtable_alloc(NULL, pgtable_type);
+	phys_addr_t pa = __pgd_pgtable_alloc(NULL, pgtable_type);
+
+	BUG_ON(pa == INVALID_PHYS_ADDR);
+	return pa;
 }

 /*
@@ -1616,3 +1633,202 @@ int arch_set_user_pkey_access(struct task_struct *tsk,
int pkey, unsigned long i
 	return 0;
 }
 #endif
+
+static void split_contpte(pte_t *ptep)
+{
+	int i;
+
+	ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
+	for (i = 0; i < CONT_PTES; i++, ptep++)
+		__set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
+}
+
+static int split_pmd(pmd_t *pmdp, pmd_t pmd)
+{
+	pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
+	unsigned long pfn = pmd_pfn(pmd);
+	pgprot_t prot = pmd_pgprot(pmd);
+	phys_addr_t pte_phys;
+	pte_t *ptep;
+	int i;
+
+	pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
+	if (pte_phys == INVALID_PHYS_ADDR)
+		return -ENOMEM;
+	ptep = (pte_t *)phys_to_virt(pte_phys);
+
+	if (pgprot_val(prot) & PMD_SECT_PXN)
+		tableprot |= PMD_TABLE_PXN;
+
+	prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
+	prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+	for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
+		__set_pte(ptep, pfn_pte(pfn, prot));
+
+	/*
+	 * Ensure the pte entries are visible to the table walker by the time
+	 * the pmd entry that points to the ptes is visible.
+	 */
+	dsb(ishst);
+
+	// TODO: THIS NEEDS TO BE CMPXCHG THEN FREE THE TABLE IF WE LOST.
+	__pmd_populate(pmdp, pte_phys, tableprot);
+
+	return 0;
+}
+
+static void split_contpmd(pmd_t *pmdp)
+{
+	int i;
+
+	pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
+	for (i = 0; i < CONT_PMDS; i++, pmdp++)
+		set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
+}
+
+static int split_pud(pud_t *pudp, pud_t pud)
+{
+	pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
+	unsigned int step = PMD_SIZE >> PAGE_SHIFT;
+	unsigned long pfn = pud_pfn(pud);
+	pgprot_t prot = pud_pgprot(pud);
+	phys_addr_t pmd_phys;
+	pmd_t *pmdp;
+	int i;
+
+	pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
+	if (pmd_phys == INVALID_PHYS_ADDR)
+		return -ENOMEM;
+	pmdp = (pmd_t *)phys_to_virt(pmd_phys);
+
+	if (pgprot_val(prot) & PMD_SECT_PXN)
+		tableprot |= PUD_TABLE_PXN;
+
+	prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
+	prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+	for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
+		set_pmd(pmdp, pfn_pmd(pfn, prot));
+
+	/*
+	 * Ensure the pmd entries are visible to the table walker by the time
+	 * the pud entry that points to the pmds is visible.
+	 */
+	dsb(ishst);
+
+	// TODO: THIS NEEDS TO BE CMPXCHG THEN FREE THE TABLE IF WE LOST.
+	__pud_populate(pudp, pmd_phys, tableprot);
+
+	return 0;
+}
+
+int split_leaf_mapping(unsigned long addr)
+{
+	pgd_t *pgdp, pgd;
+	p4d_t *p4dp, p4d;
+	pud_t *pudp, pud;
+	pmd_t *pmdp, pmd;
+	pte_t *ptep, pte;
+	int ret = 0;
+
+	/*
+	 * !BBML2_NOABORT systems should not be trying to change permissions on
+	 * anything that is not pte-mapped in the first place. Just return early
+	 * and let the permission change code raise a warning if not already
+	 * pte-mapped.
+	 */
+	if (!system_supports_bbml2_noabort())
+		return 0;
+
+	/*
+	 * Ensure addr is at least page-aligned since this is the finest
+	 * granularity we can split to.
+	 */
+	if (addr != PAGE_ALIGN(addr))
+		return -EINVAL;
+
+	arch_enter_lazy_mmu_mode();
+
+	/*
+	 * PGD: If addr is PGD aligned then addr already describes a leaf
+	 * boundary. If not present then there is nothing to split.
+	 */
+	if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
+		goto out;
+	pgdp = pgd_offset_k(addr);
+	pgd = pgdp_get(pgdp);
+	if (!pgd_present(pgd))
+		goto out;
+
+	/*
+	 * P4D: If addr is P4D aligned then addr already describes a leaf
+	 * boundary. If not present then there is nothing to split.
+	 */
+	if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
+		goto out;
+	p4dp = p4d_offset(pgdp, addr);
+	p4d = p4dp_get(p4dp);
+	if (!p4d_present(p4d))
+		goto out;
+
+	/*
+	 * PUD: If addr is PUD aligned then addr already describes a leaf
+	 * boundary. If not present then there is nothing to split. Otherwise,
+	 * if we have a pud leaf, split to contpmd.
+	 */
+	if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
+		goto out;
+	pudp = pud_offset(p4dp, addr);
+	pud = pudp_get(pudp);
+	if (!pud_present(pud))
+		goto out;
+	if (pud_leaf(pud)) {
+		ret = split_pud(pudp, pud);
+		if (ret)
+			goto out;
+	}
+
+	/*
+	 * CONTPMD: If addr is CONTPMD aligned then addr already describes a
+	 * leaf boundary. If not present then there is nothing to split.
+	 * Otherwise, if we have a contpmd leaf, split to pmd.
+	 */
+	if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
+		goto out;
+	pmdp = pmd_offset(pudp, addr);
+	pmd = pmdp_get(pmdp);
+	if (!pmd_present(pmd))
+		goto out;
+	if (pmd_leaf(pmd)) {
+		if (pmd_cont(pmd))
+			split_contpmd(pmdp);
+		/*
+		 * PMD: If addr is PMD aligned then addr already describes a
+		 * leaf boundary. Otherwise, split to contpte.
+		 */
+		if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
+			goto out;
+		ret = split_pmd(pmdp, pmd);
+		if (ret)
+			goto out;
+	}
+
+	/*
+	 * CONTPTE: If addr is CONTPTE aligned then addr already describes a
+	 * leaf boundary. If not present then there is nothing to split.
+	 * Otherwise, if we have a contpte leaf, split to pte.
+	 */
+	if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
+		goto out;
+	ptep = pte_offset_kernel(pmdp, addr);
+	pte = __ptep_get(ptep);
+	if (!pte_present(pte))
+		goto out;
+	if (pte_cont(pte))
+		split_contpte(ptep);
+
+out:
+	arch_leave_lazy_mmu_mode();
+	return ret;
+}
---8<---

Thanks,
Ryan



^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
  2025-06-16  9:09   ` Ryan Roberts
@ 2025-06-17 20:57     ` Yang Shi
  0 siblings, 0 replies; 34+ messages in thread
From: Yang Shi @ 2025-06-17 20:57 UTC (permalink / raw)
  To: Ryan Roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 6/16/25 2:09 AM, Ryan Roberts wrote:
> On 13/06/2025 18:21, Yang Shi wrote:
>> Hi Ryan,
>>
>> Gently ping... any comments for this version?
> Hi Yang, yes sorry for slow response - It's been in my queue. I'm going to start
> looking at it now and plan to get you some feedback in the next couple of days.
>
>> It looks Dev's series is getting stable except some nits. I went through his
>> patches and all the call sites for changing page permission. They are:
>>    1. change_memory_common(): called by set_memory_{ro|rw|x|nx}. It iterates
>> every single page mapped in the vm area then change permission on page basis. It
>> depends on whether the vm area is block mapped or not if we want to change
>>       permission on block mapping.
>>    2. set_memory_valid(): it looks it assumes the [addr, addr + size) range is
>> mapped contiguously, but it depends on the callers pass in block size (nr > 1).
>> There are two sub cases:
>>       2.a kfence and debugalloc just work for PTE mapping, so they pass in single
>> page.
>>       2.b The execmem passes in large page on x86, arm64 has not supported huge
>> execmem cache yet, so it should still pass in single page for the time being. But
>> my series + Dev's series can handle both single page mapping and block mapping well
>>           for this case. So changing permission for block mapping can be
>> supported automatically once arm64 supports huge execmem cache.
>>    3. set_direct_map_{invalid|default}_noflush(): it looks they are page basis.
>> So Dev's series has no change to them.
>>    4. realm: if I remember correctly, realm forces PTE mapping for linear address
>> space all the time, so no impact.
> Yes for realm, we currently force PTE mapping - that's because we need page
> granularity for sharing certain portions back to the host. But with this work I
> think we will be able to do the splitting on the fly and map using big blocks
> even for realms.

OK, it is good to support more use cases.

>> So it looks like just #1 may need some extra work. But it seems simple. I should
>> just need to advance the address range in (1 << vm's order) stride. So there should
>> be just some minor changes when I rebase my patches on top of Dev's, mainly
>> context changes. It has no impact to the split primitive and repainting linear
>> mapping.
> I haven't looked at your series yet, but I had assumed that the most convenient
> (and only) integration point would be to call your split primitive from dev's
> ___change_memory_common() (note 3 underscores at beginning). Something like this:
>
> ___change_memory_common(unsigned long start, unsigned long size, ...)
> {
> 	// This will need to return error for case where splitting would have
> 	// been required but system does not support BBML2_NOABORT
> 	ret = split_mapping_granularity(start, start + size)
> 	if (ret)
> 		return ret;
>
> 	...
> }

Yeah, I agree. All callsites converge to ___change_memory_common() once 
Dev's series is applied.

Thanks,
Yang

>> Thanks,
>> Yang
>>
>>
>> On 5/30/25 7:41 PM, Yang Shi wrote:
>>> Changelog
>>> =========
>>> v4:
>>>     * Rebased to v6.15-rc4.
>>>     * Based on Miko's latest BBML2 cpufeature patch (https://lore.kernel.org/
>>> linux-arm-kernel/20250428153514.55772-4-miko.lenczewski@arm.com/).
>>>     * Keep block mappings rather than splitting to PTEs if it is fully contained
>>>       per Ryan.
>>>     * Return -EINVAL if page table allocation failed instead of BUG_ON per Ryan.
>>>     * When page table allocation failed, return -1 instead of 0 per Ryan.
>>>     * Allocate page table with GFP_ATOMIC for repainting per Ryan.
>>>     * Use idmap to wait for repainting is done per Ryan.
>>>     * Some minor fixes per the discussion for v3.
>>>     * Some clean up to reduce redundant code.
>>>
>>> v3:
>>>     * Rebased to v6.14-rc4.
>>>     * Based on Miko's BBML2 cpufeature patch (https://lore.kernel.org/linux-
>>> arm-kernel/20250228182403.6269-3-miko.lenczewski@arm.com/).
>>>       Also included in this series in order to have the complete patchset.
>>>     * Enhanced __create_pgd_mapping() to handle split as well per Ryan.
>>>     * Supported CONT mappings per Ryan.
>>>     * Supported asymmetric system by splitting kernel linear mapping if such
>>>       system is detected per Ryan. I don't have such system to test, so the
>>>       testing is done by hacking kernel to call linear mapping repainting
>>>       unconditionally. The linear mapping doesn't have any block and cont
>>>       mappings after booting.
>>>
>>> RFC v2:
>>>     * Used allowlist to advertise BBM lv2 on the CPUs which can handle TLB
>>>       conflict gracefully per Will Deacon
>>>     * Rebased onto v6.13-rc5
>>>     *https://lore.kernel.org/linux-arm-kernel/20250103011822.1257189-1-
>>> yang@os.amperecomputing.com/
>>>
>>> v3:https://lore.kernel.org/linux-arm-kernel/20250304222018.615808-1-
>>> yang@os.amperecomputing.com/
>>> RFC v2:https://lore.kernel.org/linux-arm-kernel/20250103011822.1257189-1-
>>> yang@os.amperecomputing.com/
>>> RFC v1:https://lore.kernel.org/lkml/20241118181711.962576-1-
>>> yang@os.amperecomputing.com/
>>>
>>> Description
>>> ===========
>>> When rodata=full kernel linear mapping is mapped by PTE due to arm's
>>> break-before-make rule.
>>>
>>> A number of performance issues arise when the kernel linear map is using
>>> PTE entries due to arm's break-before-make rule:
>>>     - performance degradation
>>>     - more TLB pressure
>>>     - memory waste for kernel page table
>>>
>>> These issues can be avoided by specifying rodata=on the kernel command
>>> line but this disables the alias checks on page table permissions and
>>> therefore compromises security somewhat.
>>>
>>> With FEAT_BBM level 2 support it is no longer necessary to invalidate the
>>> page table entry when changing page sizes.  This allows the kernel to
>>> split large mappings after boot is complete.
>>>
>>> This patch adds support for splitting large mappings when FEAT_BBM level 2
>>> is available and rodata=full is used. This functionality will be used
>>> when modifying page permissions for individual page frames.
>>>
>>> Without FEAT_BBM level 2 we will keep the kernel linear map using PTEs
>>> only.
>>>
>>> If the system is asymmetric, the kernel linear mapping may be repainted once
>>> the BBML2 capability is finalized on all CPUs.  See patch #4 for more details.
>>>
>>> We saw significant performance increases in some benchmarks with
>>> rodata=full without compromising the security features of the kernel.
>>>
>>> Testing
>>> =======
>>> The test was done on AmpereOne machine (192 cores, 1P) with 256GB memory and
>>> 4K page size + 48 bit VA.
>>>
>>> Function test (4K/16K/64K page size)
>>>     - Kernel boot.  Kernel needs change kernel linear mapping permission at
>>>       boot stage, if the patch didn't work, kernel typically didn't boot.
>>>     - Module stress from stress-ng. Kernel module load change permission for
>>>       linear mapping.
>>>     - A test kernel module which allocates 80% of total memory via vmalloc(),
>>>       then change the vmalloc area permission to RO, this also change linear
>>>       mapping permission to RO, then change it back before vfree(). Then launch
>>>       a VM which consumes almost all physical memory.
>>>     - VM with the patchset applied in guest kernel too.
>>>     - Kernel build in VM with guest kernel which has this series applied.
>>>     - rodata=on. Make sure other rodata mode is not broken.
>>>     - Boot on the machine which doesn't support BBML2.
>>>
>>> Performance
>>> ===========
>>> Memory consumption
>>> Before:
>>> MemTotal:       258988984 kB
>>> MemFree:        254821700 kB
>>>
>>> After:
>>> MemTotal:       259505132 kB
>>> MemFree:        255410264 kB
>>>
>>> Around 500MB more memory are free to use.  The larger the machine, the
>>> more memory saved.
>>>
>>> Performance benchmarking
>>> * Memcached
>>> We saw performance degradation when running Memcached benchmark with
>>> rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
>>> With this patchset we saw ops/sec is increased by around 3.5%, P99
>>> latency is reduced by around 9.6%.
>>> The gain mainly came from reduced kernel TLB misses.  The kernel TLB
>>> MPKI is reduced by 28.5%.
>>>
>>> The benchmark data is now on par with rodata=on too.
>>>
>>> * Disk encryption (dm-crypt) benchmark
>>> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
>>> encryption (by dm-crypt with no read/write workqueue).
>>> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>>>       --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>>>       --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>>>       --name=iops-test-job --eta-newline=1 --size 100G
>>>
>>> The IOPS is increased by 90% - 150% (the variance is high, but the worst
>>> number of good case is around 90% more than the best number of bad case).
>>> The bandwidth is increased and the avg clat is reduced proportionally.
>>>
>>> * Sequential file read
>>> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
>>> The bandwidth is increased by 150%.
>>>
>>>
>>> Yang Shi (4):
>>>         arm64: cpufeature: add AmpereOne to BBML2 allow list
>>>         arm64: mm: make __create_pgd_mapping() and helpers non-void
>>>         arm64: mm: support large block mapping when rodata=full
>>>         arm64: mm: split linear mapping if BBML2 is not supported on secondary
>>> CPUs
>>>
>>>    arch/arm64/include/asm/cpufeature.h |  26 +++++++
>>>    arch/arm64/include/asm/mmu.h        |   4 +
>>>    arch/arm64/include/asm/pgtable.h    |  12 ++-
>>>    arch/arm64/kernel/cpufeature.c      |  30 ++++++--
>>>    arch/arm64/mm/mmu.c                 | 505 ++++++++++++++++++++++++++++++++++
>>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>> +---------------
>>>    arch/arm64/mm/pageattr.c            |  37 +++++++--
>>>    arch/arm64/mm/proc.S                |  41 ++++++++++
>>>    7 files changed, 585 insertions(+), 70 deletions(-)
>>>



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-06-16 12:33     ` Ryan Roberts
@ 2025-06-17 21:01       ` Yang Shi
  0 siblings, 0 replies; 34+ messages in thread
From: Yang Shi @ 2025-06-17 21:01 UTC (permalink / raw)
  To: Ryan Roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 6/16/25 5:33 AM, Ryan Roberts wrote:
> On 16/06/2025 12:58, Ryan Roberts wrote:
>> On 31/05/2025 03:41, Yang Shi wrote:
>>> When rodata=full is specified, kernel linear mapping has to be mapped at
>>> PTE level since large page table can't be split due to break-before-make
>>> rule on ARM64.
>>>
>>> This resulted in a couple of problems:
>>>    - performance degradation
>>>    - more TLB pressure
>>>    - memory waste for kernel page table
>>>
>>> With FEAT_BBM level 2 support, splitting large block page table to
>>> smaller ones doesn't need to make the page table entry invalid anymore.
>>> This allows kernel split large block mapping on the fly.
>>>
>>> Add kernel page table split support and use large block mapping by
>>> default when FEAT_BBM level 2 is supported for rodata=full.  When
>>> changing permissions for kernel linear mapping, the page table will be
>>> split to smaller size.
>>>
>>> The machine without FEAT_BBM level 2 will fallback to have kernel linear
>>> mapping PTE-mapped when rodata=full.
>>>
>>> With this we saw significant performance boost with some benchmarks and
>>> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
>>> 256GB memory.
>>>
>>> * Memory use after boot
>>> Before:
>>> MemTotal:       258988984 kB
>>> MemFree:        254821700 kB
>>>
>>> After:
>>> MemTotal:       259505132 kB
>>> MemFree:        255410264 kB
>>>
>>> Around 500MB more memory are free to use.  The larger the machine, the
>>> more memory saved.
>>>
>>> * Memcached
>>> We saw performance degradation when running Memcached benchmark with
>>> rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
>>> With this patchset we saw ops/sec is increased by around 3.5%, P99
>>> latency is reduced by around 9.6%.
>>> The gain mainly came from reduced kernel TLB misses.  The kernel TLB
>>> MPKI is reduced by 28.5%.
>>>
>>> The benchmark data is now on par with rodata=on too.
>>>
>>> * Disk encryption (dm-crypt) benchmark
>>> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
>>> encryption (by dm-crypt).
>>> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>>>      --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>>>      --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>>>      --name=iops-test-job --eta-newline=1 --size 100G
>>>
>>> The IOPS is increased by 90% - 150% (the variance is high, but the worst
>>> number of good case is around 90% more than the best number of bad case).
>>> The bandwidth is increased and the avg clat is reduced proportionally.
>>>
>>> * Sequential file read
>>> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
>>> The bandwidth is increased by 150%.
>>>
>>> Signed-off-by: Yang Shi<yang@os.amperecomputing.com>
>>> ---
>>>   arch/arm64/include/asm/cpufeature.h |  26 +++
>>>   arch/arm64/include/asm/mmu.h        |   1 +
>>>   arch/arm64/include/asm/pgtable.h    |  12 +-
>>>   arch/arm64/kernel/cpufeature.c      |   2 +-
>>>   arch/arm64/mm/mmu.c                 | 269 +++++++++++++++++++++++++---
>>>   arch/arm64/mm/pageattr.c            |  37 +++-
>>>   6 files changed, 319 insertions(+), 28 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>>> index 8f36ffa16b73..a95806980298 100644
>>> --- a/arch/arm64/include/asm/cpufeature.h
>>> +++ b/arch/arm64/include/asm/cpufeature.h
>>> @@ -1053,6 +1053,32 @@ static inline bool cpu_has_lpa2(void)
>>>   #endif
>>>   }
>>>   
>>> +bool cpu_has_bbml2_noabort(unsigned int cpu_midr);
>>> +
>>> +static inline bool has_nobbml2_override(void)
>>> +{
>>> +	u64 mmfr2;
>>> +	unsigned int bbm;
>>> +
>>> +	mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
>>> +	mmfr2 &= ~id_aa64mmfr2_override.mask;
>>> +	mmfr2 |= id_aa64mmfr2_override.val;
>>> +	bbm = cpuid_feature_extract_unsigned_field(mmfr2,
>>> +						   ID_AA64MMFR2_EL1_BBM_SHIFT);
>>> +	return bbm == 0;
>>> +}
>>> +
>>> +/*
>>> + * Called at early boot stage on boot CPU before cpu info and cpu feature
>>> + * are ready.
>>> + */
>>> +static inline bool bbml2_noabort_available(void)
>>> +{
>>> +	return IS_ENABLED(CONFIG_ARM64_BBML2_NOABORT) &&
>>> +	       cpu_has_bbml2_noabort(read_cpuid_id()) &&
>>> +	       !has_nobbml2_override();
>> Based on Will's feedback, the Kconfig and the cmdline override will both
>> disappear in Miko's next version and we will only use the MIDR list to decide
>> BBML2_NOABORT status, so this will significantly simplify. Sorry about the churn
>> here.

Good news! I just saw Miko's v7 series, will rebase on to it.

>>> +}
>>> +
>>>   #endif /* __ASSEMBLY__ */
>>>   
>>>   #endif
>>> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
>>> index 6e8aa8e72601..2693d63bf837 100644
>>> --- a/arch/arm64/include/asm/mmu.h
>>> +++ b/arch/arm64/include/asm/mmu.h
>>> @@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>>>   			       pgprot_t prot, bool page_mappings_only);
>>>   extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>>>   extern void mark_linear_text_alias_ro(void);
>>> +extern int split_linear_mapping(unsigned long start, unsigned long end);
>> nit: Perhaps split_leaf_mapping() or split_kernel_pgtable_mapping() or something
>> similar is more generic which will benefit us in future when using this for
>> vmalloc too?

Yeah, sure. Will use split_kernel_pgtable_mapping().

>>>   
>>>   /*
>>>    * This check is triggered during the early boot before the cpufeature
>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>> index d3b538be1500..bf3cef31d243 100644
>>> --- a/arch/arm64/include/asm/pgtable.h
>>> +++ b/arch/arm64/include/asm/pgtable.h
>>> @@ -293,6 +293,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
>>>   	return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
>>>   }
>>>   
>>> +static inline pmd_t pmd_mknoncont(pmd_t pmd)
>>> +{
>>> +	return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
>>> +}
>>> +
>>>   static inline pte_t pte_mkdevmap(pte_t pte)
>>>   {
>>>   	return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
>>> @@ -769,7 +774,7 @@ static inline bool in_swapper_pgdir(void *addr)
>>>   	        ((unsigned long)swapper_pg_dir & PAGE_MASK);
>>>   }
>>>   
>>> -static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>>> +static inline void __set_pmd_nosync(pmd_t *pmdp, pmd_t pmd)
>>>   {
>>>   #ifdef __PAGETABLE_PMD_FOLDED
>>>   	if (in_swapper_pgdir(pmdp)) {
>>> @@ -779,6 +784,11 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>>>   #endif /* __PAGETABLE_PMD_FOLDED */
>>>   
>>>   	WRITE_ONCE(*pmdp, pmd);
>>> +}
>>> +
>>> +static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>>> +{
>>> +	__set_pmd_nosync(pmdp, pmd);
>>>   
>>>   	if (pmd_valid(pmd)) {
>>>   		dsb(ishst);
>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>>> index e879bfcf853b..5fc2a4a804de 100644
>>> --- a/arch/arm64/kernel/cpufeature.c
>>> +++ b/arch/arm64/kernel/cpufeature.c
>>> @@ -2209,7 +2209,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
>>>   	return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
>>>   }
>>>   
>>> -static bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
>>> +bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
>>>   {
>>>   	/*
>>>   	 * We want to allow usage of bbml2 in as wide a range of kernel contexts
>> [...] I'll send a separate response for the mmu.c table walker changes.
>>
>>>   
>>> +int split_linear_mapping(unsigned long start, unsigned long end)
>>> +{
>>> +	int ret = 0;
>>> +
>>> +	if (!system_supports_bbml2_noabort())
>>> +		return 0;
>> Hmm... I guess the thinking here is that for !BBML2_NOABORT you are expecting
>> this function should only be called in the first place if we know we are
>> pte-mapped. So I guess this is ok... it just means that if we are not
>> pte-mapped, warnings will be emitted while walking the pgtables (as is the case
>> today). So I think this approach is ok.

Yes, we can't split kernel page tables on the fly without BBML2_NOABORT, 
and in that case the kernel linear mapping should always be PTE-mapped, 
so it returns 0 instead of an errno.

>>> +
>>> +	mmap_write_lock(&init_mm);
>> What is the lock protecting? I was originally thinking no locking should be
>> needed because it's not needed for permission changes today; But I think you are
>> right here and we do need locking; multiple owners could share a large leaf
>> mapping, I guess? And in that case you could get concurrent attempts to split
>> from both owners.

Yes, for example, [addr, addr + 4K) and [addr + 4K, addr + 8K) may 
belong to two different owners. There may be a race when both owners 
want to split the page table.

>> I'm not really a fan of adding the extra locking though; it might introduce a
>> new bottleneck. I wonder if there is a way we could do this locklessly? i.e.
>> allocate the new table, then cmpxchg to insert and the loser has to free? That
>> doesn't work for contiguous mappings though...

I'm not sure whether it is going to be a real bottleneck or not. I saw 
x86 uses a dedicated lock, called cpa_lock, so it seems the lock 
contention is not a real problem there.
I used the init_mm mmap_lock instead of inventing a new lock. My thought 
is we can start from something simple and optimize it if it turns out 
to be a problem.

>>> +	/* NO_EXEC_MAPPINGS is needed when splitting linear map */
>>> +	ret = __create_pgd_mapping_locked(init_mm.pgd, virt_to_phys((void *)start),
>>> +					  start, (end - start), __pgprot(0),
>>> +					  __pgd_pgtable_alloc,
>>> +					  NO_EXEC_MAPPINGS | SPLIT_MAPPINGS);
>>> +	mmap_write_unlock(&init_mm);
>>> +	flush_tlb_kernel_range(start, end);
>> I don't believe we should need to flush the TLB when only changing entry sizes
>> when BBML2 is supported. Miko's series has a massive comment explaining the
>> reasoning. That applies to user space though. We should consider if this all
>> works safely for kernel space too, and hopefully remove the flush.

I think it should be the same as userspace. The point is the hardware 
will handle TLB conflicts gracefully with BBML2_NOABORT, and it does the 
same thing for userspace and kernel addresses, so the TLB flush should 
not be necessary. TLB pressure or the implicit invalidation on a 
conflicting TLB entry will do the job.

>>> +
>>> +	return ret;
>>> +}
>>> +
>>>   /*
>>>    * This function can only be used to modify existing table entries,
>>>    * without allocating new levels of table. Note that this permits the
>>> @@ -676,6 +887,24 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>>>   
>>>   #endif /* CONFIG_KFENCE */
>>>   
>>> +static inline bool force_pte_mapping(void)
>>> +{
>>> +	/*
>>> +	 * Can't use cpufeature API to determine whether BBML2 supported
>>> +	 * or not since cpufeature have not been finalized yet.
>>> +	 *
>>> +	 * Checking the boot CPU only for now.  If the boot CPU has
>>> +	 * BBML2, paint linear mapping with block mapping.  If it turns
>>> +	 * out the secondary CPUs don't support BBML2 once cpufeature is
>>> +	 * finalized, the linear mapping will be repainted with PTE
>>> +	 * mapping.
>>> +	 */
>>> +	return (rodata_full && !bbml2_noabort_available()) ||
>> So this is the case where we don't have BBML2 and need to modify protections at
>> page granularity - I agree we need to force pte mappings here.
>>
>>> +		debug_pagealloc_enabled() ||
>> This is the case where every page is made invalid on free and valid on
>> allocation, so no point in having block mappings because it will soon degenerate
>> into page mappings because we will have to split on every allocation. Agree here
>> too.
>>
>>> +		arm64_kfence_can_set_direct_map() ||
>> After looking into how kfence works, I don't agree with this one. It has a
>> dedicated pool where it allocates from. That pool may be allocated early by the
>> arch or may be allocated late by the core code. Either way, kfence will only
>> modify protections within that pool. You current approach is forcing pte
>> mappings if the pool allocation is late (i.e. not performed by the arch code
>> during boot). But I think "late" is the most common case; kfence is compiled
>> into the kernel but not active at boot. Certainly that's how my Ubuntu kernel is
>> configured. So I think we should just ignore kfence here. If it's "early" then
>> we map the pool with page granularity (as an optimization). If it's "late" your
>> splitter will degenerate the whole kfence pool to page mappings over time as
>> kfence_protect_page() -> set_memory_valid() is called. But the bulk of the
>> linear map will remain mapped with large blocks.

OK, thanks for looking into this. I misunderstood how the late pool works.

>>> +		is_realm_world();
>> I think the only reason this requires pte mappings is for
>> __set_memory_enc_dec(). But that can now deal with block mappings given the
>> ability to split the mappings as needed. So I think this condition can be
>> removed too.

Sure

> To clarify; the latter 2 would still be needed for the !BBML2_NOABORT case. So I
> think the expression becomes:
>
> 	return (!bbml2_noabort_available() && (rodata_full ||
> 		arm64_kfence_can_set_direct_map() || is_realm_world())) ||
> 		debug_pagealloc_enabled();

Thanks for coming up with this expression.

Thanks,
Yang

> Thanks,
> Ryan
>
>>> +}
>> Additionally, for can_set_direct_map(): at minimum its comment should be tidied
>> up, but really I think it should return true if "BBML2_NOABORT ||
>> force_pte_mapping()". Because they are the conditions under which we can now
>> safely modify the linear map.
>>
>>> +
>>>   static void __init map_mem(pgd_t *pgdp)
>>>   {
>>>   	static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
>>> @@ -701,7 +930,7 @@ static void __init map_mem(pgd_t *pgdp)
>>>   
>>>   	early_kfence_pool = arm64_kfence_alloc_pool();
>>>   
>>> -	if (can_set_direct_map())
>>> +	if (force_pte_mapping())
>>>   		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>>   
>>>   	/*
>>> @@ -1402,7 +1631,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>>   
>>>   	VM_BUG_ON(!mhp_range_allowed(start, size, true));
>>>   
>>> -	if (can_set_direct_map())
>>> +	if (force_pte_mapping())
>>>   		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>>   
>>>   	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
>>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>>> index 39fd1f7ff02a..25c068712cb5 100644
>>> --- a/arch/arm64/mm/pageattr.c
>>> +++ b/arch/arm64/mm/pageattr.c
>>> @@ -10,6 +10,7 @@
>>>   #include <linux/vmalloc.h>
>>>   
>>>   #include <asm/cacheflush.h>
>>> +#include <asm/mmu.h>
>>>   #include <asm/pgtable-prot.h>
>>>   #include <asm/set_memory.h>
>>>   #include <asm/tlbflush.h>
>>> @@ -42,6 +43,8 @@ static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
>>>   	struct page_change_data *cdata = data;
>>>   	pte_t pte = __ptep_get(ptep);
>>>   
>>> +	BUG_ON(pte_cont(pte));
>> I don't think this is required; We want to enable using contiguous mappings
>> where it makes sense. As long as we have BBML2, we can update contiguous pte
>> mappings in place, as long as we update all of the ptes in the contiguous block.
>> split_linear_map() should either have converted to non-cont mappings if the
>> contiguous block straddled the split point, or would have left as is (or
>> downgraded a PMD-block to a contpte block) if fully contained within the split
>> range.
>>
>>> +
>>>   	pte = clear_pte_bit(pte, cdata->clear_mask);
>>>   	pte = set_pte_bit(pte, cdata->set_mask);
>>>   
>>> @@ -80,8 +83,9 @@ static int change_memory_common(unsigned long addr, int numpages,
>>>   	unsigned long start = addr;
>>>   	unsigned long size = PAGE_SIZE * numpages;
>>>   	unsigned long end = start + size;
>>> +	unsigned long l_start;
>>>   	struct vm_struct *area;
>>> -	int i;
>>> +	int i, ret;
>>>   
>>>   	if (!PAGE_ALIGNED(addr)) {
>>>   		start &= PAGE_MASK;
>>> @@ -118,7 +122,12 @@ static int change_memory_common(unsigned long addr, int numpages,
>>>   	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>>>   			    pgprot_val(clear_mask) == PTE_RDONLY)) {
>>>   		for (i = 0; i < area->nr_pages; i++) {
>>> -			__change_memory_common((u64)page_address(area->pages[i]),
>>> +			l_start = (u64)page_address(area->pages[i]);
>>> +			ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
>>> +			if (WARN_ON_ONCE(ret))
>>> +				return ret;
>> I don't think this is the right place to integrate; I think the split should be
>> done inside __change_memory_common(). Then it caters to all possibilities (i.e.
>> set_memory_valid() and __set_memory_enc_dec()). This means it will run for
>> vmalloc too, but for now, that will be a nop because everything should already
>> be split as required on entry and in future we will get that for free.
>>
>> Once you have integrated Dev's series, the hook becomes
>> ___change_memory_common() (3 underscores)...
>>
>>> +
>>> +			__change_memory_common(l_start,
>>>   					       PAGE_SIZE, set_mask, clear_mask);
>>>   		}
>>>   	}
>>> @@ -174,6 +183,9 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
>>>   
>>>   int set_direct_map_invalid_noflush(struct page *page)
>>>   {
>>> +	unsigned long l_start;
>>> +	int ret;
>>> +
>>>   	struct page_change_data data = {
>>>   		.set_mask = __pgprot(0),
>>>   		.clear_mask = __pgprot(PTE_VALID),
>>> @@ -182,13 +194,21 @@ int set_direct_map_invalid_noflush(struct page *page)
>>>   	if (!can_set_direct_map())
>>>   		return 0;
>>>   
>>> +	l_start = (unsigned long)page_address(page);
>>> +	ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
>>> +	if (WARN_ON_ONCE(ret))
>>> +		return ret;
>>> +
>>>   	return apply_to_page_range(&init_mm,
>>> -				   (unsigned long)page_address(page),
>>> -				   PAGE_SIZE, change_page_range, &data);
>>> +				   l_start, PAGE_SIZE, change_page_range,
>>> +				   &data);
>> ...and once integrated with Dev's series you don't need any changes here...
>>
>>>   }
>>>   
>>>   int set_direct_map_default_noflush(struct page *page)
>>>   {
>>> +	unsigned long l_start;
>>> +	int ret;
>>> +
>>>   	struct page_change_data data = {
>>>   		.set_mask = __pgprot(PTE_VALID | PTE_WRITE),
>>>   		.clear_mask = __pgprot(PTE_RDONLY),
>>> @@ -197,9 +217,14 @@ int set_direct_map_default_noflush(struct page *page)
>>>   	if (!can_set_direct_map())
>>>   		return 0;
>>>   
>>> +	l_start = (unsigned long)page_address(page);
>>> +	ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
>>> +	if (WARN_ON_ONCE(ret))
>>> +		return ret;
>>> +
>>>   	return apply_to_page_range(&init_mm,
>>> -				   (unsigned long)page_address(page),
>>> -				   PAGE_SIZE, change_page_range, &data);
>>> +				   l_start, PAGE_SIZE, change_page_range,
>>> +				   &data);
>> ...or here.
>>
>> Thanks,
>> Ryan
>>
>>>   }
>>>   
>>>   static int __set_memory_enc_dec(unsigned long addr,



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-06-16 16:24   ` Ryan Roberts
@ 2025-06-17 21:09     ` Yang Shi
  2025-06-23 13:26       ` Ryan Roberts
  0 siblings, 1 reply; 34+ messages in thread
From: Yang Shi @ 2025-06-17 21:09 UTC (permalink / raw)
  To: Ryan Roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 6/16/25 9:24 AM, Ryan Roberts wrote:
> On 31/05/2025 03:41, Yang Shi wrote:
>> When rodata=full is specified, kernel linear mapping has to be mapped at
>> PTE level since large page table can't be split due to break-before-make
>> rule on ARM64.
>>
>> This resulted in a couple of problems:
>>    - performance degradation
>>    - more TLB pressure
>>    - memory waste for kernel page table
>>
>> With FEAT_BBM level 2 support, splitting large block page table to
>> smaller ones doesn't need to make the page table entry invalid anymore.
>> This allows kernel split large block mapping on the fly.
>>
>> Add kernel page table split support and use large block mapping by
>> default when FEAT_BBM level 2 is supported for rodata=full.  When
>> changing permissions for kernel linear mapping, the page table will be
>> split to smaller size.
>>
>> The machine without FEAT_BBM level 2 will fallback to have kernel linear
>> mapping PTE-mapped when rodata=full.
>>
>> With this we saw significant performance boost with some benchmarks and
>> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
>> 256GB memory.
>>
>> * Memory use after boot
>> Before:
>> MemTotal:       258988984 kB
>> MemFree:        254821700 kB
>>
>> After:
>> MemTotal:       259505132 kB
>> MemFree:        255410264 kB
>>
>> Around 500MB more memory are free to use.  The larger the machine, the
>> more memory saved.
>>
>> * Memcached
>> We saw performance degradation when running Memcached benchmark with
>> rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
>> With this patchset we saw ops/sec is increased by around 3.5%, P99
>> latency is reduced by around 9.6%.
>> The gain mainly came from reduced kernel TLB misses.  The kernel TLB
>> MPKI is reduced by 28.5%.
>>
>> The benchmark data is now on par with rodata=on too.
>>
>> * Disk encryption (dm-crypt) benchmark
>> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
>> encryption (by dm-crypt).
>> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>>      --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>>      --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>>      --name=iops-test-job --eta-newline=1 --size 100G
>>
>> The IOPS is increased by 90% - 150% (the variance is high, but the worst
>> number of good case is around 90% more than the best number of bad case).
>> The bandwidth is increased and the avg clat is reduced proportionally.
>>
>> * Sequential file read
>> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
>> The bandwidth is increased by 150%.
>>
>> Signed-off-by: Yang Shi<yang@os.amperecomputing.com>
>> ---
>>   arch/arm64/include/asm/cpufeature.h |  26 +++
>>   arch/arm64/include/asm/mmu.h        |   1 +
>>   arch/arm64/include/asm/pgtable.h    |  12 +-
>>   arch/arm64/kernel/cpufeature.c      |   2 +-
>>   arch/arm64/mm/mmu.c                 | 269 +++++++++++++++++++++++++---
>>   arch/arm64/mm/pageattr.c            |  37 +++-
>>   6 files changed, 319 insertions(+), 28 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index 8f36ffa16b73..a95806980298 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -1053,6 +1053,32 @@ static inline bool cpu_has_lpa2(void)
>>   #endif
>>   }
>>   
> [...] (I gave comments on this part in previous reply)
>
> I'm focussing on teh table walker in mmu.c here - i.e. implementation of
> split_linear_mapping()...
>
>> index 775c0536b194..4c5d3aa35d62 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -45,6 +45,7 @@
>>   #define NO_BLOCK_MAPPINGS	BIT(0)
>>   #define NO_CONT_MAPPINGS	BIT(1)
>>   #define NO_EXEC_MAPPINGS	BIT(2)	/* assumes FEAT_HPDS is not used */
>> +#define SPLIT_MAPPINGS		BIT(3)
>>   
>>   u64 kimage_voffset __ro_after_init;
>>   EXPORT_SYMBOL(kimage_voffset);
>> @@ -166,12 +167,91 @@ static void init_clear_pgtable(void *table)
>>   	dsb(ishst);
>>   }
>>   
>> +static void split_cont_pte(pte_t *ptep)
>> +{
>> +	pte_t *_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
>> +	pte_t _pte;
>> +
>> +	for (int i = 0; i < CONT_PTES; i++, _ptep++) {
>> +		_pte = READ_ONCE(*_ptep);
>> +		_pte = pte_mknoncont(_pte);
>> +		__set_pte_nosync(_ptep, _pte);
> This is not atomic but I don't think that matters for kernel mappings since we
> don't care about HW-modified access/dirty bits.

Yes, the access bit should always be set for kernel mappings if I remember 
correctly, and kernel mappings don't care about the dirty bit.

>> +	}
>> +
>> +	dsb(ishst);
>> +	isb();
> I think we can use lazy_mmu_mode here to potentially batch the barriers for
> multiple levels. This also avoids the need for adding __set_pmd_nosync().
>
>> +}
>> +
>> +static void split_cont_pmd(pmd_t *pmdp)
>> +{
>> +	pmd_t *_pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
>> +	pmd_t _pmd;
>> +
>> +	for (int i = 0; i < CONT_PMDS; i++, _pmdp++) {
>> +		_pmd = READ_ONCE(*_pmdp);
>> +		_pmd = pmd_mknoncont(_pmd);
>> +		set_pmd(_pmdp, _pmd);
> Without lazy_mmu_mode this is issuing barriers per entry. With lazy_mmu_mode
> this will defer the barriers until we exit the mode so this will get a bit
> faster. (in practice it will be a bit like what you have done for contpte but
> potentially even better because we can batch across levels.)

OK, lazy mmu mode should work IIUC. A concurrent page table walker should 
see either the old entry or the new entry. Both point to the same physical 
address and have the same permissions, just a different page size, and 
BBML2_NOABORT can handle that gracefully.
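
Just to check my understanding, the contpte path would then become roughly 
the below (untested sketch; it assumes lazy_mmu mode batches the barriers 
that __set_pte()/set_pmd() would otherwise need per entry, as you describe):

static void split_cont_pte(pte_t *ptep)
{
	pte_t *_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
	int i;

	arch_enter_lazy_mmu_mode();
	/*
	 * Clear PTE_CONT in place; BBML2_NOABORT tolerates the transient
	 * mix of cont and non-cont entries covering the same VA range.
	 */
	for (i = 0; i < CONT_PTES; i++, _ptep++)
		__set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
	arch_leave_lazy_mmu_mode();
}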

>> +	}
>> +}
>> +
>> +static void split_pmd(pmd_t pmd, phys_addr_t pte_phys, int flags)
>> +{
>> +	pte_t *ptep;
>> +	unsigned long pfn;
>> +	pgprot_t prot;
>> +
>> +	pfn = pmd_pfn(pmd);
>> +	prot = pmd_pgprot(pmd);
>> +	prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PTE_TYPE_PAGE);
>> +
>> +	ptep = (pte_t *)phys_to_virt(pte_phys);
>> +
>> +	/* It must be naturally aligned if PMD is leaf */
>> +	if ((flags & NO_CONT_MAPPINGS) == 0)
> I'm not sure we have a use case for avoiding CONT mappings? Suggest doing it
> unconditionally.

Repainting the linear mapping does avoid CONT mappings, so that is one use 
case for the flag.

>> +		prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +
>> +	for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
>> +		__set_pte_nosync(ptep, pfn_pte(pfn, prot));
>> +
>> +	dsb(ishst);
>> +}
>> +
>> +static void split_pud(pud_t pud, phys_addr_t pmd_phys, int flags)
>> +{
>> +	pmd_t *pmdp;
>> +	unsigned long pfn;
>> +	pgprot_t prot;
>> +	unsigned int step = PMD_SIZE >> PAGE_SHIFT;
>> +
>> +	pfn = pud_pfn(pud);
>> +	prot = pud_pgprot(pud);
>> +	pmdp = (pmd_t *)phys_to_virt(pmd_phys);
>> +
>> +	/* It must be naturally aligned if PUD is leaf */
>> +	if ((flags & NO_CONT_MAPPINGS) == 0)
>> +		prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +
>> +	for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
>> +		__set_pmd_nosync(pmdp, pfn_pmd(pfn, prot));
>> +		pfn += step;
>> +	}
>> +
>> +	dsb(ishst);
>> +}
>> +
>>   static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
>> -		     phys_addr_t phys, pgprot_t prot)
>> +		     phys_addr_t phys, pgprot_t prot, int flags)
>>   {
>>   	do {
>>   		pte_t old_pte = __ptep_get(ptep);
>>   
>> +		if (flags & SPLIT_MAPPINGS) {
>> +			if (pte_cont(old_pte))
>> +				split_cont_pte(ptep);
>> +
>> +			continue;
>> +		}
>> +
>>   		/*
>>   		 * Required barriers to make this visible to the table walker
>>   		 * are deferred to the end of alloc_init_cont_pte().
>> @@ -199,11 +279,20 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>>   	pmd_t pmd = READ_ONCE(*pmdp);
>>   	pte_t *ptep;
>>   	int ret = 0;
>> +	bool split = flags & SPLIT_MAPPINGS;
>> +	pmdval_t pmdval;
>> +	phys_addr_t pte_phys;
>>   
>> -	BUG_ON(pmd_sect(pmd));
>> -	if (pmd_none(pmd)) {
>> -		pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
>> -		phys_addr_t pte_phys;
>> +	if (!split)
>> +		BUG_ON(pmd_sect(pmd));
>> +
>> +	if (pmd_none(pmd) && split) {
>> +		ret = -EINVAL;
>> +		goto out;
>> +	}
>> +
>> +	if (pmd_none(pmd) || (split && pmd_leaf(pmd))) {
>> +		pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
>>   
>>   		if (flags & NO_EXEC_MAPPINGS)
>>   			pmdval |= PMD_TABLE_PXN;
>> @@ -213,6 +302,18 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>>   			ret = -ENOMEM;
>>   			goto out;
>>   		}
>> +	}
>> +
>> +	if (split) {
>> +		if (pmd_leaf(pmd)) {
>> +			split_pmd(pmd, pte_phys, flags);
>> +			__pmd_populate(pmdp, pte_phys, pmdval);
>> +		}
>> +		ptep = pte_offset_kernel(pmdp, addr);
>> +		goto split_pgtable;
>> +	}
>> +
>> +	if (pmd_none(pmd)) {
>>   		ptep = pte_set_fixmap(pte_phys);
>>   		init_clear_pgtable(ptep);
>>   		ptep += pte_index(addr);
>> @@ -222,17 +323,28 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>>   		ptep = pte_set_fixmap_offset(pmdp, addr);
>>   	}
>>   
>> +split_pgtable:
>>   	do {
>>   		pgprot_t __prot = prot;
>>   
>>   		next = pte_cont_addr_end(addr, end);
>>   
>> +		if (split) {
>> +			pte_t pteval = READ_ONCE(*ptep);
>> +			bool cont = pte_cont(pteval);
>> +
>> +			if (cont &&
>> +			    ((addr | next) & ~CONT_PTE_MASK) == 0 &&
>> +			    (flags & NO_CONT_MAPPINGS) == 0)
>> +				continue;
>> +		}
>> +
>>   		/* use a contiguous mapping if the range is suitably aligned */
>>   		if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) &&
>>   		    (flags & NO_CONT_MAPPINGS) == 0)
>>   			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>>   
>> -		init_pte(ptep, addr, next, phys, __prot);
>> +		init_pte(ptep, addr, next, phys, __prot, flags);
>>   
>>   		ptep += pte_index(next) - pte_index(addr);
>>   		phys += next - addr;
>> @@ -243,7 +355,8 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>>   	 * ensure that all previous pgtable writes are visible to the table
>>   	 * walker.
>>   	 */
>> -	pte_clear_fixmap();
>> +	if (!split)
>> +		pte_clear_fixmap();
>>   
>>   out:
>>   	return ret;
>> @@ -255,15 +368,29 @@ static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
>>   {
>>   	unsigned long next;
>>   	int ret = 0;
>> +	bool split = flags & SPLIT_MAPPINGS;
>> +	bool cont;
>>   
>>   	do {
>>   		pmd_t old_pmd = READ_ONCE(*pmdp);
>>   
>>   		next = pmd_addr_end(addr, end);
>>   
>> +		if (split && pmd_leaf(old_pmd)) {
>> +			cont = pgprot_val(pmd_pgprot(old_pmd)) & PTE_CONT;
>> +			if (cont)
>> +				split_cont_pmd(pmdp);
>> +
>> +			/* The PMD is fully contained in the range */
>> +			if (((addr | next) & ~PMD_MASK) == 0 &&
>> +			    (flags & NO_BLOCK_MAPPINGS) == 0)
>> +				continue;
>> +		}
>> +
>>   		/* try section mapping first */
>>   		if (((addr | next | phys) & ~PMD_MASK) == 0 &&
>> -		    (flags & NO_BLOCK_MAPPINGS) == 0) {
>> +		    (flags & NO_BLOCK_MAPPINGS) == 0 &&
>> +		    (flags & SPLIT_MAPPINGS) == 0) {
>>   			pmd_set_huge(pmdp, phys, prot);
>>   
>>   			/*
>> @@ -278,7 +405,7 @@ static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
>>   			if (ret)
>>   				break;
>>   
>> -			BUG_ON(pmd_val(old_pmd) != 0 &&
>> +			BUG_ON(!split && pmd_val(old_pmd) != 0 &&
>>   			       pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
>>   		}
>>   		phys += next - addr;
>> @@ -296,14 +423,23 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>>   	int ret = 0;
>>   	pud_t pud = READ_ONCE(*pudp);
>>   	pmd_t *pmdp;
>> +	bool split = flags & SPLIT_MAPPINGS;
>> +	pudval_t pudval;
>> +	phys_addr_t pmd_phys;
>>   
>>   	/*
>>   	 * Check for initial section mappings in the pgd/pud.
>>   	 */
>> -	BUG_ON(pud_sect(pud));
>> -	if (pud_none(pud)) {
>> -		pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
>> -		phys_addr_t pmd_phys;
>> +	if (!split)
>> +		BUG_ON(pud_sect(pud));
>> +
>> +	if (pud_none(pud) && split) {
>> +		ret = -EINVAL;
>> +		goto out;
>> +	}
>> +
>> +	if (pud_none(pud) || (split && pud_leaf(pud))) {
>> +		pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
>>   
>>   		if (flags & NO_EXEC_MAPPINGS)
>>   			pudval |= PUD_TABLE_PXN;
>> @@ -313,6 +449,18 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>>   			ret = -ENOMEM;
>>   			goto out;
>>   		}
>> +	}
>> +
>> +	if (split) {
>> +		if (pud_leaf(pud)) {
>> +			split_pud(pud, pmd_phys, flags);
>> +			__pud_populate(pudp, pmd_phys, pudval);
>> +		}
>> +		pmdp = pmd_offset(pudp, addr);
>> +		goto split_pgtable;
>> +	}
>> +
>> +	if (pud_none(pud)) {
>>   		pmdp = pmd_set_fixmap(pmd_phys);
>>   		init_clear_pgtable(pmdp);
>>   		pmdp += pmd_index(addr);
>> @@ -322,11 +470,22 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>>   		pmdp = pmd_set_fixmap_offset(pudp, addr);
>>   	}
>>   
>> +split_pgtable:
>>   	do {
>>   		pgprot_t __prot = prot;
>>   
>>   		next = pmd_cont_addr_end(addr, end);
>>   
>> +		if (split) {
>> +			pmd_t pmdval = READ_ONCE(*pmdp);
>> +			bool cont = pgprot_val(pmd_pgprot(pmdval)) & PTE_CONT;
>> +
>> +			if (cont &&
>> +			    ((addr | next) & ~CONT_PMD_MASK) == 0 &&
>> +			    (flags & NO_CONT_MAPPINGS) == 0)
>> +				continue;
>> +		}
>> +
>>   		/* use a contiguous mapping if the range is suitably aligned */
>>   		if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) &&
>>   		    (flags & NO_CONT_MAPPINGS) == 0)
>> @@ -340,7 +499,8 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>>   		phys += next - addr;
>>   	} while (addr = next, addr != end);
>>   
>> -	pmd_clear_fixmap();
>> +	if (!split)
>> +		pmd_clear_fixmap();
>>   
>>   out:
>>   	return ret;
>> @@ -355,6 +515,16 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>>   	int ret = 0;
>>   	p4d_t p4d = READ_ONCE(*p4dp);
>>   	pud_t *pudp;
>> +	bool split = flags & SPLIT_MAPPINGS;
>> +
>> +	if (split) {
>> +		if (p4d_none(p4d)) {
>> +			ret= -EINVAL;
>> +			goto out;
>> +		}
>> +		pudp = pud_offset(p4dp, addr);
>> +		goto split_pgtable;
>> +	}
>>   
>>   	if (p4d_none(p4d)) {
>>   		p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN | P4D_TABLE_AF;
>> @@ -377,17 +547,26 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>>   		pudp = pud_set_fixmap_offset(p4dp, addr);
>>   	}
>>   
>> +split_pgtable:
>>   	do {
>>   		pud_t old_pud = READ_ONCE(*pudp);
>>   
>>   		next = pud_addr_end(addr, end);
>>   
>> +		if (split && pud_leaf(old_pud)) {
>> +			/* The PUD is fully contained in the range */
>> +			if (((addr | next) & ~PUD_MASK) == 0 &&
>> +			    (flags & NO_BLOCK_MAPPINGS) == 0)
>> +				continue;
>> +		}
>> +
>>   		/*
>>   		 * For 4K granule only, attempt to put down a 1GB block
>>   		 */
>>   		if (pud_sect_supported() &&
>>   		   ((addr | next | phys) & ~PUD_MASK) == 0 &&
>> -		    (flags & NO_BLOCK_MAPPINGS) == 0) {
>> +		    (flags & NO_BLOCK_MAPPINGS) == 0 &&
>> +		    (flags & SPLIT_MAPPINGS) == 0) {
>>   			pud_set_huge(pudp, phys, prot);
>>   
>>   			/*
>> @@ -402,13 +581,14 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>>   			if (ret)
>>   				break;
>>   
>> -			BUG_ON(pud_val(old_pud) != 0 &&
>> +			BUG_ON(!split && pud_val(old_pud) != 0 &&
>>   			       pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
>>   		}
>>   		phys += next - addr;
>>   	} while (pudp++, addr = next, addr != end);
>>   
>> -	pud_clear_fixmap();
>> +	if (!split)
>> +		pud_clear_fixmap();
>>   
>>   out:
>>   	return ret;
>> @@ -423,6 +603,16 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>>   	int ret = 0;
>>   	pgd_t pgd = READ_ONCE(*pgdp);
>>   	p4d_t *p4dp;
>> +	bool split = flags & SPLIT_MAPPINGS;
>> +
>> +	if (split) {
>> +		if (pgd_none(pgd)) {
>> +			ret = -EINVAL;
>> +			goto out;
>> +		}
>> +		p4dp = p4d_offset(pgdp, addr);
>> +		goto split_pgtable;
>> +	}
> I really don't like the way the split logic has been added to the existing table
> walker; there are so many conditionals, it's not clear that there is really any
> advantage. I know I proposed it originally, but I changed my mind last cycle and
> made the case for keeping it separate. That's still my opinion I'm afraid; I'm
> proposing a patch below showing how I would prefer to see this implemented.

Yes, it added a lot of conditionals because split does the reverse 
operation, so we need conditionals to tell whether we are creating or 
splitting. But I don't recall you mentioning this in the v3 discussion. 
Maybe I misunderstood you or missed the point, because our discussion was 
focused on keeping block mappings instead of splitting all the way down to 
PTEs all the time.

But anyway, this basically rolls back to my v2 implementation, which had 
dedicated separate functions for the split.

>>   
>>   	if (pgd_none(pgd)) {
>>   		pgdval_t pgdval = PGD_TYPE_TABLE | PGD_TABLE_UXN | PGD_TABLE_AF;
>> @@ -445,6 +635,7 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>>   		p4dp = p4d_set_fixmap_offset(pgdp, addr);
>>   	}
>>   
>> +split_pgtable:
>>   	do {
>>   		p4d_t old_p4d = READ_ONCE(*p4dp);
>>   
>> @@ -461,7 +652,8 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>>   		phys += next - addr;
>>   	} while (p4dp++, addr = next, addr != end);
>>   
>> -	p4d_clear_fixmap();
>> +	if (!split)
>> +		p4d_clear_fixmap();
>>   
>>   out:
>>   	return ret;
>> @@ -557,6 +749,25 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
>>   	return pa;
>>   }
>>   
>> +int split_linear_mapping(unsigned long start, unsigned long end)
>> +{
>> +	int ret = 0;
>> +
>> +	if (!system_supports_bbml2_noabort())
>> +		return 0;
>> +
>> +	mmap_write_lock(&init_mm);
>> +	/* NO_EXEC_MAPPINGS is needed when splitting linear map */
>> +	ret = __create_pgd_mapping_locked(init_mm.pgd, virt_to_phys((void *)start),
>> +					  start, (end - start), __pgprot(0),
>> +					  __pgd_pgtable_alloc,
>> +					  NO_EXEC_MAPPINGS | SPLIT_MAPPINGS);
> Implementing this on top of __create_pgd_mapping_locked() is problematic because
> (I think) it assumes that the virtual range is physically contiguous? That's
> fine for the linear map, but I'd like to reuse this primitive for vmalloc too.

That assumption is for creating page tables. But split doesn't care 
whether the range is physically contiguous or not; the phys argument is 
not actually used by the split primitive.

>> +	mmap_write_unlock(&init_mm);
> As already mentioned, I don't like this locking. I think we can make it work
> locklessly as long as we are only ever splitting and not collapsing.

I don't disagree that lockless is good. I'm just not sure whether it is 
worth the extra complexity.
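
If I understand the TODOs in your sketch below correctly, the lockless part 
for the PMD case boils down to something like this (illustrative only, the 
helper name is made up):

static int pmd_install_split_table(pmd_t *pmdp, pmd_t old_pmd,
				   phys_addr_t pte_phys, pmdval_t tableprot)
{
	pmd_t new_pmd = __pmd(__phys_to_pmd_val(pte_phys) | tableprot);

	/* Only install our table if the leaf we split is still in place. */
	if (cmpxchg_relaxed(&pmd_val(*pmdp), pmd_val(old_pmd),
			    pmd_val(new_pmd)) != pmd_val(old_pmd)) {
		/* Lost the race; someone else already split this PMD. */
		pagetable_free(virt_to_ptdesc(phys_to_virt(pte_phys)));
		return -EAGAIN;
	}

	return 0;
}

That does look doable; my hesitation was just whether it buys us much over 
the lock for this path.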

>> +	flush_tlb_kernel_range(start, end);
>> +
>> +	return ret;
>> +}
> I had a go at creating a version to try to illustrate how I have been thinking
> about this. What do you think? I've only compile tested it (and it fails because
> I don't have pmd_mknoncont() and system_supports_bbml2_noabort() in my tree -
> but the rest looks ok). It's on top of v6.16-rc1, where the pgtable allocation
> functions have changed a bit. And I don't think you need patch 2 from your
> series with this change. I haven't implemented the cmpxchg part that I think

Yes, patch #2 is not needed anymore for this series if we have the split 
primitive in a separate function. But it is still a prerequisite for 
fixing the memory hotplug bug.

> would make it safe to be used locklessly yet, but I've marked the sites up with
> TODO. Once implemented, the idea is that concurrent threads trying to split on
> addresses that all lie within the same block/contig mapping should be safe.
>
> ---8<---
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 8fcf59ba39db..22a09cc7a2aa 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -480,6 +480,9 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys,
> unsigned long virt,
>   			     int flags);
>   #endif
>
> +/* Sentinel used to represent failure to allocate for phys_addr_t type. */
> +#define INVALID_PHYS_ADDR -1
> +
>   static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>   				       enum pgtable_type pgtable_type)
>   {
> @@ -487,7 +490,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>   	struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
>   	phys_addr_t pa;
>
> -	BUG_ON(!ptdesc);
> +	if (!ptdesc)
> +		return INVALID_PHYS_ADDR;
> +
>   	pa = page_to_phys(ptdesc_page(ptdesc));
>
>   	switch (pgtable_type) {
> @@ -509,15 +514,27 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>   }
>
>   static phys_addr_t __maybe_unused
> -pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>   {
>   	return __pgd_pgtable_alloc(&init_mm, pgtable_type);
>   }
>
> +static phys_addr_t __maybe_unused
> +pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +{
> +	phys_addr_t pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
> +
> +	BUG_ON(!pa);
> +	return pa;
> +}
> +
>   static phys_addr_t
>   pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
>   {
> -	return __pgd_pgtable_alloc(NULL, pgtable_type);
> +	phys_addr_t pa = __pgd_pgtable_alloc(NULL, pgtable_type);
> +
> +	BUG_ON(!pa);
> +	return pa;
>   }
>
>   /*
> @@ -1616,3 +1633,202 @@ int arch_set_user_pkey_access(struct task_struct *tsk,
> int pkey, unsigned long i
>   	return 0;
>   }
>   #endif
> +
> +static void split_contpte(pte_t *ptep)
> +{
> +	int i;
> +
> +	ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
> +	for (i = 0; i < CONT_PTES; i++, ptep++)
> +		__set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
> +}
> +
> +static int split_pmd(pmd_t *pmdp, pmd_t pmd)
> +{
> +	pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
> +	unsigned long pfn = pmd_pfn(pmd);
> +	pgprot_t prot = pmd_pgprot(pmd);
> +	phys_addr_t pte_phys;
> +	pte_t *ptep;
> +	int i;
> +
> +	pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
> +	if (pte_phys == INVALID_PHYS_ADDR)
> +		return -ENOMEM;
> +	ptep = (pte_t *)phys_to_virt(pte_phys);
> +
> +	if (pgprot_val(prot) & PMD_SECT_PXN)
> +		tableprot |= PMD_TABLE_PXN;
> +
> +	prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
> +	prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +
> +	for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
> +		__set_pte(ptep, pfn_pte(pfn, prot));
> +
> +	/*
> +	 * Ensure the pte entries are visible to the table walker by the time
> +	 * the pmd entry that points to the ptes is visible.
> +	 */
> +	dsb(ishst);
> +
> +	// TODO: THIS NEEDS TO BE CMPXCHG THEN FREE THE TABLE IF WE LOST.
> +	__pmd_populate(pmdp, pte_phys, tableprot);
> +
> +	return 0;
> +}
> +
> +static void split_contpmd(pmd_t *pmdp)
> +{
> +	int i;
> +
> +	pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
> +	for (i = 0; i < CONT_PMDS; i++, pmdp++)
> +		set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
> +}
> +
> +static int split_pud(pud_t *pudp, pud_t pud)
> +{
> +	pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
> +	unsigned int step = PMD_SIZE >> PAGE_SHIFT;
> +	unsigned long pfn = pud_pfn(pud);
> +	pgprot_t prot = pud_pgprot(pud);
> +	phys_addr_t pmd_phys;
> +	pmd_t *pmdp;
> +	int i;
> +
> +	pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
> +	if (pmd_phys == INVALID_PHYS_ADDR)
> +		return -ENOMEM;
> +	pmdp = (pmd_t *)phys_to_virt(pmd_phys);
> +
> +	if (pgprot_val(prot) & PMD_SECT_PXN)
> +		tableprot |= PUD_TABLE_PXN;
> +
> +	prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
> +	prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +
> +	for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
> +		set_pmd(pmdp, pfn_pmd(pfn, prot));
> +
> +	/*
> +	 * Ensure the pmd entries are visible to the table walker by the time
> +	 * the pud entry that points to the pmds is visible.
> +	 */
> +	dsb(ishst);
> +
> +	// TODO: THIS NEEDS TO BE CMPXCHG THEN FREE THE TABLE IF WE LOST.
> +	__pud_populate(pudp, pmd_phys, tableprot);
> +
> +	return 0;
> +}
> +
> +int split_leaf_mapping(unsigned long addr)

Thanks for coming up with the code; it does help me understand your idea. 
Now I see why you suggested the "split_mapping(start); split_mapping(end);" 
model. It does make the implementation easier because we don't need a loop 
anymore. But it may have a couple of problems:
   1. We need to walk the page table twice instead of once, which sounds 
expensive.
   2. How should we handle repainting? For repainting we need to split all 
the page tables between start and end all the way down to PTEs rather than 
keeping block mappings, so this model doesn't work there, right? For 
example, take repainting a 2G range where the first 1G is mapped by a PUD 
block and the second 1G is mapped by 511 PMD blocks plus 512 PTEs. 
split_mapping(start) will split the first 1G, but split_mapping(end) will 
do nothing, so the 511 PMDs are kept intact. In addition, I think we would 
also prefer to reuse the split primitive for repainting instead of 
inventing another one (see the sketch right below).
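
To make the contrast concrete, roughly (just pulling the two models from 
this thread side by side; the exact arguments are illustrative):

	/*
	 * Boundary-only model: fine for permission changes, interior
	 * leaves between start and end may legitimately stay large.
	 */
	split_leaf_mapping(start);
	split_leaf_mapping(end);

	/*
	 * Repainting model: every leaf inside [start, end) must become
	 * PTEs, which is why patch #4 re-walks the whole range with
	 * NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS | SPLIT_MAPPINGS.
	 */
	ret = __create_pgd_mapping_locked(init_mm.pgd, __pa(start), start,
					  end - start, __pgprot(0),
					  repaint_pgtable_alloc,
					  NO_EXEC_MAPPINGS | NO_BLOCK_MAPPINGS |
					  NO_CONT_MAPPINGS | SPLIT_MAPPINGS);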

Thanks,
Yang

> +{
> +	pgd_t *pgdp, pgd;
> +	p4d_t *p4dp, p4d;
> +	pud_t *pudp, pud;
> +	pmd_t *pmdp, pmd;
> +	pte_t *ptep, pte;
> +	int ret = 0;
> +
> +	/*
> +	 * !BBML2_NOABORT systems should not be trying to change permissions on
> +	 * anything that is not pte-mapped in the first place. Just return early
> +	 * and let the permission change code raise a warning if not already
> +	 * pte-mapped.
> +	 */
> +	if (!system_supports_bbml2_noabort())
> +		return 0;
> +
> +	/*
> +	 * Ensure addr is at least page-aligned since this is the finest
> +	 * granularity we can split to.
> +	 */
> +	if (addr != PAGE_ALIGN(addr))
> +		return -EINVAL;
> +
> +	arch_enter_lazy_mmu_mode();
> +
> +	/*
> +	 * PGD: If addr is PGD aligned then addr already describes a leaf
> +	 * boundary. If not present then there is nothing to split.
> +	 */
> +	if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
> +		goto out;
> +	pgdp = pgd_offset_k(addr);
> +	pgd = pgdp_get(pgdp);
> +	if (!pgd_present(pgd))
> +		goto out;
> +
> +	/*
> +	 * P4D: If addr is P4D aligned then addr already describes a leaf
> +	 * boundary. If not present then there is nothing to split.
> +	 */
> +	if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
> +		goto out;
> +	p4dp = p4d_offset(pgdp, addr);
> +	p4d = p4dp_get(p4dp);
> +	if (!p4d_present(p4d))
> +		goto out;
> +
> +	/*
> +	 * PUD: If addr is PUD aligned then addr already describes a leaf
> +	 * boundary. If not present then there is nothing to split. Otherwise,
> +	 * if we have a pud leaf, split to contpmd.
> +	 */
> +	if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
> +		goto out;
> +	pudp = pud_offset(p4dp, addr);
> +	pud = pudp_get(pudp);
> +	if (!pud_present(pud))
> +		goto out;
> +	if (pud_leaf(pud)) {
> +		ret = split_pud(pudp, pud);
> +		if (ret)
> +			goto out;
> +	}
> +
> +	/*
> +	 * CONTPMD: If addr is CONTPMD aligned then addr already describes a
> +	 * leaf boundary. If not present then there is nothing to split.
> +	 * Otherwise, if we have a contpmd leaf, split to pmd.
> +	 */
> +	if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
> +		goto out;
> +	pmdp = pmd_offset(pudp, addr);
> +	pmd = pmdp_get(pmdp);
> +	if (!pmd_present(pmd))
> +		goto out;
> +	if (pmd_leaf(pmd)) {
> +		if (pmd_cont(pmd))
> +			split_contpmd(pmdp);
> +		/*
> +		 * PMD: If addr is PMD aligned then addr already describes a
> +		 * leaf boundary. Otherwise, split to contpte.
> +		 */
> +		if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
> +			goto out;
> +		ret = split_pmd(pmdp, pmd);
> +		if (ret)
> +			goto out;
> +	}
> +
> +	/*
> +	 * CONTPTE: If addr is CONTPTE aligned then addr already describes a
> +	 * leaf boundary. If not present then there is nothing to split.
> +	 * Otherwise, if we have a contpte leaf, split to pte.
> +	 */
> +	if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
> +		goto out;
> +	ptep = pte_offset_kernel(pmdp, addr);
> +	pte = __ptep_get(ptep);
> +	if (!pte_present(pte))
> +		goto out;
> +	if (pte_cont(pte))
> +		split_contpte(ptep);
> +
> +out:
> +	arch_leave_lazy_mmu_mode();
> +	return ret;
> +}
> ---8<---
>
> Thanks,
> Ryan
>



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 2/4] arm64: mm: make __create_pgd_mapping() and helpers non-void
  2025-06-16 10:04   ` Ryan Roberts
@ 2025-06-17 21:11     ` Yang Shi
  2025-06-23 13:05       ` Ryan Roberts
  0 siblings, 1 reply; 34+ messages in thread
From: Yang Shi @ 2025-06-17 21:11 UTC (permalink / raw)
  To: Ryan Roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel, Chaitanya S Prakash



On 6/16/25 3:04 AM, Ryan Roberts wrote:
> On 31/05/2025 03:41, Yang Shi wrote:
>> The later patch will enhance __create_pgd_mapping() and related helpers
>> to split kernel linear mapping, it requires have return value.  So make
>> __create_pgd_mapping() and helpers non-void functions.
>>
>> And move the BUG_ON() out of page table alloc helper since failing
>> splitting kernel linear mapping is not fatal and can be handled by the
>> callers in the later patch.  Have BUG_ON() after
>> __create_pgd_mapping_locked() returns to keep the current callers behavior
>> intact.
>>
>> Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> With the nits below taken care of:
>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>

Thank you. Although this patch may be dropped from the new spin of this 
series per our discussion, it is still needed to fix the memory hotplug bug.

>
>> ---
>>   arch/arm64/kernel/cpufeature.c |  10 ++-
>>   arch/arm64/mm/mmu.c            | 130 +++++++++++++++++++++++----------
>>   2 files changed, 99 insertions(+), 41 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 25e1fbfab6a3..e879bfcf853b 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -1933,9 +1933,9 @@ static bool has_pmuv3(const struct arm64_cpu_capabilities *entry, int scope)
>>   #define KPTI_NG_TEMP_VA		(-(1UL << PMD_SHIFT))
>>   
>>   extern
>> -void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>> -			     phys_addr_t size, pgprot_t prot,
>> -			     phys_addr_t (*pgtable_alloc)(int), int flags);
>> +int create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>> +			    phys_addr_t size, pgprot_t prot,
>> +			    phys_addr_t (*pgtable_alloc)(int), int flags);
>>   
>>   static phys_addr_t __initdata kpti_ng_temp_alloc;
>>   
>> @@ -1957,6 +1957,7 @@ static int __init __kpti_install_ng_mappings(void *__unused)
>>   	u64 kpti_ng_temp_pgd_pa = 0;
>>   	pgd_t *kpti_ng_temp_pgd;
>>   	u64 alloc = 0;
>> +	int err;
>>   
>>   	if (levels == 5 && !pgtable_l5_enabled())
>>   		levels = 4;
>> @@ -1986,9 +1987,10 @@ static int __init __kpti_install_ng_mappings(void *__unused)
>>   		// covers the PTE[] page itself, the remaining entries are free
>>   		// to be used as a ad-hoc fixmap.
>>   		//
>> -		create_kpti_ng_temp_pgd(kpti_ng_temp_pgd, __pa(alloc),
>> +		err = create_kpti_ng_temp_pgd(kpti_ng_temp_pgd, __pa(alloc),
>>   					KPTI_NG_TEMP_VA, PAGE_SIZE, PAGE_KERNEL,
>>   					kpti_ng_pgd_alloc, 0);
>> +		BUG_ON(err);
>>   	}
>>   
>>   	cpu_install_idmap();
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index ea6695d53fb9..775c0536b194 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -189,15 +189,16 @@ static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
>>   	} while (ptep++, addr += PAGE_SIZE, addr != end);
>>   }
>>   
>> -static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>> -				unsigned long end, phys_addr_t phys,
>> -				pgprot_t prot,
>> -				phys_addr_t (*pgtable_alloc)(int),
>> -				int flags)
>> +static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>> +			       unsigned long end, phys_addr_t phys,
>> +			       pgprot_t prot,
>> +			       phys_addr_t (*pgtable_alloc)(int),
>> +			       int flags)
>>   {
>>   	unsigned long next;
>>   	pmd_t pmd = READ_ONCE(*pmdp);
>>   	pte_t *ptep;
>> +	int ret = 0;
>>   
>>   	BUG_ON(pmd_sect(pmd));
>>   	if (pmd_none(pmd)) {
>> @@ -208,6 +209,10 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>>   			pmdval |= PMD_TABLE_PXN;
>>   		BUG_ON(!pgtable_alloc);
>>   		pte_phys = pgtable_alloc(PAGE_SHIFT);
>> +		if (pte_phys == -1) {
> It would be better to have a macro definition for the invalid PA case instead of
> using the magic -1 everywhere. I think it can be local to this file. Perhaps:
>
> #define INVAL_PHYS_ADDR -1

OK

>
>> +			ret = -ENOMEM;
>> +			goto out;
>> +		}
>>   		ptep = pte_set_fixmap(pte_phys);
>>   		init_clear_pgtable(ptep);
>>   		ptep += pte_index(addr);
>> @@ -239,13 +244,17 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>>   	 * walker.
>>   	 */
>>   	pte_clear_fixmap();
>> +
>> +out:
>> +	return ret;
>>   }
>>   
>> -static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
>> -		     phys_addr_t phys, pgprot_t prot,
>> -		     phys_addr_t (*pgtable_alloc)(int), int flags)
>> +static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
>> +		    phys_addr_t phys, pgprot_t prot,
>> +		    phys_addr_t (*pgtable_alloc)(int), int flags)
>>   {
>>   	unsigned long next;
>> +	int ret = 0;
>>   
>>   	do {
>>   		pmd_t old_pmd = READ_ONCE(*pmdp);
>> @@ -264,22 +273,27 @@ static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
>>   			BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
>>   						      READ_ONCE(pmd_val(*pmdp))));
>>   		} else {
>> -			alloc_init_cont_pte(pmdp, addr, next, phys, prot,
>> +			ret = alloc_init_cont_pte(pmdp, addr, next, phys, prot,
>>   					    pgtable_alloc, flags);
>> +			if (ret)
>> +				break;
>>   
>>   			BUG_ON(pmd_val(old_pmd) != 0 &&
>>   			       pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
>>   		}
>>   		phys += next - addr;
>>   	} while (pmdp++, addr = next, addr != end);
>> +
>> +	return ret;
>>   }
>>   
>> -static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>> -				unsigned long end, phys_addr_t phys,
>> -				pgprot_t prot,
>> -				phys_addr_t (*pgtable_alloc)(int), int flags)
>> +static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>> +			       unsigned long end, phys_addr_t phys,
>> +			       pgprot_t prot,
>> +			       phys_addr_t (*pgtable_alloc)(int), int flags)
>>   {
>>   	unsigned long next;
>> +	int ret = 0;
>>   	pud_t pud = READ_ONCE(*pudp);
>>   	pmd_t *pmdp;
>>   
>> @@ -295,6 +309,10 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>>   			pudval |= PUD_TABLE_PXN;
>>   		BUG_ON(!pgtable_alloc);
>>   		pmd_phys = pgtable_alloc(PMD_SHIFT);
>> +		if (pmd_phys == -1) {
>> +			ret = -ENOMEM;
>> +			goto out;
>> +		}
>>   		pmdp = pmd_set_fixmap(pmd_phys);
>>   		init_clear_pgtable(pmdp);
>>   		pmdp += pmd_index(addr);
>> @@ -314,21 +332,27 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>>   		    (flags & NO_CONT_MAPPINGS) == 0)
>>   			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>>   
>> -		init_pmd(pmdp, addr, next, phys, __prot, pgtable_alloc, flags);
>> +		ret = init_pmd(pmdp, addr, next, phys, __prot, pgtable_alloc, flags);
>> +		if (ret)
>> +			break;
>>   
>>   		pmdp += pmd_index(next) - pmd_index(addr);
>>   		phys += next - addr;
>>   	} while (addr = next, addr != end);
>>   
>>   	pmd_clear_fixmap();
>> +
>> +out:
>> +	return ret;
>>   }
>>   
>> -static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>> -			   phys_addr_t phys, pgprot_t prot,
>> -			   phys_addr_t (*pgtable_alloc)(int),
>> -			   int flags)
>> +static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>> +			  phys_addr_t phys, pgprot_t prot,
>> +			  phys_addr_t (*pgtable_alloc)(int),
>> +			  int flags)
>>   {
>>   	unsigned long next;
>> +	int ret = 0;
>>   	p4d_t p4d = READ_ONCE(*p4dp);
>>   	pud_t *pudp;
>>   
>> @@ -340,6 +364,10 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>>   			p4dval |= P4D_TABLE_PXN;
>>   		BUG_ON(!pgtable_alloc);
>>   		pud_phys = pgtable_alloc(PUD_SHIFT);
>> +		if (pud_phys == -1) {
>> +			ret = -ENOMEM;
>> +			goto out;
>> +		}
>>   		pudp = pud_set_fixmap(pud_phys);
>>   		init_clear_pgtable(pudp);
>>   		pudp += pud_index(addr);
>> @@ -369,8 +397,10 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>>   			BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
>>   						      READ_ONCE(pud_val(*pudp))));
>>   		} else {
>> -			alloc_init_cont_pmd(pudp, addr, next, phys, prot,
>> +			ret = alloc_init_cont_pmd(pudp, addr, next, phys, prot,
>>   					    pgtable_alloc, flags);
>> +			if (ret)
>> +				break;
>>   
>>   			BUG_ON(pud_val(old_pud) != 0 &&
>>   			       pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
>> @@ -379,14 +409,18 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>>   	} while (pudp++, addr = next, addr != end);
>>   
>>   	pud_clear_fixmap();
>> +
>> +out:
>> +	return ret;
>>   }
>>   
>> -static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>> -			   phys_addr_t phys, pgprot_t prot,
>> -			   phys_addr_t (*pgtable_alloc)(int),
>> -			   int flags)
>> +static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>> +			  phys_addr_t phys, pgprot_t prot,
>> +			  phys_addr_t (*pgtable_alloc)(int),
>> +			  int flags)
>>   {
>>   	unsigned long next;
>> +	int ret = 0;
>>   	pgd_t pgd = READ_ONCE(*pgdp);
>>   	p4d_t *p4dp;
>>   
>> @@ -398,6 +432,10 @@ static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>>   			pgdval |= PGD_TABLE_PXN;
>>   		BUG_ON(!pgtable_alloc);
>>   		p4d_phys = pgtable_alloc(P4D_SHIFT);
>> +		if (p4d_phys == -1) {
>> +			ret = -ENOMEM;
>> +			goto out;
>> +		}
>>   		p4dp = p4d_set_fixmap(p4d_phys);
>>   		init_clear_pgtable(p4dp);
>>   		p4dp += p4d_index(addr);
>> @@ -412,8 +450,10 @@ static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>>   
>>   		next = p4d_addr_end(addr, end);
>>   
>> -		alloc_init_pud(p4dp, addr, next, phys, prot,
>> +		ret = alloc_init_pud(p4dp, addr, next, phys, prot,
>>   			       pgtable_alloc, flags);
>> +		if (ret)
>> +			break;
>>   
>>   		BUG_ON(p4d_val(old_p4d) != 0 &&
>>   		       p4d_val(old_p4d) != READ_ONCE(p4d_val(*p4dp)));
>> @@ -422,23 +462,27 @@ static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>>   	} while (p4dp++, addr = next, addr != end);
>>   
>>   	p4d_clear_fixmap();
>> +
>> +out:
>> +	return ret;
>>   }
>>   
>> -static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
>> -					unsigned long virt, phys_addr_t size,
>> -					pgprot_t prot,
>> -					phys_addr_t (*pgtable_alloc)(int),
>> -					int flags)
>> +static int __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
>> +				       unsigned long virt, phys_addr_t size,
>> +				       pgprot_t prot,
>> +				       phys_addr_t (*pgtable_alloc)(int),
>> +				       int flags)
>>   {
>>   	unsigned long addr, end, next;
>>   	pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
>> +	int ret = 0;
>>   
>>   	/*
>>   	 * If the virtual and physical address don't have the same offset
>>   	 * within a page, we cannot map the region as the caller expects.
>>   	 */
>>   	if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
>> -		return;
>> +		return -EINVAL;
>>   
>>   	phys &= PAGE_MASK;
>>   	addr = virt & PAGE_MASK;
>> @@ -446,10 +490,14 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
>>   
>>   	do {
>>   		next = pgd_addr_end(addr, end);
>> -		alloc_init_p4d(pgdp, addr, next, phys, prot, pgtable_alloc,
>> +		ret = alloc_init_p4d(pgdp, addr, next, phys, prot, pgtable_alloc,
>>   			       flags);
>> +		if (ret)
>> +			break;
>>   		phys += next - addr;
>>   	} while (pgdp++, addr = next, addr != end);
>> +
>> +	return ret;
>>   }
>>   
>>   static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
>> @@ -458,17 +506,20 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
>>   				 phys_addr_t (*pgtable_alloc)(int),
>>   				 int flags)
>>   {
>> +	int err;
>> +
>>   	mutex_lock(&fixmap_lock);
>> -	__create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
>> -				    pgtable_alloc, flags);
>> +	err = __create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
>> +					  pgtable_alloc, flags);
>> +	BUG_ON(err);
>>   	mutex_unlock(&fixmap_lock);
>>   }
>>   
>>   #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>>   extern __alias(__create_pgd_mapping_locked)
>> -void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>> -			     phys_addr_t size, pgprot_t prot,
>> -			     phys_addr_t (*pgtable_alloc)(int), int flags);
>> +int create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>> +			    phys_addr_t size, pgprot_t prot,
>> +			    phys_addr_t (*pgtable_alloc)(int), int flags);
>>   #endif
> Personally I would have converted this from an alias to a wrapper:
>
> void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
> 			     phys_addr_t size, pgprot_t prot,
> 			     phys_addr_t (*pgtable_alloc)(int), int flags)
> {
> 	int ret;
>
> 	ret = __create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
> 					  pgtable_alloc, flags);
> 	BUG_ON(ret);
> }
>
> Then there is no churn in cpufeature.c. But it's not a strong opinion. If you
> prefer it like this then I'm ok with it (We'll need to see what Catalin and Will
> prefer ultimately anyway).

I don't have a strong preference either.

Thanks,
Yang

>
> Thanks,
> Ryan
>
>>   
>>   static phys_addr_t __pgd_pgtable_alloc(int shift)
>> @@ -476,13 +527,17 @@ static phys_addr_t __pgd_pgtable_alloc(int shift)
>>   	/* Page is zeroed by init_clear_pgtable() so don't duplicate effort. */
>>   	void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL & ~__GFP_ZERO);
>>   
>> -	BUG_ON(!ptr);
>> +	if (!ptr)
>> +		return -1;
>> +
>>   	return __pa(ptr);
>>   }
>>   
>>   static phys_addr_t pgd_pgtable_alloc(int shift)
>>   {
>>   	phys_addr_t pa = __pgd_pgtable_alloc(shift);
>> +	if (pa == -1)
>> +		goto out;
>>   	struct ptdesc *ptdesc = page_ptdesc(phys_to_page(pa));
>>   
>>   	/*
>> @@ -498,6 +553,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
>>   	else if (shift == PMD_SHIFT)
>>   		BUG_ON(!pagetable_pmd_ctor(ptdesc));
>>   
>> +out:
>>   	return pa;
>>   }
>>   



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
  2025-05-31  2:41 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
@ 2025-06-23 12:26   ` Ryan Roberts
  2025-06-23 20:56     ` Yang Shi
  0 siblings, 1 reply; 34+ messages in thread
From: Ryan Roberts @ 2025-06-23 12:26 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, Miko.Lenczewski, dev.jain, scott,
	cl
  Cc: linux-arm-kernel, linux-kernel

On 31/05/2025 03:41, Yang Shi wrote:
> The kernel linear mapping is painted in very early stage of system boot.
> The cpufeature has not been finalized yet at this point.  So the linear
> mapping is determined by the capability of boot CPU.  If the boot CPU
> supports BBML2, large block mapping will be used for linear mapping.
> 
> But the secondary CPUs may not support BBML2, so repaint the linear mapping
> if large block mapping is used and the secondary CPUs don't support BBML2
> once cpufeature is finalized on all CPUs.
> 
> If the boot CPU doesn't support BBML2 or the secondary CPUs have the
> same BBML2 capability with the boot CPU, repainting the linear mapping
> is not needed.
> 
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
>  arch/arm64/include/asm/mmu.h   |   3 +
>  arch/arm64/kernel/cpufeature.c |  16 +++++
>  arch/arm64/mm/mmu.c            | 108 ++++++++++++++++++++++++++++++++-
>  arch/arm64/mm/proc.S           |  41 +++++++++++++
>  4 files changed, 166 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 2693d63bf837..ad38135d1aa1 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -56,6 +56,8 @@ typedef struct {
>   */
>  #define ASID(mm)	(atomic64_read(&(mm)->context.id) & 0xffff)
>  
> +extern bool block_mapping;

Is this really useful to cache? Why not just call force_pte_mapping() instead?
It's the inverse. It's also not a great name for a global variable.

But perhaps it is better to cache a boolean that also reflects the bbml2 status:

bool linear_map_requires_bbml2;

Then create_idmap() will only bother to add to the idmap if there is a chance
you will need to repaint. And repaint_linear_mappings() won't need to explicitly
check !rodata_full.

I think this can be __initdata too?

> +
>  static inline bool arm64_kernel_unmapped_at_el0(void)
>  {
>  	return alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0);
> @@ -72,6 +74,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>  extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>  extern void mark_linear_text_alias_ro(void);
>  extern int split_linear_mapping(unsigned long start, unsigned long end);
> +extern int __repaint_linear_mappings(void *__unused);

nit: "repaint_linear_mappings" is a bit vague. How about
linear_map_split_to_ptes() or similar?

>  
>  /*
>   * This check is triggered during the early boot before the cpufeature
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 5fc2a4a804de..5151c101fbaf 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -85,6 +85,7 @@
>  #include <asm/insn.h>
>  #include <asm/kvm_host.h>
>  #include <asm/mmu_context.h>
> +#include <asm/mmu.h>
>  #include <asm/mte.h>
>  #include <asm/hypervisor.h>
>  #include <asm/processor.h>
> @@ -2005,6 +2006,20 @@ static int __init __kpti_install_ng_mappings(void *__unused)
>  	return 0;
>  }
>  
> +static void __init repaint_linear_mappings(void)
> +{
> +	if (!block_mapping)
> +		return;
> +
> +	if (!rodata_full)
> +		return;
> +
> +	if (system_supports_bbml2_noabort())
> +		return;
> +
> +	stop_machine(__repaint_linear_mappings, NULL, cpu_online_mask);

With the above suggestions, I think this can be simplified to something like:

static void __init linear_map_maybe_split_to_ptes(void)
{
	if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort())
		stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);
}

> +}
> +
>  static void __init kpti_install_ng_mappings(void)
>  {
>  	/* Check whether KPTI is going to be used */
> @@ -3868,6 +3883,7 @@ void __init setup_system_features(void)
>  {
>  	setup_system_capabilities();
>  
> +	repaint_linear_mappings();
>  	kpti_install_ng_mappings();
>  
>  	sve_setup();
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 4c5d3aa35d62..3922af89abbb 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -209,6 +209,8 @@ static void split_pmd(pmd_t pmd, phys_addr_t pte_phys, int flags)
>  	/* It must be naturally aligned if PMD is leaf */
>  	if ((flags & NO_CONT_MAPPINGS) == 0)
>  		prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +	else
> +		prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
>  
>  	for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
>  		__set_pte_nosync(ptep, pfn_pte(pfn, prot));
> @@ -230,6 +232,8 @@ static void split_pud(pud_t pud, phys_addr_t pmd_phys, int flags)
>  	/* It must be naturally aligned if PUD is leaf */
>  	if ((flags & NO_CONT_MAPPINGS) == 0)
>  		prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +	else
> +		prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
>  
>  	for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
>  		__set_pmd_nosync(pmdp, pfn_pmd(pfn, prot));
> @@ -833,6 +837,86 @@ void __init mark_linear_text_alias_ro(void)
>  			    PAGE_KERNEL_RO);
>  }
>  
> +static phys_addr_t repaint_pgtable_alloc(int shift)
> +{
> +	void *ptr;
> +
> +	ptr = (void *)__get_free_page(GFP_ATOMIC);
> +	if (!ptr)
> +		return -1;
> +
> +	return __pa(ptr);
> +}
> +
> +extern u32 repaint_done;
> +
> +int __init __repaint_linear_mappings(void *__unused)
> +{
> +	typedef void (repaint_wait_fn)(void);
> +	extern repaint_wait_fn bbml2_wait_for_repainting;
> +	repaint_wait_fn *wait_fn;
> +
> +	phys_addr_t kernel_start = __pa_symbol(_stext);
> +	phys_addr_t kernel_end = __pa_symbol(__init_begin);
> +	phys_addr_t start, end;
> +	unsigned long vstart, vend;
> +	u64 i;
> +	int ret;
> +	int flags = NO_EXEC_MAPPINGS | NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS |
> +		    SPLIT_MAPPINGS;
> +	int cpu = smp_processor_id();

nit: most of these variables are only needed by cpu 0 so you could defer their
initialization until inside the if condition below.

> +
> +	wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
> +
> +	/*
> +	 * Repainting just can be run on CPU 0 because we just can be sure
> +	 * CPU 0 supports BBML2.
> +	 */
> +	if (!cpu) {
> +		/*
> +		 * Wait for all secondary CPUs get prepared for repainting
> +		 * the linear mapping.
> +		 */
> +wait_for_secondary:
> +		if (READ_ONCE(repaint_done) != num_online_cpus())
> +			goto wait_for_secondary;

This feels suspect when comparing against the assembly code that does a similar
sync operation in idmap_kpti_install_ng_mappings:

	/* We're the boot CPU. Wait for the others to catch up */
	sevl
1:	wfe
	ldaxr	w17, [flag_ptr]
	eor	w17, w17, num_cpus
	cbnz	w17, 1b

The acquire semantics of the ldaxr are needed here to ensure that
program-order-later memory accesses don't get reordered before it.
READ_ONCE() is relaxed, so it permits that reordering.

The wfe means the CPU is not just furiously spinning, but actually waiting
for a secondary CPU to exclusively write to the variable at flag_ptr.

I think you can drop the whole loop and just call:

	smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());

> +
> +		memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
> +		/* Split the whole linear mapping */
> +		for_each_mem_range(i, &start, &end) {
> +			if (start >= end)
> +				return -EINVAL;
> +
> +			vstart = __phys_to_virt(start);
> +			vend = __phys_to_virt(end);
> +			ret = __create_pgd_mapping_locked(init_mm.pgd, start,
> +					vstart, (end - start), __pgprot(0),
> +					repaint_pgtable_alloc, flags);
> +			if (ret)
> +				panic("Failed to split linear mappings\n");
> +
> +			flush_tlb_kernel_range(vstart, vend);
> +		}
> +		memblock_clear_nomap(kernel_start, kernel_end - kernel_start);

You're relying on the memblock API here. Is that valid, given that we are
quite late into boot at this point and have already transferred control to
the buddy allocator?

I was thinking you would just need to traverse the linear map region of the
kernel page table, splitting each large leaf you find into the next size
down?
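
Something like this is what I had in mind (completely untested sketch;
split_pud()/split_contpmd()/split_pmd() refer to the helpers we discussed
for patch #3, and present/hole checks, pte-level cont handling and error
propagation are all elided; the end bound is just a stand-in for however we
delimit the linear map):

static int __init linear_map_split_to_ptes(void *__unused)
{
	unsigned long addr = PAGE_OFFSET;
	unsigned long end = (unsigned long)__va(memblock_end_of_DRAM());

	for (; addr < end; addr += PMD_SIZE) {
		pgd_t *pgdp = pgd_offset_k(addr);
		p4d_t *p4dp = p4d_offset(pgdp, addr);
		pud_t *pudp = pud_offset(p4dp, addr);
		pmd_t *pmdp;

		/* Reduce any 1G leaf to PMDs first... */
		if (pud_leaf(pudp_get(pudp)))
			split_pud(pudp, pudp_get(pudp));

		/* ...then reduce any (cont)PMD leaf to PTEs. */
		pmdp = pmd_offset(pudp, addr);
		if (pmd_leaf(pmdp_get(pmdp))) {
			if (pmd_cont(pmdp_get(pmdp)))
				split_contpmd(pmdp);
			split_pmd(pmdp, pmdp_get(pmdp));
		}
	}

	flush_tlb_kernel_range(PAGE_OFFSET, end);
	return 0;
}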

> +
> +		WRITE_ONCE(repaint_done, 0);

I think this depends on the dsb(ish) in flush_tlb_kernel_range() to ensure it is
not re-ordered before any pgtable split operations? Might be worth a comment.


> +	} else {
> +		/*
> +		 * The secondary CPUs can't run in the same address space
> +		 * with CPU 0 because accessing the linear mapping address
> +		 * when CPU 0 is repainting it is not safe.
> +		 *
> +		 * Let the secondary CPUs run busy loop in idmap address
> +		 * space when repainting is ongoing.
> +		 */
> +		cpu_install_idmap();
> +		wait_fn();
> +		cpu_uninstall_idmap();
> +	}
> +
> +	return 0;
> +}
> +
>  #ifdef CONFIG_KFENCE
>  
>  bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
> @@ -887,6 +971,8 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>  
>  #endif /* CONFIG_KFENCE */
>  
> +bool block_mapping;
> +
>  static inline bool force_pte_mapping(void)
>  {
>  	/*
> @@ -915,6 +1001,8 @@ static void __init map_mem(pgd_t *pgdp)
>  	int flags = NO_EXEC_MAPPINGS;
>  	u64 i;
>  
> +	block_mapping = true;
> +
>  	/*
>  	 * Setting hierarchical PXNTable attributes on table entries covering
>  	 * the linear region is only possible if it is guaranteed that no table
> @@ -930,8 +1018,10 @@ static void __init map_mem(pgd_t *pgdp)
>  
>  	early_kfence_pool = arm64_kfence_alloc_pool();
>  
> -	if (force_pte_mapping())
> +	if (force_pte_mapping()) {
> +		block_mapping = false;
>  		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> +	}
>  
>  	/*
>  	 * Take care not to create a writable alias for the
> @@ -1063,7 +1153,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
>  		    int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
>  
>  static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
> -	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
> +	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
> +	  bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>  
>  static void __init create_idmap(void)
>  {
> @@ -1088,6 +1179,19 @@ static void __init create_idmap(void)
>  			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>  			       __phys_to_virt(ptep) - ptep);
>  	}
> +
> +	/*
> +	 * Setup idmap mapping for repaint_done flag.  It will be used if
> +	 * repainting the linear mapping is needed later.
> +	 */
> +	if (block_mapping) {
> +		u64 pa = __pa_symbol(&repaint_done);
> +		ptep = __pa_symbol(bbml2_ptes);
> +
> +		__pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
> +			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
> +			       __phys_to_virt(ptep) - ptep);
> +	}
>  }
>  
>  void __init paging_init(void)
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index fb30c8804f87..c40e6126c093 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -440,6 +440,47 @@ SYM_FUNC_END(idmap_kpti_install_ng_mappings)
>  	.popsection
>  #endif
>  
> +/*
> + * Wait for repainting is done. Run on secondary CPUs
> + * only.
> + */
> +	.pushsection	".data", "aw", %progbits
> +SYM_DATA(repaint_done, .long 1)
> +	.popsection
> +
> +	.pushsection ".idmap.text", "a"
> +SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
> +	swapper_ttb	.req	x0
> +	flag_ptr	.req	x1
> +
> +	mrs	swapper_ttb, ttbr1_el1
> +	adr_l	flag_ptr, repaint_done
> +
> +	/* Uninstall swapper before surgery begins */
> +	__idmap_cpu_set_reserved_ttbr1 x16, x17
> +
> +	/* Increment the flag to let the boot CPU we're ready */
> +1:	ldxr	w16, [flag_ptr]
> +	add	w16, w16, #1
> +	stxr	w17, w16, [flag_ptr]
> +	cbnz	w17, 1b
> +
> +	/* Wait for the boot CPU to finish repainting */
> +	sevl
> +1:	wfe
> +	ldxr	w16, [flag_ptr]
> +	cbnz	w16, 1b
> +
> +	/* All done, act like nothing happened */
> +	msr	ttbr1_el1, swapper_ttb
> +	isb
> +	ret
> +
> +	.unreq	swapper_ttb
> +	.unreq	flag_ptr
> +SYM_FUNC_END(bbml2_wait_for_repainting)
> +	.popsection

This is identical to __idmap_kpti_secondary. Can't we just refactor it into a
common function? I think you can even reuse the same refcount variable (i.e. no
need for both repaint_done and __idmap_kpti_flag).

Thanks,
Ryan


> +
>  /*
>   *	__cpu_setup
>   *



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 2/4] arm64: mm: make __create_pgd_mapping() and helpers non-void
  2025-06-17 21:11     ` Yang Shi
@ 2025-06-23 13:05       ` Ryan Roberts
  0 siblings, 0 replies; 34+ messages in thread
From: Ryan Roberts @ 2025-06-23 13:05 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, Miko.Lenczewski, dev.jain, scott,
	cl
  Cc: linux-arm-kernel, linux-kernel, Chaitanya S Prakash

On 17/06/2025 22:11, Yang Shi wrote:
> 
> 
> On 6/16/25 3:04 AM, Ryan Roberts wrote:
>> On 31/05/2025 03:41, Yang Shi wrote:
>>> The later patch will enhance __create_pgd_mapping() and related helpers
>>> to split kernel linear mapping, it requires have return value.  So make
>>> __create_pgd_mapping() and helpers non-void functions.
>>>
>>> And move the BUG_ON() out of page table alloc helper since failing
>>> splitting kernel linear mapping is not fatal and can be handled by the
>>> callers in the later patch.  Have BUG_ON() after
>>> __create_pgd_mapping_locked() returns to keep the current callers behavior
>>> intact.
>>>
>>> Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
>>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> With the nits below taken care of:
>>
>> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
> 
> Thank you. Although this patch may be dropped in the new spin per our
> discussion, this is still needed to fix the memory hotplug bug.

Yep understood. Chaitanya (CCed) is looking into that so hopefully she can reuse
this patch.

Thanks,
Ryan

> 
>>
>>> ---
>>>   arch/arm64/kernel/cpufeature.c |  10 ++-
>>>   arch/arm64/mm/mmu.c            | 130 +++++++++++++++++++++++----------
>>>   2 files changed, 99 insertions(+), 41 deletions(-)
>>>
>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>>> index 25e1fbfab6a3..e879bfcf853b 100644
>>> --- a/arch/arm64/kernel/cpufeature.c
>>> +++ b/arch/arm64/kernel/cpufeature.c
>>> @@ -1933,9 +1933,9 @@ static bool has_pmuv3(const struct
>>> arm64_cpu_capabilities *entry, int scope)
>>>   #define KPTI_NG_TEMP_VA        (-(1UL << PMD_SHIFT))
>>>     extern
>>> -void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long
>>> virt,
>>> -                 phys_addr_t size, pgprot_t prot,
>>> -                 phys_addr_t (*pgtable_alloc)(int), int flags);
>>> +int create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>>> +                phys_addr_t size, pgprot_t prot,
>>> +                phys_addr_t (*pgtable_alloc)(int), int flags);
>>>     static phys_addr_t __initdata kpti_ng_temp_alloc;
>>>   @@ -1957,6 +1957,7 @@ static int __init __kpti_install_ng_mappings(void
>>> *__unused)
>>>       u64 kpti_ng_temp_pgd_pa = 0;
>>>       pgd_t *kpti_ng_temp_pgd;
>>>       u64 alloc = 0;
>>> +    int err;
>>>         if (levels == 5 && !pgtable_l5_enabled())
>>>           levels = 4;
>>> @@ -1986,9 +1987,10 @@ static int __init __kpti_install_ng_mappings(void
>>> *__unused)
>>>           // covers the PTE[] page itself, the remaining entries are free
>>>           // to be used as a ad-hoc fixmap.
>>>           //
>>> -        create_kpti_ng_temp_pgd(kpti_ng_temp_pgd, __pa(alloc),
>>> +        err = create_kpti_ng_temp_pgd(kpti_ng_temp_pgd, __pa(alloc),
>>>                       KPTI_NG_TEMP_VA, PAGE_SIZE, PAGE_KERNEL,
>>>                       kpti_ng_pgd_alloc, 0);
>>> +        BUG_ON(err);
>>>       }
>>>         cpu_install_idmap();
>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>> index ea6695d53fb9..775c0536b194 100644
>>> --- a/arch/arm64/mm/mmu.c
>>> +++ b/arch/arm64/mm/mmu.c
>>> @@ -189,15 +189,16 @@ static void init_pte(pte_t *ptep, unsigned long addr,
>>> unsigned long end,
>>>       } while (ptep++, addr += PAGE_SIZE, addr != end);
>>>   }
>>>   -static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>>> -                unsigned long end, phys_addr_t phys,
>>> -                pgprot_t prot,
>>> -                phys_addr_t (*pgtable_alloc)(int),
>>> -                int flags)
>>> +static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>>> +                   unsigned long end, phys_addr_t phys,
>>> +                   pgprot_t prot,
>>> +                   phys_addr_t (*pgtable_alloc)(int),
>>> +                   int flags)
>>>   {
>>>       unsigned long next;
>>>       pmd_t pmd = READ_ONCE(*pmdp);
>>>       pte_t *ptep;
>>> +    int ret = 0;
>>>         BUG_ON(pmd_sect(pmd));
>>>       if (pmd_none(pmd)) {
>>> @@ -208,6 +209,10 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned
>>> long addr,
>>>               pmdval |= PMD_TABLE_PXN;
>>>           BUG_ON(!pgtable_alloc);
>>>           pte_phys = pgtable_alloc(PAGE_SHIFT);
>>> +        if (pte_phys == -1) {
>> It would be better to have a macro definition for the invalid PA case instead of
>> using the magic -1 everywhere. I think it can be local to this file. Perhaps:
>>
>> #define INVAL_PHYS_ADDR -1
> 
> OK
> 
>>
>>> +            ret = -ENOMEM;
>>> +            goto out;
>>> +        }
>>>           ptep = pte_set_fixmap(pte_phys);
>>>           init_clear_pgtable(ptep);
>>>           ptep += pte_index(addr);
>>> @@ -239,13 +244,17 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned
>>> long addr,
>>>        * walker.
>>>        */
>>>       pte_clear_fixmap();
>>> +
>>> +out:
>>> +    return ret;
>>>   }
>>>   -static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
>>> -             phys_addr_t phys, pgprot_t prot,
>>> -             phys_addr_t (*pgtable_alloc)(int), int flags)
>>> +static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
>>> +            phys_addr_t phys, pgprot_t prot,
>>> +            phys_addr_t (*pgtable_alloc)(int), int flags)
>>>   {
>>>       unsigned long next;
>>> +    int ret = 0;
>>>         do {
>>>           pmd_t old_pmd = READ_ONCE(*pmdp);
>>> @@ -264,22 +273,27 @@ static void init_pmd(pmd_t *pmdp, unsigned long addr,
>>> unsigned long end,
>>>               BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
>>>                                 READ_ONCE(pmd_val(*pmdp))));
>>>           } else {
>>> -            alloc_init_cont_pte(pmdp, addr, next, phys, prot,
>>> +            ret = alloc_init_cont_pte(pmdp, addr, next, phys, prot,
>>>                           pgtable_alloc, flags);
>>> +            if (ret)
>>> +                break;
>>>                 BUG_ON(pmd_val(old_pmd) != 0 &&
>>>                      pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
>>>           }
>>>           phys += next - addr;
>>>       } while (pmdp++, addr = next, addr != end);
>>> +
>>> +    return ret;
>>>   }
>>>   -static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>>> -                unsigned long end, phys_addr_t phys,
>>> -                pgprot_t prot,
>>> -                phys_addr_t (*pgtable_alloc)(int), int flags)
>>> +static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>>> +                   unsigned long end, phys_addr_t phys,
>>> +                   pgprot_t prot,
>>> +                   phys_addr_t (*pgtable_alloc)(int), int flags)
>>>   {
>>>       unsigned long next;
>>> +    int ret = 0;
>>>       pud_t pud = READ_ONCE(*pudp);
>>>       pmd_t *pmdp;
>>>   @@ -295,6 +309,10 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned
>>> long addr,
>>>               pudval |= PUD_TABLE_PXN;
>>>           BUG_ON(!pgtable_alloc);
>>>           pmd_phys = pgtable_alloc(PMD_SHIFT);
>>> +        if (pmd_phys == -1) {
>>> +            ret = -ENOMEM;
>>> +            goto out;
>>> +        }
>>>           pmdp = pmd_set_fixmap(pmd_phys);
>>>           init_clear_pgtable(pmdp);
>>>           pmdp += pmd_index(addr);
>>> @@ -314,21 +332,27 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned
>>> long addr,
>>>               (flags & NO_CONT_MAPPINGS) == 0)
>>>               __prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>>>   -        init_pmd(pmdp, addr, next, phys, __prot, pgtable_alloc, flags);
>>> +        ret = init_pmd(pmdp, addr, next, phys, __prot, pgtable_alloc, flags);
>>> +        if (ret)
>>> +            break;
>>>             pmdp += pmd_index(next) - pmd_index(addr);
>>>           phys += next - addr;
>>>       } while (addr = next, addr != end);
>>>         pmd_clear_fixmap();
>>> +
>>> +out:
>>> +    return ret;
>>>   }
>>>   -static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long
>>> end,
>>> -               phys_addr_t phys, pgprot_t prot,
>>> -               phys_addr_t (*pgtable_alloc)(int),
>>> -               int flags)
>>> +static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>>> +              phys_addr_t phys, pgprot_t prot,
>>> +              phys_addr_t (*pgtable_alloc)(int),
>>> +              int flags)
>>>   {
>>>       unsigned long next;
>>> +    int ret = 0;
>>>       p4d_t p4d = READ_ONCE(*p4dp);
>>>       pud_t *pudp;
>>>   @@ -340,6 +364,10 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long
>>> addr, unsigned long end,
>>>               p4dval |= P4D_TABLE_PXN;
>>>           BUG_ON(!pgtable_alloc);
>>>           pud_phys = pgtable_alloc(PUD_SHIFT);
>>> +        if (pud_phys == -1) {
>>> +            ret = -ENOMEM;
>>> +            goto out;
>>> +        }
>>>           pudp = pud_set_fixmap(pud_phys);
>>>           init_clear_pgtable(pudp);
>>>           pudp += pud_index(addr);
>>> @@ -369,8 +397,10 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long
>>> addr, unsigned long end,
>>>               BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
>>>                                 READ_ONCE(pud_val(*pudp))));
>>>           } else {
>>> -            alloc_init_cont_pmd(pudp, addr, next, phys, prot,
>>> +            ret = alloc_init_cont_pmd(pudp, addr, next, phys, prot,
>>>                           pgtable_alloc, flags);
>>> +            if (ret)
>>> +                break;
>>>                 BUG_ON(pud_val(old_pud) != 0 &&
>>>                      pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
>>> @@ -379,14 +409,18 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long
>>> addr, unsigned long end,
>>>       } while (pudp++, addr = next, addr != end);
>>>         pud_clear_fixmap();
>>> +
>>> +out:
>>> +    return ret;
>>>   }
>>>   -static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long
>>> end,
>>> -               phys_addr_t phys, pgprot_t prot,
>>> -               phys_addr_t (*pgtable_alloc)(int),
>>> -               int flags)
>>> +static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>>> +              phys_addr_t phys, pgprot_t prot,
>>> +              phys_addr_t (*pgtable_alloc)(int),
>>> +              int flags)
>>>   {
>>>       unsigned long next;
>>> +    int ret = 0;
>>>       pgd_t pgd = READ_ONCE(*pgdp);
>>>       p4d_t *p4dp;
>>>   @@ -398,6 +432,10 @@ static void alloc_init_p4d(pgd_t *pgdp, unsigned long
>>> addr, unsigned long end,
>>>               pgdval |= PGD_TABLE_PXN;
>>>           BUG_ON(!pgtable_alloc);
>>>           p4d_phys = pgtable_alloc(P4D_SHIFT);
>>> +        if (p4d_phys == -1) {
>>> +            ret = -ENOMEM;
>>> +            goto out;
>>> +        }
>>>           p4dp = p4d_set_fixmap(p4d_phys);
>>>           init_clear_pgtable(p4dp);
>>>           p4dp += p4d_index(addr);
>>> @@ -412,8 +450,10 @@ static void alloc_init_p4d(pgd_t *pgdp, unsigned long
>>> addr, unsigned long end,
>>>             next = p4d_addr_end(addr, end);
>>>   -        alloc_init_pud(p4dp, addr, next, phys, prot,
>>> +        ret = alloc_init_pud(p4dp, addr, next, phys, prot,
>>>                      pgtable_alloc, flags);
>>> +        if (ret)
>>> +            break;
>>>             BUG_ON(p4d_val(old_p4d) != 0 &&
>>>                  p4d_val(old_p4d) != READ_ONCE(p4d_val(*p4dp)));
>>> @@ -422,23 +462,27 @@ static void alloc_init_p4d(pgd_t *pgdp, unsigned long
>>> addr, unsigned long end,
>>>       } while (p4dp++, addr = next, addr != end);
>>>         p4d_clear_fixmap();
>>> +
>>> +out:
>>> +    return ret;
>>>   }
>>>   -static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
>>> -                    unsigned long virt, phys_addr_t size,
>>> -                    pgprot_t prot,
>>> -                    phys_addr_t (*pgtable_alloc)(int),
>>> -                    int flags)
>>> +static int __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
>>> +                       unsigned long virt, phys_addr_t size,
>>> +                       pgprot_t prot,
>>> +                       phys_addr_t (*pgtable_alloc)(int),
>>> +                       int flags)
>>>   {
>>>       unsigned long addr, end, next;
>>>       pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
>>> +    int ret = 0;
>>>         /*
>>>        * If the virtual and physical address don't have the same offset
>>>        * within a page, we cannot map the region as the caller expects.
>>>        */
>>>       if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
>>> -        return;
>>> +        return -EINVAL;
>>>         phys &= PAGE_MASK;
>>>       addr = virt & PAGE_MASK;
>>> @@ -446,10 +490,14 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir,
>>> phys_addr_t phys,
>>>         do {
>>>           next = pgd_addr_end(addr, end);
>>> -        alloc_init_p4d(pgdp, addr, next, phys, prot, pgtable_alloc,
>>> +        ret = alloc_init_p4d(pgdp, addr, next, phys, prot, pgtable_alloc,
>>>                      flags);
>>> +        if (ret)
>>> +            break;
>>>           phys += next - addr;
>>>       } while (pgdp++, addr = next, addr != end);
>>> +
>>> +    return ret;
>>>   }
>>>     static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
>>> @@ -458,17 +506,20 @@ static void __create_pgd_mapping(pgd_t *pgdir,
>>> phys_addr_t phys,
>>>                    phys_addr_t (*pgtable_alloc)(int),
>>>                    int flags)
>>>   {
>>> +    int err;
>>> +
>>>       mutex_lock(&fixmap_lock);
>>> -    __create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
>>> -                    pgtable_alloc, flags);
>>> +    err = __create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
>>> +                      pgtable_alloc, flags);
>>> +    BUG_ON(err);
>>>       mutex_unlock(&fixmap_lock);
>>>   }
>>>     #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>>>   extern __alias(__create_pgd_mapping_locked)
>>> -void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long
>>> virt,
>>> -                 phys_addr_t size, pgprot_t prot,
>>> -                 phys_addr_t (*pgtable_alloc)(int), int flags);
>>> +int create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>>> +                phys_addr_t size, pgprot_t prot,
>>> +                phys_addr_t (*pgtable_alloc)(int), int flags);
>>>   #endif
>> Personally I would have converted this from an alias to a wrapper:
>>
>> void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>>                  phys_addr_t size, pgprot_t prot,
>>                  phys_addr_t (*pgtable_alloc)(int), int flags)
>> {
>>     int ret;
>>
>>     ret = __create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
>>                       pgtable_alloc, flags);
>>     BUG_ON(ret);
>> }
>>
>> Then there is no churn in cpufeature.c. But it's not a strong opinion. If you
>> prefer it like this then I'm ok with it (We'll need to see what Catalin and Will
>> prefer ultimately anyway).
> 
> I don't have a strong preference either.
> 
> Thanks,
> Yang
> 
>>
>> Thanks,
>> Ryan
>>
>>>     static phys_addr_t __pgd_pgtable_alloc(int shift)
>>> @@ -476,13 +527,17 @@ static phys_addr_t __pgd_pgtable_alloc(int shift)
>>>       /* Page is zeroed by init_clear_pgtable() so don't duplicate effort. */
>>>       void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL & ~__GFP_ZERO);
>>>   -    BUG_ON(!ptr);
>>> +    if (!ptr)
>>> +        return -1;
>>> +
>>>       return __pa(ptr);
>>>   }
>>>     static phys_addr_t pgd_pgtable_alloc(int shift)
>>>   {
>>>       phys_addr_t pa = __pgd_pgtable_alloc(shift);
>>> +    if (pa == -1)
>>> +        goto out;
>>>       struct ptdesc *ptdesc = page_ptdesc(phys_to_page(pa));
>>>         /*
>>> @@ -498,6 +553,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
>>>       else if (shift == PMD_SHIFT)
>>>           BUG_ON(!pagetable_pmd_ctor(ptdesc));
>>>   +out:
>>>       return pa;
>>>   }
>>>   
> 



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-06-17 21:09     ` Yang Shi
@ 2025-06-23 13:26       ` Ryan Roberts
  2025-06-23 19:12         ` Yang Shi
                           ` (2 more replies)
  0 siblings, 3 replies; 34+ messages in thread
From: Ryan Roberts @ 2025-06-23 13:26 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, Miko.Lenczewski, dev.jain, scott,
	cl
  Cc: linux-arm-kernel, linux-kernel

[...]

>> +
>> +int split_leaf_mapping(unsigned long addr)
> 
> Thanks for coming up with the code. It does help to understand your idea. Now I
> see why you suggested "split_mapping(start); split_mapping(end);" model. It does
> make the implementation easier because we don't need a loop anymore. But this
> may have a couple of problems:
>   1. We need walk the page table twice instead of once. It sounds expensive.

Yes, we need to walk twice. That may be more or less expensive, depending on
the size of the range that you are splitting. If the range is large, then your
approach loops through every leaf mapping between the start and end, which
will be more expensive than just doing 2 walks. If the range is small, then
your approach can avoid the second walk, but at the expense of all the extra
loop overhead.

My suggestion requires 5 loads (assuming the maximum of 5 levels of lookup).
Personally I think this is probably acceptable? Perhaps we need some other
voices here.


>   2. How should we handle repainting? We need split all the page tables all the
> way down to PTE for repainting between start and end rather than keeping block
> mappings. This model doesn't work, right? For example, repaint a 2G block. The
> first 1G is mapped by a PUD, the second 1G is mapped by 511 PMD and 512 PTEs.
> split_mapping(start) will split the first 1G, but split_mapping(end) will do
> nothing, the 511 PMDs are kept intact. In addition, I think we also prefer reuse
> the split primitive for repainting instead of inventing another one.

I agree my approach doesn't work for the repainting case. But I think what I'm
trying to say is that the 2 things are different operations;
split_leaf_mapping() is just trying to ensure that the start and end of a region
are on leaf boundaries. Repainting is trying to ensure that all leaf mappings
within a range are PTE-size. I've implemented the former and you've implemented
the latter. Your implementation looks like it meets the former's requirements
because you are only testing it for the case where the range is 1 page. But
actually it is splitting everything in the range to PTEs.

Thanks,
Ryan

> 
> Thanks,
> Yang
> 
>> +{
>> +    pgd_t *pgdp, pgd;
>> +    p4d_t *p4dp, p4d;
>> +    pud_t *pudp, pud;
>> +    pmd_t *pmdp, pmd;
>> +    pte_t *ptep, pte;
>> +    int ret = 0;
>> +
>> +    /*
>> +     * !BBML2_NOABORT systems should not be trying to change permissions on
>> +     * anything that is not pte-mapped in the first place. Just return early
>> +     * and let the permission change code raise a warning if not already
>> +     * pte-mapped.
>> +     */
>> +    if (!system_supports_bbml2_noabort())
>> +        return 0;
>> +
>> +    /*
>> +     * Ensure addr is at least page-aligned since this is the finest
>> +     * granularity we can split to.
>> +     */
>> +    if (addr != PAGE_ALIGN(addr))
>> +        return -EINVAL;
>> +
>> +    arch_enter_lazy_mmu_mode();
>> +
>> +    /*
>> +     * PGD: If addr is PGD aligned then addr already describes a leaf
>> +     * boundary. If not present then there is nothing to split.
>> +     */
>> +    if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
>> +        goto out;
>> +    pgdp = pgd_offset_k(addr);
>> +    pgd = pgdp_get(pgdp);
>> +    if (!pgd_present(pgd))
>> +        goto out;
>> +
>> +    /*
>> +     * P4D: If addr is P4D aligned then addr already describes a leaf
>> +     * boundary. If not present then there is nothing to split.
>> +     */
>> +    if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
>> +        goto out;
>> +    p4dp = p4d_offset(pgdp, addr);
>> +    p4d = p4dp_get(p4dp);
>> +    if (!p4d_present(p4d))
>> +        goto out;
>> +
>> +    /*
>> +     * PUD: If addr is PUD aligned then addr already describes a leaf
>> +     * boundary. If not present then there is nothing to split. Otherwise,
>> +     * if we have a pud leaf, split to contpmd.
>> +     */
>> +    if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
>> +        goto out;
>> +    pudp = pud_offset(p4dp, addr);
>> +    pud = pudp_get(pudp);
>> +    if (!pud_present(pud))
>> +        goto out;
>> +    if (pud_leaf(pud)) {
>> +        ret = split_pud(pudp, pud);
>> +        if (ret)
>> +            goto out;
>> +    }
>> +
>> +    /*
>> +     * CONTPMD: If addr is CONTPMD aligned then addr already describes a
>> +     * leaf boundary. If not present then there is nothing to split.
>> +     * Otherwise, if we have a contpmd leaf, split to pmd.
>> +     */
>> +    if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
>> +        goto out;
>> +    pmdp = pmd_offset(pudp, addr);
>> +    pmd = pmdp_get(pmdp);
>> +    if (!pmd_present(pmd))
>> +        goto out;
>> +    if (pmd_leaf(pmd)) {
>> +        if (pmd_cont(pmd))
>> +            split_contpmd(pmdp);
>> +        /*
>> +         * PMD: If addr is PMD aligned then addr already describes a
>> +         * leaf boundary. Otherwise, split to contpte.
>> +         */
>> +        if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
>> +            goto out;
>> +        ret = split_pmd(pmdp, pmd);
>> +        if (ret)
>> +            goto out;
>> +    }
>> +
>> +    /*
>> +     * CONTPTE: If addr is CONTPTE aligned then addr already describes a
>> +     * leaf boundary. If not present then there is nothing to split.
>> +     * Otherwise, if we have a contpte leaf, split to pte.
>> +     */
>> +    if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
>> +        goto out;
>> +    ptep = pte_offset_kernel(pmdp, addr);
>> +    pte = __ptep_get(ptep);
>> +    if (!pte_present(pte))
>> +        goto out;
>> +    if (pte_cont(pte))
>> +        split_contpte(ptep);
>> +
>> +out:
>> +    arch_leave_lazy_mmu_mode();
>> +    return ret;
>> +}
>> ---8<---
>>
>> Thanks,
>> Ryan
>>
> 



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-06-23 13:26       ` Ryan Roberts
@ 2025-06-23 19:12         ` Yang Shi
  2025-06-26 22:39         ` Yang Shi
  2025-07-23 17:38         ` Dev Jain
  2 siblings, 0 replies; 34+ messages in thread
From: Yang Shi @ 2025-06-23 19:12 UTC (permalink / raw)
  To: Ryan Roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 6/23/25 6:26 AM, Ryan Roberts wrote:
> [...]
>
>>> +
>>> +int split_leaf_mapping(unsigned long addr)
>> Thanks for coming up with the code. It does help to understand your idea. Now I
>> see why you suggested "split_mapping(start); split_mapping(end);" model. It does
>> make the implementation easier because we don't need a loop anymore. But this
>> may have a couple of problems:
>>    1. We need walk the page table twice instead of once. It sounds expensive.
> Yes we need to walk twice. That may be more expensive or less expensive,
> depending on the size of the range that you are splitting. If the range is large
> then your approach loops through every leaf mapping between the start and end
> which will be more expensive than just doing 2 walks. If the range is small then
> your approach can avoid the second walk, but at the expense of all the extra
> loop overhead.

Yes, it depends on the page table layout (the more fragmented, the more
loads) and the range passed in by the callers. But AFAICT, most existing
callers just try to change permissions on a per-page basis. I know you
are looking at adding more block/cont mapping support for vmalloc, but
will the large-range case dominate?

>
> My suggestion requires 5 loads (assuming the maximum of 5 levels of lookup).
> Personally I think this is probably acceptable? Perhaps we need some other
> voices here.

Doesn't it require 10 loads for both start and end together? The 5 loads 
for end may be fast since they are likely cached if they fall into the 
same PGD/P4D/PUD/PMD.

>
>
>>    2. How should we handle repainting? We need split all the page tables all the
>> way down to PTE for repainting between start and end rather than keeping block
>> mappings. This model doesn't work, right? For example, repaint a 2G block. The
>> first 1G is mapped by a PUD, the second 1G is mapped by 511 PMD and 512 PTEs.
>> split_mapping(start) will split the first 1G, but split_mapping(end) will do
>> nothing, the 511 PMDs are kept intact. In addition, I think we also prefer reuse
>> the split primitive for repainting instead of inventing another one.
> I agree my approach doesn't work for the repainting case. But I think what I'm
> trying to say is that the 2 things are different operations;
> split_leaf_mapping() is just trying to ensure that the start and end of a region
> are on leaf boundaries. Repainting is trying to ensure that all leaf mappings
> within a range are PTE-size. I've implemented the former and you've implemented
> the latter. Your implementation looks like it meets the former's requirements
> because you are only testing it for the case where the range is 1 page. But
> actually it is splitting everything in the range to PTEs.

I can understand why you say they are two different operations. And
repainting is basically a one-off thing. However, they share a lot of
common logic (for example, allocating page tables, populating new page
table entries, etc.) from a code point of view. From this perspective,
repainting is just a special case of split (no block and cont mappings).
If we implement them separately, I can see there will be a lot of
duplicate code. I'm not sure whether that is preferred or not.

Thanks,
Yang

>
> Thanks,
> Ryan
>
>> Thanks,
>> Yang
>>
>>> +{
>>> +    pgd_t *pgdp, pgd;
>>> +    p4d_t *p4dp, p4d;
>>> +    pud_t *pudp, pud;
>>> +    pmd_t *pmdp, pmd;
>>> +    pte_t *ptep, pte;
>>> +    int ret = 0;
>>> +
>>> +    /*
>>> +     * !BBML2_NOABORT systems should not be trying to change permissions on
>>> +     * anything that is not pte-mapped in the first place. Just return early
>>> +     * and let the permission change code raise a warning if not already
>>> +     * pte-mapped.
>>> +     */
>>> +    if (!system_supports_bbml2_noabort())
>>> +        return 0;
>>> +
>>> +    /*
>>> +     * Ensure addr is at least page-aligned since this is the finest
>>> +     * granularity we can split to.
>>> +     */
>>> +    if (addr != PAGE_ALIGN(addr))
>>> +        return -EINVAL;
>>> +
>>> +    arch_enter_lazy_mmu_mode();
>>> +
>>> +    /*
>>> +     * PGD: If addr is PGD aligned then addr already describes a leaf
>>> +     * boundary. If not present then there is nothing to split.
>>> +     */
>>> +    if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
>>> +        goto out;
>>> +    pgdp = pgd_offset_k(addr);
>>> +    pgd = pgdp_get(pgdp);
>>> +    if (!pgd_present(pgd))
>>> +        goto out;
>>> +
>>> +    /*
>>> +     * P4D: If addr is P4D aligned then addr already describes a leaf
>>> +     * boundary. If not present then there is nothing to split.
>>> +     */
>>> +    if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
>>> +        goto out;
>>> +    p4dp = p4d_offset(pgdp, addr);
>>> +    p4d = p4dp_get(p4dp);
>>> +    if (!p4d_present(p4d))
>>> +        goto out;
>>> +
>>> +    /*
>>> +     * PUD: If addr is PUD aligned then addr already describes a leaf
>>> +     * boundary. If not present then there is nothing to split. Otherwise,
>>> +     * if we have a pud leaf, split to contpmd.
>>> +     */
>>> +    if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
>>> +        goto out;
>>> +    pudp = pud_offset(p4dp, addr);
>>> +    pud = pudp_get(pudp);
>>> +    if (!pud_present(pud))
>>> +        goto out;
>>> +    if (pud_leaf(pud)) {
>>> +        ret = split_pud(pudp, pud);
>>> +        if (ret)
>>> +            goto out;
>>> +    }
>>> +
>>> +    /*
>>> +     * CONTPMD: If addr is CONTPMD aligned then addr already describes a
>>> +     * leaf boundary. If not present then there is nothing to split.
>>> +     * Otherwise, if we have a contpmd leaf, split to pmd.
>>> +     */
>>> +    if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
>>> +        goto out;
>>> +    pmdp = pmd_offset(pudp, addr);
>>> +    pmd = pmdp_get(pmdp);
>>> +    if (!pmd_present(pmd))
>>> +        goto out;
>>> +    if (pmd_leaf(pmd)) {
>>> +        if (pmd_cont(pmd))
>>> +            split_contpmd(pmdp);
>>> +        /*
>>> +         * PMD: If addr is PMD aligned then addr already describes a
>>> +         * leaf boundary. Otherwise, split to contpte.
>>> +         */
>>> +        if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
>>> +            goto out;
>>> +        ret = split_pmd(pmdp, pmd);
>>> +        if (ret)
>>> +            goto out;
>>> +    }
>>> +
>>> +    /*
>>> +     * CONTPTE: If addr is CONTPTE aligned then addr already describes a
>>> +     * leaf boundary. If not present then there is nothing to split.
>>> +     * Otherwise, if we have a contpte leaf, split to pte.
>>> +     */
>>> +    if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
>>> +        goto out;
>>> +    ptep = pte_offset_kernel(pmdp, addr);
>>> +    pte = __ptep_get(ptep);
>>> +    if (!pte_present(pte))
>>> +        goto out;
>>> +    if (pte_cont(pte))
>>> +        split_contpte(ptep);
>>> +
>>> +out:
>>> +    arch_leave_lazy_mmu_mode();
>>> +    return ret;
>>> +}
>>> ---8<---
>>>
>>> Thanks,
>>> Ryan
>>>



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
  2025-06-23 12:26   ` Ryan Roberts
@ 2025-06-23 20:56     ` Yang Shi
  0 siblings, 0 replies; 34+ messages in thread
From: Yang Shi @ 2025-06-23 20:56 UTC (permalink / raw)
  To: Ryan Roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 6/23/25 5:26 AM, Ryan Roberts wrote:
> On 31/05/2025 03:41, Yang Shi wrote:
>> The kernel linear mapping is painted in very early stage of system boot.
>> The cpufeature has not been finalized yet at this point.  So the linear
>> mapping is determined by the capability of boot CPU.  If the boot CPU
>> supports BBML2, large block mapping will be used for linear mapping.
>>
>> But the secondary CPUs may not support BBML2, so repaint the linear mapping
>> if large block mapping is used and the secondary CPUs don't support BBML2
>> once cpufeature is finalized on all CPUs.
>>
>> If the boot CPU doesn't support BBML2 or the secondary CPUs have the
>> same BBML2 capability with the boot CPU, repainting the linear mapping
>> is not needed.
>>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>>   arch/arm64/include/asm/mmu.h   |   3 +
>>   arch/arm64/kernel/cpufeature.c |  16 +++++
>>   arch/arm64/mm/mmu.c            | 108 ++++++++++++++++++++++++++++++++-
>>   arch/arm64/mm/proc.S           |  41 +++++++++++++
>>   4 files changed, 166 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
>> index 2693d63bf837..ad38135d1aa1 100644
>> --- a/arch/arm64/include/asm/mmu.h
>> +++ b/arch/arm64/include/asm/mmu.h
>> @@ -56,6 +56,8 @@ typedef struct {
>>    */
>>   #define ASID(mm)	(atomic64_read(&(mm)->context.id) & 0xffff)
>>   
>> +extern bool block_mapping;
> Is this really useful to cache? Why not just call force_pte_mapping() instead?
> It's the inverse. It's also not a great name for a global variable.

We can use force_pte_mapping().

>
> But perhaps it is better to cache a boolean that also reflects the bbml2 status:
>
> bool linear_map_requires_bbml2;
>
> Then create_idmap() will only bother to add to the idmap if there is a chance
> you will need to repaint. And repaint_linear_mappings() won't need to explicitly
> check !rodata_full.

OK, IIUC linear_map_requires_bbml2 = !force_pte_mapping() && rodata_full

>
> I think this can be __initdata too?

Yes, it is just used during the boot stage.
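
As an aside, a minimal sketch of how that cached flag could be set while
building the linear map (names follow the suggestion above; illustrative
only, not the actual patch):

bool __initdata linear_map_requires_bbml2;

static void __init map_mem(pgd_t *pgdp)
{
	int flags = NO_EXEC_MAPPINGS;

	/* ... */

	if (force_pte_mapping())
		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
	else
		/* Only BBML2-capable CPUs may split these mappings later. */
		linear_map_requires_bbml2 = rodata_full;

	/* ... */
}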

>
>> +
>>   static inline bool arm64_kernel_unmapped_at_el0(void)
>>   {
>>   	return alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0);
>> @@ -72,6 +74,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>>   extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>>   extern void mark_linear_text_alias_ro(void);
>>   extern int split_linear_mapping(unsigned long start, unsigned long end);
>> +extern int __repaint_linear_mappings(void *__unused);
> nit: "repaint_linear_mappings" is a bit vague. How about
> linear_map_split_to_ptes() or similar?

Sounds good to me.

>
>>   
>>   /*
>>    * This check is triggered during the early boot before the cpufeature
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 5fc2a4a804de..5151c101fbaf 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -85,6 +85,7 @@
>>   #include <asm/insn.h>
>>   #include <asm/kvm_host.h>
>>   #include <asm/mmu_context.h>
>> +#include <asm/mmu.h>
>>   #include <asm/mte.h>
>>   #include <asm/hypervisor.h>
>>   #include <asm/processor.h>
>> @@ -2005,6 +2006,20 @@ static int __init __kpti_install_ng_mappings(void *__unused)
>>   	return 0;
>>   }
>>   
>> +static void __init repaint_linear_mappings(void)
>> +{
>> +	if (!block_mapping)
>> +		return;
>> +
>> +	if (!rodata_full)
>> +		return;
>> +
>> +	if (system_supports_bbml2_noabort())
>> +		return;
>> +
>> +	stop_machine(__repaint_linear_mappings, NULL, cpu_online_mask);
> With the above suggestions, I think this can be simplified to something like:
>
> static void __init linear_map_maybe_split_to_ptes(void)
> {
> 	if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort())
> 		stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);
> }

Yeah, ack.

>
>> +}
>> +
>>   static void __init kpti_install_ng_mappings(void)
>>   {
>>   	/* Check whether KPTI is going to be used */
>> @@ -3868,6 +3883,7 @@ void __init setup_system_features(void)
>>   {
>>   	setup_system_capabilities();
>>   
>> +	repaint_linear_mappings();
>>   	kpti_install_ng_mappings();
>>   
>>   	sve_setup();
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 4c5d3aa35d62..3922af89abbb 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -209,6 +209,8 @@ static void split_pmd(pmd_t pmd, phys_addr_t pte_phys, int flags)
>>   	/* It must be naturally aligned if PMD is leaf */
>>   	if ((flags & NO_CONT_MAPPINGS) == 0)
>>   		prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +	else
>> +		prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
>>   
>>   	for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
>>   		__set_pte_nosync(ptep, pfn_pte(pfn, prot));
>> @@ -230,6 +232,8 @@ static void split_pud(pud_t pud, phys_addr_t pmd_phys, int flags)
>>   	/* It must be naturally aligned if PUD is leaf */
>>   	if ((flags & NO_CONT_MAPPINGS) == 0)
>>   		prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +	else
>> +		prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
>>   
>>   	for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
>>   		__set_pmd_nosync(pmdp, pfn_pmd(pfn, prot));
>> @@ -833,6 +837,86 @@ void __init mark_linear_text_alias_ro(void)
>>   			    PAGE_KERNEL_RO);
>>   }
>>   
>> +static phys_addr_t repaint_pgtable_alloc(int shift)
>> +{
>> +	void *ptr;
>> +
>> +	ptr = (void *)__get_free_page(GFP_ATOMIC);
>> +	if (!ptr)
>> +		return -1;
>> +
>> +	return __pa(ptr);
>> +}
>> +
>> +extern u32 repaint_done;
>> +
>> +int __init __repaint_linear_mappings(void *__unused)
>> +{
>> +	typedef void (repaint_wait_fn)(void);
>> +	extern repaint_wait_fn bbml2_wait_for_repainting;
>> +	repaint_wait_fn *wait_fn;
>> +
>> +	phys_addr_t kernel_start = __pa_symbol(_stext);
>> +	phys_addr_t kernel_end = __pa_symbol(__init_begin);
>> +	phys_addr_t start, end;
>> +	unsigned long vstart, vend;
>> +	u64 i;
>> +	int ret;
>> +	int flags = NO_EXEC_MAPPINGS | NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS |
>> +		    SPLIT_MAPPINGS;
>> +	int cpu = smp_processor_id();
> nit: most of these variables are only needed by cpu 0 so you could defer their
> initialization until inside the if condition below.

OK

>
>> +
>> +	wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
>> +
>> +	/*
>> +	 * Repainting just can be run on CPU 0 because we just can be sure
>> +	 * CPU 0 supports BBML2.
>> +	 */
>> +	if (!cpu) {
>> +		/*
>> +		 * Wait for all secondary CPUs get prepared for repainting
>> +		 * the linear mapping.
>> +		 */
>> +wait_for_secondary:
>> +		if (READ_ONCE(repaint_done) != num_online_cpus())
>> +			goto wait_for_secondary;
> This feels suspect when comparing against the assembly code that does a similar
> sync operation in idmap_kpti_install_ng_mappings:
>
> 	/* We're the boot CPU. Wait for the others to catch up */
> 	sevl
> 1:	wfe
> 	ldaxr	w17, [flag_ptr]
> 	eor	w17, w17, num_cpus
> 	cbnz	w17, 1b
>
> The acquire semantics of the ldaxr are needed here to ensure that memory
> accesses which are later in program order don't get reordered before it.
> READ_ONCE() is relaxed, so it permits that reordering.
>
> The wfe means the CPU is not just furiously spinning, but is actually waiting
> for a secondary CPU to perform an exclusive write to the variable at flag_ptr.
>
> I think you can drop the whole loop and just call:
>
> 	smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());

Yeah, it seems better.

>
>> +
>> +		memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
>> +		/* Split the whole linear mapping */
>> +		for_each_mem_range(i, &start, &end) {
>> +			if (start >= end)
>> +				return -EINVAL;
>> +
>> +			vstart = __phys_to_virt(start);
>> +			vend = __phys_to_virt(end);
>> +			ret = __create_pgd_mapping_locked(init_mm.pgd, start,
>> +					vstart, (end - start), __pgprot(0),
>> +					repaint_pgtable_alloc, flags);
>> +			if (ret)
>> +				panic("Failed to split linear mappings\n");
>> +
>> +			flush_tlb_kernel_range(vstart, vend);
>> +		}
>> +		memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
> You're relying on the memblock API here. Is that valid, given we are quite late
> into the boot at this point and we have transferred control to the buddy?

Yes, it is still valid at this point. The memblock data won't be discarded
until page_alloc_init_late(), which is called after smp_init(). And whether
memblock is discarded also depends on !CONFIG_ARCH_KEEP_MEMBLOCK; arm64
(and some other arches, e.g. x86, riscv, etc.) selects this config, so
memblock is always valid.

>
> I was thinking you would just need to traverse the linear map region of the
> kernel page table, splitting each large leaf you find into the next size down?

The benefit of using memblock is that we can easily skip the memory used
by the kernel itself. The kernel itself is always mapped with block
mappings, and we don't want to split it. When traversing the page table
it is hard to tell whether a mapping belongs to the kernel itself or not.

>
>> +
>> +		WRITE_ONCE(repaint_done, 0);
> I think this depends on the dsb(ish) in flush_tlb_kernel_range() to ensure it is
> not re-ordered before any pgtable split operations? Might be worth a comment.

Sure.

>
>
>> +	} else {
>> +		/*
>> +		 * The secondary CPUs can't run in the same address space
>> +		 * with CPU 0 because accessing the linear mapping address
>> +		 * when CPU 0 is repainting it is not safe.
>> +		 *
>> +		 * Let the secondary CPUs run busy loop in idmap address
>> +		 * space when repainting is ongoing.
>> +		 */
>> +		cpu_install_idmap();
>> +		wait_fn();
>> +		cpu_uninstall_idmap();
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>>   #ifdef CONFIG_KFENCE
>>   
>>   bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
>> @@ -887,6 +971,8 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>>   
>>   #endif /* CONFIG_KFENCE */
>>   
>> +bool block_mapping;
>> +>  static inline bool force_pte_mapping(void)
>>   {
>>   	/*
>> @@ -915,6 +1001,8 @@ static void __init map_mem(pgd_t *pgdp)
>>   	int flags = NO_EXEC_MAPPINGS;
>>   	u64 i;
>>   
>> +	block_mapping = true;
>> +
>>   	/*
>>   	 * Setting hierarchical PXNTable attributes on table entries covering
>>   	 * the linear region is only possible if it is guaranteed that no table
>> @@ -930,8 +1018,10 @@ static void __init map_mem(pgd_t *pgdp)
>>   
>>   	early_kfence_pool = arm64_kfence_alloc_pool();
>>   
>> -	if (force_pte_mapping())
>> +	if (force_pte_mapping()) {
>> +		block_mapping = false;
>>   		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>> +	}
>>   
>>   	/*
>>   	 * Take care not to create a writable alias for the
>> @@ -1063,7 +1153,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
>>   		    int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
>>   
>>   static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
>> -	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>> +	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
>> +	  bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>>   
>>   static void __init create_idmap(void)
>>   {
>> @@ -1088,6 +1179,19 @@ static void __init create_idmap(void)
>>   			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>>   			       __phys_to_virt(ptep) - ptep);
>>   	}
>> +
>> +	/*
>> +	 * Setup idmap mapping for repaint_done flag.  It will be used if
>> +	 * repainting the linear mapping is needed later.
>> +	 */
>> +	if (block_mapping) {
>> +		u64 pa = __pa_symbol(&repaint_done);
>> +		ptep = __pa_symbol(bbml2_ptes);
>> +
>> +		__pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
>> +			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>> +			       __phys_to_virt(ptep) - ptep);
>> +	}
>>   }
>>   
>>   void __init paging_init(void)
>> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
>> index fb30c8804f87..c40e6126c093 100644
>> --- a/arch/arm64/mm/proc.S
>> +++ b/arch/arm64/mm/proc.S
>> @@ -440,6 +440,47 @@ SYM_FUNC_END(idmap_kpti_install_ng_mappings)
>>   	.popsection
>>   #endif
>>   
>> +/*
>> + * Wait for repainting is done. Run on secondary CPUs
>> + * only.
>> + */
>> +	.pushsection	".data", "aw", %progbits
>> +SYM_DATA(repaint_done, .long 1)
>> +	.popsection
>> +
>> +	.pushsection ".idmap.text", "a"
>> +SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
>> +	swapper_ttb	.req	x0
>> +	flag_ptr	.req	x1
>> +
>> +	mrs	swapper_ttb, ttbr1_el1
>> +	adr_l	flag_ptr, repaint_done
>> +
>> +	/* Uninstall swapper before surgery begins */
>> +	__idmap_cpu_set_reserved_ttbr1 x16, x17
>> +
>> +	/* Increment the flag to let the boot CPU we're ready */
>> +1:	ldxr	w16, [flag_ptr]
>> +	add	w16, w16, #1
>> +	stxr	w17, w16, [flag_ptr]
>> +	cbnz	w17, 1b
>> +
>> +	/* Wait for the boot CPU to finish repainting */
>> +	sevl
>> +1:	wfe
>> +	ldxr	w16, [flag_ptr]
>> +	cbnz	w16, 1b
>> +
>> +	/* All done, act like nothing happened */
>> +	msr	ttbr1_el1, swapper_ttb
>> +	isb
>> +	ret
>> +
>> +	.unreq	swapper_ttb
>> +	.unreq	flag_ptr
>> +SYM_FUNC_END(bbml2_wait_for_repainting)
>> +	.popsection
> This is identical to __idmap_kpti_secondary. Can't we just refactor it into a
> common function? I think you can even reuse the same refcount variable (i.e. no
> need for both repaint_done and __idmap_kpti_flag).

I think we can extract the below busy-loop logic into a macro:

         .macro wait_for_boot_cpu, swapper_ttb, flag_ptr
         /* Uninstall swapper before surgery begins */
         __idmap_cpu_set_reserved_ttbr1 x16, x17

         /* Increment the flag to let the boot CPU we're ready */
1:      ldxr    w16, [flag_ptr]
         add     w16, w16, #1
         stxr    w17, w16, [flag_ptr]
         cbnz    w17, 1b

         /* Wait for the boot CPU to finish messing around with swapper */
         sevl
1:      wfe
         ldxr    w16, [flag_ptr]
         cbnz    w16, 1b

         /* All done, act like nothing happened */
         msr     ttbr1_el1, swapper_ttb
         isb
         ret
         .endm

Then kpti and repainting can just call this macro, so that we don't have
to force both of them to use the same registers for the swapper_ttb and
flag_ptr parameters.

Thanks,
Yang


>
> Thanks,
> Ryan
>
>
>> +
>>   /*
>>    *	__cpu_setup
>>    *



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-06-23 13:26       ` Ryan Roberts
  2025-06-23 19:12         ` Yang Shi
@ 2025-06-26 22:39         ` Yang Shi
  2025-07-23 17:38         ` Dev Jain
  2 siblings, 0 replies; 34+ messages in thread
From: Yang Shi @ 2025-06-26 22:39 UTC (permalink / raw)
  To: Ryan Roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 6/23/25 6:26 AM, Ryan Roberts wrote:
> [...]
>
>>> +
>>> +int split_leaf_mapping(unsigned long addr)
>> Thanks for coming up with the code. It does help to understand your idea. Now I
>> see why you suggested "split_mapping(start); split_mapping(end);" model. It does
>> make the implementation easier because we don't need a loop anymore. But this
>> may have a couple of problems:
>>    1. We need walk the page table twice instead of once. It sounds expensive.
> Yes we need to walk twice. That may be more expensive or less expensive,
> depending on the size of the range that you are splitting. If the range is large
> then your approach loops through every leaf mapping between the start and end
> which will be more expensive than just doing 2 walks. If the range is small then
> your approach can avoid the second walk, but at the expense of all the extra
> loop overhead.

Thinking about this further: although there is some extra loop overhead,
there should be no extra loads. We can check whether the start and end
are properly aligned; if they are aligned, we just continue the loop
without loading the page table entry.

And we can optimize the loop by advancing multiple PUD/PMD/CONT sizes at
a time instead of one at a time. The pseudo code (at the PMD level, for
example) looks like:

do {
      next = pmd_addr_end(start, end);

      if (next < end)
          nr = ((end - next) / PMD_SIZE) + 1;

      if (((start | next) & ~PMD_MASK) == 0)
          continue;

      split_pmd(start, next);
} while (pmdp += nr, start = next * nr, start != end)


For the repainting case, we just need to do:

do {
      nr = 1;
      next = pmd_addr_end(start, end);

      if (next < end && !repainting)
          nr = ((end - next) / PMD_SIZE) + 1;

      if (((start | next) & ~PMD_MASK) == 0 && !repainting)
          continue;

      split_pmd(start, next);
} while (pmdp += nr, start = next * nr, start != end)

This should reduce the loop overhead and avoid duplicating code for repainting.
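
For what it's worth, a more concrete C sketch of that skip-ahead loop, with
split_pmd_entry() as a purely hypothetical helper (illustration only, not the
actual patch):

static int split_pmd_range(pmd_t *pmdp, unsigned long start,
			   unsigned long end, bool repaint)
{
	unsigned long next;
	unsigned int nr;
	int ret;

	do {
		nr = 1;
		next = pmd_addr_end(start, end);

		/*
		 * A fully covered, PMD-aligned entry needs no splitting
		 * (unless we are repainting), so step over it and every
		 * following fully covered entry without loading them.
		 */
		if (!repaint && ((start | next) & ~PMD_MASK) == 0) {
			nr += (end - next) / PMD_SIZE;
			next += (nr - 1) * PMD_SIZE;
		} else {
			ret = split_pmd_entry(pmdp, start, next);
			if (ret)
				return ret;
		}
	} while (pmdp += nr, start = next, start != end);

	return 0;
}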

Thanks,
Yang

>
> My suggestion requires 5 loads (assuming the maximum of 5 levels of lookup).
> Personally I think this is probably acceptable? Perhaps we need some other
> voices here.
>
>
>>    2. How should we handle repainting? We need split all the page tables all the
>> way down to PTE for repainting between start and end rather than keeping block
>> mappings. This model doesn't work, right? For example, repaint a 2G block. The
>> first 1G is mapped by a PUD, the second 1G is mapped by 511 PMD and 512 PTEs.
>> split_mapping(start) will split the first 1G, but split_mapping(end) will do
>> nothing, the 511 PMDs are kept intact. In addition, I think we also prefer reuse
>> the split primitive for repainting instead of inventing another one.
> I agree my approach doesn't work for the repainting case. But I think what I'm
> trying to say is that the 2 things are different operations;
> split_leaf_mapping() is just trying to ensure that the start and end of a region
> are on leaf boundaries. Repainting is trying to ensure that all leaf mappings
> within a range are PTE-size. I've implemented the former and you've implemented
> the latter. Your implementation looks like it meets the former's requirements
> because you are only testing it for the case where the range is 1 page. But
> actually it is splitting everything in the range to PTEs.
>
> Thanks,
> Ryan
>
>> Thanks,
>> Yang
>>
>>> +{
>>> +    pgd_t *pgdp, pgd;
>>> +    p4d_t *p4dp, p4d;
>>> +    pud_t *pudp, pud;
>>> +    pmd_t *pmdp, pmd;
>>> +    pte_t *ptep, pte;
>>> +    int ret = 0;
>>> +
>>> +    /*
>>> +     * !BBML2_NOABORT systems should not be trying to change permissions on
>>> +     * anything that is not pte-mapped in the first place. Just return early
>>> +     * and let the permission change code raise a warning if not already
>>> +     * pte-mapped.
>>> +     */
>>> +    if (!system_supports_bbml2_noabort())
>>> +        return 0;
>>> +
>>> +    /*
>>> +     * Ensure addr is at least page-aligned since this is the finest
>>> +     * granularity we can split to.
>>> +     */
>>> +    if (addr != PAGE_ALIGN(addr))
>>> +        return -EINVAL;
>>> +
>>> +    arch_enter_lazy_mmu_mode();
>>> +
>>> +    /*
>>> +     * PGD: If addr is PGD aligned then addr already describes a leaf
>>> +     * boundary. If not present then there is nothing to split.
>>> +     */
>>> +    if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
>>> +        goto out;
>>> +    pgdp = pgd_offset_k(addr);
>>> +    pgd = pgdp_get(pgdp);
>>> +    if (!pgd_present(pgd))
>>> +        goto out;
>>> +
>>> +    /*
>>> +     * P4D: If addr is P4D aligned then addr already describes a leaf
>>> +     * boundary. If not present then there is nothing to split.
>>> +     */
>>> +    if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
>>> +        goto out;
>>> +    p4dp = p4d_offset(pgdp, addr);
>>> +    p4d = p4dp_get(p4dp);
>>> +    if (!p4d_present(p4d))
>>> +        goto out;
>>> +
>>> +    /*
>>> +     * PUD: If addr is PUD aligned then addr already describes a leaf
>>> +     * boundary. If not present then there is nothing to split. Otherwise,
>>> +     * if we have a pud leaf, split to contpmd.
>>> +     */
>>> +    if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
>>> +        goto out;
>>> +    pudp = pud_offset(p4dp, addr);
>>> +    pud = pudp_get(pudp);
>>> +    if (!pud_present(pud))
>>> +        goto out;
>>> +    if (pud_leaf(pud)) {
>>> +        ret = split_pud(pudp, pud);
>>> +        if (ret)
>>> +            goto out;
>>> +    }
>>> +
>>> +    /*
>>> +     * CONTPMD: If addr is CONTPMD aligned then addr already describes a
>>> +     * leaf boundary. If not present then there is nothing to split.
>>> +     * Otherwise, if we have a contpmd leaf, split to pmd.
>>> +     */
>>> +    if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
>>> +        goto out;
>>> +    pmdp = pmd_offset(pudp, addr);
>>> +    pmd = pmdp_get(pmdp);
>>> +    if (!pmd_present(pmd))
>>> +        goto out;
>>> +    if (pmd_leaf(pmd)) {
>>> +        if (pmd_cont(pmd))
>>> +            split_contpmd(pmdp);
>>> +        /*
>>> +         * PMD: If addr is PMD aligned then addr already describes a
>>> +         * leaf boundary. Otherwise, split to contpte.
>>> +         */
>>> +        if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
>>> +            goto out;
>>> +        ret = split_pmd(pmdp, pmd);
>>> +        if (ret)
>>> +            goto out;
>>> +    }
>>> +
>>> +    /*
>>> +     * CONTPTE: If addr is CONTPTE aligned then addr already describes a
>>> +     * leaf boundary. If not present then there is nothing to split.
>>> +     * Otherwise, if we have a contpte leaf, split to pte.
>>> +     */
>>> +    if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
>>> +        goto out;
>>> +    ptep = pte_offset_kernel(pmdp, addr);
>>> +    pte = __ptep_get(ptep);
>>> +    if (!pte_present(pte))
>>> +        goto out;
>>> +    if (pte_cont(pte))
>>> +        split_contpte(ptep);
>>> +
>>> +out:
>>> +    arch_leave_lazy_mmu_mode();
>>> +    return ret;
>>> +}
>>> ---8<---
>>>
>>> Thanks,
>>> Ryan
>>>



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-06-23 13:26       ` Ryan Roberts
  2025-06-23 19:12         ` Yang Shi
  2025-06-26 22:39         ` Yang Shi
@ 2025-07-23 17:38         ` Dev Jain
  2025-07-23 20:51           ` Yang Shi
  2 siblings, 1 reply; 34+ messages in thread
From: Dev Jain @ 2025-07-23 17:38 UTC (permalink / raw)
  To: Ryan Roberts, Yang Shi, will, catalin.marinas, Miko.Lenczewski,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel


On 23/06/25 6:56 pm, Ryan Roberts wrote:
> [...]
>
>>> +
>>> +int split_leaf_mapping(unsigned long addr)
>> Thanks for coming up with the code. It does help to understand your idea. Now I
>> see why you suggested "split_mapping(start); split_mapping(end);" model. It does
>> make the implementation easier because we don't need a loop anymore. But this
>> may have a couple of problems:
>>    1. We need walk the page table twice instead of once. It sounds expensive.
> Yes we need to walk twice. That may be more expensive or less expensive,
> depending on the size of the range that you are splitting. If the range is large
> then your approach loops through every leaf mapping between the start and end
> which will be more expensive than just doing 2 walks. If the range is small then
> your approach can avoid the second walk, but at the expense of all the extra
> loop overhead.
>
> My suggestion requires 5 loads (assuming the maximum of 5 levels of lookup).
> Personally I think this is probably acceptable? Perhaps we need some other
> voices here.

Hello all,

I am starting to implement vmalloc-huge by default with BBML2 no-abort on arm64.
I see that there is some disagreement related to the way the splitting needs to
be implemented - I skimmed through the discussions and it will require some work
to understand what is going on :) hopefully I'll be back soon to give some of
my opinions.

>
>



* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-07-23 17:38         ` Dev Jain
@ 2025-07-23 20:51           ` Yang Shi
  2025-07-24 11:43             ` Dev Jain
  0 siblings, 1 reply; 34+ messages in thread
From: Yang Shi @ 2025-07-23 20:51 UTC (permalink / raw)
  To: Dev Jain, Ryan Roberts, will, catalin.marinas, Miko.Lenczewski,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 7/23/25 10:38 AM, Dev Jain wrote:
>
> On 23/06/25 6:56 pm, Ryan Roberts wrote:
>> [...]
>>
>>>> +
>>>> +int split_leaf_mapping(unsigned long addr)
>>> Thanks for coming up with the code. It does help to understand your 
>>> idea. Now I
>>> see why you suggested "split_mapping(start); split_mapping(end);" 
>>> model. It does
>>> make the implementation easier because we don't need a loop anymore. 
>>> But this
>>> may have a couple of problems:
>>>    1. We need walk the page table twice instead of once. It sounds 
>>> expensive.
>> Yes we need to walk twice. That may be more expensive or less expensive,
>> depending on the size of the range that you are splitting. If the 
>> range is large
>> then your approach loops through every leaf mapping between the start 
>> and end
>> which will be more expensive than just doing 2 walks. If the range is 
>> small then
>> your approach can avoid the second walk, but at the expense of all 
>> the extra
>> loop overhead.
>>
>> My suggestion requires 5 loads (assuming the maximum of 5 levels of 
>> lookup).
>> Personally I think this is probably acceptable? Perhaps we need some 
>> other
>> voices here.
>
> Hello all,
>
> I am starting to implement vmalloc-huge by default with BBML2 no-abort 
> on arm64.
> I see that there is some disagreement related to the way the splitting 
> needs to
> be implemented - I skimmed through the discussions and it will require 
> some work
> to understand what is going on :) hopefully I'll be back soon to give 
> some of
> my opinions.

Hi Dev,

Thanks for the heads up.

In the last email I suggested skipping the leaf mappings that are fully 
contained in the split range in order to reduce page table walk overhead 
for split_mapping(start, end). This way we can achieve:
     - reuse most of the split code for repainting (repainting just needs 
       the NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS flags to split the page 
       table down to PTEs)
     - walk the page table only once
     - similar page table walk overhead to 
       split_mapping(start)/split_mapping(end) when the split range is large

I'm basically done with a new spin that implements this and addresses all 
the review comments on v4. I should be able to post it by the end of this 
week.
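
Illustrative only, the two shapes being compared (names follow Ryan's 
earlier sketch and the range-based entry point in the new spin):

	/* walk twice, once per boundary */
	split_leaf_mapping(start);
	split_leaf_mapping(end);

	/* walk once, skipping leaves fully contained in [start, end) */
	split_kernel_pgtable_mapping(start, end);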

Regards,
Yang

>
>>
>>




* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-07-23 20:51           ` Yang Shi
@ 2025-07-24 11:43             ` Dev Jain
  2025-07-24 17:59               ` Yang Shi
  0 siblings, 1 reply; 34+ messages in thread
From: Dev Jain @ 2025-07-24 11:43 UTC (permalink / raw)
  To: Yang Shi, Ryan Roberts, will, catalin.marinas, Miko.Lenczewski,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel


On 24/07/25 2:21 am, Yang Shi wrote:
>
>
> On 7/23/25 10:38 AM, Dev Jain wrote:
>>
>> On 23/06/25 6:56 pm, Ryan Roberts wrote:
>>> [...]
>>>
>>>>> +
>>>>> +int split_leaf_mapping(unsigned long addr)
>>>> Thanks for coming up with the code. It does help to understand your 
>>>> idea. Now I
>>>> see why you suggested "split_mapping(start); split_mapping(end);" 
>>>> model. It does
>>>> make the implementation easier because we don't need a loop 
>>>> anymore. But this
>>>> may have a couple of problems:
>>>>    1. We need walk the page table twice instead of once. It sounds 
>>>> expensive.
>>> Yes we need to walk twice. That may be more expensive or less 
>>> expensive,
>>> depending on the size of the range that you are splitting. If the 
>>> range is large
>>> then your approach loops through every leaf mapping between the 
>>> start and end
>>> which will be more expensive than just doing 2 walks. If the range 
>>> is small then
>>> your approach can avoid the second walk, but at the expense of all 
>>> the extra
>>> loop overhead.
>>>
>>> My suggestion requires 5 loads (assuming the maximum of 5 levels of 
>>> lookup).
>>> Personally I think this is probably acceptable? Perhaps we need some 
>>> other
>>> voices here.
>>
>> Hello all,
>>
>> I am starting to implement vmalloc-huge by default with BBML2 
>> no-abort on arm64.
>> I see that there is some disagreement related to the way the 
>> splitting needs to
>> be implemented - I skimmed through the discussions and it will 
>> require some work
>> to understand what is going on :) hopefully I'll be back soon to give 
>> some of
>> my opinions.
>
> Hi Dev,
>
> Thanks for the heads up.
>
> In the last email I suggested skip the leaf mappings in the split 
> range in order to reduce page table walk overhead for 
> split_mapping(start, end). In this way we can achieve:
>     - reuse the most split code for repainting (just need 
> NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS flag for repainting to split page 
> table to PTEs)
>     - just walk page table once
>     - have similar page table walk overhead with 
> split_mapping(start)/split_mapping(end) if the split range is large
>
> I'm basically done on a new spin to implement it and solve all the 
> review comments from v4. I should be able to post the new spin by the 
> end of this week.

Great! As Catalin notes on my huge-perm change series, that series 
doesn't have any user so it does not make sense for that to go in 
without your

series - can you merge that series into your work for the new version?


>
> Regards,
> Yang
>
>>
>>>
>>>
>



* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-07-24 11:43             ` Dev Jain
@ 2025-07-24 17:59               ` Yang Shi
  0 siblings, 0 replies; 34+ messages in thread
From: Yang Shi @ 2025-07-24 17:59 UTC (permalink / raw)
  To: Dev Jain, Ryan Roberts, will, catalin.marinas, Miko.Lenczewski,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 7/24/25 4:43 AM, Dev Jain wrote:
>
> On 24/07/25 2:21 am, Yang Shi wrote:
>>
>>
>> On 7/23/25 10:38 AM, Dev Jain wrote:
>>>
>>> On 23/06/25 6:56 pm, Ryan Roberts wrote:
>>>> [...]
>>>>
>>>>>> +
>>>>>> +int split_leaf_mapping(unsigned long addr)
>>>>> Thanks for coming up with the code. It does help to understand 
>>>>> your idea. Now I
>>>>> see why you suggested "split_mapping(start); split_mapping(end);" 
>>>>> model. It does
>>>>> make the implementation easier because we don't need a loop 
>>>>> anymore. But this
>>>>> may have a couple of problems:
>>>>>    1. We need walk the page table twice instead of once. It sounds 
>>>>> expensive.
>>>> Yes we need to walk twice. That may be more expensive or less 
>>>> expensive,
>>>> depending on the size of the range that you are splitting. If the 
>>>> range is large
>>>> then your approach loops through every leaf mapping between the 
>>>> start and end
>>>> which will be more expensive than just doing 2 walks. If the range 
>>>> is small then
>>>> your approach can avoid the second walk, but at the expense of all 
>>>> the extra
>>>> loop overhead.
>>>>
>>>> My suggestion requires 5 loads (assuming the maximum of 5 levels of 
>>>> lookup).
>>>> Personally I think this is probably acceptable? Perhaps we need 
>>>> some other
>>>> voices here.
>>>
>>> Hello all,
>>>
>>> I am starting to implement vmalloc-huge by default with BBML2 
>>> no-abort on arm64.
>>> I see that there is some disagreement related to the way the 
>>> splitting needs to
>>> be implemented - I skimmed through the discussions and it will 
>>> require some work
>>> to understand what is going on :) hopefully I'll be back soon to 
>>> give some of
>>> my opinions.
>>
>> Hi Dev,
>>
>> Thanks for the heads up.
>>
>> In the last email I suggested skip the leaf mappings in the split 
>> range in order to reduce page table walk overhead for 
>> split_mapping(start, end). In this way we can achieve:
>>     - reuse the most split code for repainting (just need 
>> NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS flag for repainting to split 
>> page table to PTEs)
>>     - just walk page table once
>>     - have similar page table walk overhead with 
>> split_mapping(start)/split_mapping(end) if the split range is large
>>
>> I'm basically done on a new spin to implement it and solve all the 
>> review comments from v4. I should be able to post the new spin by the 
>> end of this week.
>
> Great! As Catalin notes on my huge-perm change series, that series 
> doesn't have any user so it does not make sense for that to go in 
> without your
>
> series - can you merge that series into your work for the new version?

Yeah, sure. But I saw Andrew had some nits on the generic mm code part. 
Alternatively, your patches could be applied on top of my series so that 
we don't block each other. Anyway, I will still keep your series as a 
prerequisite of mine for now.

Yang

>
>
>>
>> Regards,
>> Yang
>>
>>>
>>>>
>>>>
>>




* [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-07-24 22:11 [v5 " Yang Shi
@ 2025-07-24 22:11 ` Yang Shi
  2025-07-29 12:34   ` Dev Jain
  2025-08-01 14:35   ` Ryan Roberts
  0 siblings, 2 replies; 34+ messages in thread
From: Yang Shi @ 2025-07-24 22:11 UTC (permalink / raw)
  To: ryan.roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
	dev.jain, scott, cl
  Cc: yang, linux-arm-kernel, linux-kernel

When rodata=full is specified, the kernel linear mapping has to be mapped
at PTE level since large block mappings can't be split due to the
break-before-make rule on ARM64.

This results in a couple of problems:
  - performance degradation
  - more TLB pressure
  - memory waste for kernel page table

With FEAT_BBM level 2 support, splitting a large block mapping into
smaller ones no longer requires invalidating the page table entry.  This
allows the kernel to split large block mappings on the fly.

Add kernel page table split support and use large block mappings by
default when FEAT_BBM level 2 is supported for rodata=full.  When
changing permissions for the kernel linear mapping, the page table will
be split to a smaller size.

Machines without FEAT_BBM level 2 will fall back to a PTE-mapped kernel
linear mapping when rodata=full.

With this we saw a significant performance boost in some benchmarks and
much less memory consumption on my AmpereOne machine (192 cores, 1P) with
256GB memory.

* Memory use after boot
Before:
MemTotal:       258988984 kB
MemFree:        254821700 kB

After:
MemTotal:       259505132 kB
MemFree:        255410264 kB

Around 500MB more memory is free to use.  The larger the machine, the
more memory is saved.
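
For reference, that figure roughly matches the last-level page table
overhead avoided with 4K pages: a PTE table is 4KB and maps 2MB, so
PTE-mapping 256GB of linear map needs 256GB / 2MB = 131072 tables, i.e.
131072 * 4KB = 512MB of page tables.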

* Memcached
We saw performance degradation when running the Memcached benchmark with
rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
With this patchset, ops/sec increased by around 3.5% and P99 latency
dropped by around 9.6%.
The gain mainly came from reduced kernel TLB misses: the kernel TLB
MPKI is reduced by 28.5%.

The benchmark data is now on par with rodata=on too.

* Disk encryption (dm-crypt) benchmark
Ran the fio benchmark with the command below on a 128G ramdisk (ext4)
with disk encryption (dm-crypt).
fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
    --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
    --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
    --name=iops-test-job --eta-newline=1 --size 100G

IOPS increased by 90% - 150% (the variance is high, but the worst result
of the good case is around 90% higher than the best result of the bad
case).  Bandwidth increased and the average completion latency (clat)
dropped proportionally.

* Sequential file read
Read a 100G file sequentially on XFS (xfs_io read with the page cache
populated).  Bandwidth increased by 150%.

Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
 arch/arm64/include/asm/cpufeature.h |  34 ++++
 arch/arm64/include/asm/mmu.h        |   1 +
 arch/arm64/include/asm/pgtable.h    |   5 +
 arch/arm64/kernel/cpufeature.c      |  31 +--
 arch/arm64/mm/mmu.c                 | 293 +++++++++++++++++++++++++++-
 arch/arm64/mm/pageattr.c            |   4 +
 6 files changed, 333 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index bf13d676aae2..d0d394cc837d 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -871,6 +871,40 @@ static inline bool system_supports_pmuv3(void)
 	return cpus_have_final_cap(ARM64_HAS_PMUV3);
 }
 
+static inline bool bbml2_noabort_available(void)
+{
+	/*
+	 * We want to allow usage of BBML2 in as wide a range of kernel contexts
+	 * as possible. This list is therefore an allow-list of known-good
+	 * implementations that both support BBML2 and additionally, fulfill the
+	 * extra constraint of never generating TLB conflict aborts when using
+	 * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
+	 * kernel contexts difficult to prove safe against recursive aborts).
+	 *
+	 * Note that implementations can only be considered "known-good" if their
+	 * implementors attest to the fact that the implementation never raises
+	 * TLB conflict aborts for BBML2 mapping granularity changes.
+	 */
+	static const struct midr_range supports_bbml2_noabort_list[] = {
+		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
+		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
+		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
+		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
+		{}
+	};
+
+	/* Does our cpu guarantee to never raise TLB conflict aborts? */
+	if (!is_midr_in_range_list(supports_bbml2_noabort_list))
+		return false;
+
+	/*
+	 * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
+	 * about whether the MIDR check passes.
+	 */
+
+	return true;
+}
+
 static inline bool system_supports_bbml2_noabort(void)
 {
 	return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 6e8aa8e72601..57f4b25e6f33 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 			       pgprot_t prot, bool page_mappings_only);
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
 extern void mark_linear_text_alias_ro(void);
+extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);
 
 /*
  * This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index ba63c8736666..ad2a6a7e86b0 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -371,6 +371,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
 	return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
 }
 
+static inline pmd_t pmd_mknoncont(pmd_t pmd)
+{
+	return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
+}
+
 #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
 static inline int pte_uffd_wp(pte_t pte)
 {
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d2a8a509a58e..1c96016a7a41 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2215,36 +2215,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
 
 static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
 {
-	/*
-	 * We want to allow usage of BBML2 in as wide a range of kernel contexts
-	 * as possible. This list is therefore an allow-list of known-good
-	 * implementations that both support BBML2 and additionally, fulfill the
-	 * extra constraint of never generating TLB conflict aborts when using
-	 * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
-	 * kernel contexts difficult to prove safe against recursive aborts).
-	 *
-	 * Note that implementations can only be considered "known-good" if their
-	 * implementors attest to the fact that the implementation never raises
-	 * TLB conflict aborts for BBML2 mapping granularity changes.
-	 */
-	static const struct midr_range supports_bbml2_noabort_list[] = {
-		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
-		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
-		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
-		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
-		{}
-	};
-
-	/* Does our cpu guarantee to never raise TLB conflict aborts? */
-	if (!is_midr_in_range_list(supports_bbml2_noabort_list))
-		return false;
-
-	/*
-	 * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
-	 * about whether the MIDR check passes.
-	 */
-
-	return true;
+	return bbml2_noabort_available();
 }
 
 #ifdef CONFIG_ARM64_PAN
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 3d5fb37424ab..f63b39613571 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -480,6 +480,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
 			     int flags);
 #endif
 
+#define INVALID_PHYS_ADDR	-1
+
 static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
 				       enum pgtable_type pgtable_type)
 {
@@ -487,7 +489,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
 	struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
 	phys_addr_t pa;
 
-	BUG_ON(!ptdesc);
+	if (!ptdesc)
+		return INVALID_PHYS_ADDR;
+
 	pa = page_to_phys(ptdesc_page(ptdesc));
 
 	switch (pgtable_type) {
@@ -509,15 +513,29 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
 }
 
 static phys_addr_t __maybe_unused
-pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+split_pgtable_alloc(enum pgtable_type pgtable_type)
 {
 	return __pgd_pgtable_alloc(&init_mm, pgtable_type);
 }
 
+static phys_addr_t __maybe_unused
+pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+{
+	phys_addr_t pa;
+
+	pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
+	BUG_ON(pa == INVALID_PHYS_ADDR);
+	return pa;
+}
+
 static phys_addr_t
 pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
 {
-	return __pgd_pgtable_alloc(NULL, pgtable_type);
+	phys_addr_t pa;
+
+	pa = __pgd_pgtable_alloc(NULL, pgtable_type);
+	BUG_ON(pa == INVALID_PHYS_ADDR);
+	return pa;
 }
 
 /*
@@ -552,6 +570,254 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 			     pgd_pgtable_alloc_special_mm, flags);
 }
 
+static DEFINE_MUTEX(pgtable_split_lock);
+
+static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
+{
+	pte_t *ptep;
+	unsigned long next;
+	unsigned int nr;
+	unsigned long span;
+
+	ptep = pte_offset_kernel(pmdp, addr);
+
+	do {
+		pte_t *_ptep;
+
+		nr = 0;
+		next = pte_cont_addr_end(addr, end);
+		if (next < end)
+			nr = max(nr, ((end - next) / CONT_PTE_SIZE));
+		span = nr * CONT_PTE_SIZE;
+
+		_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
+		ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
+
+		if (((addr | next) & ~CONT_PTE_MASK) == 0)
+			continue;
+
+		if (!pte_cont(__ptep_get(_ptep)))
+			continue;
+
+		for (int i = 0; i < CONT_PTES; i++, _ptep++)
+			__set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
+	} while (addr = next + span, addr != end);
+
+	return 0;
+}
+
+static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
+{
+	unsigned long next;
+	unsigned int nr;
+	unsigned long span;
+	int ret = 0;
+
+	do {
+		pmd_t pmd;
+
+		nr = 1;
+		next = pmd_addr_end(addr, end);
+		if (next < end)
+			nr = max(nr, ((end - next) / PMD_SIZE));
+		span = (nr - 1) * PMD_SIZE;
+
+		if (((addr | next) & ~PMD_MASK) == 0)
+			continue;
+
+		pmd = pmdp_get(pmdp);
+		if (pmd_leaf(pmd)) {
+			phys_addr_t pte_phys;
+			pte_t *ptep;
+			pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN |
+					  PMD_TABLE_AF;
+			unsigned long pfn = pmd_pfn(pmd);
+			pgprot_t prot = pmd_pgprot(pmd);
+
+			pte_phys = split_pgtable_alloc(TABLE_PTE);
+			if (pte_phys == INVALID_PHYS_ADDR)
+				return -ENOMEM;
+
+			if (pgprot_val(prot) & PMD_SECT_PXN)
+				pmdval |= PMD_TABLE_PXN;
+
+			prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
+					PTE_TYPE_PAGE);
+			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+			ptep = (pte_t *)phys_to_virt(pte_phys);
+			for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
+				__set_pte(ptep, pfn_pte(pfn, prot));
+
+			dsb(ishst);
+
+			__pmd_populate(pmdp, pte_phys, pmdval);
+		}
+
+		ret = split_cont_pte(pmdp, addr, next);
+		if (ret)
+			break;
+	} while (pmdp += nr, addr = next + span, addr != end);
+
+	return ret;
+}
+
+static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
+{
+	pmd_t *pmdp;
+	unsigned long next;
+	unsigned int nr;
+	unsigned long span;
+	int ret = 0;
+
+	pmdp = pmd_offset(pudp, addr);
+
+	do {
+		pmd_t *_pmdp;
+
+		nr = 0;
+		next = pmd_cont_addr_end(addr, end);
+		if (next < end)
+			nr = max(nr, ((end - next) / CONT_PMD_SIZE));
+		span = nr * CONT_PMD_SIZE;
+
+		if (((addr | next) & ~CONT_PMD_MASK) == 0) {
+			pmdp += pmd_index(next) - pmd_index(addr) +
+				nr * CONT_PMDS;
+			continue;
+		}
+
+		_pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
+		if (!pmd_cont(pmdp_get(_pmdp)))
+			goto split;
+
+		for (int i = 0; i < CONT_PMDS; i++, _pmdp++)
+			set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
+
+split:
+		ret = split_pmd(pmdp, addr, next);
+		if (ret)
+			break;
+
+		pmdp += pmd_index(next) - pmd_index(addr) + nr * CONT_PMDS;
+	} while (addr = next + span, addr != end);
+
+	return ret;
+}
+
+static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
+{
+	pud_t *pudp;
+	unsigned long next;
+	unsigned int nr;
+	unsigned long span;
+	int ret = 0;
+
+	pudp = pud_offset(p4dp, addr);
+
+	do {
+		pud_t pud;
+
+		nr = 1;
+		next = pud_addr_end(addr, end);
+		if (next < end)
+			nr = max(nr, ((end - next) / PUD_SIZE));
+		span = (nr - 1) * PUD_SIZE;
+
+		if (((addr | next) & ~PUD_MASK) == 0)
+			continue;
+
+		pud = pudp_get(pudp);
+		if (pud_leaf(pud)) {
+			phys_addr_t pmd_phys;
+			pmd_t *pmdp;
+			pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN |
+					  PUD_TABLE_AF;
+			unsigned long pfn = pud_pfn(pud);
+			pgprot_t prot = pud_pgprot(pud);
+			unsigned int step = PMD_SIZE >> PAGE_SHIFT;
+
+			pmd_phys = split_pgtable_alloc(TABLE_PMD);
+			if (pmd_phys == INVALID_PHYS_ADDR)
+				return -ENOMEM;
+
+			if (pgprot_val(prot) & PMD_SECT_PXN)
+				pudval |= PUD_TABLE_PXN;
+
+			prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
+					PMD_TYPE_SECT);
+			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+			pmdp = (pmd_t *)phys_to_virt(pmd_phys);
+			for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
+				set_pmd(pmdp, pfn_pmd(pfn, prot));
+				pfn += step;
+			}
+
+			dsb(ishst);
+
+			__pud_populate(pudp, pmd_phys, pudval);
+		}
+
+		ret = split_cont_pmd(pudp, addr, next);
+		if (ret)
+			break;
+	} while (pudp += nr, addr = next + span, addr != end);
+
+	return ret;
+}
+
+static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
+{
+	p4d_t *p4dp;
+	unsigned long next;
+	int ret = 0;
+
+	p4dp = p4d_offset(pgdp, addr);
+
+	do {
+		next = p4d_addr_end(addr, end);
+
+		ret = split_pud(p4dp, addr, next);
+		if (ret)
+			break;
+	} while (p4dp++, addr = next, addr != end);
+
+	return ret;
+}
+
+static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
+{
+	unsigned long next;
+	int ret = 0;
+
+	do {
+		next = pgd_addr_end(addr, end);
+		ret = split_p4d(pgdp, addr, next);
+		if (ret)
+			break;
+	} while (pgdp++, addr = next, addr != end);
+
+	return ret;
+}
+
+int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
+{
+	int ret;
+
+	if (!system_supports_bbml2_noabort())
+		return 0;
+
+	if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
+		return -EINVAL;
+
+	mutex_lock(&pgtable_split_lock);
+	arch_enter_lazy_mmu_mode();
+	ret = split_pgd(pgd_offset_k(start), start, end);
+	arch_leave_lazy_mmu_mode();
+	mutex_unlock(&pgtable_split_lock);
+
+	return ret;
+}
+
 static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 				phys_addr_t size, pgprot_t prot)
 {
@@ -639,6 +905,20 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
 
 #endif /* CONFIG_KFENCE */
 
+bool linear_map_requires_bbml2;
+
+static inline bool force_pte_mapping(void)
+{
+	/*
+	 * Can't use cpufeature API to determine whether BBM level 2
+	 * is supported or not since cpufeature have not been
+	 * finalized yet.
+	 */
+	return (!bbml2_noabort_available() && (rodata_full ||
+		arm64_kfence_can_set_direct_map() || is_realm_world())) ||
+		debug_pagealloc_enabled();
+}
+
 static void __init map_mem(pgd_t *pgdp)
 {
 	static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
@@ -664,7 +944,9 @@ static void __init map_mem(pgd_t *pgdp)
 
 	early_kfence_pool = arm64_kfence_alloc_pool();
 
-	if (can_set_direct_map())
+	linear_map_requires_bbml2 = !force_pte_mapping() && rodata_full;
+
+	if (force_pte_mapping())
 		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	/*
@@ -1366,7 +1648,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
 
 	VM_BUG_ON(!mhp_range_allowed(start, size, true));
 
-	if (can_set_direct_map())
+	if (force_pte_mapping() ||
+	    (linear_map_requires_bbml2 && !system_supports_bbml2_noabort()))
 		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index c6a85000fa0e..6566aa9d8abb 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -140,6 +140,10 @@ static int update_range_prot(unsigned long start, unsigned long size,
 	data.set_mask = set_mask;
 	data.clear_mask = clear_mask;
 
+	ret = split_kernel_pgtable_mapping(start, start + size);
+	if (WARN_ON_ONCE(ret))
+		return ret;
+
 	arch_enter_lazy_mmu_mode();
 
 	/*
-- 
2.50.0




* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-07-24 22:11 ` [PATCH 3/4] arm64: mm: support " Yang Shi
@ 2025-07-29 12:34   ` Dev Jain
  2025-08-05 21:28     ` Yang Shi
  2025-08-01 14:35   ` Ryan Roberts
  1 sibling, 1 reply; 34+ messages in thread
From: Dev Jain @ 2025-07-29 12:34 UTC (permalink / raw)
  To: Yang Shi, ryan.roberts, will, catalin.marinas, akpm,
	Miko.Lenczewski, scott, cl
  Cc: linux-arm-kernel, linux-kernel


On 25/07/25 3:41 am, Yang Shi wrote:
> [----- snip -----]
>   
>   #ifdef CONFIG_ARM64_PAN
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 3d5fb37424ab..f63b39613571 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -480,6 +480,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>   			     int flags);
>   #endif
>   
> +#define INVALID_PHYS_ADDR	-1
> +
>   static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>   				       enum pgtable_type pgtable_type)
>   {
> @@ -487,7 +489,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>   	struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
>   	phys_addr_t pa;
>   
> -	BUG_ON(!ptdesc);
> +	if (!ptdesc)
> +		return INVALID_PHYS_ADDR;
> +
>   	pa = page_to_phys(ptdesc_page(ptdesc));
>   
>   	switch (pgtable_type) {
> @@ -509,15 +513,29 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>   }
>   
>   static phys_addr_t __maybe_unused
> -pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +split_pgtable_alloc(enum pgtable_type pgtable_type)
>   {
>   	return __pgd_pgtable_alloc(&init_mm, pgtable_type);
>   }
>   
> +static phys_addr_t __maybe_unused
> +pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +{
> +	phys_addr_t pa;
> +
> +	pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
> +	BUG_ON(pa == INVALID_PHYS_ADDR);
> +	return pa;
> +}
> +
>   static phys_addr_t
>   pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
>   {
> -	return __pgd_pgtable_alloc(NULL, pgtable_type);
> +	phys_addr_t pa;
> +
> +	pa = __pgd_pgtable_alloc(NULL, pgtable_type);
> +	BUG_ON(pa == INVALID_PHYS_ADDR);
> +	return pa;
>   }
>   
>   /*
> @@ -552,6 +570,254 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>   			     pgd_pgtable_alloc_special_mm, flags);
>   }
>   
> +static DEFINE_MUTEX(pgtable_split_lock);

Thanks for taking a separate lock.

> +
> +static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
> +{
> +	pte_t *ptep;
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +
> +	ptep = pte_offset_kernel(pmdp, addr);
> +
> +	do {
> +		pte_t *_ptep;
> +
> +		nr = 0;
> +		next = pte_cont_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / CONT_PTE_SIZE));
> +		span = nr * CONT_PTE_SIZE;
> +
> +		_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
> +		ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
> +
> +		if (((addr | next) & ~CONT_PTE_MASK) == 0)
> +			continue;
> +
> +		if (!pte_cont(__ptep_get(_ptep)))
> +			continue;
> +
> +		for (int i = 0; i < CONT_PTES; i++, _ptep++)
> +			__set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
> +	} while (addr = next + span, addr != end);
> +
> +	return 0;
> +}
> +
> +static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
> +{
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +	int ret = 0;
> +
> +	do {
> +		pmd_t pmd;
> +
> +		nr = 1;
> +		next = pmd_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / PMD_SIZE));
> +		span = (nr - 1) * PMD_SIZE;
> +
> +		if (((addr | next) & ~PMD_MASK) == 0)
> +			continue;
> +
> +		pmd = pmdp_get(pmdp);
> +		if (pmd_leaf(pmd)) {
> +			phys_addr_t pte_phys;
> +			pte_t *ptep;
> +			pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN |
> +					  PMD_TABLE_AF;
> +			unsigned long pfn = pmd_pfn(pmd);
> +			pgprot_t prot = pmd_pgprot(pmd);
> +
> +			pte_phys = split_pgtable_alloc(TABLE_PTE);
> +			if (pte_phys == INVALID_PHYS_ADDR)
> +				return -ENOMEM;
> +
> +			if (pgprot_val(prot) & PMD_SECT_PXN)
> +				pmdval |= PMD_TABLE_PXN;
> +
> +			prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
> +					PTE_TYPE_PAGE);
> +			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +			ptep = (pte_t *)phys_to_virt(pte_phys);
> +			for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
> +				__set_pte(ptep, pfn_pte(pfn, prot));
> +
> +			dsb(ishst);
> +
> +			__pmd_populate(pmdp, pte_phys, pmdval);
> +		}
> +
> +		ret = split_cont_pte(pmdp, addr, next);
> +		if (ret)
> +			break;
> +	} while (pmdp += nr, addr = next + span, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
> +{
> +	pmd_t *pmdp;
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +	int ret = 0;
> +
> +	pmdp = pmd_offset(pudp, addr);
> +
> +	do {
> +		pmd_t *_pmdp;
> +
> +		nr = 0;
> +		next = pmd_cont_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / CONT_PMD_SIZE));
> +		span = nr * CONT_PMD_SIZE;
> +
> +		if (((addr | next) & ~CONT_PMD_MASK) == 0) {
> +			pmdp += pmd_index(next) - pmd_index(addr) +
> +				nr * CONT_PMDS;
> +			continue;
> +		}
> +
> +		_pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
> +		if (!pmd_cont(pmdp_get(_pmdp)))
> +			goto split;
> +
> +		for (int i = 0; i < CONT_PMDS; i++, _pmdp++)
> +			set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
> +
> +split:
> +		ret = split_pmd(pmdp, addr, next);
> +		if (ret)
> +			break;
> +
> +		pmdp += pmd_index(next) - pmd_index(addr) + nr * CONT_PMDS;
> +	} while (addr = next + span, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
> +{
> +	pud_t *pudp;
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +	int ret = 0;
> +
> +	pudp = pud_offset(p4dp, addr);
> +
> +	do {
> +		pud_t pud;
> +
> +		nr = 1;
> +		next = pud_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / PUD_SIZE));
> +		span = (nr - 1) * PUD_SIZE;
> +
> +		if (((addr | next) & ~PUD_MASK) == 0)
> +			continue;
> +
> +		pud = pudp_get(pudp);
> +		if (pud_leaf(pud)) {
> +			phys_addr_t pmd_phys;
> +			pmd_t *pmdp;
> +			pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN |
> +					  PUD_TABLE_AF;
> +			unsigned long pfn = pud_pfn(pud);
> +			pgprot_t prot = pud_pgprot(pud);
> +			unsigned int step = PMD_SIZE >> PAGE_SHIFT;
> +
> +			pmd_phys = split_pgtable_alloc(TABLE_PMD);
> +			if (pmd_phys == INVALID_PHYS_ADDR)
> +				return -ENOMEM;
> +
> +			if (pgprot_val(prot) & PMD_SECT_PXN)
> +				pudval |= PUD_TABLE_PXN;
> +
> +			prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
> +					PMD_TYPE_SECT);
> +			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +			pmdp = (pmd_t *)phys_to_virt(pmd_phys);
> +			for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
> +				set_pmd(pmdp, pfn_pmd(pfn, prot));
> +				pfn += step;
> +			}
> +
> +			dsb(ishst);
> +
> +			__pud_populate(pudp, pmd_phys, pudval);
> +		}
> +
> +		ret = split_cont_pmd(pudp, addr, next);
> +		if (ret)
> +			break;
> +	} while (pudp += nr, addr = next + span, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
> +{
> +	p4d_t *p4dp;
> +	unsigned long next;
> +	int ret = 0;
> +
> +	p4dp = p4d_offset(pgdp, addr);
> +
> +	do {
> +		next = p4d_addr_end(addr, end);
> +
> +		ret = split_pud(p4dp, addr, next);
> +		if (ret)
> +			break;
> +	} while (p4dp++, addr = next, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
> +{
> +	unsigned long next;
> +	int ret = 0;
> +
> +	do {
> +		next = pgd_addr_end(addr, end);
> +		ret = split_p4d(pgdp, addr, next);
> +		if (ret)
> +			break;
> +	} while (pgdp++, addr = next, addr != end);
> +
> +	return ret;
> +}
> +
> +int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
> +{
> +	int ret;
> +
> +	if (!system_supports_bbml2_noabort())
> +		return 0;
> +
> +	if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
> +		return -EINVAL;
> +
> +	mutex_lock(&pgtable_split_lock);
> +	arch_enter_lazy_mmu_mode();
> +	ret = split_pgd(pgd_offset_k(start), start, end);
> +	arch_leave_lazy_mmu_mode();
> +	mutex_unlock(&pgtable_split_lock);
> +
> +	return ret;
> +}
> +
>   	/*

--- snip ---

I'm afraid I'll have to agree with Ryan :) Looking at the signature of split_kernel_pgtable_mapping,
one would expect that this function splits all block mappings in this region. But that is just a
nit; it does not seem right to me that we are iterating over the entire space when we know *exactly* where
we have to make the split, just to save on pgd/p4d/pud loads - the effect of which is probably cancelled
out by unnecessary iterations your approach takes to skip the intermediate blocks.

If we are concerned that most change_memory_common() operations are for a single page, then
we can do something like:

unsigned long size = end - start;
bool start_split = false, end_split = false;

if (start not aligned to block mapping)
	start_split = split(start);

/*
 * split the end only if the start wasn't split, or
 * if it cannot be guaranteed that start and end lie
 * on the same contig block
 */
if (!start_split || (size > CONT_PTE_SIZE))
	end_split = split(end);






* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-07-24 22:11 ` [PATCH 3/4] arm64: mm: support " Yang Shi
  2025-07-29 12:34   ` Dev Jain
@ 2025-08-01 14:35   ` Ryan Roberts
  2025-08-04 10:07     ` Ryan Roberts
  2025-08-05 18:53     ` Yang Shi
  1 sibling, 2 replies; 34+ messages in thread
From: Ryan Roberts @ 2025-08-01 14:35 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel

On 24/07/2025 23:11, Yang Shi wrote:
> When rodata=full is specified, kernel linear mapping has to be mapped at
> PTE level since large page table can't be split due to break-before-make
> rule on ARM64.
> 
> This resulted in a couple of problems:
>   - performance degradation
>   - more TLB pressure
>   - memory waste for kernel page table
> 
> With FEAT_BBM level 2 support, splitting large block page table to
> smaller ones doesn't need to make the page table entry invalid anymore.
> This allows kernel split large block mapping on the fly.
> 
> Add kernel page table split support and use large block mapping by
> default when FEAT_BBM level 2 is supported for rodata=full.  When
> changing permissions for kernel linear mapping, the page table will be
> split to smaller size.
> 
> The machine without FEAT_BBM level 2 will fallback to have kernel linear
> mapping PTE-mapped when rodata=full.
> 
> With this we saw significant performance boost with some benchmarks and
> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
> 256GB memory.
> 
> * Memory use after boot
> Before:
> MemTotal:       258988984 kB
> MemFree:        254821700 kB
> 
> After:
> MemTotal:       259505132 kB
> MemFree:        255410264 kB
> 
> Around 500MB more memory are free to use.  The larger the machine, the
> more memory saved.
> 
> * Memcached
> We saw performance degradation when running Memcached benchmark with
> rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
> With this patchset we saw ops/sec is increased by around 3.5%, P99
> latency is reduced by around 9.6%.
> The gain mainly came from reduced kernel TLB misses.  The kernel TLB
> MPKI is reduced by 28.5%.
> 
> The benchmark data is now on par with rodata=on too.
> 
> * Disk encryption (dm-crypt) benchmark
> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
> encryption (by dm-crypt).
> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>     --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>     --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>     --name=iops-test-job --eta-newline=1 --size 100G
> 
> The IOPS is increased by 90% - 150% (the variance is high, but the worst
> number of good case is around 90% more than the best number of bad case).
> The bandwidth is increased and the avg clat is reduced proportionally.
> 
> * Sequential file read
> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
> The bandwidth is increased by 150%.
> 
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
>  arch/arm64/include/asm/cpufeature.h |  34 ++++
>  arch/arm64/include/asm/mmu.h        |   1 +
>  arch/arm64/include/asm/pgtable.h    |   5 +
>  arch/arm64/kernel/cpufeature.c      |  31 +--
>  arch/arm64/mm/mmu.c                 | 293 +++++++++++++++++++++++++++-
>  arch/arm64/mm/pageattr.c            |   4 +
>  6 files changed, 333 insertions(+), 35 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index bf13d676aae2..d0d394cc837d 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -871,6 +871,40 @@ static inline bool system_supports_pmuv3(void)
>  	return cpus_have_final_cap(ARM64_HAS_PMUV3);
>  }
>  
> +static inline bool bbml2_noabort_available(void)
> +{
> +	/*
> +	 * We want to allow usage of BBML2 in as wide a range of kernel contexts
> +	 * as possible. This list is therefore an allow-list of known-good
> +	 * implementations that both support BBML2 and additionally, fulfill the
> +	 * extra constraint of never generating TLB conflict aborts when using
> +	 * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
> +	 * kernel contexts difficult to prove safe against recursive aborts).
> +	 *
> +	 * Note that implementations can only be considered "known-good" if their
> +	 * implementors attest to the fact that the implementation never raises
> +	 * TLB conflict aborts for BBML2 mapping granularity changes.
> +	 */
> +	static const struct midr_range supports_bbml2_noabort_list[] = {
> +		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
> +		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
> +		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
> +		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
> +		{}
> +	};
> +
> +	/* Does our cpu guarantee to never raise TLB conflict aborts? */
> +	if (!is_midr_in_range_list(supports_bbml2_noabort_list))
> +		return false;
> +
> +	/*
> +	 * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
> +	 * about whether the MIDR check passes.
> +	 */
> +
> +	return true;
> +}

I don't think this function should be inline. Won't we end up duplicating the
midr list everywhere? Suggest moving back to cpufeature.c.
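
I.e. something along these lines (sketch only; the allow-list stays
private to cpufeature.c and only the query is exposed out-of-line):

	/* arch/arm64/include/asm/cpufeature.h */
	bool bbml2_noabort_available(void);

	/* arch/arm64/kernel/cpufeature.c */
	bool bbml2_noabort_available(void)
	{
		static const struct midr_range supports_bbml2_noabort_list[] = {
			MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
			MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
			MIDR_ALL_VERSIONS(MIDR_AMPERE1),
			MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
			{}
		};

		/* Does our cpu guarantee to never raise TLB conflict aborts? */
		return is_midr_in_range_list(supports_bbml2_noabort_list);
	}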

> +
>  static inline bool system_supports_bbml2_noabort(void)
>  {
>  	return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 6e8aa8e72601..57f4b25e6f33 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>  			       pgprot_t prot, bool page_mappings_only);
>  extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>  extern void mark_linear_text_alias_ro(void);
> +extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);
>  
>  /*
>   * This check is triggered during the early boot before the cpufeature
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index ba63c8736666..ad2a6a7e86b0 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -371,6 +371,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
>  	return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
>  }
>  
> +static inline pmd_t pmd_mknoncont(pmd_t pmd)
> +{
> +	return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
> +}
> +
>  #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
>  static inline int pte_uffd_wp(pte_t pte)
>  {
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index d2a8a509a58e..1c96016a7a41 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2215,36 +2215,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
>  
>  static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
>  {
> -	/*
> -	 * We want to allow usage of BBML2 in as wide a range of kernel contexts
> -	 * as possible. This list is therefore an allow-list of known-good
> -	 * implementations that both support BBML2 and additionally, fulfill the
> -	 * extra constraint of never generating TLB conflict aborts when using
> -	 * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
> -	 * kernel contexts difficult to prove safe against recursive aborts).
> -	 *
> -	 * Note that implementations can only be considered "known-good" if their
> -	 * implementors attest to the fact that the implementation never raises
> -	 * TLB conflict aborts for BBML2 mapping granularity changes.
> -	 */
> -	static const struct midr_range supports_bbml2_noabort_list[] = {
> -		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
> -		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
> -		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
> -		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
> -		{}
> -	};
> -
> -	/* Does our cpu guarantee to never raise TLB conflict aborts? */
> -	if (!is_midr_in_range_list(supports_bbml2_noabort_list))
> -		return false;
> -
> -	/*
> -	 * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
> -	 * about whether the MIDR check passes.
> -	 */
> -
> -	return true;
> +	return bbml2_noabort_available();
>  }
>  
>  #ifdef CONFIG_ARM64_PAN
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 3d5fb37424ab..f63b39613571 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -480,6 +480,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>  			     int flags);
>  #endif
>  
> +#define INVALID_PHYS_ADDR	-1
> +
>  static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>  				       enum pgtable_type pgtable_type)
>  {
> @@ -487,7 +489,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>  	struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
>  	phys_addr_t pa;
>  
> -	BUG_ON(!ptdesc);
> +	if (!ptdesc)
> +		return INVALID_PHYS_ADDR;
> +
>  	pa = page_to_phys(ptdesc_page(ptdesc));
>  
>  	switch (pgtable_type) {
> @@ -509,15 +513,29 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>  }
>  
>  static phys_addr_t __maybe_unused
> -pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +split_pgtable_alloc(enum pgtable_type pgtable_type)
>  {
>  	return __pgd_pgtable_alloc(&init_mm, pgtable_type);
>  }
>  
> +static phys_addr_t __maybe_unused
> +pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +{
> +	phys_addr_t pa;
> +
> +	pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
> +	BUG_ON(pa == INVALID_PHYS_ADDR);
> +	return pa;
> +}
> +
>  static phys_addr_t
>  pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
>  {
> -	return __pgd_pgtable_alloc(NULL, pgtable_type);
> +	phys_addr_t pa;
> +
> +	pa = __pgd_pgtable_alloc(NULL, pgtable_type);
> +	BUG_ON(pa == INVALID_PHYS_ADDR);
> +	return pa;
>  }

The allocation all looks clean to me now. Thanks.

>  
>  /*
> @@ -552,6 +570,254 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>  			     pgd_pgtable_alloc_special_mm, flags);
>  }
>  
> +static DEFINE_MUTEX(pgtable_split_lock);
> +
> +static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
> +{
> +	pte_t *ptep;
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +
> +	ptep = pte_offset_kernel(pmdp, addr);
> +
> +	do {
> +		pte_t *_ptep;
> +
> +		nr = 0;
> +		next = pte_cont_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / CONT_PTE_SIZE));
> +		span = nr * CONT_PTE_SIZE;
> +
> +		_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
> +		ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
> +
> +		if (((addr | next) & ~CONT_PTE_MASK) == 0)
> +			continue;
> +
> +		if (!pte_cont(__ptep_get(_ptep)))
> +			continue;
> +
> +		for (int i = 0; i < CONT_PTES; i++, _ptep++)
> +			__set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
> +	} while (addr = next + span, addr != end);
> +
> +	return 0;
> +}
> +
> +static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
> +{
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +	int ret = 0;
> +
> +	do {
> +		pmd_t pmd;
> +
> +		nr = 1;
> +		next = pmd_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / PMD_SIZE));
> +		span = (nr - 1) * PMD_SIZE;
> +
> +		if (((addr | next) & ~PMD_MASK) == 0)
> +			continue;
> +
> +		pmd = pmdp_get(pmdp);
> +		if (pmd_leaf(pmd)) {
> +			phys_addr_t pte_phys;
> +			pte_t *ptep;
> +			pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN |
> +					  PMD_TABLE_AF;
> +			unsigned long pfn = pmd_pfn(pmd);
> +			pgprot_t prot = pmd_pgprot(pmd);
> +
> +			pte_phys = split_pgtable_alloc(TABLE_PTE);
> +			if (pte_phys == INVALID_PHYS_ADDR)
> +				return -ENOMEM;
> +
> +			if (pgprot_val(prot) & PMD_SECT_PXN)
> +				pmdval |= PMD_TABLE_PXN;
> +
> +			prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
> +					PTE_TYPE_PAGE);
> +			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +			ptep = (pte_t *)phys_to_virt(pte_phys);
> +			for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
> +				__set_pte(ptep, pfn_pte(pfn, prot));
> +
> +			dsb(ishst);
> +
> +			__pmd_populate(pmdp, pte_phys, pmdval);
> +		}
> +
> +		ret = split_cont_pte(pmdp, addr, next);
> +		if (ret)
> +			break;
> +	} while (pmdp += nr, addr = next + span, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
> +{
> +	pmd_t *pmdp;
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +	int ret = 0;
> +
> +	pmdp = pmd_offset(pudp, addr);
> +
> +	do {
> +		pmd_t *_pmdp;
> +
> +		nr = 0;
> +		next = pmd_cont_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / CONT_PMD_SIZE));
> +		span = nr * CONT_PMD_SIZE;
> +
> +		if (((addr | next) & ~CONT_PMD_MASK) == 0) {
> +			pmdp += pmd_index(next) - pmd_index(addr) +
> +				nr * CONT_PMDS;
> +			continue;
> +		}
> +
> +		_pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
> +		if (!pmd_cont(pmdp_get(_pmdp)))
> +			goto split;
> +
> +		for (int i = 0; i < CONT_PMDS; i++, _pmdp++)
> +			set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
> +
> +split:
> +		ret = split_pmd(pmdp, addr, next);
> +		if (ret)
> +			break;
> +
> +		pmdp += pmd_index(next) - pmd_index(addr) + nr * CONT_PMDS;
> +	} while (addr = next + span, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
> +{
> +	pud_t *pudp;
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +	int ret = 0;
> +
> +	pudp = pud_offset(p4dp, addr);
> +
> +	do {
> +		pud_t pud;
> +
> +		nr = 1;
> +		next = pud_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / PUD_SIZE));
> +		span = (nr - 1) * PUD_SIZE;
> +
> +		if (((addr | next) & ~PUD_MASK) == 0)
> +			continue;
> +
> +		pud = pudp_get(pudp);
> +		if (pud_leaf(pud)) {
> +			phys_addr_t pmd_phys;
> +			pmd_t *pmdp;
> +			pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN |
> +					  PUD_TABLE_AF;
> +			unsigned long pfn = pud_pfn(pud);
> +			pgprot_t prot = pud_pgprot(pud);
> +			unsigned int step = PMD_SIZE >> PAGE_SHIFT;
> +
> +			pmd_phys = split_pgtable_alloc(TABLE_PMD);
> +			if (pmd_phys == INVALID_PHYS_ADDR)
> +				return -ENOMEM;
> +
> +			if (pgprot_val(prot) & PMD_SECT_PXN)
> +				pudval |= PUD_TABLE_PXN;
> +
> +			prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
> +					PMD_TYPE_SECT);
> +			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +			pmdp = (pmd_t *)phys_to_virt(pmd_phys);
> +			for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
> +				set_pmd(pmdp, pfn_pmd(pfn, prot));
> +				pfn += step;
> +			}
> +
> +			dsb(ishst);
> +
> +			__pud_populate(pudp, pmd_phys, pudval);
> +		}
> +
> +		ret = split_cont_pmd(pudp, addr, next);
> +		if (ret)
> +			break;
> +	} while (pudp += nr, addr = next + span, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
> +{
> +	p4d_t *p4dp;
> +	unsigned long next;
> +	int ret = 0;
> +
> +	p4dp = p4d_offset(pgdp, addr);
> +
> +	do {
> +		next = p4d_addr_end(addr, end);
> +
> +		ret = split_pud(p4dp, addr, next);
> +		if (ret)
> +			break;
> +	} while (p4dp++, addr = next, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
> +{
> +	unsigned long next;
> +	int ret = 0;
> +
> +	do {
> +		next = pgd_addr_end(addr, end);
> +		ret = split_p4d(pgdp, addr, next);
> +		if (ret)
> +			break;
> +	} while (pgdp++, addr = next, addr != end);
> +
> +	return ret;
> +}
> +
> +int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
> +{
> +	int ret;
> +
> +	if (!system_supports_bbml2_noabort())
> +		return 0;
> +
> +	if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
> +		return -EINVAL;
> +
> +	mutex_lock(&pgtable_split_lock);

On reflection, I agree this lock approach is simpler than my suggestion to do it
locklessly and I doubt this will become a bottleneck. Given x86 does it this
way, I guess it's fine.

> +	arch_enter_lazy_mmu_mode();
> +	ret = split_pgd(pgd_offset_k(start), start, end);

My instinct still remains that it would be better not to iterate over the range
here, but instead call a "split(start); split(end);" since we just want to split
the start and end. So the code would be simpler and probably more performant if
we get rid of all the iteration.
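
Roughly, with the single-address split_leaf_mapping() from the earlier
sketch (illustrative only):

	ret = split_leaf_mapping(start);
	if (!ret)
		ret = split_leaf_mapping(end);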

But I'm guessing you are going to enhance this infra in the next patch to
support splitting all entries in the range for the system-doesn't-support-bbml case?

Anyway, I'll take a look at the next patch then come back to review the details
of split_pgd().

> +	arch_leave_lazy_mmu_mode();
> +	mutex_unlock(&pgtable_split_lock);
> +
> +	return ret;
> +}
> +
>  static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
>  				phys_addr_t size, pgprot_t prot)
>  {
> @@ -639,6 +905,20 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>  
>  #endif /* CONFIG_KFENCE */
>  
> +bool linear_map_requires_bbml2;

Until the next patch, you're only using this variable in this file, so at the
very least, it should be static for now. But I'm proposing below it should be
removed entirely.

> +
> +static inline bool force_pte_mapping(void)
> +{
> +	/*
> +	 * Can't use cpufeature API to determine whether BBM level 2
> +	 * is supported or not since cpufeature have not been
> +	 * finalized yet.
> +	 */
> +	return (!bbml2_noabort_available() && (rodata_full ||
> +		arm64_kfence_can_set_direct_map() || is_realm_world())) ||
> +		debug_pagealloc_enabled();
> +}
> +
>  static void __init map_mem(pgd_t *pgdp)
>  {
>  	static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
> @@ -664,7 +944,9 @@ static void __init map_mem(pgd_t *pgdp)
>  
>  	early_kfence_pool = arm64_kfence_alloc_pool();
>  
> -	if (can_set_direct_map())
> +	linear_map_requires_bbml2 = !force_pte_mapping() && rodata_full;

This looks wrong; what about the kfence_can_set_direct_map and is_realm_world
conditions?

perhaps:

	linear_map_requires_bbml2 = !force_pte_mapping() && can_set_direct_map()

> +
> +	if (force_pte_mapping())
>  		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>  
>  	/*
> @@ -1366,7 +1648,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
>  
>  	VM_BUG_ON(!mhp_range_allowed(start, size, true));
>  
> -	if (can_set_direct_map())
> +	if (force_pte_mapping() ||
> +	    (linear_map_requires_bbml2 && !system_supports_bbml2_noabort()))

So force_pte_mapping() isn't actually returning what it sounds like it is; it's
returning whether you would have to force pte mapping based on the current cpu's
support for bbml2. Perhaps it would be better to implement force_pte_mapping()  as:

static inline bool force_pte_mapping(void)
{
	bool bbml2 = (system_capabilities_finalized() &&
			system_supports_bbml2_noabort()) ||
			bbml2_noabort_available();

	return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
			   is_realm_world())) ||
		debug_pagealloc_enabled();
}

Then we can just use force_pte_mapping() in both the boot and runtime paths
without any adjustment based on linear_map_requires_bbml2. So you could drop
linear_map_requires_bbml2 entirely, until the next patch at least.

>  		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>  
>  	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index c6a85000fa0e..6566aa9d8abb 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -140,6 +140,10 @@ static int update_range_prot(unsigned long start, unsigned long size,
>  	data.set_mask = set_mask;
>  	data.clear_mask = clear_mask;
>  
> +	ret = split_kernel_pgtable_mapping(start, start + size);
> +	if (WARN_ON_ONCE(ret))

I'm on the fence as to whether this warning is desirable. Would we normally want
to warn in an OOM situation, or just unwind and carry on?

Thanks,
Ryan


> +		return ret;
> +
>  	arch_enter_lazy_mmu_mode();
>  
>  	/*




* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-08-01 14:35   ` Ryan Roberts
@ 2025-08-04 10:07     ` Ryan Roberts
  2025-08-05 18:53     ` Yang Shi
  1 sibling, 0 replies; 34+ messages in thread
From: Ryan Roberts @ 2025-08-04 10:07 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel

On 01/08/2025 15:35, Ryan Roberts wrote:

[...]

>> @@ -1366,7 +1648,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>  
>>  	VM_BUG_ON(!mhp_range_allowed(start, size, true));
>>  
>> -	if (can_set_direct_map())
>> +	if (force_pte_mapping() ||
>> +	    (linear_map_requires_bbml2 && !system_supports_bbml2_noabort()))
> 
> So force_pte_mapping() isn't actually returning what it sounds like it is; it's
> returning whether you would have to force pte mapping based on the current cpu's
> support for bbml2. Perhaps it would be better to implement force_pte_mapping()  as:
> 
> static inline bool force_pte_mapping(void)
> {
> 	bool bbml2 = (system_capabilities_finalized() &&
> 			system_supports_bbml2_noabort()) ||
> 			bbml2_noabort_available();

Sorry that should have been:

	bool bbml2 = system_capabilities_finalized() ?
		system_supports_bbml2_noabort() : bbml2_noabort_available();

> 
> 	return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
> 			   is_realm_world())) ||
> 		debug_pagealloc_enabled();
> }
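
i.e. the whole helper would be something like this (untested, just the two
fragments above put together):

	static inline bool force_pte_mapping(void)
	{
		bool bbml2 = system_capabilities_finalized() ?
			system_supports_bbml2_noabort() :
			bbml2_noabort_available();

		return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
				   is_realm_world())) ||
			debug_pagealloc_enabled();
	}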



* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-08-01 14:35   ` Ryan Roberts
  2025-08-04 10:07     ` Ryan Roberts
@ 2025-08-05 18:53     ` Yang Shi
  2025-08-06  7:20       ` Ryan Roberts
  1 sibling, 1 reply; 34+ messages in thread
From: Yang Shi @ 2025-08-05 18:53 UTC (permalink / raw)
  To: Ryan Roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
	dev.jain, scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 8/1/25 7:35 AM, Ryan Roberts wrote:
> On 24/07/2025 23:11, Yang Shi wrote:
>> When rodata=full is specified, kernel linear mapping has to be mapped at
>> PTE level since large page table can't be split due to break-before-make
>> rule on ARM64.
>>
>> This resulted in a couple of problems:
>>    - performance degradation
>>    - more TLB pressure
>>    - memory waste for kernel page table
>>
>> With FEAT_BBM level 2 support, splitting large block page table to
>> smaller ones doesn't need to make the page table entry invalid anymore.
>> This allows kernel split large block mapping on the fly.
>>
>> Add kernel page table split support and use large block mapping by
>> default when FEAT_BBM level 2 is supported for rodata=full.  When
>> changing permissions for kernel linear mapping, the page table will be
>> split to smaller size.
>>
>> The machine without FEAT_BBM level 2 will fallback to have kernel linear
>> mapping PTE-mapped when rodata=full.
>>
>> With this we saw significant performance boost with some benchmarks and
>> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
>> 256GB memory.
>>
>> * Memory use after boot
>> Before:
>> MemTotal:       258988984 kB
>> MemFree:        254821700 kB
>>
>> After:
>> MemTotal:       259505132 kB
>> MemFree:        255410264 kB
>>
>> Around 500MB more memory are free to use.  The larger the machine, the
>> more memory saved.
>>
>> * Memcached
>> We saw performance degradation when running Memcached benchmark with
>> rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
>> With this patchset we saw ops/sec is increased by around 3.5%, P99
>> latency is reduced by around 9.6%.
>> The gain mainly came from reduced kernel TLB misses.  The kernel TLB
>> MPKI is reduced by 28.5%.
>>
>> The benchmark data is now on par with rodata=on too.
>>
>> * Disk encryption (dm-crypt) benchmark
>> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
>> encryption (by dm-crypt).
>> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>>      --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>>      --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>>      --name=iops-test-job --eta-newline=1 --size 100G
>>
>> The IOPS is increased by 90% - 150% (the variance is high, but the worst
>> number of good case is around 90% more than the best number of bad case).
>> The bandwidth is increased and the avg clat is reduced proportionally.
>>
>> * Sequential file read
>> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
>> The bandwidth is increased by 150%.
>>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>>   arch/arm64/include/asm/cpufeature.h |  34 ++++
>>   arch/arm64/include/asm/mmu.h        |   1 +
>>   arch/arm64/include/asm/pgtable.h    |   5 +
>>   arch/arm64/kernel/cpufeature.c      |  31 +--
>>   arch/arm64/mm/mmu.c                 | 293 +++++++++++++++++++++++++++-
>>   arch/arm64/mm/pageattr.c            |   4 +
>>   6 files changed, 333 insertions(+), 35 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index bf13d676aae2..d0d394cc837d 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -871,6 +871,40 @@ static inline bool system_supports_pmuv3(void)
>>   	return cpus_have_final_cap(ARM64_HAS_PMUV3);
>>   }
>>   
>> +static inline bool bbml2_noabort_available(void)
>> +{
>> +	/*
>> +	 * We want to allow usage of BBML2 in as wide a range of kernel contexts
>> +	 * as possible. This list is therefore an allow-list of known-good
>> +	 * implementations that both support BBML2 and additionally, fulfill the
>> +	 * extra constraint of never generating TLB conflict aborts when using
>> +	 * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
>> +	 * kernel contexts difficult to prove safe against recursive aborts).
>> +	 *
>> +	 * Note that implementations can only be considered "known-good" if their
>> +	 * implementors attest to the fact that the implementation never raises
>> +	 * TLB conflict aborts for BBML2 mapping granularity changes.
>> +	 */
>> +	static const struct midr_range supports_bbml2_noabort_list[] = {
>> +		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
>> +		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
>> +		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
>> +		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
>> +		{}
>> +	};
>> +
>> +	/* Does our cpu guarantee to never raise TLB conflict aborts? */
>> +	if (!is_midr_in_range_list(supports_bbml2_noabort_list))
>> +		return false;
>> +
>> +	/*
>> +	 * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
>> +	 * about whether the MIDR check passes.
>> +	 */
>> +
>> +	return true;
>> +}
> I don't think this function should be inline. Won't we end up duplicating the
> midr list everywhere? Suggest moving back to cpufeature.c.

Yes, you are right.

>
>> +
>>   static inline bool system_supports_bbml2_noabort(void)
>>   {
>>   	return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
>> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
>> index 6e8aa8e72601..57f4b25e6f33 100644
>> --- a/arch/arm64/include/asm/mmu.h
>> +++ b/arch/arm64/include/asm/mmu.h
>> @@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>>   			       pgprot_t prot, bool page_mappings_only);
>>   extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>>   extern void mark_linear_text_alias_ro(void);
>> +extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);
>>   
>>   /*
>>    * This check is triggered during the early boot before the cpufeature
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index ba63c8736666..ad2a6a7e86b0 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -371,6 +371,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
>>   	return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
>>   }
>>   
>> +static inline pmd_t pmd_mknoncont(pmd_t pmd)
>> +{
>> +	return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
>> +}
>> +
>>   #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
>>   static inline int pte_uffd_wp(pte_t pte)
>>   {
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index d2a8a509a58e..1c96016a7a41 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -2215,36 +2215,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
>>   
>>   static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
>>   {
>> -	/*
>> -	 * We want to allow usage of BBML2 in as wide a range of kernel contexts
>> -	 * as possible. This list is therefore an allow-list of known-good
>> -	 * implementations that both support BBML2 and additionally, fulfill the
>> -	 * extra constraint of never generating TLB conflict aborts when using
>> -	 * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
>> -	 * kernel contexts difficult to prove safe against recursive aborts).
>> -	 *
>> -	 * Note that implementations can only be considered "known-good" if their
>> -	 * implementors attest to the fact that the implementation never raises
>> -	 * TLB conflict aborts for BBML2 mapping granularity changes.
>> -	 */
>> -	static const struct midr_range supports_bbml2_noabort_list[] = {
>> -		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
>> -		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
>> -		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
>> -		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
>> -		{}
>> -	};
>> -
>> -	/* Does our cpu guarantee to never raise TLB conflict aborts? */
>> -	if (!is_midr_in_range_list(supports_bbml2_noabort_list))
>> -		return false;
>> -
>> -	/*
>> -	 * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
>> -	 * about whether the MIDR check passes.
>> -	 */
>> -
>> -	return true;
>> +	return bbml2_noabort_available();
>>   }
>>   
>>   #ifdef CONFIG_ARM64_PAN
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 3d5fb37424ab..f63b39613571 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -480,6 +480,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>>   			     int flags);
>>   #endif
>>   
>> +#define INVALID_PHYS_ADDR	-1
>> +
>>   static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>>   				       enum pgtable_type pgtable_type)
>>   {
>> @@ -487,7 +489,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>>   	struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
>>   	phys_addr_t pa;
>>   
>> -	BUG_ON(!ptdesc);
>> +	if (!ptdesc)
>> +		return INVALID_PHYS_ADDR;
>> +
>>   	pa = page_to_phys(ptdesc_page(ptdesc));
>>   
>>   	switch (pgtable_type) {
>> @@ -509,15 +513,29 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>>   }
>>   
>>   static phys_addr_t __maybe_unused
>> -pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>> +split_pgtable_alloc(enum pgtable_type pgtable_type)
>>   {
>>   	return __pgd_pgtable_alloc(&init_mm, pgtable_type);
>>   }
>>   
>> +static phys_addr_t __maybe_unused
>> +pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>> +{
>> +	phys_addr_t pa;
>> +
>> +	pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
>> +	BUG_ON(pa == INVALID_PHYS_ADDR);
>> +	return pa;
>> +}
>> +
>>   static phys_addr_t
>>   pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
>>   {
>> -	return __pgd_pgtable_alloc(NULL, pgtable_type);
>> +	phys_addr_t pa;
>> +
>> +	pa = __pgd_pgtable_alloc(NULL, pgtable_type);
>> +	BUG_ON(pa == INVALID_PHYS_ADDR);
>> +	return pa;
>>   }
> The allocation all looks clean to me now. Thanks.

Thank you for the suggestion.

>
>>   
>>   /*
>> @@ -552,6 +570,254 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>>   			     pgd_pgtable_alloc_special_mm, flags);
>>   }
>>   
>> +static DEFINE_MUTEX(pgtable_split_lock);
>> +
>> +static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
>> +{
>> +	pte_t *ptep;
>> +	unsigned long next;
>> +	unsigned int nr;
>> +	unsigned long span;
>> +
>> +	ptep = pte_offset_kernel(pmdp, addr);
>> +
>> +	do {
>> +		pte_t *_ptep;
>> +
>> +		nr = 0;
>> +		next = pte_cont_addr_end(addr, end);
>> +		if (next < end)
>> +			nr = max(nr, ((end - next) / CONT_PTE_SIZE));
>> +		span = nr * CONT_PTE_SIZE;
>> +
>> +		_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
>> +		ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
>> +
>> +		if (((addr | next) & ~CONT_PTE_MASK) == 0)
>> +			continue;
>> +
>> +		if (!pte_cont(__ptep_get(_ptep)))
>> +			continue;
>> +
>> +		for (int i = 0; i < CONT_PTES; i++, _ptep++)
>> +			__set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
>> +	} while (addr = next + span, addr != end);
>> +
>> +	return 0;
>> +}
>> +
>> +static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
>> +{
>> +	unsigned long next;
>> +	unsigned int nr;
>> +	unsigned long span;
>> +	int ret = 0;
>> +
>> +	do {
>> +		pmd_t pmd;
>> +
>> +		nr = 1;
>> +		next = pmd_addr_end(addr, end);
>> +		if (next < end)
>> +			nr = max(nr, ((end - next) / PMD_SIZE));
>> +		span = (nr - 1) * PMD_SIZE;
>> +
>> +		if (((addr | next) & ~PMD_MASK) == 0)
>> +			continue;
>> +
>> +		pmd = pmdp_get(pmdp);
>> +		if (pmd_leaf(pmd)) {
>> +			phys_addr_t pte_phys;
>> +			pte_t *ptep;
>> +			pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN |
>> +					  PMD_TABLE_AF;
>> +			unsigned long pfn = pmd_pfn(pmd);
>> +			pgprot_t prot = pmd_pgprot(pmd);
>> +
>> +			pte_phys = split_pgtable_alloc(TABLE_PTE);
>> +			if (pte_phys == INVALID_PHYS_ADDR)
>> +				return -ENOMEM;
>> +
>> +			if (pgprot_val(prot) & PMD_SECT_PXN)
>> +				pmdval |= PMD_TABLE_PXN;
>> +
>> +			prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
>> +					PTE_TYPE_PAGE);
>> +			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +			ptep = (pte_t *)phys_to_virt(pte_phys);
>> +			for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
>> +				__set_pte(ptep, pfn_pte(pfn, prot));
>> +
>> +			dsb(ishst);
>> +
>> +			__pmd_populate(pmdp, pte_phys, pmdval);
>> +		}
>> +
>> +		ret = split_cont_pte(pmdp, addr, next);
>> +		if (ret)
>> +			break;
>> +	} while (pmdp += nr, addr = next + span, addr != end);
>> +
>> +	return ret;
>> +}
>> +
>> +static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
>> +{
>> +	pmd_t *pmdp;
>> +	unsigned long next;
>> +	unsigned int nr;
>> +	unsigned long span;
>> +	int ret = 0;
>> +
>> +	pmdp = pmd_offset(pudp, addr);
>> +
>> +	do {
>> +		pmd_t *_pmdp;
>> +
>> +		nr = 0;
>> +		next = pmd_cont_addr_end(addr, end);
>> +		if (next < end)
>> +			nr = max(nr, ((end - next) / CONT_PMD_SIZE));
>> +		span = nr * CONT_PMD_SIZE;
>> +
>> +		if (((addr | next) & ~CONT_PMD_MASK) == 0) {
>> +			pmdp += pmd_index(next) - pmd_index(addr) +
>> +				nr * CONT_PMDS;
>> +			continue;
>> +		}
>> +
>> +		_pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
>> +		if (!pmd_cont(pmdp_get(_pmdp)))
>> +			goto split;
>> +
>> +		for (int i = 0; i < CONT_PMDS; i++, _pmdp++)
>> +			set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
>> +
>> +split:
>> +		ret = split_pmd(pmdp, addr, next);
>> +		if (ret)
>> +			break;
>> +
>> +		pmdp += pmd_index(next) - pmd_index(addr) + nr * CONT_PMDS;
>> +	} while (addr = next + span, addr != end);
>> +
>> +	return ret;
>> +}
>> +
>> +static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
>> +{
>> +	pud_t *pudp;
>> +	unsigned long next;
>> +	unsigned int nr;
>> +	unsigned long span;
>> +	int ret = 0;
>> +
>> +	pudp = pud_offset(p4dp, addr);
>> +
>> +	do {
>> +		pud_t pud;
>> +
>> +		nr = 1;
>> +		next = pud_addr_end(addr, end);
>> +		if (next < end)
>> +			nr = max(nr, ((end - next) / PUD_SIZE));
>> +		span = (nr - 1) * PUD_SIZE;
>> +
>> +		if (((addr | next) & ~PUD_MASK) == 0)
>> +			continue;
>> +
>> +		pud = pudp_get(pudp);
>> +		if (pud_leaf(pud)) {
>> +			phys_addr_t pmd_phys;
>> +			pmd_t *pmdp;
>> +			pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN |
>> +					  PUD_TABLE_AF;
>> +			unsigned long pfn = pud_pfn(pud);
>> +			pgprot_t prot = pud_pgprot(pud);
>> +			unsigned int step = PMD_SIZE >> PAGE_SHIFT;
>> +
>> +			pmd_phys = split_pgtable_alloc(TABLE_PMD);
>> +			if (pmd_phys == INVALID_PHYS_ADDR)
>> +				return -ENOMEM;
>> +
>> +			if (pgprot_val(prot) & PMD_SECT_PXN)
>> +				pudval |= PUD_TABLE_PXN;
>> +
>> +			prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
>> +					PMD_TYPE_SECT);
>> +			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +			pmdp = (pmd_t *)phys_to_virt(pmd_phys);
>> +			for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
>> +				set_pmd(pmdp, pfn_pmd(pfn, prot));
>> +				pfn += step;
>> +			}
>> +
>> +			dsb(ishst);
>> +
>> +			__pud_populate(pudp, pmd_phys, pudval);
>> +		}
>> +
>> +		ret = split_cont_pmd(pudp, addr, next);
>> +		if (ret)
>> +			break;
>> +	} while (pudp += nr, addr = next + span, addr != end);
>> +
>> +	return ret;
>> +}
>> +
>> +static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
>> +{
>> +	p4d_t *p4dp;
>> +	unsigned long next;
>> +	int ret = 0;
>> +
>> +	p4dp = p4d_offset(pgdp, addr);
>> +
>> +	do {
>> +		next = p4d_addr_end(addr, end);
>> +
>> +		ret = split_pud(p4dp, addr, next);
>> +		if (ret)
>> +			break;
>> +	} while (p4dp++, addr = next, addr != end);
>> +
>> +	return ret;
>> +}
>> +
>> +static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
>> +{
>> +	unsigned long next;
>> +	int ret = 0;
>> +
>> +	do {
>> +		next = pgd_addr_end(addr, end);
>> +		ret = split_p4d(pgdp, addr, next);
>> +		if (ret)
>> +			break;
>> +	} while (pgdp++, addr = next, addr != end);
>> +
>> +	return ret;
>> +}
>> +
>> +int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
>> +{
>> +	int ret;
>> +
>> +	if (!system_supports_bbml2_noabort())
>> +		return 0;
>> +
>> +	if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
>> +		return -EINVAL;
>> +
>> +	mutex_lock(&pgtable_split_lock);
> On reflection, I agree this lock approach is simpler than my suggestion to do it
> locklessly and I doubt this will become a bottleneck. Given x86 does it this
> way, I guess it's fine.
>
>> +	arch_enter_lazy_mmu_mode();
>> +	ret = split_pgd(pgd_offset_k(start), start, end);
> My instinct still remains that it would be better not to iterate over the range
> here, but instead call a "split(start); split(end);" since we just want to split
> the start and end. So the code would be simpler and probably more performant if
> we get rid of all the iteration.

It should be more performant for splitting a large range, especially when 
the range includes leaf mappings at different levels. But I had some 
optimization to skip leaf mappings in this version, so it should be 
close to your implementation from a performance perspective. And it just 
walks the page table once instead of twice, so it should be more 
efficient for a small split, for example, 4K.

>
> But I'm guessing you are going to enhance this infra in the next patch to
> support splitting all entries in the range for the system-doesn't-support-bbml case?

Yes. I added a "flags" parameter in patch #4. When repainting the page 
table, NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS will be passed in to tell 
split_pgd() to split the page table all the way down to PTEs.
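
Roughly, assuming the flags get threaded down from split_pgd(), the
fully-contained check in split_pmd() becomes something like (just a sketch
to illustrate the idea; the actual patch #4 may differ):

	if (((addr | next) & ~PMD_MASK) == 0 &&
	    !(flags & NO_BLOCK_MAPPINGS))
		continue;

so a fully covered block is only kept intact when the caller didn't ask for
a full split, with similar checks at the PUD level and NO_CONT_MAPPINGS
gating the CONT de-marking paths.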

>
> Anyway, I'll take a look at the next patch then come back to review the details
> of split_pgd().
>
>> +	arch_leave_lazy_mmu_mode();
>> +	mutex_unlock(&pgtable_split_lock);
>> +
>> +	return ret;
>> +}
>> +
>>   static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
>>   				phys_addr_t size, pgprot_t prot)
>>   {
>> @@ -639,6 +905,20 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>>   
>>   #endif /* CONFIG_KFENCE */
>>   
>> +bool linear_map_requires_bbml2;
> Until the next patch, you're only using this variable in this file, so at the
> very least, it should be static for now. But I'm proposing below it should be
> removed entirely.
>
>> +
>> +static inline bool force_pte_mapping(void)
>> +{
>> +	/*
>> +	 * Can't use cpufeature API to determine whether BBM level 2
>> +	 * is supported or not since cpufeature have not been
>> +	 * finalized yet.
>> +	 */
>> +	return (!bbml2_noabort_available() && (rodata_full ||
>> +		arm64_kfence_can_set_direct_map() || is_realm_world())) ||
>> +		debug_pagealloc_enabled();
>> +}
>> +
>>   static void __init map_mem(pgd_t *pgdp)
>>   {
>>   	static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
>> @@ -664,7 +944,9 @@ static void __init map_mem(pgd_t *pgdp)
>>   
>>   	early_kfence_pool = arm64_kfence_alloc_pool();
>>   
>> -	if (can_set_direct_map())
>> +	linear_map_requires_bbml2 = !force_pte_mapping() && rodata_full;
> This looks wrong; what about the kfence_can_set_direct_map and is_realm_world
> conditions?
>
> perhaps:
>
> 	linear_map_requires_bbml2 = !force_pte_mapping() && can_set_direct_map()

Thanks for figuring this out.

>
>> +
>> +	if (force_pte_mapping())
>>   		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>   
>>   	/*
>> @@ -1366,7 +1648,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>   
>>   	VM_BUG_ON(!mhp_range_allowed(start, size, true));
>>   
>> -	if (can_set_direct_map())
>> +	if (force_pte_mapping() ||
>> +	    (linear_map_requires_bbml2 && !system_supports_bbml2_noabort()))
> So force_pte_mapping() isn't actually returning what it sounds like it is; it's
> returning whether you would have to force pte mapping based on the current cpu's
> support for bbml2. Perhaps it would be better to implement force_pte_mapping()  as:
>
> static inline bool force_pte_mapping(void)
> {
> 	bool bbml2 = (system_capabilities_finalized() &&
> 			system_supports_bbml2_noabort()) ||
> 			bbml2_noabort_available();
>
> 	return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
> 			   is_realm_world())) ||
> 		debug_pagealloc_enabled();
> }
>
> Then we can just use force_pte_mapping() in both the boot and runtime paths
> without any adjustment based on linear_map_requires_bbml2. So you could drop
> linear_map_requires_bbml2 entirely, until the next patch at least.

Good idea.

>
>>   		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>   
>>   	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index c6a85000fa0e..6566aa9d8abb 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -140,6 +140,10 @@ static int update_range_prot(unsigned long start, unsigned long size,
>>   	data.set_mask = set_mask;
>>   	data.clear_mask = clear_mask;
>>   
>> +	ret = split_kernel_pgtable_mapping(start, start + size);
>> +	if (WARN_ON_ONCE(ret))
> I'm on the fence as to whether this warning is desirable. Would we normally want
> to warn in an OOM situation, or just unwind and carry on?

I don't have a strong preference on this. It returns an errno anyway, so 
the operation (insmod, for example) will fail if the split fails.

Thanks,
Yang

>
> Thanks,
> Ryan
>
>
>> +		return ret;
>> +
>>   	arch_enter_lazy_mmu_mode();
>>   
>>   	/*




* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-07-29 12:34   ` Dev Jain
@ 2025-08-05 21:28     ` Yang Shi
  2025-08-06  0:10       ` Yang Shi
  0 siblings, 1 reply; 34+ messages in thread
From: Yang Shi @ 2025-08-05 21:28 UTC (permalink / raw)
  To: Dev Jain, ryan.roberts, will, catalin.marinas, akpm,
	Miko.Lenczewski, scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 7/29/25 5:34 AM, Dev Jain wrote:
>
> On 25/07/25 3:41 am, Yang Shi wrote:
>> [----- snip -----]
>>     #ifdef CONFIG_ARM64_PAN
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 3d5fb37424ab..f63b39613571 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -480,6 +480,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, 
>> phys_addr_t phys, unsigned long virt,
>>                    int flags);
>>   #endif
>>   +#define INVALID_PHYS_ADDR    -1
>> +
>>   static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>>                          enum pgtable_type pgtable_type)
>>   {
>> @@ -487,7 +489,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct 
>> mm_struct *mm,
>>       struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & 
>> ~__GFP_ZERO, 0);
>>       phys_addr_t pa;
>>   -    BUG_ON(!ptdesc);
>> +    if (!ptdesc)
>> +        return INVALID_PHYS_ADDR;
>> +
>>       pa = page_to_phys(ptdesc_page(ptdesc));
>>         switch (pgtable_type) {
>> @@ -509,15 +513,29 @@ static phys_addr_t __pgd_pgtable_alloc(struct 
>> mm_struct *mm,
>>   }
>>     static phys_addr_t __maybe_unused
>> -pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>> +split_pgtable_alloc(enum pgtable_type pgtable_type)
>>   {
>>       return __pgd_pgtable_alloc(&init_mm, pgtable_type);
>>   }
>>   +static phys_addr_t __maybe_unused
>> +pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>> +{
>> +    phys_addr_t pa;
>> +
>> +    pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
>> +    BUG_ON(pa == INVALID_PHYS_ADDR);
>> +    return pa;
>> +}
>> +
>>   static phys_addr_t
>>   pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
>>   {
>> -    return __pgd_pgtable_alloc(NULL, pgtable_type);
>> +    phys_addr_t pa;
>> +
>> +    pa = __pgd_pgtable_alloc(NULL, pgtable_type);
>> +    BUG_ON(pa == INVALID_PHYS_ADDR);
>> +    return pa;
>>   }
>>     /*
>> @@ -552,6 +570,254 @@ void __init create_pgd_mapping(struct mm_struct 
>> *mm, phys_addr_t phys,
>>                    pgd_pgtable_alloc_special_mm, flags);
>>   }
>>   +static DEFINE_MUTEX(pgtable_split_lock);
>
> Thanks for taking a separate lock.
>
>> +
>> +static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned 
>> long end)
>> +{
>> +    pte_t *ptep;
>> +    unsigned long next;
>> +    unsigned int nr;
>> +    unsigned long span;
>> +
>> +    ptep = pte_offset_kernel(pmdp, addr);
>> +
>> +    do {
>> +        pte_t *_ptep;
>> +
>> +        nr = 0;
>> +        next = pte_cont_addr_end(addr, end);
>> +        if (next < end)
>> +            nr = max(nr, ((end - next) / CONT_PTE_SIZE));
>> +        span = nr * CONT_PTE_SIZE;
>> +
>> +        _ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
>> +        ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
>> +
>> +        if (((addr | next) & ~CONT_PTE_MASK) == 0)
>> +            continue;
>> +
>> +        if (!pte_cont(__ptep_get(_ptep)))
>> +            continue;
>> +
>> +        for (int i = 0; i < CONT_PTES; i++, _ptep++)
>> +            __set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
>> +    } while (addr = next + span, addr != end);
>> +
>> +    return 0;
>> +}
>> +
>> +static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long 
>> end)
>> +{
>> +    unsigned long next;
>> +    unsigned int nr;
>> +    unsigned long span;
>> +    int ret = 0;
>> +
>> +    do {
>> +        pmd_t pmd;
>> +
>> +        nr = 1;
>> +        next = pmd_addr_end(addr, end);
>> +        if (next < end)
>> +            nr = max(nr, ((end - next) / PMD_SIZE));
>> +        span = (nr - 1) * PMD_SIZE;
>> +
>> +        if (((addr | next) & ~PMD_MASK) == 0)
>> +            continue;
>> +
>> +        pmd = pmdp_get(pmdp);
>> +        if (pmd_leaf(pmd)) {
>> +            phys_addr_t pte_phys;
>> +            pte_t *ptep;
>> +            pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN |
>> +                      PMD_TABLE_AF;
>> +            unsigned long pfn = pmd_pfn(pmd);
>> +            pgprot_t prot = pmd_pgprot(pmd);
>> +
>> +            pte_phys = split_pgtable_alloc(TABLE_PTE);
>> +            if (pte_phys == INVALID_PHYS_ADDR)
>> +                return -ENOMEM;
>> +
>> +            if (pgprot_val(prot) & PMD_SECT_PXN)
>> +                pmdval |= PMD_TABLE_PXN;
>> +
>> +            prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
>> +                    PTE_TYPE_PAGE);
>> +            prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +            ptep = (pte_t *)phys_to_virt(pte_phys);
>> +            for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
>> +                __set_pte(ptep, pfn_pte(pfn, prot));
>> +
>> +            dsb(ishst);
>> +
>> +            __pmd_populate(pmdp, pte_phys, pmdval);
>> +        }
>> +
>> +        ret = split_cont_pte(pmdp, addr, next);
>> +        if (ret)
>> +            break;
>> +    } while (pmdp += nr, addr = next + span, addr != end);
>> +
>> +    return ret;
>> +}
>> +
>> +static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned 
>> long end)
>> +{
>> +    pmd_t *pmdp;
>> +    unsigned long next;
>> +    unsigned int nr;
>> +    unsigned long span;
>> +    int ret = 0;
>> +
>> +    pmdp = pmd_offset(pudp, addr);
>> +
>> +    do {
>> +        pmd_t *_pmdp;
>> +
>> +        nr = 0;
>> +        next = pmd_cont_addr_end(addr, end);
>> +        if (next < end)
>> +            nr = max(nr, ((end - next) / CONT_PMD_SIZE));
>> +        span = nr * CONT_PMD_SIZE;
>> +
>> +        if (((addr | next) & ~CONT_PMD_MASK) == 0) {
>> +            pmdp += pmd_index(next) - pmd_index(addr) +
>> +                nr * CONT_PMDS;
>> +            continue;
>> +        }
>> +
>> +        _pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
>> +        if (!pmd_cont(pmdp_get(_pmdp)))
>> +            goto split;
>> +
>> +        for (int i = 0; i < CONT_PMDS; i++, _pmdp++)
>> +            set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
>> +
>> +split:
>> +        ret = split_pmd(pmdp, addr, next);
>> +        if (ret)
>> +            break;
>> +
>> +        pmdp += pmd_index(next) - pmd_index(addr) + nr * CONT_PMDS;
>> +    } while (addr = next + span, addr != end);
>> +
>> +    return ret;
>> +}
>> +
>> +static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long 
>> end)
>> +{
>> +    pud_t *pudp;
>> +    unsigned long next;
>> +    unsigned int nr;
>> +    unsigned long span;
>> +    int ret = 0;
>> +
>> +    pudp = pud_offset(p4dp, addr);
>> +
>> +    do {
>> +        pud_t pud;
>> +
>> +        nr = 1;
>> +        next = pud_addr_end(addr, end);
>> +        if (next < end)
>> +            nr = max(nr, ((end - next) / PUD_SIZE));
>> +        span = (nr - 1) * PUD_SIZE;
>> +
>> +        if (((addr | next) & ~PUD_MASK) == 0)
>> +            continue;
>> +
>> +        pud = pudp_get(pudp);
>> +        if (pud_leaf(pud)) {
>> +            phys_addr_t pmd_phys;
>> +            pmd_t *pmdp;
>> +            pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN |
>> +                      PUD_TABLE_AF;
>> +            unsigned long pfn = pud_pfn(pud);
>> +            pgprot_t prot = pud_pgprot(pud);
>> +            unsigned int step = PMD_SIZE >> PAGE_SHIFT;
>> +
>> +            pmd_phys = split_pgtable_alloc(TABLE_PMD);
>> +            if (pmd_phys == INVALID_PHYS_ADDR)
>> +                return -ENOMEM;
>> +
>> +            if (pgprot_val(prot) & PMD_SECT_PXN)
>> +                pudval |= PUD_TABLE_PXN;
>> +
>> +            prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
>> +                    PMD_TYPE_SECT);
>> +            prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +            pmdp = (pmd_t *)phys_to_virt(pmd_phys);
>> +            for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
>> +                set_pmd(pmdp, pfn_pmd(pfn, prot));
>> +                pfn += step;
>> +            }
>> +
>> +            dsb(ishst);
>> +
>> +            __pud_populate(pudp, pmd_phys, pudval);
>> +        }
>> +
>> +        ret = split_cont_pmd(pudp, addr, next);
>> +        if (ret)
>> +            break;
>> +    } while (pudp += nr, addr = next + span, addr != end);
>> +
>> +    return ret;
>> +}
>> +
>> +static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long 
>> end)
>> +{
>> +    p4d_t *p4dp;
>> +    unsigned long next;
>> +    int ret = 0;
>> +
>> +    p4dp = p4d_offset(pgdp, addr);
>> +
>> +    do {
>> +        next = p4d_addr_end(addr, end);
>> +
>> +        ret = split_pud(p4dp, addr, next);
>> +        if (ret)
>> +            break;
>> +    } while (p4dp++, addr = next, addr != end);
>> +
>> +    return ret;
>> +}
>> +
>> +static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long 
>> end)
>> +{
>> +    unsigned long next;
>> +    int ret = 0;
>> +
>> +    do {
>> +        next = pgd_addr_end(addr, end);
>> +        ret = split_p4d(pgdp, addr, next);
>> +        if (ret)
>> +            break;
>> +    } while (pgdp++, addr = next, addr != end);
>> +
>> +    return ret;
>> +}
>> +
>> +int split_kernel_pgtable_mapping(unsigned long start, unsigned long 
>> end)
>> +{
>> +    int ret;
>> +
>> +    if (!system_supports_bbml2_noabort())
>> +        return 0;
>> +
>> +    if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
>> +        return -EINVAL;
>> +
>> +    mutex_lock(&pgtable_split_lock);
>> +    arch_enter_lazy_mmu_mode();
>> +    ret = split_pgd(pgd_offset_k(start), start, end);
>> +    arch_leave_lazy_mmu_mode();
>> +    mutex_unlock(&pgtable_split_lock);
>> +
>> +    return ret;
>> +}
>> +
>>       /*
>
> --- snip ---
>
> I'm afraid I'll have to agree with Ryan :) Looking at the signature of 
> split_kernel_pgtable_mapping,
> one would expect that this function splits all block mappings in this 
> region. But that is just a
> nit; it does not seem right to me that we are iterating over the 
> entire space when we know *exactly* where
> we have to make the split, just to save on pgd/p4d/pud loads - the 
> effect of which is probably cancelled
> out by unnecessary iterations your approach takes to skip the 
> intermediate blocks.

The implementation is aimed at reusing the split code for repainting. We 
have to split all leaf mappings down to PTEs for repainting.

Now that Ryan has suggested using the pgtable walk API for repainting, the 
duplicate-code problem is gone. We had some discussion about it in the 
other series.

>
> If we are concerned that most change_memory_common() operations are 
> for a single page, then
> we can do something like:
>
> unsigned long size = end - start;
> bool end_split, start_split = false;
>
> if (start not aligned to block mapping)
>     start_split = split(start);
>
> /*
>  * split the end only if the start wasn't split, or
>  * if it cannot be guaranteed that start and end lie
>  * on the same contig block
>  */
> if (!start_split || (size > CONT_PTE_SIZE))
>     end_split = split(end);

Thanks for the suggestion. It works for some cases, but I don't think it 
can work if the range crosses page-table blocks, IIUC. For example, start 
is in one PMD, but end is in another PMD.

Regards,
Yang

>
>
>




* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-08-05 21:28     ` Yang Shi
@ 2025-08-06  0:10       ` Yang Shi
  0 siblings, 0 replies; 34+ messages in thread
From: Yang Shi @ 2025-08-06  0:10 UTC (permalink / raw)
  To: Dev Jain, ryan.roberts, will, catalin.marinas, akpm,
	Miko.Lenczewski, scott, cl
  Cc: linux-arm-kernel, linux-kernel

>>
>> --- snip ---
>>
>> I'm afraid I'll have to agree with Ryan :) Looking at the signature 
>> of split_kernel_pgtable_mapping,
>> one would expect that this function splits all block mappings in this 
>> region. But that is just a
>> nit; it does not seem right to me that we are iterating over the 
>> entire space when we know *exactly* where
>> we have to make the split, just to save on pgd/p4d/pud loads - the 
>> effect of which is probably cancelled
>> out by unnecessary iterations your approach takes to skip the 
>> intermediate blocks.
>
> The implementation is aimed at reusing the split code for repainting. We 
> have to split all leaf mappings down to PTEs for repainting.
>
> Now that Ryan has suggested using the pgtable walk API for repainting, the 
> duplicate-code problem is gone. We had some discussion about it in the 
> other series.
>
>>
>> If we are concerned that most change_memory_common() operations are 
>> for a single page, then
>> we can do something like:
>>
>> unsigned long size = end - start;
>> bool end_split, start_split = false;
>>
>> if (start not aligned to block mapping)
>>     start_split = split(start);
>>
>> /*
>>  * split the end only if the start wasn't split, or
>>  * if it cannot be guaranteed that start and end lie
>>  * on the same contig block
>>  */
>> if (!start_split || (size > CONT_PTE_SIZE))
>>     end_split = split(end);
>
> Thanks for the suggestion. It works for some cases, but I don't think 
> it can work if the range crosses page-table blocks, IIUC. For example, 
> start is in one PMD, but end is in another PMD.

I think we have to call split_mapping(end) if the size is greater than 
PAGE_SIZE, i.e.:

split_mapping(start);
if (size > PAGE_SIZE)
     split_mapping(end);

This avoids walking the page table twice for a page-sized range, which 
should be the most common case in the current kernel.
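
In the context of the existing wrapper it would look something like the
below (untested; split_mapping() here is the assumed helper that splits
only the leaf mapping containing the given address):

	int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
	{
		int ret;

		if (!system_supports_bbml2_noabort())
			return 0;

		if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
			return -EINVAL;

		mutex_lock(&pgtable_split_lock);
		arch_enter_lazy_mmu_mode();

		ret = split_mapping(start);
		if (!ret && (end - start) > PAGE_SIZE)
			ret = split_mapping(end);

		arch_leave_lazy_mmu_mode();
		mutex_unlock(&pgtable_split_lock);

		return ret;
	}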

Thanks,
Yang

>
> Regards,
> Yang
>
>>
>>
>>
>




* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-08-05 18:53     ` Yang Shi
@ 2025-08-06  7:20       ` Ryan Roberts
  2025-08-07  0:44         ` Yang Shi
  0 siblings, 1 reply; 34+ messages in thread
From: Ryan Roberts @ 2025-08-06  7:20 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel

On 05/08/2025 19:53, Yang Shi wrote:

[...]

>>> +    arch_enter_lazy_mmu_mode();
>>> +    ret = split_pgd(pgd_offset_k(start), start, end);
>> My instinct still remains that it would be better not to iterate over the range
>> here, but instead call a "split(start); split(end);" since we just want to split
>> the start and end. So the code would be simpler and probably more performant if
>> we get rid of all the iteration.
> 
> It should be more performant for splitting a large range, especially when the
> range includes leaf mappings at different levels. But I had some optimization to
> skip leaf mappings in this version, so it should be close to your implementation
> from a performance perspective. And it just walks the page table once instead of
> twice, so it should be more efficient for a small split, for example, 4K.

I guess this is the crux of our disagreement. I think the "walks the table once
for 4K" is a micro optimization, which I doubt we would see on any benchmark
results. In the absence of data, I'd prefer the simpler, smaller, easier to
understand version.

Both implementations are on list now; perhaps the maintainers can steer us.

Thanks,
Ryan



* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-08-06  7:20       ` Ryan Roberts
@ 2025-08-07  0:44         ` Yang Shi
  0 siblings, 0 replies; 34+ messages in thread
From: Yang Shi @ 2025-08-07  0:44 UTC (permalink / raw)
  To: Ryan Roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
	dev.jain, scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 8/6/25 12:20 AM, Ryan Roberts wrote:
> On 05/08/2025 19:53, Yang Shi wrote:
>
> [...]
>
>>>> +    arch_enter_lazy_mmu_mode();
>>>> +    ret = split_pgd(pgd_offset_k(start), start, end);
>>> My instinct still remains that it would be better not to iterate over the range
>>> here, but instead call a "split(start); split(end);" since we just want to split
>>> the start and end. So the code would be simpler and probably more performant if
>>> we get rid of all the iteration.
>> It should be more performant for splitting a large range, especially when the
>> range includes leaf mappings at different levels. But I had some optimization
>> to skip leaf mappings in this version, so it should be close to your
>> implementation from a performance perspective. And it just walks the page table
>> once instead of twice, so it should be more efficient for a small split, for
>> example, 4K.
> I guess this is the crux of our disagreement. I think the "walks the table once
> for 4K" is a micro optimization, which I doubt we would see on any benchmark
> results. In the absence of data, I'd prefer the simpler, smaller, easier to
> understand version.

I did a simple benchmark with the module stressor from stress-ng, using 
the below command line:
stress-ng --module 1 --module-name loop --module-ops 1000

It basically loads the loop module 1000 times. I saw a slight slowdown 
(2% to 3%, averaging the time spent over 5 iterations) with your 
implementation on my AmpereOne machine. Per this data, it shouldn't 
result in any noticeable slowdown for real-life workloads.

Thanks,
Yang

>
> Both implementations are on list now; perhaps the maintainers can steer us.
>
> Thanks,
> Ryan




end of thread, other threads:[~2025-08-07  0:47 UTC | newest]

Thread overview: 34+ messages
-- links below jump to the message on this page --
2025-05-31  2:41 [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
2025-05-31  2:41 ` [PATCH 1/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
2025-05-31  2:41 ` [PATCH 2/4] arm64: mm: make __create_pgd_mapping() and helpers non-void Yang Shi
2025-06-16 10:04   ` Ryan Roberts
2025-06-17 21:11     ` Yang Shi
2025-06-23 13:05       ` Ryan Roberts
2025-05-31  2:41 ` [PATCH 3/4] arm64: mm: support large block mapping when rodata=full Yang Shi
2025-06-16 11:58   ` Ryan Roberts
2025-06-16 12:33     ` Ryan Roberts
2025-06-17 21:01       ` Yang Shi
2025-06-16 16:24   ` Ryan Roberts
2025-06-17 21:09     ` Yang Shi
2025-06-23 13:26       ` Ryan Roberts
2025-06-23 19:12         ` Yang Shi
2025-06-26 22:39         ` Yang Shi
2025-07-23 17:38         ` Dev Jain
2025-07-23 20:51           ` Yang Shi
2025-07-24 11:43             ` Dev Jain
2025-07-24 17:59               ` Yang Shi
2025-05-31  2:41 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
2025-06-23 12:26   ` Ryan Roberts
2025-06-23 20:56     ` Yang Shi
2025-06-13 17:21 ` [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
2025-06-16  9:09   ` Ryan Roberts
2025-06-17 20:57     ` Yang Shi
  -- strict thread matches above, loose matches on Subject: below --
2025-07-24 22:11 [v5 " Yang Shi
2025-07-24 22:11 ` [PATCH 3/4] arm64: mm: support " Yang Shi
2025-07-29 12:34   ` Dev Jain
2025-08-05 21:28     ` Yang Shi
2025-08-06  0:10       ` Yang Shi
2025-08-01 14:35   ` Ryan Roberts
2025-08-04 10:07     ` Ryan Roberts
2025-08-05 18:53     ` Yang Shi
2025-08-06  7:20       ` Ryan Roberts
2025-08-07  0:44         ` Yang Shi
