linux-arm-kernel.lists.infradead.org archive mirror
* [v5 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
@ 2025-07-24 22:11 Yang Shi
  2025-07-24 22:11 ` [PATCH 1/4] arm64: Enable permission change on arm64 kernel block mappings Yang Shi
                   ` (3 more replies)
  0 siblings, 4 replies; 20+ messages in thread
From: Yang Shi @ 2025-07-24 22:11 UTC (permalink / raw)
  To: ryan.roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
	dev.jain, scott, cl
  Cc: yang, linux-arm-kernel, linux-kernel


Changelog
=========
v5:
  * Rebased to v6.16 rc.
  * Based on Miko's v8 BBML2 cpufeature patch (https://lore.kernel.org/lkml/20250625113435.26849-3-miko.lenczewski@arm.com/).
  * Based on Dev's "v4 enable permission change on kernel block mapping" patch (https://lore.kernel.org/lkml/20250703151441.60325-1-dev.jain@arm.com/),
    also included this patch in this series per the suggestion from Catalin.
    Andrew had some nits on the generic mm part for that patch, but I didn't change anything.
  * Implemented split in dedicated function per Ryan.
  * Dropped "make __create_pgd_mapping() and helpers non-void" patch. It is not
    needed anymore.
  * Optimized page table walk to reduce loop overhead if the range spans
    multiple leaf mappings.
  * Used dedicated lock to serialize split instead of init_mm's mmap_lock
    per Ryan.
  * Allowed split page table for kfence and realm per Ryan.
  * Used lazy mmu to reduce unnecessary barriers per Ryan.
  * Removed unnecessary TLB flush per Ryan.
  * Cleaned up "wait for boot cpu" logic for repainting per Ryan.
  * Misc clean up.
v4:
  * Rebased to v6.15-rc4.
  * Based on Miko's latest BBML2 cpufeature patch (https://lore.kernel.org/linux-arm-kernel/20250428153514.55772-4-miko.lenczewski@arm.com/).
  * Keep block mappings rather than splitting to PTEs if it is fully contained
    per Ryan.
  * Return -EINVAL if page table allocation failed instead of BUG_ON per Ryan.
  * When page table allocation failed, return -1 instead of 0 per Ryan.
  * Allocate page table with GFP_ATOMIC for repainting per Ryan.
  * Use idmap to wait until repainting is done per Ryan.
  * Some minor fixes per the discussion for v3.
  * Some clean up to reduce redundant code.

v3:
  * Rebased to v6.14-rc4.
  * Based on Miko's BBML2 cpufeature patch (https://lore.kernel.org/linux-arm-kernel/20250228182403.6269-3-miko.lenczewski@arm.com/).
    Also included in this series in order to have the complete patchset.
  * Enhanced __create_pgd_mapping() to handle split as well per Ryan.
  * Supported CONT mappings per Ryan.
  * Supported asymmetric systems by splitting the kernel linear mapping if
    such a system is detected per Ryan. I don't have such a system to test,
    so the testing was done by hacking the kernel to call linear mapping
    repainting unconditionally. The linear mapping doesn't have any block
    or cont mappings after booting.

RFC v2:
  * Used allowlist to advertise BBM lv2 on the CPUs which can handle TLB
    conflict gracefully per Will Deacon
  * Rebased onto v6.13-rc5
  * https://lore.kernel.org/linux-arm-kernel/20250103011822.1257189-1-yang@os.amperecomputing.com/

v4: https://lore.kernel.org/lkml/20250531024545.1101304-1-yang@os.amperecomputing.com/
v3: https://lore.kernel.org/linux-arm-kernel/20250304222018.615808-1-yang@os.amperecomputing.com/
RFC v2: https://lore.kernel.org/linux-arm-kernel/20250103011822.1257189-1-yang@os.amperecomputing.com/
RFC v1: https://lore.kernel.org/lkml/20241118181711.962576-1-yang@os.amperecomputing.com/

Description
===========
When rodata=full, the kernel linear mapping is mapped at PTE level because
arm64's break-before-make rule prevents splitting large block mappings once
the kernel is up and running.

A number of issues arise from mapping the linear map with PTE entries:
  - performance degradation
  - more TLB pressure
  - memory waste for kernel page tables

These issues can be avoided by specifying rodata=on on the kernel command
line, but this disables the alias checks on page table permissions and
therefore compromises security somewhat.

With FEAT_BBM level 2 support it is no longer necessary to invalidate the
page table entry when changing page sizes.  This allows the kernel to
split large mappings after boot is complete.

This series adds support for splitting large mappings when FEAT_BBM level 2
is available and rodata=full is used.  This functionality is used when
modifying page permissions for individual page frames.
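
In outline, the permission-change path with this series looks roughly like
the below (a simplified sketch; see patch #1 and patch #3 for the real
code):

	/* update_range_prot(), simplified */
	ret = split_kernel_pgtable_mapping(start, start + size);
	if (!ret)
		ret = walk_kernel_page_table_range_lockless(start, start + size,
							    &pageattr_ops, &data);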

Without FEAT_BBM level 2 we will keep the kernel linear map using PTEs
only.

If the system is asymmetric, the kernel linear mapping may be repainted once
the BBML2 capability is finalized on all CPUs.  See patch #4 for more details.

We saw significant performance increases in some benchmarks with
rodata=full without compromising the security features of the kernel.

Testing
=======
The tests were done on an AmpereOne machine (192 cores, 1P) with 256GB
memory, 4K page size and 48-bit VA.

Function test (4K/16K/64K page size)
  - Kernel boot.  The kernel needs to change linear mapping permissions at
    boot; if the patches didn't work, the kernel typically failed to boot.
  - Module stress from stress-ng.  Loading kernel modules changes
    permissions on the linear mapping.
  - A test kernel module which allocates 80% of total memory via vmalloc(),
    changes the vmalloc area permission to RO (which also changes the linear
    mapping permission to RO), then changes it back before vfree().  Then a
    VM which consumes almost all physical memory is launched.  A rough
    sketch of such a module is shown after this list.
  - A VM with the patchset applied to the guest kernel too.
  - Kernel build in a VM whose guest kernel has this series applied.
  - rodata=on, to make sure the other rodata modes are not broken.
  - Boot on a machine which doesn't support BBML2.
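
  A rough sketch of such a test module (illustrative only; the names are
  made up and the real test sizes the allocation to ~80% of MemTotal
  rather than 1G):

	#include <linux/module.h>
	#include <linux/vmalloc.h>
	#include <linux/set_memory.h>

	static int __init split_stress_init(void)
	{
		size_t size = 1UL << 30;	/* 1G for brevity */
		unsigned int nr = size >> PAGE_SHIFT;
		void *p = vmalloc(size);

		if (!p)
			return -ENOMEM;

		/*
		 * With rodata=full, changing the vmalloc permissions also
		 * updates the linear map alias of every backing page, which
		 * exercises the block split path added by this series.
		 */
		set_memory_ro((unsigned long)p, nr);
		set_memory_rw((unsigned long)p, nr);
		vfree(p);
		return 0;
	}
	module_init(split_stress_init);
	MODULE_LICENSE("GPL");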

Performance
===========
Memory consumption
Before:
MemTotal:       258988984 kB
MemFree:        254821700 kB

After:
MemTotal:       259505132 kB
MemFree:        255410264 kB

Around 500MB more memory is free to use.  The larger the machine, the
more memory is saved.

Performance benchmarking
* Memcached
We saw performance degradation when running the Memcached benchmark with
rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
With this patchset, ops/sec increased by around 3.5% and P99 latency
dropped by around 9.6%.
The gain mainly came from reduced kernel TLB misses.  The kernel TLB
MPKI dropped by 28.5%.

The benchmark data is now on par with rodata=on too.

* Disk encryption (dm-crypt) benchmark
Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
encryption (by dm-crypt with no read/write workqueue).
fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
    --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
    --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
    --name=iops-test-job --eta-newline=1 --size 100G

IOPS increased by 90% - 150% (the variance is high, but the worst result
with the patches is still around 90% better than the best result without
them).  Bandwidth increased and the average completion latency dropped
proportionally.

* Sequential file read
Read a 100G file sequentially on XFS (xfs_io read with the page cache
populated).  Bandwidth increased by 150%.


Dev Jain (1):
      arm64: Enable permission change on arm64 kernel block mappings

Yang Shi (3):
      arm64: cpufeature: add AmpereOne to BBML2 allow list
      arm64: mm: support large block mapping when rodata=full
      arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs

 arch/arm64/include/asm/cpufeature.h |  34 ++++++++++++++
 arch/arm64/include/asm/mmu.h        |   5 +++
 arch/arm64/include/asm/pgtable.h    |   5 +++
 arch/arm64/kernel/cpufeature.c      |  37 ++++-----------
 arch/arm64/mm/mmu.c                 | 408 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++----
 arch/arm64/mm/pageattr.c            | 159 +++++++++++++++++++++++++++++++++++++++++++++++++++-------------
 arch/arm64/mm/proc.S                |  57 ++++++++++++++++-------
 include/linux/pagewalk.h            |   3 ++
 mm/pagewalk.c                       |  24 ++++++++++
 9 files changed, 648 insertions(+), 84 deletions(-)




* [PATCH 1/4] arm64: Enable permission change on arm64 kernel block mappings
  2025-07-24 22:11 [v5 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
@ 2025-07-24 22:11 ` Yang Shi
  2025-07-24 22:11 ` [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 20+ messages in thread
From: Yang Shi @ 2025-07-24 22:11 UTC (permalink / raw)
  To: ryan.roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
	dev.jain, scott, cl
  Cc: yang, linux-arm-kernel, linux-kernel

From: Dev Jain <dev.jain@arm.com>

This patch paves the path to enable huge mappings in vmalloc space and
linear map space by default on arm64. For this we must ensure that we can
handle any permission games on the kernel (init_mm) pagetable. Currently,
__change_memory_common() uses apply_to_page_range() which does not support
changing permissions for block mappings. We attempt to move away from this
by using the pagewalk API, similar to what riscv does right now; however,
it is the responsibility of the caller to ensure that we do not pass a
range overlapping a partial block mapping or cont mapping; in such a case,
the system must be able to support range splitting.

This patch is tied with Yang Shi's attempt [1] at using huge mappings
in the linear mapping in case the system supports BBML2, in which case
we will be able to split the linear mapping if needed without
break-before-make. Thus, Yang's series, IIUC, will be one such user of my
patch; suppose we are changing permissions on a range of the linear map
backed by PMD-hugepages, then the sequence of operations should look
like the following:

split_range(start);
split_range(end);
__change_memory_common(start, end);

However, this patch can be used independently of Yang's; since currently
permission games are being played only on pte mappings (due to
apply_to_page_range not supporting otherwise), this patch provides the
mechanism for enabling huge mappings for various kernel mappings
like linear map and vmalloc.

---------------------
Implementation
---------------------

arm64 currently changes permissions on vmalloc objects locklessly, via
apply_to_page_range, whose limitation is to deny changing permissions for
block mappings. Therefore, we move away to use the generic pagewalk API,
thus paving the path for enabling huge mappings by default on kernel space
mappings, thus leading to more efficient TLB usage. However, the API
currently enforces the init_mm.mmap_lock to be held. To avoid the
unnecessary bottleneck of the mmap_lock for our usecase, this patch
extends this generic API to be used locklessly, so as to retain the
existing behaviour for changing permissions. Apart from this reason, it is
noted at [2] that KFENCE can manipulate kernel pgtable entries during
softirqs. It does this by calling set_memory_valid() -> __change_memory_common().
This being a non-sleepable context, we cannot take the init_mm mmap lock.

Add comments to highlight the conditions under which we can use the
lockless variant - no underlying VMA, and the user having exclusive control
over the range, thus guaranteeing no concurrent access.

We require that the start and end of a given range do not partially overlap
block mappings, or cont mappings. Return -EINVAL in case a partial block
mapping is detected in any of the PGD/P4D/PUD/PMD levels; add a
corresponding comment in update_range_prot() to warn that eliminating
such a condition is the responsibility of the caller.
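
For illustration (hypothetical addresses, 4K pages): suppose the 2M region
starting at 0xffff000040000000 is mapped by a single PMD block. A call such
as

	__change_memory_common(0xffff000040000000, PAGE_SIZE, set_mask, clear_mask);

reaches pageattr_pmd_entry() with (next - addr) == PAGE_SIZE on a leaf PMD,
so it warns and returns -EINVAL; the caller must first split that block so
that the requested range only spans page-level leaves.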

Note that the pte level callback may change permissions for a whole
contpte block, and that will be done one pte at a time, as opposed to
an atomic operation for the block mappings. This is fine as any access
will decode either the old or the new permission until the TLBI.

apply_to_page_range() currently performs all pte level callbacks while in
lazy mmu mode. Since arm64 can optimize performance by batching barriers
when modifying kernel pgtables in lazy mmu mode, we would like to continue
to benefit from this optimisation. Unfortunately walk_kernel_page_table_range()
does not use lazy mmu mode. However, since the pagewalk framework is not
allocating any memory, we can safely bracket the whole operation inside
lazy mmu mode ourselves. Therefore, wrap the call to
walk_kernel_page_table_range() with the lazy MMU helpers.

[1] https://lore.kernel.org/all/20250304222018.615808-1-yang@os.amperecomputing.com/
[2] https://lore.kernel.org/linux-arm-kernel/89d0ad18-4772-4d8f-ae8a-7c48d26a927e@arm.com/

Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 arch/arm64/mm/pageattr.c | 155 +++++++++++++++++++++++++++++++--------
 include/linux/pagewalk.h |   3 +
 mm/pagewalk.c            |  24 ++++++
 3 files changed, 150 insertions(+), 32 deletions(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 04d4a8f676db..c6a85000fa0e 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -8,6 +8,7 @@
 #include <linux/mem_encrypt.h>
 #include <linux/sched.h>
 #include <linux/vmalloc.h>
+#include <linux/pagewalk.h>
 
 #include <asm/cacheflush.h>
 #include <asm/pgtable-prot.h>
@@ -20,6 +21,99 @@ struct page_change_data {
 	pgprot_t clear_mask;
 };
 
+static ptdesc_t set_pageattr_masks(ptdesc_t val, struct mm_walk *walk)
+{
+	struct page_change_data *masks = walk->private;
+
+	val &= ~(pgprot_val(masks->clear_mask));
+	val |= (pgprot_val(masks->set_mask));
+
+	return val;
+}
+
+static int pageattr_pgd_entry(pgd_t *pgd, unsigned long addr,
+			      unsigned long next, struct mm_walk *walk)
+{
+	pgd_t val = pgdp_get(pgd);
+
+	if (pgd_leaf(val)) {
+		if (WARN_ON_ONCE((next - addr) != PGDIR_SIZE))
+			return -EINVAL;
+		val = __pgd(set_pageattr_masks(pgd_val(val), walk));
+		set_pgd(pgd, val);
+		walk->action = ACTION_CONTINUE;
+	}
+
+	return 0;
+}
+
+static int pageattr_p4d_entry(p4d_t *p4d, unsigned long addr,
+			      unsigned long next, struct mm_walk *walk)
+{
+	p4d_t val = p4dp_get(p4d);
+
+	if (p4d_leaf(val)) {
+		if (WARN_ON_ONCE((next - addr) != P4D_SIZE))
+			return -EINVAL;
+		val = __p4d(set_pageattr_masks(p4d_val(val), walk));
+		set_p4d(p4d, val);
+		walk->action = ACTION_CONTINUE;
+	}
+
+	return 0;
+}
+
+static int pageattr_pud_entry(pud_t *pud, unsigned long addr,
+			      unsigned long next, struct mm_walk *walk)
+{
+	pud_t val = pudp_get(pud);
+
+	if (pud_leaf(val)) {
+		if (WARN_ON_ONCE((next - addr) != PUD_SIZE))
+			return -EINVAL;
+		val = __pud(set_pageattr_masks(pud_val(val), walk));
+		set_pud(pud, val);
+		walk->action = ACTION_CONTINUE;
+	}
+
+	return 0;
+}
+
+static int pageattr_pmd_entry(pmd_t *pmd, unsigned long addr,
+			      unsigned long next, struct mm_walk *walk)
+{
+	pmd_t val = pmdp_get(pmd);
+
+	if (pmd_leaf(val)) {
+		if (WARN_ON_ONCE((next - addr) != PMD_SIZE))
+			return -EINVAL;
+		val = __pmd(set_pageattr_masks(pmd_val(val), walk));
+		set_pmd(pmd, val);
+		walk->action = ACTION_CONTINUE;
+	}
+
+	return 0;
+}
+
+static int pageattr_pte_entry(pte_t *pte, unsigned long addr,
+			      unsigned long next, struct mm_walk *walk)
+{
+	pte_t val = __ptep_get(pte);
+
+	val = __pte(set_pageattr_masks(pte_val(val), walk));
+	__set_pte(pte, val);
+
+	return 0;
+}
+
+static const struct mm_walk_ops pageattr_ops = {
+	.pgd_entry	= pageattr_pgd_entry,
+	.p4d_entry	= pageattr_p4d_entry,
+	.pud_entry	= pageattr_pud_entry,
+	.pmd_entry	= pageattr_pmd_entry,
+	.pte_entry	= pageattr_pte_entry,
+};
+
 bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED);
 
 bool can_set_direct_map(void)
@@ -37,33 +131,35 @@ bool can_set_direct_map(void)
 		arm64_kfence_can_set_direct_map() || is_realm_world();
 }
 
-static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
+static int update_range_prot(unsigned long start, unsigned long size,
+			     pgprot_t set_mask, pgprot_t clear_mask)
 {
-	struct page_change_data *cdata = data;
-	pte_t pte = __ptep_get(ptep);
+	struct page_change_data data;
+	int ret;
 
-	pte = clear_pte_bit(pte, cdata->clear_mask);
-	pte = set_pte_bit(pte, cdata->set_mask);
+	data.set_mask = set_mask;
+	data.clear_mask = clear_mask;
 
-	__set_pte(ptep, pte);
-	return 0;
+	arch_enter_lazy_mmu_mode();
+
+	/*
+	 * The caller must ensure that the range we are operating on does not
+	 * partially overlap a block mapping, or a cont mapping. Any such case
+	 * must be eliminated by splitting the mapping.
+	 */
+	ret = walk_kernel_page_table_range_lockless(start, start + size,
+						    &pageattr_ops, &data);
+	arch_leave_lazy_mmu_mode();
+
+	return ret;
 }
 
-/*
- * This function assumes that the range is mapped with PAGE_SIZE pages.
- */
 static int __change_memory_common(unsigned long start, unsigned long size,
-				pgprot_t set_mask, pgprot_t clear_mask)
+				  pgprot_t set_mask, pgprot_t clear_mask)
 {
-	struct page_change_data data;
 	int ret;
 
-	data.set_mask = set_mask;
-	data.clear_mask = clear_mask;
-
-	ret = apply_to_page_range(&init_mm, start, size, change_page_range,
-					&data);
-
+	ret = update_range_prot(start, size, set_mask, clear_mask);
 	/*
 	 * If the memory is being made valid without changing any other bits
 	 * then a TLBI isn't required as a non-valid entry cannot be cached in
@@ -71,6 +167,7 @@ static int __change_memory_common(unsigned long start, unsigned long size,
 	 */
 	if (pgprot_val(set_mask) != PTE_VALID || pgprot_val(clear_mask))
 		flush_tlb_kernel_range(start, start + size);
+
 	return ret;
 }
 
@@ -174,32 +271,26 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
 
 int set_direct_map_invalid_noflush(struct page *page)
 {
-	struct page_change_data data = {
-		.set_mask = __pgprot(0),
-		.clear_mask = __pgprot(PTE_VALID),
-	};
+	pgprot_t clear_mask = __pgprot(PTE_VALID);
+	pgprot_t set_mask = __pgprot(0);
 
 	if (!can_set_direct_map())
 		return 0;
 
-	return apply_to_page_range(&init_mm,
-				   (unsigned long)page_address(page),
-				   PAGE_SIZE, change_page_range, &data);
+	return update_range_prot((unsigned long)page_address(page),
+				 PAGE_SIZE, set_mask, clear_mask);
 }
 
 int set_direct_map_default_noflush(struct page *page)
 {
-	struct page_change_data data = {
-		.set_mask = __pgprot(PTE_VALID | PTE_WRITE),
-		.clear_mask = __pgprot(PTE_RDONLY),
-	};
+	pgprot_t set_mask = __pgprot(PTE_VALID | PTE_WRITE);
+	pgprot_t clear_mask = __pgprot(PTE_RDONLY);
 
 	if (!can_set_direct_map())
 		return 0;
 
-	return apply_to_page_range(&init_mm,
-				   (unsigned long)page_address(page),
-				   PAGE_SIZE, change_page_range, &data);
+	return update_range_prot((unsigned long)page_address(page),
+				 PAGE_SIZE, set_mask, clear_mask);
 }
 
 static int __set_memory_enc_dec(unsigned long addr,
diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index 682472c15495..8212e8f2d2d5 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -134,6 +134,9 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
 int walk_kernel_page_table_range(unsigned long start,
 		unsigned long end, const struct mm_walk_ops *ops,
 		pgd_t *pgd, void *private);
+int walk_kernel_page_table_range_lockless(unsigned long start,
+		unsigned long end, const struct mm_walk_ops *ops,
+		void *private);
 int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
 			unsigned long end, const struct mm_walk_ops *ops,
 			void *private);
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 648038247a8d..18a675ab87cf 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -633,6 +633,30 @@ int walk_kernel_page_table_range(unsigned long start, unsigned long end,
 	return walk_pgd_range(start, end, &walk);
 }
 
+/*
+ * Use this function to walk the kernel page tables locklessly. It should be
+ * guaranteed that the caller has exclusive access over the range they are
+ * operating on - that there should be no concurrent access, for example,
+ * changing permissions for vmalloc objects.
+ */
+int walk_kernel_page_table_range_lockless(unsigned long start, unsigned long end,
+		const struct mm_walk_ops *ops, void *private)
+{
+	struct mm_walk walk = {
+		.ops		= ops,
+		.mm		= &init_mm,
+		.private	= private,
+		.no_vma		= true
+	};
+
+	if (start >= end)
+		return -EINVAL;
+	if (!check_ops_valid(ops))
+		return -EINVAL;
+
+	return walk_pgd_range(start, end, &walk);
+}
+
 /**
  * walk_page_range_debug - walk a range of pagetables not backed by a vma
  * @mm:		mm_struct representing the target process of page table walk
-- 
2.50.0




* [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list
  2025-07-24 22:11 [v5 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
  2025-07-24 22:11 ` [PATCH 1/4] arm64: Enable permission change on arm64 kernel block mappings Yang Shi
@ 2025-07-24 22:11 ` Yang Shi
  2025-08-01 14:36   ` Ryan Roberts
  2025-08-04 23:20   ` Christoph Lameter (Ampere)
  2025-07-24 22:11 ` [PATCH 3/4] arm64: mm: support large block mapping when rodata=full Yang Shi
  2025-07-24 22:11 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
  3 siblings, 2 replies; 20+ messages in thread
From: Yang Shi @ 2025-07-24 22:11 UTC (permalink / raw)
  To: ryan.roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
	dev.jain, scott, cl
  Cc: yang, linux-arm-kernel, linux-kernel

AmpereOne supports BBML2 without conflict abort; add it to the allow list.

Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
 arch/arm64/kernel/cpufeature.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7a3c404288d5..d2a8a509a58e 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2230,6 +2230,8 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
 	static const struct midr_range supports_bbml2_noabort_list[] = {
 		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
 		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
+		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
+		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
 		{}
 	};
 
-- 
2.50.0




* [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-07-24 22:11 [v5 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
  2025-07-24 22:11 ` [PATCH 1/4] arm64: Enable permission change on arm64 kernel block mappings Yang Shi
  2025-07-24 22:11 ` [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
@ 2025-07-24 22:11 ` Yang Shi
  2025-07-29 12:34   ` Dev Jain
  2025-08-01 14:35   ` Ryan Roberts
  2025-07-24 22:11 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
  3 siblings, 2 replies; 20+ messages in thread
From: Yang Shi @ 2025-07-24 22:11 UTC (permalink / raw)
  To: ryan.roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
	dev.jain, scott, cl
  Cc: yang, linux-arm-kernel, linux-kernel

When rodata=full is specified, the kernel linear mapping has to be mapped
at PTE level since large block mappings can't be split due to the
break-before-make rule on ARM64.

This results in a number of problems:
  - performance degradation
  - more TLB pressure
  - memory waste for kernel page table

With FEAT_BBM level 2 support, splitting a large block mapping into smaller
ones no longer requires invalidating the page table entries.  This allows
the kernel to split large block mappings on the fly.

Add kernel page table split support and use large block mappings by default
with rodata=full when FEAT_BBM level 2 is supported.  When changing
permissions on the kernel linear mapping, the page table is split into
smaller mappings as needed.
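
For example, with 4K pages, changing the permission of a single page that
sits inside a 1GB block mapping proceeds top-down (a rough paraphrase of
the split_* helpers added below):

  split_pud():      1GB PUD leaf -> table of 512 x 2MB PMDs (PTE_CONT set)
  split_cont_pmd(): clear CONT on the 16-PMD group containing the page
  split_pmd():      2MB PMD leaf -> table of 512 x 4K PTEs (PTE_CONT set)
  split_cont_pte(): clear CONT on the 16-PTE group containing the page

Block and cont mappings that are fully contained in the requested range
are kept as is; only the partially overlapped ones are split.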

Machines without FEAT_BBM level 2 fall back to a PTE-mapped kernel linear
mapping when rodata=full.

With this we saw a significant performance boost in some benchmarks and
much lower memory consumption on my AmpereOne machine (192 cores, 1P) with
256GB memory.

* Memory use after boot
Before:
MemTotal:       258988984 kB
MemFree:        254821700 kB

After:
MemTotal:       259505132 kB
MemFree:        255410264 kB

Around 500MB more memory is free to use.  The larger the machine, the
more memory is saved.

* Memcached
We saw performance degradation when running the Memcached benchmark with
rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
With this patchset, ops/sec increased by around 3.5% and P99 latency
dropped by around 9.6%.
The gain mainly came from reduced kernel TLB misses.  The kernel TLB
MPKI dropped by 28.5%.

The benchmark data is now on par with rodata=on too.

* Disk encryption (dm-crypt) benchmark
Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
encryption (by dm-crypt).
fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
    --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
    --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
    --name=iops-test-job --eta-newline=1 --size 100G

IOPS increased by 90% - 150% (the variance is high, but the worst result
with the patches is still around 90% better than the best result without
them).  Bandwidth increased and the average completion latency dropped
proportionally.

* Sequential file read
Read a 100G file sequentially on XFS (xfs_io read with the page cache
populated).  Bandwidth increased by 150%.

Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
 arch/arm64/include/asm/cpufeature.h |  34 ++++
 arch/arm64/include/asm/mmu.h        |   1 +
 arch/arm64/include/asm/pgtable.h    |   5 +
 arch/arm64/kernel/cpufeature.c      |  31 +--
 arch/arm64/mm/mmu.c                 | 293 +++++++++++++++++++++++++++-
 arch/arm64/mm/pageattr.c            |   4 +
 6 files changed, 333 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index bf13d676aae2..d0d394cc837d 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -871,6 +871,40 @@ static inline bool system_supports_pmuv3(void)
 	return cpus_have_final_cap(ARM64_HAS_PMUV3);
 }
 
+static inline bool bbml2_noabort_available(void)
+{
+	/*
+	 * We want to allow usage of BBML2 in as wide a range of kernel contexts
+	 * as possible. This list is therefore an allow-list of known-good
+	 * implementations that both support BBML2 and additionally, fulfill the
+	 * extra constraint of never generating TLB conflict aborts when using
+	 * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
+	 * kernel contexts difficult to prove safe against recursive aborts).
+	 *
+	 * Note that implementations can only be considered "known-good" if their
+	 * implementors attest to the fact that the implementation never raises
+	 * TLB conflict aborts for BBML2 mapping granularity changes.
+	 */
+	static const struct midr_range supports_bbml2_noabort_list[] = {
+		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
+		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
+		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
+		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
+		{}
+	};
+
+	/* Does our cpu guarantee to never raise TLB conflict aborts? */
+	if (!is_midr_in_range_list(supports_bbml2_noabort_list))
+		return false;
+
+	/*
+	 * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
+	 * about whether the MIDR check passes.
+	 */
+
+	return true;
+}
+
 static inline bool system_supports_bbml2_noabort(void)
 {
 	return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 6e8aa8e72601..57f4b25e6f33 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 			       pgprot_t prot, bool page_mappings_only);
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
 extern void mark_linear_text_alias_ro(void);
+extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);
 
 /*
  * This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index ba63c8736666..ad2a6a7e86b0 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -371,6 +371,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
 	return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
 }
 
+static inline pmd_t pmd_mknoncont(pmd_t pmd)
+{
+	return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
+}
+
 #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
 static inline int pte_uffd_wp(pte_t pte)
 {
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d2a8a509a58e..1c96016a7a41 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2215,36 +2215,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
 
 static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
 {
-	/*
-	 * We want to allow usage of BBML2 in as wide a range of kernel contexts
-	 * as possible. This list is therefore an allow-list of known-good
-	 * implementations that both support BBML2 and additionally, fulfill the
-	 * extra constraint of never generating TLB conflict aborts when using
-	 * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
-	 * kernel contexts difficult to prove safe against recursive aborts).
-	 *
-	 * Note that implementations can only be considered "known-good" if their
-	 * implementors attest to the fact that the implementation never raises
-	 * TLB conflict aborts for BBML2 mapping granularity changes.
-	 */
-	static const struct midr_range supports_bbml2_noabort_list[] = {
-		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
-		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
-		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
-		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
-		{}
-	};
-
-	/* Does our cpu guarantee to never raise TLB conflict aborts? */
-	if (!is_midr_in_range_list(supports_bbml2_noabort_list))
-		return false;
-
-	/*
-	 * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
-	 * about whether the MIDR check passes.
-	 */
-
-	return true;
+	return bbml2_noabort_available();
 }
 
 #ifdef CONFIG_ARM64_PAN
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 3d5fb37424ab..f63b39613571 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -480,6 +480,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
 			     int flags);
 #endif
 
+#define INVALID_PHYS_ADDR	-1
+
 static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
 				       enum pgtable_type pgtable_type)
 {
@@ -487,7 +489,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
 	struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
 	phys_addr_t pa;
 
-	BUG_ON(!ptdesc);
+	if (!ptdesc)
+		return INVALID_PHYS_ADDR;
+
 	pa = page_to_phys(ptdesc_page(ptdesc));
 
 	switch (pgtable_type) {
@@ -509,15 +513,29 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
 }
 
 static phys_addr_t __maybe_unused
-pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+split_pgtable_alloc(enum pgtable_type pgtable_type)
 {
 	return __pgd_pgtable_alloc(&init_mm, pgtable_type);
 }
 
+static phys_addr_t __maybe_unused
+pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+{
+	phys_addr_t pa;
+
+	pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
+	BUG_ON(pa == INVALID_PHYS_ADDR);
+	return pa;
+}
+
 static phys_addr_t
 pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
 {
-	return __pgd_pgtable_alloc(NULL, pgtable_type);
+	phys_addr_t pa;
+
+	pa = __pgd_pgtable_alloc(NULL, pgtable_type);
+	BUG_ON(pa == INVALID_PHYS_ADDR);
+	return pa;
 }
 
 /*
@@ -552,6 +570,254 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 			     pgd_pgtable_alloc_special_mm, flags);
 }
 
+static DEFINE_MUTEX(pgtable_split_lock);
+
+static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
+{
+	pte_t *ptep;
+	unsigned long next;
+	unsigned int nr;
+	unsigned long span;
+
+	ptep = pte_offset_kernel(pmdp, addr);
+
+	do {
+		pte_t *_ptep;
+
+		nr = 0;
+		next = pte_cont_addr_end(addr, end);
+		if (next < end)
+			nr = max(nr, ((end - next) / CONT_PTE_SIZE));
+		span = nr * CONT_PTE_SIZE;
+
+		_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
+		ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
+
+		if (((addr | next) & ~CONT_PTE_MASK) == 0)
+			continue;
+
+		if (!pte_cont(__ptep_get(_ptep)))
+			continue;
+
+		for (int i = 0; i < CONT_PTES; i++, _ptep++)
+			__set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
+	} while (addr = next + span, addr != end);
+
+	return 0;
+}
+
+static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
+{
+	unsigned long next;
+	unsigned int nr;
+	unsigned long span;
+	int ret = 0;
+
+	do {
+		pmd_t pmd;
+
+		nr = 1;
+		next = pmd_addr_end(addr, end);
+		if (next < end)
+			nr = max(nr, ((end - next) / PMD_SIZE));
+		span = (nr - 1) * PMD_SIZE;
+
+		if (((addr | next) & ~PMD_MASK) == 0)
+			continue;
+
+		pmd = pmdp_get(pmdp);
+		if (pmd_leaf(pmd)) {
+			phys_addr_t pte_phys;
+			pte_t *ptep;
+			pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN |
+					  PMD_TABLE_AF;
+			unsigned long pfn = pmd_pfn(pmd);
+			pgprot_t prot = pmd_pgprot(pmd);
+
+			pte_phys = split_pgtable_alloc(TABLE_PTE);
+			if (pte_phys == INVALID_PHYS_ADDR)
+				return -ENOMEM;
+
+			if (pgprot_val(prot) & PMD_SECT_PXN)
+				pmdval |= PMD_TABLE_PXN;
+
+			prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
+					PTE_TYPE_PAGE);
+			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+			ptep = (pte_t *)phys_to_virt(pte_phys);
+			for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
+				__set_pte(ptep, pfn_pte(pfn, prot));
+
+			dsb(ishst);
+
+			__pmd_populate(pmdp, pte_phys, pmdval);
+		}
+
+		ret = split_cont_pte(pmdp, addr, next);
+		if (ret)
+			break;
+	} while (pmdp += nr, addr = next + span, addr != end);
+
+	return ret;
+}
+
+static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
+{
+	pmd_t *pmdp;
+	unsigned long next;
+	unsigned int nr;
+	unsigned long span;
+	int ret = 0;
+
+	pmdp = pmd_offset(pudp, addr);
+
+	do {
+		pmd_t *_pmdp;
+
+		nr = 0;
+		next = pmd_cont_addr_end(addr, end);
+		if (next < end)
+			nr = max(nr, ((end - next) / CONT_PMD_SIZE));
+		span = nr * CONT_PMD_SIZE;
+
+		if (((addr | next) & ~CONT_PMD_MASK) == 0) {
+			pmdp += pmd_index(next) - pmd_index(addr) +
+				nr * CONT_PMDS;
+			continue;
+		}
+
+		_pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
+		if (!pmd_cont(pmdp_get(_pmdp)))
+			goto split;
+
+		for (int i = 0; i < CONT_PMDS; i++, _pmdp++)
+			set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
+
+split:
+		ret = split_pmd(pmdp, addr, next);
+		if (ret)
+			break;
+
+		pmdp += pmd_index(next) - pmd_index(addr) + nr * CONT_PMDS;
+	} while (addr = next + span, addr != end);
+
+	return ret;
+}
+
+static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
+{
+	pud_t *pudp;
+	unsigned long next;
+	unsigned int nr;
+	unsigned long span;
+	int ret = 0;
+
+	pudp = pud_offset(p4dp, addr);
+
+	do {
+		pud_t pud;
+
+		nr = 1;
+		next = pud_addr_end(addr, end);
+		if (next < end)
+			nr = max(nr, ((end - next) / PUD_SIZE));
+		span = (nr - 1) * PUD_SIZE;
+
+		if (((addr | next) & ~PUD_MASK) == 0)
+			continue;
+
+		pud = pudp_get(pudp);
+		if (pud_leaf(pud)) {
+			phys_addr_t pmd_phys;
+			pmd_t *pmdp;
+			pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN |
+					  PUD_TABLE_AF;
+			unsigned long pfn = pud_pfn(pud);
+			pgprot_t prot = pud_pgprot(pud);
+			unsigned int step = PMD_SIZE >> PAGE_SHIFT;
+
+			pmd_phys = split_pgtable_alloc(TABLE_PMD);
+			if (pmd_phys == INVALID_PHYS_ADDR)
+				return -ENOMEM;
+
+			if (pgprot_val(prot) & PMD_SECT_PXN)
+				pudval |= PUD_TABLE_PXN;
+
+			prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
+					PMD_TYPE_SECT);
+			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+			pmdp = (pmd_t *)phys_to_virt(pmd_phys);
+			for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
+				set_pmd(pmdp, pfn_pmd(pfn, prot));
+				pfn += step;
+			}
+
+			dsb(ishst);
+
+			__pud_populate(pudp, pmd_phys, pudval);
+		}
+
+		ret = split_cont_pmd(pudp, addr, next);
+		if (ret)
+			break;
+	} while (pudp += nr, addr = next + span, addr != end);
+
+	return ret;
+}
+
+static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
+{
+	p4d_t *p4dp;
+	unsigned long next;
+	int ret = 0;
+
+	p4dp = p4d_offset(pgdp, addr);
+
+	do {
+		next = p4d_addr_end(addr, end);
+
+		ret = split_pud(p4dp, addr, next);
+		if (ret)
+			break;
+	} while (p4dp++, addr = next, addr != end);
+
+	return ret;
+}
+
+static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
+{
+	unsigned long next;
+	int ret = 0;
+
+	do {
+		next = pgd_addr_end(addr, end);
+		ret = split_p4d(pgdp, addr, next);
+		if (ret)
+			break;
+	} while (pgdp++, addr = next, addr != end);
+
+	return ret;
+}
+
+int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
+{
+	int ret;
+
+	if (!system_supports_bbml2_noabort())
+		return 0;
+
+	if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
+		return -EINVAL;
+
+	mutex_lock(&pgtable_split_lock);
+	arch_enter_lazy_mmu_mode();
+	ret = split_pgd(pgd_offset_k(start), start, end);
+	arch_leave_lazy_mmu_mode();
+	mutex_unlock(&pgtable_split_lock);
+
+	return ret;
+}
+
 static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 				phys_addr_t size, pgprot_t prot)
 {
@@ -639,6 +905,20 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
 
 #endif /* CONFIG_KFENCE */
 
+bool linear_map_requires_bbml2;
+
+static inline bool force_pte_mapping(void)
+{
+	/*
+	 * Can't use cpufeature API to determine whether BBM level 2
+	 * is supported or not since cpufeature have not been
+	 * finalized yet.
+	 */
+	return (!bbml2_noabort_available() && (rodata_full ||
+		arm64_kfence_can_set_direct_map() || is_realm_world())) ||
+		debug_pagealloc_enabled();
+}
+
 static void __init map_mem(pgd_t *pgdp)
 {
 	static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
@@ -664,7 +944,9 @@ static void __init map_mem(pgd_t *pgdp)
 
 	early_kfence_pool = arm64_kfence_alloc_pool();
 
-	if (can_set_direct_map())
+	linear_map_requires_bbml2 = !force_pte_mapping() && rodata_full;
+
+	if (force_pte_mapping())
 		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	/*
@@ -1366,7 +1648,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
 
 	VM_BUG_ON(!mhp_range_allowed(start, size, true));
 
-	if (can_set_direct_map())
+	if (force_pte_mapping() ||
+	    (linear_map_requires_bbml2 && !system_supports_bbml2_noabort()))
 		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index c6a85000fa0e..6566aa9d8abb 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -140,6 +140,10 @@ static int update_range_prot(unsigned long start, unsigned long size,
 	data.set_mask = set_mask;
 	data.clear_mask = clear_mask;
 
+	ret = split_kernel_pgtable_mapping(start, start + size);
+	if (WARN_ON_ONCE(ret))
+		return ret;
+
 	arch_enter_lazy_mmu_mode();
 
 	/*
-- 
2.50.0




* [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
  2025-07-24 22:11 [v5 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
                   ` (2 preceding siblings ...)
  2025-07-24 22:11 ` [PATCH 3/4] arm64: mm: support large block mapping when rodata=full Yang Shi
@ 2025-07-24 22:11 ` Yang Shi
  2025-07-26 11:10   ` kernel test robot
                     ` (2 more replies)
  3 siblings, 3 replies; 20+ messages in thread
From: Yang Shi @ 2025-07-24 22:11 UTC (permalink / raw)
  To: ryan.roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
	dev.jain, scott, cl
  Cc: yang, linux-arm-kernel, linux-kernel

The kernel linear mapping is painted at a very early stage of system boot,
before the cpufeatures have been finalized, so the mapping is determined by
the capability of the boot CPU.  If the boot CPU supports BBML2, large
block mappings are used for the linear mapping.

But the secondary CPUs may not support BBML2, so once cpufeatures have been
finalized on all CPUs, repaint the linear mapping if large block mappings
are in use and the secondary CPUs don't support BBML2.

If the boot CPU doesn't support BBML2, or the secondary CPUs have the same
BBML2 capability as the boot CPU, repainting the linear mapping is not
needed.
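
In outline (a summary of the code below, not a literal transcript):

  stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask):
    CPU 0:       wait until repaint_done == num_online_cpus(), split the
                 whole linear map to PTEs (NO_BLOCK_MAPPINGS |
                 NO_CONT_MAPPINGS, page tables allocated with GFP_ATOMIC),
                 then clear repaint_done to release the secondary CPUs.
    secondaries: switch to the idmap and spin in bbml2_wait_for_repainting
                 until repaint_done drops back to 0, then restore TTBR1
                 and return.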

Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
 arch/arm64/include/asm/mmu.h   |   6 +-
 arch/arm64/kernel/cpufeature.c |   8 ++
 arch/arm64/mm/mmu.c            | 173 +++++++++++++++++++++++++++------
 arch/arm64/mm/pageattr.c       |   2 +-
 arch/arm64/mm/proc.S           |  57 ++++++++---
 5 files changed, 196 insertions(+), 50 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 57f4b25e6f33..9bf50e8897e2 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -56,6 +56,8 @@ typedef struct {
  */
 #define ASID(mm)	(atomic64_read(&(mm)->context.id) & 0xffff)
 
+extern bool linear_map_requires_bbml2;
+
 static inline bool arm64_kernel_unmapped_at_el0(void)
 {
 	return alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0);
@@ -71,7 +73,9 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 			       pgprot_t prot, bool page_mappings_only);
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
 extern void mark_linear_text_alias_ro(void);
-extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);
+extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end,
+					unsigned int flags);
+extern int linear_map_split_to_ptes(void *__unused);
 
 /*
  * This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 1c96016a7a41..23c01d679c40 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -85,6 +85,7 @@
 #include <asm/insn.h>
 #include <asm/kvm_host.h>
 #include <asm/mmu_context.h>
+#include <asm/mmu.h>
 #include <asm/mte.h>
 #include <asm/hypervisor.h>
 #include <asm/processor.h>
@@ -2009,6 +2010,12 @@ static int __init __kpti_install_ng_mappings(void *__unused)
 	return 0;
 }
 
+static void __init linear_map_maybe_split_to_ptes(void)
+{
+	if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort())
+		stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);
+}
+
 static void __init kpti_install_ng_mappings(void)
 {
 	/* Check whether KPTI is going to be used */
@@ -3855,6 +3862,7 @@ void __init setup_system_features(void)
 {
 	setup_system_capabilities();
 
+	linear_map_maybe_split_to_ptes();
 	kpti_install_ng_mappings();
 
 	sve_setup();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f63b39613571..22f2d0869fdd 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -482,11 +482,11 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
 
 #define INVALID_PHYS_ADDR	-1
 
-static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
+static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm, gfp_t gfp,
 				       enum pgtable_type pgtable_type)
 {
 	/* Page is zeroed by init_clear_pgtable() so don't duplicate effort. */
-	struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
+	struct ptdesc *ptdesc = pagetable_alloc(gfp, 0);
 	phys_addr_t pa;
 
 	if (!ptdesc)
@@ -513,9 +513,16 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
 }
 
 static phys_addr_t __maybe_unused
-split_pgtable_alloc(enum pgtable_type pgtable_type)
+split_pgtable_alloc(enum pgtable_type pgtable_type, int flags)
 {
-	return __pgd_pgtable_alloc(&init_mm, pgtable_type);
+	gfp_t gfp;
+
+	if ((flags & (NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS)) == 0)
+		gfp = GFP_PGTABLE_KERNEL & ~__GFP_ZERO;
+	else
+		gfp = GFP_ATOMIC;
+
+	return __pgd_pgtable_alloc(&init_mm, gfp, pgtable_type);
 }
 
 static phys_addr_t __maybe_unused
@@ -523,7 +530,8 @@ pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
 {
 	phys_addr_t pa;
 
-	pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
+	pa = __pgd_pgtable_alloc(&init_mm, GFP_PGTABLE_KERNEL & ~__GFP_ZERO,
+				 pgtable_type);
 	BUG_ON(pa == INVALID_PHYS_ADDR);
 	return pa;
 }
@@ -533,7 +541,8 @@ pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
 {
 	phys_addr_t pa;
 
-	pa = __pgd_pgtable_alloc(NULL, pgtable_type);
+	pa = __pgd_pgtable_alloc(NULL, GFP_PGTABLE_KERNEL & ~__GFP_ZERO,
+				 pgtable_type);
 	BUG_ON(pa == INVALID_PHYS_ADDR);
 	return pa;
 }
@@ -572,7 +581,8 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 
 static DEFINE_MUTEX(pgtable_split_lock);
 
-static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
+static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
+			  unsigned int flags)
 {
 	pte_t *ptep;
 	unsigned long next;
@@ -586,14 +596,16 @@ static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
 
 		nr = 0;
 		next = pte_cont_addr_end(addr, end);
-		if (next < end)
+		if (next < end &&
+		    (flags & NO_CONT_MAPPINGS) == 0)
 			nr = max(nr, ((end - next) / CONT_PTE_SIZE));
 		span = nr * CONT_PTE_SIZE;
 
 		_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
 		ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
 
-		if (((addr | next) & ~CONT_PTE_MASK) == 0)
+		if (((addr | next) & ~CONT_PTE_MASK) == 0 &&
+		    (flags & NO_CONT_MAPPINGS) == 0)
 			continue;
 
 		if (!pte_cont(__ptep_get(_ptep)))
@@ -606,7 +618,8 @@ static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
 	return 0;
 }
 
-static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
+static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
+		     unsigned int flags)
 {
 	unsigned long next;
 	unsigned int nr;
@@ -618,11 +631,13 @@ static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
 
 		nr = 1;
 		next = pmd_addr_end(addr, end);
-		if (next < end)
+		if (next < end &&
+		    (flags & NO_BLOCK_MAPPINGS) == 0)
 			nr = max(nr, ((end - next) / PMD_SIZE));
 		span = (nr - 1) * PMD_SIZE;
 
-		if (((addr | next) & ~PMD_MASK) == 0)
+		if (((addr | next) & ~PMD_MASK) == 0 &&
+		    (flags & NO_BLOCK_MAPPINGS) == 0)
 			continue;
 
 		pmd = pmdp_get(pmdp);
@@ -634,7 +649,7 @@ static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
 			unsigned long pfn = pmd_pfn(pmd);
 			pgprot_t prot = pmd_pgprot(pmd);
 
-			pte_phys = split_pgtable_alloc(TABLE_PTE);
+			pte_phys = split_pgtable_alloc(TABLE_PTE, flags);
 			if (pte_phys == INVALID_PHYS_ADDR)
 				return -ENOMEM;
 
@@ -643,7 +658,8 @@ static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
 
 			prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
 					PTE_TYPE_PAGE);
-			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+			if ((flags & NO_CONT_MAPPINGS) == 0)
+				prot = __pgprot(pgprot_val(prot) | PTE_CONT);
 			ptep = (pte_t *)phys_to_virt(pte_phys);
 			for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
 				__set_pte(ptep, pfn_pte(pfn, prot));
@@ -653,7 +669,7 @@ static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
 			__pmd_populate(pmdp, pte_phys, pmdval);
 		}
 
-		ret = split_cont_pte(pmdp, addr, next);
+		ret = split_cont_pte(pmdp, addr, next, flags);
 		if (ret)
 			break;
 	} while (pmdp += nr, addr = next + span, addr != end);
@@ -661,7 +677,8 @@ static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
 	return ret;
 }
 
-static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
+static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
+			  unsigned int flags)
 {
 	pmd_t *pmdp;
 	unsigned long next;
@@ -676,11 +693,13 @@ static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
 
 		nr = 0;
 		next = pmd_cont_addr_end(addr, end);
-		if (next < end)
+		if (next < end &&
+		    (flags & NO_CONT_MAPPINGS) == 0)
 			nr = max(nr, ((end - next) / CONT_PMD_SIZE));
 		span = nr * CONT_PMD_SIZE;
 
-		if (((addr | next) & ~CONT_PMD_MASK) == 0) {
+		if (((addr | next) & ~CONT_PMD_MASK) == 0 &&
+		    (flags & NO_CONT_MAPPINGS) == 0) {
 			pmdp += pmd_index(next) - pmd_index(addr) +
 				nr * CONT_PMDS;
 			continue;
@@ -694,7 +713,7 @@ static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
 			set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
 
 split:
-		ret = split_pmd(pmdp, addr, next);
+		ret = split_pmd(pmdp, addr, next, flags);
 		if (ret)
 			break;
 
@@ -704,7 +723,8 @@ static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
 	return ret;
 }
 
-static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
+static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
+		     unsigned int flags)
 {
 	pud_t *pudp;
 	unsigned long next;
@@ -719,11 +739,13 @@ static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
 
 		nr = 1;
 		next = pud_addr_end(addr, end);
-		if (next < end)
+		if (next < end &&
+		    (flags & NO_BLOCK_MAPPINGS) == 0)
 			nr = max(nr, ((end - next) / PUD_SIZE));
 		span = (nr - 1) * PUD_SIZE;
 
-		if (((addr | next) & ~PUD_MASK) == 0)
+		if (((addr | next) & ~PUD_MASK) == 0 &&
+		    (flags & NO_BLOCK_MAPPINGS) == 0)
 			continue;
 
 		pud = pudp_get(pudp);
@@ -736,7 +758,7 @@ static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
 			pgprot_t prot = pud_pgprot(pud);
 			unsigned int step = PMD_SIZE >> PAGE_SHIFT;
 
-			pmd_phys = split_pgtable_alloc(TABLE_PMD);
+			pmd_phys = split_pgtable_alloc(TABLE_PMD, flags);
 			if (pmd_phys == INVALID_PHYS_ADDR)
 				return -ENOMEM;
 
@@ -745,7 +767,8 @@ static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
 
 			prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
 					PMD_TYPE_SECT);
-			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+			if ((flags & NO_CONT_MAPPINGS) == 0)
+				prot = __pgprot(pgprot_val(prot) | PTE_CONT);
 			pmdp = (pmd_t *)phys_to_virt(pmd_phys);
 			for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
 				set_pmd(pmdp, pfn_pmd(pfn, prot));
@@ -757,7 +780,7 @@ static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
 			__pud_populate(pudp, pmd_phys, pudval);
 		}
 
-		ret = split_cont_pmd(pudp, addr, next);
+		ret = split_cont_pmd(pudp, addr, next, flags);
 		if (ret)
 			break;
 	} while (pudp += nr, addr = next + span, addr != end);
@@ -765,7 +788,8 @@ static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
 	return ret;
 }
 
-static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
+static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
+		     unsigned int flags)
 {
 	p4d_t *p4dp;
 	unsigned long next;
@@ -776,7 +800,7 @@ static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
 	do {
 		next = p4d_addr_end(addr, end);
 
-		ret = split_pud(p4dp, addr, next);
+		ret = split_pud(p4dp, addr, next, flags);
 		if (ret)
 			break;
 	} while (p4dp++, addr = next, addr != end);
@@ -784,14 +808,15 @@ static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
 	return ret;
 }
 
-static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
+static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end,
+		     unsigned int flags)
 {
 	unsigned long next;
 	int ret = 0;
 
 	do {
 		next = pgd_addr_end(addr, end);
-		ret = split_p4d(pgdp, addr, next);
+		ret = split_p4d(pgdp, addr, next, flags);
 		if (ret)
 			break;
 	} while (pgdp++, addr = next, addr != end);
@@ -799,7 +824,8 @@ static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
 	return ret;
 }
 
-int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
+int split_kernel_pgtable_mapping(unsigned long start, unsigned long end,
+				 unsigned int flags)
 {
 	int ret;
 
@@ -811,7 +837,7 @@ int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
 
 	mutex_lock(&pgtable_split_lock);
 	arch_enter_lazy_mmu_mode();
-	ret = split_pgd(pgd_offset_k(start), start, end);
+	ret = split_pgd(pgd_offset_k(start), start, end, flags);
 	arch_leave_lazy_mmu_mode();
 	mutex_unlock(&pgtable_split_lock);
 
@@ -851,6 +877,75 @@ void __init mark_linear_text_alias_ro(void)
 			    PAGE_KERNEL_RO);
 }
 
+extern u32 repaint_done;
+
+int __init linear_map_split_to_ptes(void *__unused)
+{
+	typedef void (repaint_wait_fn)(void);
+	extern repaint_wait_fn bbml2_wait_for_repainting;
+	repaint_wait_fn *wait_fn;
+
+	int cpu = smp_processor_id();
+
+	wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
+
+	/*
+	 * Repainting just can be run on CPU 0 because we just can be sure
+	 * CPU 0 supports BBML2.
+	 */
+	if (!cpu) {
+		phys_addr_t kernel_start = __pa_symbol(_stext);
+		phys_addr_t kernel_end = __pa_symbol(__init_begin);
+		phys_addr_t start, end;
+		unsigned long vstart, vend;
+		int flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
+		u64 i;
+		int ret;
+
+		/*
+		 * Wait for all secondary CPUs get prepared for repainting
+		 * the linear mapping.
+		 */
+		smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());
+
+		memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
+		/* Split the whole linear mapping */
+		for_each_mem_range(i, &start, &end) {
+			if (start >= end)
+				return -EINVAL;
+
+			vstart = __phys_to_virt(start);
+			vend = __phys_to_virt(end);
+			ret = split_kernel_pgtable_mapping(vstart, vend, flags);
+			if (ret)
+				panic("Failed to split linear mappings\n");
+
+			flush_tlb_kernel_range(vstart, vend);
+		}
+		memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
+
+		/*
+		 * Relies on dsb in flush_tlb_kernel_range() to avoid
+		 * reordering before any page table split operations.
+		 */
+		WRITE_ONCE(repaint_done, 0);
+	} else {
+		/*
+		 * The secondary CPUs can't run in the same address space
+		 * with CPU 0 because accessing the linear mapping address
+		 * when CPU 0 is repainting it is not safe.
+		 *
+		 * Let the secondary CPUs run busy loop in idmap address
+		 * space when repainting is ongoing.
+		 */
+		cpu_install_idmap();
+		wait_fn();
+		cpu_uninstall_idmap();
+	}
+
+	return 0;
+}
+
 #ifdef CONFIG_KFENCE
 
 bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
@@ -1079,7 +1174,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
 		    int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
 
 static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
-	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
+	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
+	  bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
 
 static void __init create_idmap(void)
 {
@@ -1104,6 +1200,19 @@ static void __init create_idmap(void)
 			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
 			       __phys_to_virt(ptep) - ptep);
 	}
+
+	/*
+	 * Setup idmap mapping for repaint_done flag.  It will be used if
+	 * repainting the linear mapping is needed later.
+	 */
+	if (linear_map_requires_bbml2) {
+		u64 pa = __pa_symbol(&repaint_done);
+		ptep = __pa_symbol(bbml2_ptes);
+
+		__pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
+			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
+			       __phys_to_virt(ptep) - ptep);
+	}
 }
 
 void __init paging_init(void)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 6566aa9d8abb..4471d7e510a1 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -140,7 +140,7 @@ static int update_range_prot(unsigned long start, unsigned long size,
 	data.set_mask = set_mask;
 	data.clear_mask = clear_mask;
 
-	ret = split_kernel_pgtable_mapping(start, start + size);
+	ret = split_kernel_pgtable_mapping(start, start + size, 0);
 	if (WARN_ON_ONCE(ret))
 		return ret;
 
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 80d470aa469d..f0f9c49a4466 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -239,6 +239,25 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
 	dsb	nshst
 	.endm
 
+	.macro wait_for_boot_cpu, tmp1, tmp2, tmp3, tmp4
+	/* Increment the flag to let the boot CPU know we're ready */
+1:	ldxr	\tmp3, [\tmp2]
+	add	\tmp3, \tmp3, #1
+	stxr	\tmp4, \tmp3, [\tmp2]
+	cbnz	\tmp4, 1b
+
+	/* Wait for the boot CPU to finish its job */
+	sevl
+1:	wfe
+	ldxr	\tmp3, [\tmp2]
+	cbnz	\tmp3, 1b
+
+	/* All done, act like nothing happened */
+	msr	ttbr1_el1, \tmp1
+	isb
+	ret
+	.endm
+
 /*
  * void __kpti_install_ng_mappings(int cpu, int num_secondaries, phys_addr_t temp_pgd,
  *				   unsigned long temp_pte_va)
@@ -416,29 +435,35 @@ alternative_else_nop_endif
 __idmap_kpti_secondary:
 	/* Uninstall swapper before surgery begins */
 	__idmap_cpu_set_reserved_ttbr1 x16, x17
+	wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
 
-	/* Increment the flag to let the boot CPU we're ready */
-1:	ldxr	w16, [flag_ptr]
-	add	w16, w16, #1
-	stxr	w17, w16, [flag_ptr]
-	cbnz	w17, 1b
+	.unreq	swapper_ttb
+	.unreq	flag_ptr
+SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+	.popsection
+#endif
 
-	/* Wait for the boot CPU to finish messing around with swapper */
-	sevl
-1:	wfe
-	ldxr	w16, [flag_ptr]
-	cbnz	w16, 1b
+/*
+ * Wait until repainting is done. Runs on secondary CPUs
+ * only.
+ */
+	.pushsection	".data", "aw", %progbits
+SYM_DATA(repaint_done, .long 1)
+	.popsection
 
-	/* All done, act like nothing happened */
-	msr	ttbr1_el1, swapper_ttb
-	isb
-	ret
+	.pushsection ".idmap.text", "a"
+SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
+	swapper_ttb	.req	x0
+	flag_ptr	.req	x1
+	mrs     swapper_ttb, ttbr1_el1
+	adr_l   flag_ptr, repaint_done
+	__idmap_cpu_set_reserved_ttbr1 x16, x17
+	wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
 
 	.unreq	swapper_ttb
 	.unreq	flag_ptr
-SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+SYM_FUNC_END(bbml2_wait_for_repainting)
 	.popsection
-#endif
 
 /*
  *	__cpu_setup
-- 
2.50.0



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
  2025-07-24 22:11 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
@ 2025-07-26 11:10   ` kernel test robot
  2025-08-01 16:14   ` Ryan Roberts
  2025-08-05  7:54   ` Ryan Roberts
  2 siblings, 0 replies; 20+ messages in thread
From: kernel test robot @ 2025-07-26 11:10 UTC (permalink / raw)
  To: Yang Shi, ryan.roberts, will, catalin.marinas, akpm,
	Miko.Lenczewski, dev.jain, scott, cl
  Cc: oe-kbuild-all, yang, linux-arm-kernel, linux-kernel

Hi Yang,

kernel test robot noticed the following build warnings:

[auto build test WARNING on next-20250724]
[cannot apply to arm64/for-next/core akpm-mm/mm-everything v6.16-rc7 v6.16-rc6 v6.16-rc5 linus/master v6.16-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Yang-Shi/arm64-Enable-permission-change-on-arm64-kernel-block-mappings/20250725-061534
base:   next-20250724
patch link:    https://lore.kernel.org/r/20250724221216.1998696-5-yang%40os.amperecomputing.com
patch subject: [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
config: arm64-randconfig-001-20250726 (https://download.01.org/0day-ci/archive/20250726/202507261822.ikaBRFsG-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 8.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250726/202507261822.ikaBRFsG-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202507261822.ikaBRFsG-lkp@intel.com/

All warnings (new ones prefixed by >>):

   arch/arm64/mm/mmu.o: in function `linear_map_split_to_ptes':
   mmu.c:(.init.text+0x23c): relocation truncated to fit: R_AARCH64_LDST32_ABS_LO12_NC against symbol `repaint_done' defined in .data section in arch/arm64/mm/proc.o
>> aarch64-linux-ld: mmu.c:(.init.text+0x23c): warning: one possible cause of this error is that the symbol is being referenced in the indicated code as if it had a larger alignment than was declared where it was defined

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-07-24 22:11 ` [PATCH 3/4] arm64: mm: support large block mapping when rodata=full Yang Shi
@ 2025-07-29 12:34   ` Dev Jain
  2025-08-05 21:28     ` Yang Shi
  2025-08-01 14:35   ` Ryan Roberts
  1 sibling, 1 reply; 20+ messages in thread
From: Dev Jain @ 2025-07-29 12:34 UTC (permalink / raw)
  To: Yang Shi, ryan.roberts, will, catalin.marinas, akpm,
	Miko.Lenczewski, scott, cl
  Cc: linux-arm-kernel, linux-kernel


On 25/07/25 3:41 am, Yang Shi wrote:
> [----- snip -----]
>   
>   #ifdef CONFIG_ARM64_PAN
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 3d5fb37424ab..f63b39613571 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -480,6 +480,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>   			     int flags);
>   #endif
>   
> +#define INVALID_PHYS_ADDR	-1
> +
>   static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>   				       enum pgtable_type pgtable_type)
>   {
> @@ -487,7 +489,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>   	struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
>   	phys_addr_t pa;
>   
> -	BUG_ON(!ptdesc);
> +	if (!ptdesc)
> +		return INVALID_PHYS_ADDR;
> +
>   	pa = page_to_phys(ptdesc_page(ptdesc));
>   
>   	switch (pgtable_type) {
> @@ -509,15 +513,29 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>   }
>   
>   static phys_addr_t __maybe_unused
> -pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +split_pgtable_alloc(enum pgtable_type pgtable_type)
>   {
>   	return __pgd_pgtable_alloc(&init_mm, pgtable_type);
>   }
>   
> +static phys_addr_t __maybe_unused
> +pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +{
> +	phys_addr_t pa;
> +
> +	pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
> +	BUG_ON(pa == INVALID_PHYS_ADDR);
> +	return pa;
> +}
> +
>   static phys_addr_t
>   pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
>   {
> -	return __pgd_pgtable_alloc(NULL, pgtable_type);
> +	phys_addr_t pa;
> +
> +	pa = __pgd_pgtable_alloc(NULL, pgtable_type);
> +	BUG_ON(pa == INVALID_PHYS_ADDR);
> +	return pa;
>   }
>   
>   /*
> @@ -552,6 +570,254 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>   			     pgd_pgtable_alloc_special_mm, flags);
>   }
>   
> +static DEFINE_MUTEX(pgtable_split_lock);

Thanks for taking a separate lock.

> +
> +static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
> +{
> +	pte_t *ptep;
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +
> +	ptep = pte_offset_kernel(pmdp, addr);
> +
> +	do {
> +		pte_t *_ptep;
> +
> +		nr = 0;
> +		next = pte_cont_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / CONT_PTE_SIZE));
> +		span = nr * CONT_PTE_SIZE;
> +
> +		_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
> +		ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
> +
> +		if (((addr | next) & ~CONT_PTE_MASK) == 0)
> +			continue;
> +
> +		if (!pte_cont(__ptep_get(_ptep)))
> +			continue;
> +
> +		for (int i = 0; i < CONT_PTES; i++, _ptep++)
> +			__set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
> +	} while (addr = next + span, addr != end);
> +
> +	return 0;
> +}
> +
> +static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
> +{
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +	int ret = 0;
> +
> +	do {
> +		pmd_t pmd;
> +
> +		nr = 1;
> +		next = pmd_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / PMD_SIZE));
> +		span = (nr - 1) * PMD_SIZE;
> +
> +		if (((addr | next) & ~PMD_MASK) == 0)
> +			continue;
> +
> +		pmd = pmdp_get(pmdp);
> +		if (pmd_leaf(pmd)) {
> +			phys_addr_t pte_phys;
> +			pte_t *ptep;
> +			pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN |
> +					  PMD_TABLE_AF;
> +			unsigned long pfn = pmd_pfn(pmd);
> +			pgprot_t prot = pmd_pgprot(pmd);
> +
> +			pte_phys = split_pgtable_alloc(TABLE_PTE);
> +			if (pte_phys == INVALID_PHYS_ADDR)
> +				return -ENOMEM;
> +
> +			if (pgprot_val(prot) & PMD_SECT_PXN)
> +				pmdval |= PMD_TABLE_PXN;
> +
> +			prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
> +					PTE_TYPE_PAGE);
> +			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +			ptep = (pte_t *)phys_to_virt(pte_phys);
> +			for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
> +				__set_pte(ptep, pfn_pte(pfn, prot));
> +
> +			dsb(ishst);
> +
> +			__pmd_populate(pmdp, pte_phys, pmdval);
> +		}
> +
> +		ret = split_cont_pte(pmdp, addr, next);
> +		if (ret)
> +			break;
> +	} while (pmdp += nr, addr = next + span, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
> +{
> +	pmd_t *pmdp;
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +	int ret = 0;
> +
> +	pmdp = pmd_offset(pudp, addr);
> +
> +	do {
> +		pmd_t *_pmdp;
> +
> +		nr = 0;
> +		next = pmd_cont_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / CONT_PMD_SIZE));
> +		span = nr * CONT_PMD_SIZE;
> +
> +		if (((addr | next) & ~CONT_PMD_MASK) == 0) {
> +			pmdp += pmd_index(next) - pmd_index(addr) +
> +				nr * CONT_PMDS;
> +			continue;
> +		}
> +
> +		_pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
> +		if (!pmd_cont(pmdp_get(_pmdp)))
> +			goto split;
> +
> +		for (int i = 0; i < CONT_PMDS; i++, _pmdp++)
> +			set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
> +
> +split:
> +		ret = split_pmd(pmdp, addr, next);
> +		if (ret)
> +			break;
> +
> +		pmdp += pmd_index(next) - pmd_index(addr) + nr * CONT_PMDS;
> +	} while (addr = next + span, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
> +{
> +	pud_t *pudp;
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +	int ret = 0;
> +
> +	pudp = pud_offset(p4dp, addr);
> +
> +	do {
> +		pud_t pud;
> +
> +		nr = 1;
> +		next = pud_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / PUD_SIZE));
> +		span = (nr - 1) * PUD_SIZE;
> +
> +		if (((addr | next) & ~PUD_MASK) == 0)
> +			continue;
> +
> +		pud = pudp_get(pudp);
> +		if (pud_leaf(pud)) {
> +			phys_addr_t pmd_phys;
> +			pmd_t *pmdp;
> +			pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN |
> +					  PUD_TABLE_AF;
> +			unsigned long pfn = pud_pfn(pud);
> +			pgprot_t prot = pud_pgprot(pud);
> +			unsigned int step = PMD_SIZE >> PAGE_SHIFT;
> +
> +			pmd_phys = split_pgtable_alloc(TABLE_PMD);
> +			if (pmd_phys == INVALID_PHYS_ADDR)
> +				return -ENOMEM;
> +
> +			if (pgprot_val(prot) & PMD_SECT_PXN)
> +				pudval |= PUD_TABLE_PXN;
> +
> +			prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
> +					PMD_TYPE_SECT);
> +			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +			pmdp = (pmd_t *)phys_to_virt(pmd_phys);
> +			for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
> +				set_pmd(pmdp, pfn_pmd(pfn, prot));
> +				pfn += step;
> +			}
> +
> +			dsb(ishst);
> +
> +			__pud_populate(pudp, pmd_phys, pudval);
> +		}
> +
> +		ret = split_cont_pmd(pudp, addr, next);
> +		if (ret)
> +			break;
> +	} while (pudp += nr, addr = next + span, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
> +{
> +	p4d_t *p4dp;
> +	unsigned long next;
> +	int ret = 0;
> +
> +	p4dp = p4d_offset(pgdp, addr);
> +
> +	do {
> +		next = p4d_addr_end(addr, end);
> +
> +		ret = split_pud(p4dp, addr, next);
> +		if (ret)
> +			break;
> +	} while (p4dp++, addr = next, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
> +{
> +	unsigned long next;
> +	int ret = 0;
> +
> +	do {
> +		next = pgd_addr_end(addr, end);
> +		ret = split_p4d(pgdp, addr, next);
> +		if (ret)
> +			break;
> +	} while (pgdp++, addr = next, addr != end);
> +
> +	return ret;
> +}
> +
> +int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
> +{
> +	int ret;
> +
> +	if (!system_supports_bbml2_noabort())
> +		return 0;
> +
> +	if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
> +		return -EINVAL;
> +
> +	mutex_lock(&pgtable_split_lock);
> +	arch_enter_lazy_mmu_mode();
> +	ret = split_pgd(pgd_offset_k(start), start, end);
> +	arch_leave_lazy_mmu_mode();
> +	mutex_unlock(&pgtable_split_lock);
> +
> +	return ret;
> +}
> +
>   	/*

--- snip ---

I'm afraid I'll have to agree with Ryan :) Looking at the signature of split_kernel_pgtable_mapping,
one would expect that this function splits all block mappings in this region. But that is just a
nit; it does not seem right to me that we are iterating over the entire space when we know *exactly* where
we have to make the split, just to save on pgd/p4d/pud loads - the effect of which is probably cancelled
out by unnecessary iterations your approach takes to skip the intermediate blocks.

If we are concerned that most change_memory_common() operations are for a single page, then
we can do something like:

unsigned long size = end - start;
bool end_split = false, start_split = false;

if (start not aligned to block mapping)
	start_split = split(start);

/*
  * split the end only if the start wasn't split, or
  * if it cannot be guaranteed that start and end lie
  * on the same contig block
  */
if (!start_split || (size > CONT_PTE_SIZE))
	end_split = split(end);
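
Purely as an illustration, a rough C rendering of that idea. Both
aligned_to_leaf_mapping() and split_leaf_at() are hypothetical helpers
(not from this patch) standing in for "check / split whatever block or
contig mapping covers this address":

static int split_range_boundaries(unsigned long start, unsigned long end)
{
	bool start_split = false;
	int ret = 0;

	/* Split whatever block/contig mapping covers the start address. */
	if (!aligned_to_leaf_mapping(start)) {
		ret = split_leaf_at(start);
		if (ret)
			return ret;
		start_split = true;
	}

	/*
	 * Split the end only if the start wasn't split, or if start and
	 * end cannot be guaranteed to lie in the same contig block.
	 */
	if (!start_split || (end - start > CONT_PTE_SIZE))
		ret = split_leaf_at(end);

	return ret;
}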





^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-07-24 22:11 ` [PATCH 3/4] arm64: mm: support large block mapping when rodata=full Yang Shi
  2025-07-29 12:34   ` Dev Jain
@ 2025-08-01 14:35   ` Ryan Roberts
  2025-08-04 10:07     ` Ryan Roberts
  2025-08-05 18:53     ` Yang Shi
  1 sibling, 2 replies; 20+ messages in thread
From: Ryan Roberts @ 2025-08-01 14:35 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel

On 24/07/2025 23:11, Yang Shi wrote:
> When rodata=full is specified, kernel linear mapping has to be mapped at
> PTE level since large page table can't be split due to break-before-make
> rule on ARM64.
> 
> This resulted in a couple of problems:
>   - performance degradation
>   - more TLB pressure
>   - memory waste for kernel page table
> 
> With FEAT_BBM level 2 support, splitting large block page table to
> smaller ones doesn't need to make the page table entry invalid anymore.
> This allows kernel split large block mapping on the fly.
> 
> Add kernel page table split support and use large block mapping by
> default when FEAT_BBM level 2 is supported for rodata=full.  When
> changing permissions for kernel linear mapping, the page table will be
> split to smaller size.
> 
> The machine without FEAT_BBM level 2 will fallback to have kernel linear
> mapping PTE-mapped when rodata=full.
> 
> With this we saw significant performance boost with some benchmarks and
> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
> 256GB memory.
> 
> * Memory use after boot
> Before:
> MemTotal:       258988984 kB
> MemFree:        254821700 kB
> 
> After:
> MemTotal:       259505132 kB
> MemFree:        255410264 kB
> 
> Around 500MB more memory are free to use.  The larger the machine, the
> more memory saved.
> 
> * Memcached
> We saw performance degradation when running Memcached benchmark with
> rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
> With this patchset we saw ops/sec is increased by around 3.5%, P99
> latency is reduced by around 9.6%.
> The gain mainly came from reduced kernel TLB misses.  The kernel TLB
> MPKI is reduced by 28.5%.
> 
> The benchmark data is now on par with rodata=on too.
> 
> * Disk encryption (dm-crypt) benchmark
> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
> encryption (by dm-crypt).
> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>     --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>     --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>     --name=iops-test-job --eta-newline=1 --size 100G
> 
> The IOPS is increased by 90% - 150% (the variance is high, but the worst
> number of good case is around 90% more than the best number of bad case).
> The bandwidth is increased and the avg clat is reduced proportionally.
> 
> * Sequential file read
> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
> The bandwidth is increased by 150%.
> 
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
>  arch/arm64/include/asm/cpufeature.h |  34 ++++
>  arch/arm64/include/asm/mmu.h        |   1 +
>  arch/arm64/include/asm/pgtable.h    |   5 +
>  arch/arm64/kernel/cpufeature.c      |  31 +--
>  arch/arm64/mm/mmu.c                 | 293 +++++++++++++++++++++++++++-
>  arch/arm64/mm/pageattr.c            |   4 +
>  6 files changed, 333 insertions(+), 35 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index bf13d676aae2..d0d394cc837d 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -871,6 +871,40 @@ static inline bool system_supports_pmuv3(void)
>  	return cpus_have_final_cap(ARM64_HAS_PMUV3);
>  }
>  
> +static inline bool bbml2_noabort_available(void)
> +{
> +	/*
> +	 * We want to allow usage of BBML2 in as wide a range of kernel contexts
> +	 * as possible. This list is therefore an allow-list of known-good
> +	 * implementations that both support BBML2 and additionally, fulfill the
> +	 * extra constraint of never generating TLB conflict aborts when using
> +	 * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
> +	 * kernel contexts difficult to prove safe against recursive aborts).
> +	 *
> +	 * Note that implementations can only be considered "known-good" if their
> +	 * implementors attest to the fact that the implementation never raises
> +	 * TLB conflict aborts for BBML2 mapping granularity changes.
> +	 */
> +	static const struct midr_range supports_bbml2_noabort_list[] = {
> +		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
> +		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
> +		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
> +		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
> +		{}
> +	};
> +
> +	/* Does our cpu guarantee to never raise TLB conflict aborts? */
> +	if (!is_midr_in_range_list(supports_bbml2_noabort_list))
> +		return false;
> +
> +	/*
> +	 * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
> +	 * about whether the MIDR check passes.
> +	 */
> +
> +	return true;
> +}

I don't think this function should be inline. Won't we end up duplicating the
midr list everywhere? Suggest moving back to cpufeature.c.

> +
>  static inline bool system_supports_bbml2_noabort(void)
>  {
>  	return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 6e8aa8e72601..57f4b25e6f33 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>  			       pgprot_t prot, bool page_mappings_only);
>  extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>  extern void mark_linear_text_alias_ro(void);
> +extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);
>  
>  /*
>   * This check is triggered during the early boot before the cpufeature
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index ba63c8736666..ad2a6a7e86b0 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -371,6 +371,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
>  	return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
>  }
>  
> +static inline pmd_t pmd_mknoncont(pmd_t pmd)
> +{
> +	return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
> +}
> +
>  #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
>  static inline int pte_uffd_wp(pte_t pte)
>  {
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index d2a8a509a58e..1c96016a7a41 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2215,36 +2215,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
>  
>  static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
>  {
> -	/*
> -	 * We want to allow usage of BBML2 in as wide a range of kernel contexts
> -	 * as possible. This list is therefore an allow-list of known-good
> -	 * implementations that both support BBML2 and additionally, fulfill the
> -	 * extra constraint of never generating TLB conflict aborts when using
> -	 * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
> -	 * kernel contexts difficult to prove safe against recursive aborts).
> -	 *
> -	 * Note that implementations can only be considered "known-good" if their
> -	 * implementors attest to the fact that the implementation never raises
> -	 * TLB conflict aborts for BBML2 mapping granularity changes.
> -	 */
> -	static const struct midr_range supports_bbml2_noabort_list[] = {
> -		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
> -		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
> -		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
> -		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
> -		{}
> -	};
> -
> -	/* Does our cpu guarantee to never raise TLB conflict aborts? */
> -	if (!is_midr_in_range_list(supports_bbml2_noabort_list))
> -		return false;
> -
> -	/*
> -	 * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
> -	 * about whether the MIDR check passes.
> -	 */
> -
> -	return true;
> +	return bbml2_noabort_available();
>  }
>  
>  #ifdef CONFIG_ARM64_PAN
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 3d5fb37424ab..f63b39613571 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -480,6 +480,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>  			     int flags);
>  #endif
>  
> +#define INVALID_PHYS_ADDR	-1
> +
>  static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>  				       enum pgtable_type pgtable_type)
>  {
> @@ -487,7 +489,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>  	struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
>  	phys_addr_t pa;
>  
> -	BUG_ON(!ptdesc);
> +	if (!ptdesc)
> +		return INVALID_PHYS_ADDR;
> +
>  	pa = page_to_phys(ptdesc_page(ptdesc));
>  
>  	switch (pgtable_type) {
> @@ -509,15 +513,29 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>  }
>  
>  static phys_addr_t __maybe_unused
> -pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +split_pgtable_alloc(enum pgtable_type pgtable_type)
>  {
>  	return __pgd_pgtable_alloc(&init_mm, pgtable_type);
>  }
>  
> +static phys_addr_t __maybe_unused
> +pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +{
> +	phys_addr_t pa;
> +
> +	pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
> +	BUG_ON(pa == INVALID_PHYS_ADDR);
> +	return pa;
> +}
> +
>  static phys_addr_t
>  pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
>  {
> -	return __pgd_pgtable_alloc(NULL, pgtable_type);
> +	phys_addr_t pa;
> +
> +	pa = __pgd_pgtable_alloc(NULL, pgtable_type);
> +	BUG_ON(pa == INVALID_PHYS_ADDR);
> +	return pa;
>  }

The allocation all looks clean to me now. Thanks.

>  
>  /*
> @@ -552,6 +570,254 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>  			     pgd_pgtable_alloc_special_mm, flags);
>  }
>  
> +static DEFINE_MUTEX(pgtable_split_lock);
> +
> +static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
> +{
> +	pte_t *ptep;
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +
> +	ptep = pte_offset_kernel(pmdp, addr);
> +
> +	do {
> +		pte_t *_ptep;
> +
> +		nr = 0;
> +		next = pte_cont_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / CONT_PTE_SIZE));
> +		span = nr * CONT_PTE_SIZE;
> +
> +		_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
> +		ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
> +
> +		if (((addr | next) & ~CONT_PTE_MASK) == 0)
> +			continue;
> +
> +		if (!pte_cont(__ptep_get(_ptep)))
> +			continue;
> +
> +		for (int i = 0; i < CONT_PTES; i++, _ptep++)
> +			__set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
> +	} while (addr = next + span, addr != end);
> +
> +	return 0;
> +}
> +
> +static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
> +{
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +	int ret = 0;
> +
> +	do {
> +		pmd_t pmd;
> +
> +		nr = 1;
> +		next = pmd_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / PMD_SIZE));
> +		span = (nr - 1) * PMD_SIZE;
> +
> +		if (((addr | next) & ~PMD_MASK) == 0)
> +			continue;
> +
> +		pmd = pmdp_get(pmdp);
> +		if (pmd_leaf(pmd)) {
> +			phys_addr_t pte_phys;
> +			pte_t *ptep;
> +			pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN |
> +					  PMD_TABLE_AF;
> +			unsigned long pfn = pmd_pfn(pmd);
> +			pgprot_t prot = pmd_pgprot(pmd);
> +
> +			pte_phys = split_pgtable_alloc(TABLE_PTE);
> +			if (pte_phys == INVALID_PHYS_ADDR)
> +				return -ENOMEM;
> +
> +			if (pgprot_val(prot) & PMD_SECT_PXN)
> +				pmdval |= PMD_TABLE_PXN;
> +
> +			prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
> +					PTE_TYPE_PAGE);
> +			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +			ptep = (pte_t *)phys_to_virt(pte_phys);
> +			for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
> +				__set_pte(ptep, pfn_pte(pfn, prot));
> +
> +			dsb(ishst);
> +
> +			__pmd_populate(pmdp, pte_phys, pmdval);
> +		}
> +
> +		ret = split_cont_pte(pmdp, addr, next);
> +		if (ret)
> +			break;
> +	} while (pmdp += nr, addr = next + span, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
> +{
> +	pmd_t *pmdp;
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +	int ret = 0;
> +
> +	pmdp = pmd_offset(pudp, addr);
> +
> +	do {
> +		pmd_t *_pmdp;
> +
> +		nr = 0;
> +		next = pmd_cont_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / CONT_PMD_SIZE));
> +		span = nr * CONT_PMD_SIZE;
> +
> +		if (((addr | next) & ~CONT_PMD_MASK) == 0) {
> +			pmdp += pmd_index(next) - pmd_index(addr) +
> +				nr * CONT_PMDS;
> +			continue;
> +		}
> +
> +		_pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
> +		if (!pmd_cont(pmdp_get(_pmdp)))
> +			goto split;
> +
> +		for (int i = 0; i < CONT_PMDS; i++, _pmdp++)
> +			set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
> +
> +split:
> +		ret = split_pmd(pmdp, addr, next);
> +		if (ret)
> +			break;
> +
> +		pmdp += pmd_index(next) - pmd_index(addr) + nr * CONT_PMDS;
> +	} while (addr = next + span, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
> +{
> +	pud_t *pudp;
> +	unsigned long next;
> +	unsigned int nr;
> +	unsigned long span;
> +	int ret = 0;
> +
> +	pudp = pud_offset(p4dp, addr);
> +
> +	do {
> +		pud_t pud;
> +
> +		nr = 1;
> +		next = pud_addr_end(addr, end);
> +		if (next < end)
> +			nr = max(nr, ((end - next) / PUD_SIZE));
> +		span = (nr - 1) * PUD_SIZE;
> +
> +		if (((addr | next) & ~PUD_MASK) == 0)
> +			continue;
> +
> +		pud = pudp_get(pudp);
> +		if (pud_leaf(pud)) {
> +			phys_addr_t pmd_phys;
> +			pmd_t *pmdp;
> +			pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN |
> +					  PUD_TABLE_AF;
> +			unsigned long pfn = pud_pfn(pud);
> +			pgprot_t prot = pud_pgprot(pud);
> +			unsigned int step = PMD_SIZE >> PAGE_SHIFT;
> +
> +			pmd_phys = split_pgtable_alloc(TABLE_PMD);
> +			if (pmd_phys == INVALID_PHYS_ADDR)
> +				return -ENOMEM;
> +
> +			if (pgprot_val(prot) & PMD_SECT_PXN)
> +				pudval |= PUD_TABLE_PXN;
> +
> +			prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
> +					PMD_TYPE_SECT);
> +			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +			pmdp = (pmd_t *)phys_to_virt(pmd_phys);
> +			for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
> +				set_pmd(pmdp, pfn_pmd(pfn, prot));
> +				pfn += step;
> +			}
> +
> +			dsb(ishst);
> +
> +			__pud_populate(pudp, pmd_phys, pudval);
> +		}
> +
> +		ret = split_cont_pmd(pudp, addr, next);
> +		if (ret)
> +			break;
> +	} while (pudp += nr, addr = next + span, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
> +{
> +	p4d_t *p4dp;
> +	unsigned long next;
> +	int ret = 0;
> +
> +	p4dp = p4d_offset(pgdp, addr);
> +
> +	do {
> +		next = p4d_addr_end(addr, end);
> +
> +		ret = split_pud(p4dp, addr, next);
> +		if (ret)
> +			break;
> +	} while (p4dp++, addr = next, addr != end);
> +
> +	return ret;
> +}
> +
> +static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
> +{
> +	unsigned long next;
> +	int ret = 0;
> +
> +	do {
> +		next = pgd_addr_end(addr, end);
> +		ret = split_p4d(pgdp, addr, next);
> +		if (ret)
> +			break;
> +	} while (pgdp++, addr = next, addr != end);
> +
> +	return ret;
> +}
> +
> +int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
> +{
> +	int ret;
> +
> +	if (!system_supports_bbml2_noabort())
> +		return 0;
> +
> +	if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
> +		return -EINVAL;
> +
> +	mutex_lock(&pgtable_split_lock);

On reflection, I agree this lock approach is simpler than my suggestion to do it
locklessly and I doubt this will become a bottleneck. Given x86 does it this
way, I guess it's fine.

> +	arch_enter_lazy_mmu_mode();
> +	ret = split_pgd(pgd_offset_k(start), start, end);

My instinct still remains that it would be better not to iterate over the range
here, but instead call a "split(start); split(end);" since we just want to split
the start and end. So the code would be simpler and probably more performant if
we get rid of all the iteration.

But I'm guessing you are going to enhance this infra in the next patch to
support splitting all entries in the range for the system-doesn't-support-bbml case?

Anyway, I'll take a look at the next patch then come back to review the details
of split_pgd().

> +	arch_leave_lazy_mmu_mode();
> +	mutex_unlock(&pgtable_split_lock);
> +
> +	return ret;
> +}
> +
>  static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
>  				phys_addr_t size, pgprot_t prot)
>  {
> @@ -639,6 +905,20 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>  
>  #endif /* CONFIG_KFENCE */
>  
> +bool linear_map_requires_bbml2;

Until the next patch, you're only using this variable in this file, so at the
very least, it should be static for now. But I'm proposing below it should be
removed entirely.

> +
> +static inline bool force_pte_mapping(void)
> +{
> +	/*
> +	 * Can't use cpufeature API to determine whether BBM level 2
> +	 * is supported or not since cpufeature have not been
> +	 * finalized yet.
> +	 */
> +	return (!bbml2_noabort_available() && (rodata_full ||
> +		arm64_kfence_can_set_direct_map() || is_realm_world())) ||
> +		debug_pagealloc_enabled();
> +}
> +
>  static void __init map_mem(pgd_t *pgdp)
>  {
>  	static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
> @@ -664,7 +944,9 @@ static void __init map_mem(pgd_t *pgdp)
>  
>  	early_kfence_pool = arm64_kfence_alloc_pool();
>  
> -	if (can_set_direct_map())
> +	linear_map_requires_bbml2 = !force_pte_mapping() && rodata_full;

This looks wrong; what about the kfence_can_set_direct_map and is_realm_world
conditions?

perhaps:

	linear_map_requires_bbml2 = !force_pte_mapping() && can_set_direct_map()

> +
> +	if (force_pte_mapping())
>  		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>  
>  	/*
> @@ -1366,7 +1648,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
>  
>  	VM_BUG_ON(!mhp_range_allowed(start, size, true));
>  
> -	if (can_set_direct_map())
> +	if (force_pte_mapping() ||
> +	    (linear_map_requires_bbml2 && !system_supports_bbml2_noabort()))

So force_pte_mapping() isn't actually returning what it sounds like it is; it's
returning whether you would have to force pte mapping based on the current cpu's
support for bbml2. Perhaps it would be better to implement force_pte_mapping()  as:

static inline bool force_pte_mapping(void)
{
	bool bbml2 = (system_capabilities_finalized() &&
			system_supports_bbml2_noabort()) ||
			bbml2_noabort_available();

	return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
			   is_realm_world())) ||
		debug_pagealloc_enabled();
}

Then we can just use force_pte_mapping() in both the boot and runtime paths
without any adjustment based on linear_map_requires_bbml2. So you could drop
linear_map_requires_bbml2 entirely, until the next patch at least.

>  		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>  
>  	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index c6a85000fa0e..6566aa9d8abb 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -140,6 +140,10 @@ static int update_range_prot(unsigned long start, unsigned long size,
>  	data.set_mask = set_mask;
>  	data.clear_mask = clear_mask;
>  
> +	ret = split_kernel_pgtable_mapping(start, start + size);
> +	if (WARN_ON_ONCE(ret))

I'm on the fence as to whether this warning is desirable. Would we normally want
to warn in an OOM situation, or just unwind and carry on?

Thanks,
Ryan


> +		return ret;
> +
>  	arch_enter_lazy_mmu_mode();
>  
>  	/*



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list
  2025-07-24 22:11 ` [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
@ 2025-08-01 14:36   ` Ryan Roberts
  2025-08-04 23:20   ` Christoph Lameter (Ampere)
  1 sibling, 0 replies; 20+ messages in thread
From: Ryan Roberts @ 2025-08-01 14:36 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel

On 24/07/2025 23:11, Yang Shi wrote:
> AmpereOne supports BBML2 without conflict abort, add to the allow list.
> 
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>

> ---
>  arch/arm64/kernel/cpufeature.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 7a3c404288d5..d2a8a509a58e 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2230,6 +2230,8 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
>  	static const struct midr_range supports_bbml2_noabort_list[] = {
>  		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
>  		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
> +		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
> +		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
>  		{}
>  	};
>  



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
  2025-07-24 22:11 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
  2025-07-26 11:10   ` kernel test robot
@ 2025-08-01 16:14   ` Ryan Roberts
  2025-08-05 18:59     ` Yang Shi
  2025-08-05  7:54   ` Ryan Roberts
  2 siblings, 1 reply; 20+ messages in thread
From: Ryan Roberts @ 2025-08-01 16:14 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel

On 24/07/2025 23:11, Yang Shi wrote:
> The kernel linear mapping is painted in very early stage of system boot.
> The cpufeature has not been finalized yet at this point.  So the linear
> mapping is determined by the capability of boot CPU.  If the boot CPU
> supports BBML2, large block mapping will be used for linear mapping.
> 
> But the secondary CPUs may not support BBML2, so repaint the linear mapping
> if large block mapping is used and the secondary CPUs don't support BBML2
> once cpufeature is finalized on all CPUs.
> 
> If the boot CPU doesn't support BBML2 or the secondary CPUs have the
> same BBML2 capability with the boot CPU, repainting the linear mapping
> is not needed.
> 
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
>  arch/arm64/include/asm/mmu.h   |   6 +-
>  arch/arm64/kernel/cpufeature.c |   8 ++
>  arch/arm64/mm/mmu.c            | 173 +++++++++++++++++++++++++++------
>  arch/arm64/mm/pageattr.c       |   2 +-
>  arch/arm64/mm/proc.S           |  57 ++++++++---
>  5 files changed, 196 insertions(+), 50 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 57f4b25e6f33..9bf50e8897e2 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -56,6 +56,8 @@ typedef struct {
>   */
>  #define ASID(mm)	(atomic64_read(&(mm)->context.id) & 0xffff)
>  
> +extern bool linear_map_requires_bbml2;
> +
>  static inline bool arm64_kernel_unmapped_at_el0(void)
>  {
>  	return alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0);
> @@ -71,7 +73,9 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>  			       pgprot_t prot, bool page_mappings_only);
>  extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>  extern void mark_linear_text_alias_ro(void);
> -extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);
> +extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end,
> +					unsigned int flags);
> +extern int linear_map_split_to_ptes(void *__unused);
>  
>  /*
>   * This check is triggered during the early boot before the cpufeature
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 1c96016a7a41..23c01d679c40 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -85,6 +85,7 @@
>  #include <asm/insn.h>
>  #include <asm/kvm_host.h>
>  #include <asm/mmu_context.h>
> +#include <asm/mmu.h>
>  #include <asm/mte.h>
>  #include <asm/hypervisor.h>
>  #include <asm/processor.h>
> @@ -2009,6 +2010,12 @@ static int __init __kpti_install_ng_mappings(void *__unused)
>  	return 0;
>  }
>  
> +static void __init linear_map_maybe_split_to_ptes(void)
> +{
> +	if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort())
> +		stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);
> +}
> +
>  static void __init kpti_install_ng_mappings(void)
>  {
>  	/* Check whether KPTI is going to be used */
> @@ -3855,6 +3862,7 @@ void __init setup_system_features(void)
>  {
>  	setup_system_capabilities();
>  
> +	linear_map_maybe_split_to_ptes();
>  	kpti_install_ng_mappings();
>  
>  	sve_setup();
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index f63b39613571..22f2d0869fdd 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -482,11 +482,11 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>  
>  #define INVALID_PHYS_ADDR	-1
>  

[...]

I'll review the actual walker separately (I've run out of time today).


>  
> +extern u32 repaint_done;
> +
> +int __init linear_map_split_to_ptes(void *__unused)
> +{
> +	typedef void (repaint_wait_fn)(void);
> +	extern repaint_wait_fn bbml2_wait_for_repainting;
> +	repaint_wait_fn *wait_fn;
> +
> +	int cpu = smp_processor_id();
> +
> +	wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
> +
> +	/*
> +	 * Repainting can only run on CPU 0 because CPU 0 is the only CPU
> +	 * we can be sure supports BBML2.
> +	 */
> +	if (!cpu) {
> +		phys_addr_t kernel_start = __pa_symbol(_stext);
> +		phys_addr_t kernel_end = __pa_symbol(__init_begin);
> +		phys_addr_t start, end;
> +		unsigned long vstart, vend;
> +		int flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> +		u64 i;
> +		int ret;
> +
> +		/*
> +		 * Wait for all secondary CPUs to get prepared for
> +		 * repainting the linear mapping.
> +		 */
> +		smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());

Is this correct? I would have thought the primary is waiting for the
secondaries, so "VAL == num_online_cpus() - 1" ?

> +
> +		memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
> +		/* Split the whole linear mapping */
> +		for_each_mem_range(i, &start, &end) {

I think I asked this in the last round; but I just want to double check;
memblock is definitely still valid here and we are definitely going to get
exactly the same regions out as we did in map_mem()? I wonder if it's possible
between then and now that some other component has reserved some memory? In that
case we wouldn't walk that region?

Perhaps it would be safer (and simpler) to just walk all of [PAGE_OFFSET,
_stext) and [__init_begin, PAGE_END) and ignore the holes?
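
Just to illustrate, something roughly like the below, reusing
split_kernel_pgtable_mapping() from this patch and the local ret/flags
from the function above (TLB maintenance left out of the sketch):

	unsigned long kstart = (unsigned long)lm_alias(_stext);
	unsigned long kend = (unsigned long)lm_alias(__init_begin);

	/* Everything in the linear map below the kernel text alias... */
	ret = split_kernel_pgtable_mapping(PAGE_OFFSET, kstart, flags);
	if (ret)
		panic("Failed to split linear mappings\n");

	/* ...and everything above it, ignoring any memblock holes. */
	ret = split_kernel_pgtable_mapping(kend, PAGE_END, flags);
	if (ret)
		panic("Failed to split linear mappings\n");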

> +			if (start >= end)
> +				return -EINVAL;
> +
> +			vstart = __phys_to_virt(start);
> +			vend = __phys_to_virt(end);
> +			ret = split_kernel_pgtable_mapping(vstart, vend, flags);
> +			if (ret)
> +				panic("Failed to split linear mappings\n");
> +
> +			flush_tlb_kernel_range(vstart, vend);
> +		}
> +		memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
> +
> +		/*
> +		 * Relies on dsb in flush_tlb_kernel_range() to avoid
> +		 * reordering before any page table split operations.
> +		 */
> +		WRITE_ONCE(repaint_done, 0);
> +	} else {
> +		/*
> +		 * The secondary CPUs can't run in the same address space
> +		 * as CPU 0 because accessing linear mapping addresses
> +		 * while CPU 0 is repainting them is not safe.
> +		 *
> +		 * Let the secondary CPUs run a busy loop in the idmap
> +		 * address space while repainting is ongoing.
> +		 */
> +		cpu_install_idmap();
> +		wait_fn();
> +		cpu_uninstall_idmap();
> +	}
> +
> +	return 0;
> +}
> +
>  #ifdef CONFIG_KFENCE
>  
>  bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
> @@ -1079,7 +1174,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
>  		    int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
>  
>  static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
> -	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
> +	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
> +	  bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>  
>  static void __init create_idmap(void)
>  {
> @@ -1104,6 +1200,19 @@ static void __init create_idmap(void)
>  			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>  			       __phys_to_virt(ptep) - ptep);
>  	}
> +
> +	/*
> +	 * Setup idmap mapping for repaint_done flag.  It will be used if
> +	 * repainting the linear mapping is needed later.
> +	 */
> +	if (linear_map_requires_bbml2) {
> +		u64 pa = __pa_symbol(&repaint_done);
> +		ptep = __pa_symbol(bbml2_ptes);
> +
> +		__pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
> +			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
> +			       __phys_to_virt(ptep) - ptep);
> +	}
>  }
>  
>  void __init paging_init(void)
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 6566aa9d8abb..4471d7e510a1 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -140,7 +140,7 @@ static int update_range_prot(unsigned long start, unsigned long size,
>  	data.set_mask = set_mask;
>  	data.clear_mask = clear_mask;
>  
> -	ret = split_kernel_pgtable_mapping(start, start + size);
> +	ret = split_kernel_pgtable_mapping(start, start + size, 0);
>  	if (WARN_ON_ONCE(ret))
>  		return ret;
>  
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 80d470aa469d..f0f9c49a4466 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -239,6 +239,25 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
>  	dsb	nshst
>  	.endm
>  
> +	.macro wait_for_boot_cpu, tmp1, tmp2, tmp3, tmp4
> +	/* Increment the flag to let the boot CPU know we're ready */
> +1:	ldxr	\tmp3, [\tmp2]
> +	add	\tmp3, \tmp3, #1
> +	stxr	\tmp4, \tmp3, [\tmp2]
> +	cbnz	\tmp4, 1b
> +
> +	/* Wait for the boot CPU to finish its job */
> +	sevl
> +1:	wfe
> +	ldxr	\tmp3, [\tmp2]
> +	cbnz	\tmp3, 1b
> +
> +	/* All done, act like nothing happened */
> +	msr	ttbr1_el1, \tmp1
> +	isb
> +	ret
> +	.endm

You've defined the macro within "#ifdef CONFIG_UNMAP_KERNEL_AT_EL0" but then
need to use it outside of that scope.

But I don't think this needs to be a macro; I think it would be better as a
function (as I suggested in the last round). Then the text only needs to appear
once in the image and it can be used from both places (see below).

> +
>  /*
>   * void __kpti_install_ng_mappings(int cpu, int num_secondaries, phys_addr_t temp_pgd,
>   *				   unsigned long temp_pte_va)
> @@ -416,29 +435,35 @@ alternative_else_nop_endif
>  __idmap_kpti_secondary:
>  	/* Uninstall swapper before surgery begins */
>  	__idmap_cpu_set_reserved_ttbr1 x16, x17
> +	wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>  
> -	/* Increment the flag to let the boot CPU we're ready */
> -1:	ldxr	w16, [flag_ptr]
> -	add	w16, w16, #1
> -	stxr	w17, w16, [flag_ptr]
> -	cbnz	w17, 1b
> +	.unreq	swapper_ttb
> +	.unreq	flag_ptr
> +SYM_FUNC_END(idmap_kpti_install_ng_mappings)
> +	.popsection
> +#endif
>  
> -	/* Wait for the boot CPU to finish messing around with swapper */
> -	sevl
> -1:	wfe
> -	ldxr	w16, [flag_ptr]
> -	cbnz	w16, 1b
> +/*
> + * Wait until repainting is done. Runs on secondary CPUs
> + * only.
> + */
> +	.pushsection	".data", "aw", %progbits
> +SYM_DATA(repaint_done, .long 1)
> +	.popsection
>  
> -	/* All done, act like nothing happened */
> -	msr	ttbr1_el1, swapper_ttb
> -	isb
> -	ret
> +	.pushsection ".idmap.text", "a"
> +SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
> +	swapper_ttb	.req	x0
> +	flag_ptr	.req	x1
> +	mrs     swapper_ttb, ttbr1_el1
> +	adr_l   flag_ptr, repaint_done
> +	__idmap_cpu_set_reserved_ttbr1 x16, x17
> +	wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>  
>  	.unreq	swapper_ttb
>  	.unreq	flag_ptr
> -SYM_FUNC_END(idmap_kpti_install_ng_mappings)
> +SYM_FUNC_END(bbml2_wait_for_repainting)
>  	.popsection
> -#endif

How about this instead?

---8<---
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 8c75965afc9e..a116b2b8ad59 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -416,8 +416,30 @@ alternative_else_nop_endif
 __idmap_kpti_secondary:
 	/* Uninstall swapper before surgery begins */
 	__idmap_cpu_set_reserved_ttbr1 x16, x17
+	b secondary_cpu_wait
+
+	.unreq	swapper_ttb
+	.unreq	flag_ptr
+SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+	.popsection
+#endif
+
+	.pushsection	".data", "aw", %progbits
+SYM_DATA(repaint_done, .long 1)
+	.popsection
+
+	.pushsection ".idmap.text", "a"
+SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
+	/* Must be same registers as in idmap_kpti_install_ng_mappings */
+	swapper_ttb	.req	x3
+	flag_ptr	.req	x4
+
+	mrs     swapper_ttb, ttbr1_el1
+	adr_l   flag_ptr, repaint_done
+	__idmap_cpu_set_reserved_ttbr1 x16, x17

 	/* Increment the flag to let the boot CPU we're ready */
+secondary_cpu_wait:
 1:	ldxr	w16, [flag_ptr]
 	add	w16, w16, #1
 	stxr	w17, w16, [flag_ptr]
@@ -436,9 +458,8 @@ __idmap_kpti_secondary:

 	.unreq	swapper_ttb
 	.unreq	flag_ptr
-SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+SYM_FUNC_END(bbml2_wait_for_repainting)
 	.popsection
-#endif

 /*
  *	__cpu_setup
---8<---

Thanks,
Ryan



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-08-01 14:35   ` Ryan Roberts
@ 2025-08-04 10:07     ` Ryan Roberts
  2025-08-05 18:53     ` Yang Shi
  1 sibling, 0 replies; 20+ messages in thread
From: Ryan Roberts @ 2025-08-04 10:07 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel

On 01/08/2025 15:35, Ryan Roberts wrote:

[...]

>> @@ -1366,7 +1648,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>  
>>  	VM_BUG_ON(!mhp_range_allowed(start, size, true));
>>  
>> -	if (can_set_direct_map())
>> +	if (force_pte_mapping() ||
>> +	    (linear_map_requires_bbml2 && !system_supports_bbml2_noabort()))
> 
> So force_pte_mapping() isn't actually returning what it sounds like it is; it's
> returning whether you would have to force pte mapping based on the current cpu's
> support for bbml2. Perhaps it would be better to implement force_pte_mapping()  as:
> 
> static inline bool force_pte_mapping(void)
> {
> 	bool bbml2 = (system_capabilities_finalized() &&
> 			system_supports_bbml2_noabort()) ||
> 			bbml2_noabort_available();

Sorry that should have been:

	bool bbml2 = system_capabilities_finalized() ?
		system_supports_bbml2_noabort() : bbml2_noabort_available();

> 
> 	return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
> 			   is_realm_world())) ||
> 		debug_pagealloc_enabled();
> }


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list
  2025-07-24 22:11 ` [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
  2025-08-01 14:36   ` Ryan Roberts
@ 2025-08-04 23:20   ` Christoph Lameter (Ampere)
  1 sibling, 0 replies; 20+ messages in thread
From: Christoph Lameter (Ampere) @ 2025-08-04 23:20 UTC (permalink / raw)
  To: Yang Shi
  Cc: ryan.roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
	dev.jain, scott, linux-arm-kernel, linux-kernel

On Thu, 24 Jul 2025, Yang Shi wrote:

> AmpereOne supports BBML2 without conflict abort, add to the allow list.

Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org>



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
  2025-07-24 22:11 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
  2025-07-26 11:10   ` kernel test robot
  2025-08-01 16:14   ` Ryan Roberts
@ 2025-08-05  7:54   ` Ryan Roberts
  2025-08-05 17:45     ` Yang Shi
  2 siblings, 1 reply; 20+ messages in thread
From: Ryan Roberts @ 2025-08-05  7:54 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel

Hi Yang,

On 24/07/2025 23:11, Yang Shi wrote:
> The kernel linear mapping is painted in very early stage of system boot.
> The cpufeature has not been finalized yet at this point.  So the linear
> mapping is determined by the capability of boot CPU.  If the boot CPU
> supports BBML2, large block mapping will be used for linear mapping.
> 
> But the secondary CPUs may not support BBML2, so repaint the linear mapping
> if large block mapping is used and the secondary CPUs don't support BBML2
> once cpufeature is finalized on all CPUs.
> 
> If the boot CPU doesn't support BBML2 or the secondary CPUs have the
> same BBML2 capability with the boot CPU, repainting the linear mapping
> is not needed.
> 
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
>  arch/arm64/include/asm/mmu.h   |   6 +-
>  arch/arm64/kernel/cpufeature.c |   8 ++
>  arch/arm64/mm/mmu.c            | 173 +++++++++++++++++++++++++++------
>  arch/arm64/mm/pageattr.c       |   2 +-
>  arch/arm64/mm/proc.S           |  57 ++++++++---
>  5 files changed, 196 insertions(+), 50 deletions(-)

[...]

> +int __init linear_map_split_to_ptes(void *__unused)
> +{
> +	typedef void (repaint_wait_fn)(void);
> +	extern repaint_wait_fn bbml2_wait_for_repainting;
> +	repaint_wait_fn *wait_fn;
> +
> +	int cpu = smp_processor_id();
> +
> +	wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
> +
> +	/*
> +	 * Repainting can only run on CPU 0 because CPU 0 is the only CPU
> +	 * we can be sure supports BBML2.
> +	 */
> +	if (!cpu) {
> +		phys_addr_t kernel_start = __pa_symbol(_stext);
> +		phys_addr_t kernel_end = __pa_symbol(__init_begin);
> +		phys_addr_t start, end;
> +		unsigned long vstart, vend;
> +		int flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> +		u64 i;
> +		int ret;
> +
> +		/*
> +		 * Wait for all secondary CPUs to get prepared for
> +		 * repainting the linear mapping.
> +		 */
> +		smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());
> +
> +		memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
> +		/* Split the whole linear mapping */
> +		for_each_mem_range(i, &start, &end) {
> +			if (start >= end)
> +				return -EINVAL;
> +
> +			vstart = __phys_to_virt(start);
> +			vend = __phys_to_virt(end);
> +			ret = split_kernel_pgtable_mapping(vstart, vend, flags);

I've been thinking about this; I think the best approach is to use the pagewalk
API here, then you don't need to implement your own pgtable walker; you can just
implement the pud, pmd and pte callbacks to do the splitting and they can reuse
common split helper functions. This reduces code size quite a bit I think. And
also means that for split_kernel_pgtable_mapping() you can just pass a single
address and don't need to iterate over every entry.
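
Roughly the shape I have in mind, purely as a sketch: the callback
bodies (split_pud_block()/split_pmd_block()) and the choice of walker
entry point are placeholders, and it assumes the pagewalk API can be
pointed at init_mm's kernel range:

static int split_pud_entry(pud_t *pudp, unsigned long addr,
			   unsigned long next, struct mm_walk *walk)
{
	pud_t pud = pudp_get(pudp);

	/* Replace a PUD block with a table of (contiguous) PMDs. */
	if (pud_leaf(pud))
		return split_pud_block(pudp, pud);

	return 0;
}

static int split_pmd_entry(pmd_t *pmdp, unsigned long addr,
			   unsigned long next, struct mm_walk *walk)
{
	pmd_t pmd = pmdp_get(pmdp);

	/* Replace a PMD block (or contig PMDs) with a table of PTEs. */
	if (pmd_leaf(pmd))
		return split_pmd_block(pmdp, pmd);

	return 0;
}

static const struct mm_walk_ops split_to_ptes_ops = {
	.pud_entry	= split_pud_entry,
	.pmd_entry	= split_pmd_entry,
};

Then split_kernel_pgtable_mapping() only needs one walk per boundary
address (e.g. [addr, addr + PAGE_SIZE)) instead of a hand-rolled
iteration over the whole range.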

I started prototyping this to prove to myself that it is possible and ended up
with quite a clean implementation. I'm going to post that as a v6 RFC shortly -
I hope that's ok. I've retained you as primary author since it's all based on
your work. I'm hoping that the posting will speed up review so we can hopefully
get this into 6.18.

Thanks,
Ryan

> +			if (ret)
> +				panic("Failed to split linear mappings\n");
> +
> +			flush_tlb_kernel_range(vstart, vend);
> +		}
> +		memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
> +
> +		/*
> +		 * Relies on dsb in flush_tlb_kernel_range() to avoid
> +		 * reordering before any page table split operations.
> +		 */
> +		WRITE_ONCE(repaint_done, 0);
> +	} else {
> +		/*
> +		 * The secondary CPUs can't run in the same address space
> +		 * as CPU 0 because accessing linear mapping addresses
> +		 * while CPU 0 is repainting them is not safe.
> +		 *
> +		 * Let the secondary CPUs run a busy loop in the idmap
> +		 * address space while repainting is ongoing.
> +		 */
> +		cpu_install_idmap();
> +		wait_fn();
> +		cpu_uninstall_idmap();
> +	}
> +
> +	return 0;
> +}
> +
>  #ifdef CONFIG_KFENCE
>  
>  bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
> @@ -1079,7 +1174,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
>  		    int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
>  
>  static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
> -	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
> +	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
> +	  bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>  
>  static void __init create_idmap(void)
>  {
> @@ -1104,6 +1200,19 @@ static void __init create_idmap(void)
>  			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>  			       __phys_to_virt(ptep) - ptep);
>  	}
> +
> +	/*
> +	 * Setup idmap mapping for repaint_done flag.  It will be used if
> +	 * repainting the linear mapping is needed later.
> +	 */
> +	if (linear_map_requires_bbml2) {
> +		u64 pa = __pa_symbol(&repaint_done);
> +		ptep = __pa_symbol(bbml2_ptes);
> +
> +		__pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
> +			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
> +			       __phys_to_virt(ptep) - ptep);
> +	}
>  }
>  
>  void __init paging_init(void)
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 6566aa9d8abb..4471d7e510a1 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -140,7 +140,7 @@ static int update_range_prot(unsigned long start, unsigned long size,
>  	data.set_mask = set_mask;
>  	data.clear_mask = clear_mask;
>  
> -	ret = split_kernel_pgtable_mapping(start, start + size);
> +	ret = split_kernel_pgtable_mapping(start, start + size, 0);
>  	if (WARN_ON_ONCE(ret))
>  		return ret;
>  
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 80d470aa469d..f0f9c49a4466 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -239,6 +239,25 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
>  	dsb	nshst
>  	.endm
>  
> +	.macro wait_for_boot_cpu, tmp1, tmp2, tmp3, tmp4
> +	/* Increment the flag to let the boot CPU know we're ready */
> +1:	ldxr	\tmp3, [\tmp2]
> +	add	\tmp3, \tmp3, #1
> +	stxr	\tmp4, \tmp3, [\tmp2]
> +	cbnz	\tmp4, 1b
> +
> +	/* Wait for the boot CPU to finish its job */
> +	sevl
> +1:	wfe
> +	ldxr	\tmp3, [\tmp2]
> +	cbnz	\tmp3, 1b
> +
> +	/* All done, act like nothing happened */
> +	msr	ttbr1_el1, \tmp1
> +	isb
> +	ret
> +	.endm
> +
>  /*
>   * void __kpti_install_ng_mappings(int cpu, int num_secondaries, phys_addr_t temp_pgd,
>   *				   unsigned long temp_pte_va)
> @@ -416,29 +435,35 @@ alternative_else_nop_endif
>  __idmap_kpti_secondary:
>  	/* Uninstall swapper before surgery begins */
>  	__idmap_cpu_set_reserved_ttbr1 x16, x17
> +	wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>  
> -	/* Increment the flag to let the boot CPU we're ready */
> -1:	ldxr	w16, [flag_ptr]
> -	add	w16, w16, #1
> -	stxr	w17, w16, [flag_ptr]
> -	cbnz	w17, 1b
> +	.unreq	swapper_ttb
> +	.unreq	flag_ptr
> +SYM_FUNC_END(idmap_kpti_install_ng_mappings)
> +	.popsection
> +#endif
>  
> -	/* Wait for the boot CPU to finish messing around with swapper */
> -	sevl
> -1:	wfe
> -	ldxr	w16, [flag_ptr]
> -	cbnz	w16, 1b
> +/*
> + * Wait for repainting is done. Run on secondary CPUs
> + * only.
> + */
> +	.pushsection	".data", "aw", %progbits
> +SYM_DATA(repaint_done, .long 1)
> +	.popsection
>  
> -	/* All done, act like nothing happened */
> -	msr	ttbr1_el1, swapper_ttb
> -	isb
> -	ret
> +	.pushsection ".idmap.text", "a"
> +SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
> +	swapper_ttb	.req	x0
> +	flag_ptr	.req	x1
> +	mrs     swapper_ttb, ttbr1_el1
> +	adr_l   flag_ptr, repaint_done
> +	__idmap_cpu_set_reserved_ttbr1 x16, x17
> +	wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>  
>  	.unreq	swapper_ttb
>  	.unreq	flag_ptr
> -SYM_FUNC_END(idmap_kpti_install_ng_mappings)
> +SYM_FUNC_END(bbml2_wait_for_repainting)
>  	.popsection
> -#endif
>  
>  /*
>   *	__cpu_setup



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
  2025-08-05  7:54   ` Ryan Roberts
@ 2025-08-05 17:45     ` Yang Shi
  0 siblings, 0 replies; 20+ messages in thread
From: Yang Shi @ 2025-08-05 17:45 UTC (permalink / raw)
  To: Ryan Roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
	dev.jain, scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 8/5/25 12:54 AM, Ryan Roberts wrote:
> Hi Yang,
>
> On 24/07/2025 23:11, Yang Shi wrote:
>> The kernel linear mapping is painted in very early stage of system boot.
>> The cpufeature has not been finalized yet at this point.  So the linear
>> mapping is determined by the capability of boot CPU.  If the boot CPU
>> supports BBML2, large block mapping will be used for linear mapping.
>>
>> But the secondary CPUs may not support BBML2, so repaint the linear mapping
>> if large block mapping is used and the secondary CPUs don't support BBML2
>> once cpufeature is finalized on all CPUs.
>>
>> If the boot CPU doesn't support BBML2 or the secondary CPUs have the
>> same BBML2 capability with the boot CPU, repainting the linear mapping
>> is not needed.
>>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>>   arch/arm64/include/asm/mmu.h   |   6 +-
>>   arch/arm64/kernel/cpufeature.c |   8 ++
>>   arch/arm64/mm/mmu.c            | 173 +++++++++++++++++++++++++++------
>>   arch/arm64/mm/pageattr.c       |   2 +-
>>   arch/arm64/mm/proc.S           |  57 ++++++++---
>>   5 files changed, 196 insertions(+), 50 deletions(-)
> [...]
>
>> +int __init linear_map_split_to_ptes(void *__unused)
>> +{
>> +	typedef void (repaint_wait_fn)(void);
>> +	extern repaint_wait_fn bbml2_wait_for_repainting;
>> +	repaint_wait_fn *wait_fn;
>> +
>> +	int cpu = smp_processor_id();
>> +
>> +	wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
>> +
>> +	/*
>> +	 * Repainting just can be run on CPU 0 because we just can be sure
>> +	 * CPU 0 supports BBML2.
>> +	 */
>> +	if (!cpu) {
>> +		phys_addr_t kernel_start = __pa_symbol(_stext);
>> +		phys_addr_t kernel_end = __pa_symbol(__init_begin);
>> +		phys_addr_t start, end;
>> +		unsigned long vstart, vend;
>> +		int flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>> +		u64 i;
>> +		int ret;
>> +
>> +		/*
>> +		 * Wait for all secondary CPUs get prepared for repainting
>> +		 * the linear mapping.
>> +		 */
>> +		smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());
>> +
>> +		memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
>> +		/* Split the whole linear mapping */
>> +		for_each_mem_range(i, &start, &end) {
>> +			if (start >= end)
>> +				return -EINVAL;
>> +
>> +			vstart = __phys_to_virt(start);
>> +			vend = __phys_to_virt(end);
>> +			ret = split_kernel_pgtable_mapping(vstart, vend, flags);

Hi Ryan,

> I've been thinking about this; I think the best approach is to use the pagewalk
> API here, then you don't need to implement your own pgtable walker; you can just
> implement the pud, pmd and pte callbacks to do the splitting and they can reuse
> common split helper functions. This reduces code size quite a bit I think. And
> also means that for split_kernel_pgtable_mapping() you can just pass a single
> address and don't need to iterate over every entry.

Using the pgtable walker API is fine with me. The biggest concern was how to 
reuse the split code for repainting, and I think this approach basically 
solves that problem.

>
> I started prototyping this to prove to myself that it is possible and ended up
> with quite a clean implementation. I'm going to post that as a v6 RFC shortly -
> I hope that's ok. I've retained you as primary author since it's all based on
> your work. I'm hoping that the posting will speed up review so we can hopefully
> get this into 6.18.

Thank you for making the prototype. I will take a look at it and reply in 
that series directly.

Regards,
Yang

>
> Thanks,
> Ryan
>
>> +			if (ret)
>> +				panic("Failed to split linear mappings\n");
>> +
>> +			flush_tlb_kernel_range(vstart, vend);
>> +		}
>> +		memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
>> +
>> +		/*
>> +		 * Relies on dsb in flush_tlb_kernel_range() to avoid
>> +		 * reordering before any page table split operations.
>> +		 */
>> +		WRITE_ONCE(repaint_done, 0);
>> +	} else {
>> +		/*
>> +		 * The secondary CPUs can't run in the same address space
>> +		 * with CPU 0 because accessing the linear mapping address
>> +		 * when CPU 0 is repainting it is not safe.
>> +		 *
>> +		 * Let the secondary CPUs run busy loop in idmap address
>> +		 * space when repainting is ongoing.
>> +		 */
>> +		cpu_install_idmap();
>> +		wait_fn();
>> +		cpu_uninstall_idmap();
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>>   #ifdef CONFIG_KFENCE
>>   
>>   bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
>> @@ -1079,7 +1174,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
>>   		    int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
>>   
>>   static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
>> -	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>> +	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
>> +	  bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>>   
>>   static void __init create_idmap(void)
>>   {
>> @@ -1104,6 +1200,19 @@ static void __init create_idmap(void)
>>   			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>>   			       __phys_to_virt(ptep) - ptep);
>>   	}
>> +
>> +	/*
>> +	 * Setup idmap mapping for repaint_done flag.  It will be used if
>> +	 * repainting the linear mapping is needed later.
>> +	 */
>> +	if (linear_map_requires_bbml2) {
>> +		u64 pa = __pa_symbol(&repaint_done);
>> +		ptep = __pa_symbol(bbml2_ptes);
>> +
>> +		__pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
>> +			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>> +			       __phys_to_virt(ptep) - ptep);
>> +	}
>>   }
>>   
>>   void __init paging_init(void)
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 6566aa9d8abb..4471d7e510a1 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -140,7 +140,7 @@ static int update_range_prot(unsigned long start, unsigned long size,
>>   	data.set_mask = set_mask;
>>   	data.clear_mask = clear_mask;
>>   
>> -	ret = split_kernel_pgtable_mapping(start, start + size);
>> +	ret = split_kernel_pgtable_mapping(start, start + size, 0);
>>   	if (WARN_ON_ONCE(ret))
>>   		return ret;
>>   
>> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
>> index 80d470aa469d..f0f9c49a4466 100644
>> --- a/arch/arm64/mm/proc.S
>> +++ b/arch/arm64/mm/proc.S
>> @@ -239,6 +239,25 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
>>   	dsb	nshst
>>   	.endm
>>   
>> +	.macro wait_for_boot_cpu, tmp1, tmp2, tmp3, tmp4
>> +	/* Increment the flag to let the boot CPU know we're ready */
>> +1:	ldxr	\tmp3, [\tmp2]
>> +	add	\tmp3, \tmp3, #1
>> +	stxr	\tmp4, \tmp3, [\tmp2]
>> +	cbnz	\tmp4, 1b
>> +
>> +	/* Wait for the boot CPU to finish its job */
>> +	sevl
>> +1:	wfe
>> +	ldxr	\tmp3, [\tmp2]
>> +	cbnz	\tmp3, 1b
>> +
>> +	/* All done, act like nothing happened */
>> +	msr	ttbr1_el1, \tmp1
>> +	isb
>> +	ret
>> +	.endm
>> +
>>   /*
>>    * void __kpti_install_ng_mappings(int cpu, int num_secondaries, phys_addr_t temp_pgd,
>>    *				   unsigned long temp_pte_va)
>> @@ -416,29 +435,35 @@ alternative_else_nop_endif
>>   __idmap_kpti_secondary:
>>   	/* Uninstall swapper before surgery begins */
>>   	__idmap_cpu_set_reserved_ttbr1 x16, x17
>> +	wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>>   
>> -	/* Increment the flag to let the boot CPU we're ready */
>> -1:	ldxr	w16, [flag_ptr]
>> -	add	w16, w16, #1
>> -	stxr	w17, w16, [flag_ptr]
>> -	cbnz	w17, 1b
>> +	.unreq	swapper_ttb
>> +	.unreq	flag_ptr
>> +SYM_FUNC_END(idmap_kpti_install_ng_mappings)
>> +	.popsection
>> +#endif
>>   
>> -	/* Wait for the boot CPU to finish messing around with swapper */
>> -	sevl
>> -1:	wfe
>> -	ldxr	w16, [flag_ptr]
>> -	cbnz	w16, 1b
>> +/*
>> + * Wait for repainting is done. Run on secondary CPUs
>> + * only.
>> + */
>> +	.pushsection	".data", "aw", %progbits
>> +SYM_DATA(repaint_done, .long 1)
>> +	.popsection
>>   
>> -	/* All done, act like nothing happened */
>> -	msr	ttbr1_el1, swapper_ttb
>> -	isb
>> -	ret
>> +	.pushsection ".idmap.text", "a"
>> +SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
>> +	swapper_ttb	.req	x0
>> +	flag_ptr	.req	x1
>> +	mrs     swapper_ttb, ttbr1_el1
>> +	adr_l   flag_ptr, repaint_done
>> +	__idmap_cpu_set_reserved_ttbr1 x16, x17
>> +	wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>>   
>>   	.unreq	swapper_ttb
>>   	.unreq	flag_ptr
>> -SYM_FUNC_END(idmap_kpti_install_ng_mappings)
>> +SYM_FUNC_END(bbml2_wait_for_repainting)
>>   	.popsection
>> -#endif
>>   
>>   /*
>>    *	__cpu_setup



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-08-01 14:35   ` Ryan Roberts
  2025-08-04 10:07     ` Ryan Roberts
@ 2025-08-05 18:53     ` Yang Shi
  2025-08-06  7:20       ` Ryan Roberts
  1 sibling, 1 reply; 20+ messages in thread
From: Yang Shi @ 2025-08-05 18:53 UTC (permalink / raw)
  To: Ryan Roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
	dev.jain, scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 8/1/25 7:35 AM, Ryan Roberts wrote:
> On 24/07/2025 23:11, Yang Shi wrote:
>> When rodata=full is specified, kernel linear mapping has to be mapped at
>> PTE level since large page table can't be split due to break-before-make
>> rule on ARM64.
>>
>> This resulted in a couple of problems:
>>    - performance degradation
>>    - more TLB pressure
>>    - memory waste for kernel page table
>>
>> With FEAT_BBM level 2 support, splitting large block page table to
>> smaller ones doesn't need to make the page table entry invalid anymore.
>> This allows kernel split large block mapping on the fly.
>>
>> Add kernel page table split support and use large block mapping by
>> default when FEAT_BBM level 2 is supported for rodata=full.  When
>> changing permissions for kernel linear mapping, the page table will be
>> split to smaller size.
>>
>> The machine without FEAT_BBM level 2 will fallback to have kernel linear
>> mapping PTE-mapped when rodata=full.
>>
>> With this we saw significant performance boost with some benchmarks and
>> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
>> 256GB memory.
>>
>> * Memory use after boot
>> Before:
>> MemTotal:       258988984 kB
>> MemFree:        254821700 kB
>>
>> After:
>> MemTotal:       259505132 kB
>> MemFree:        255410264 kB
>>
>> Around 500MB more memory are free to use.  The larger the machine, the
>> more memory saved.
>>
>> * Memcached
>> We saw performance degradation when running Memcached benchmark with
>> rodata=full vs rodata=on.  Our profiling pointed to kernel TLB pressure.
>> With this patchset we saw ops/sec is increased by around 3.5%, P99
>> latency is reduced by around 9.6%.
>> The gain mainly came from reduced kernel TLB misses.  The kernel TLB
>> MPKI is reduced by 28.5%.
>>
>> The benchmark data is now on par with rodata=on too.
>>
>> * Disk encryption (dm-crypt) benchmark
>> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
>> encryption (by dm-crypt).
>> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>>      --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>>      --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>>      --name=iops-test-job --eta-newline=1 --size 100G
>>
>> The IOPS is increased by 90% - 150% (the variance is high, but the worst
>> number of good case is around 90% more than the best number of bad case).
>> The bandwidth is increased and the avg clat is reduced proportionally.
>>
>> * Sequential file read
>> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
>> The bandwidth is increased by 150%.
>>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>>   arch/arm64/include/asm/cpufeature.h |  34 ++++
>>   arch/arm64/include/asm/mmu.h        |   1 +
>>   arch/arm64/include/asm/pgtable.h    |   5 +
>>   arch/arm64/kernel/cpufeature.c      |  31 +--
>>   arch/arm64/mm/mmu.c                 | 293 +++++++++++++++++++++++++++-
>>   arch/arm64/mm/pageattr.c            |   4 +
>>   6 files changed, 333 insertions(+), 35 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index bf13d676aae2..d0d394cc837d 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -871,6 +871,40 @@ static inline bool system_supports_pmuv3(void)
>>   	return cpus_have_final_cap(ARM64_HAS_PMUV3);
>>   }
>>   
>> +static inline bool bbml2_noabort_available(void)
>> +{
>> +	/*
>> +	 * We want to allow usage of BBML2 in as wide a range of kernel contexts
>> +	 * as possible. This list is therefore an allow-list of known-good
>> +	 * implementations that both support BBML2 and additionally, fulfill the
>> +	 * extra constraint of never generating TLB conflict aborts when using
>> +	 * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
>> +	 * kernel contexts difficult to prove safe against recursive aborts).
>> +	 *
>> +	 * Note that implementations can only be considered "known-good" if their
>> +	 * implementors attest to the fact that the implementation never raises
>> +	 * TLB conflict aborts for BBML2 mapping granularity changes.
>> +	 */
>> +	static const struct midr_range supports_bbml2_noabort_list[] = {
>> +		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
>> +		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
>> +		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
>> +		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
>> +		{}
>> +	};
>> +
>> +	/* Does our cpu guarantee to never raise TLB conflict aborts? */
>> +	if (!is_midr_in_range_list(supports_bbml2_noabort_list))
>> +		return false;
>> +
>> +	/*
>> +	 * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
>> +	 * about whether the MIDR check passes.
>> +	 */
>> +
>> +	return true;
>> +}
> I don't think this function should be inline. Won't we end up duplicating the
> midr list everywhere? Suggest moving back to cpufeature.c.

Yes, you are right.

>
>> +
>>   static inline bool system_supports_bbml2_noabort(void)
>>   {
>>   	return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
>> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
>> index 6e8aa8e72601..57f4b25e6f33 100644
>> --- a/arch/arm64/include/asm/mmu.h
>> +++ b/arch/arm64/include/asm/mmu.h
>> @@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>>   			       pgprot_t prot, bool page_mappings_only);
>>   extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>>   extern void mark_linear_text_alias_ro(void);
>> +extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);
>>   
>>   /*
>>    * This check is triggered during the early boot before the cpufeature
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index ba63c8736666..ad2a6a7e86b0 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -371,6 +371,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
>>   	return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
>>   }
>>   
>> +static inline pmd_t pmd_mknoncont(pmd_t pmd)
>> +{
>> +	return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
>> +}
>> +
>>   #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
>>   static inline int pte_uffd_wp(pte_t pte)
>>   {
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index d2a8a509a58e..1c96016a7a41 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -2215,36 +2215,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
>>   
>>   static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
>>   {
>> -	/*
>> -	 * We want to allow usage of BBML2 in as wide a range of kernel contexts
>> -	 * as possible. This list is therefore an allow-list of known-good
>> -	 * implementations that both support BBML2 and additionally, fulfill the
>> -	 * extra constraint of never generating TLB conflict aborts when using
>> -	 * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
>> -	 * kernel contexts difficult to prove safe against recursive aborts).
>> -	 *
>> -	 * Note that implementations can only be considered "known-good" if their
>> -	 * implementors attest to the fact that the implementation never raises
>> -	 * TLB conflict aborts for BBML2 mapping granularity changes.
>> -	 */
>> -	static const struct midr_range supports_bbml2_noabort_list[] = {
>> -		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
>> -		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
>> -		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
>> -		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
>> -		{}
>> -	};
>> -
>> -	/* Does our cpu guarantee to never raise TLB conflict aborts? */
>> -	if (!is_midr_in_range_list(supports_bbml2_noabort_list))
>> -		return false;
>> -
>> -	/*
>> -	 * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
>> -	 * about whether the MIDR check passes.
>> -	 */
>> -
>> -	return true;
>> +	return bbml2_noabort_available();
>>   }
>>   
>>   #ifdef CONFIG_ARM64_PAN
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 3d5fb37424ab..f63b39613571 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -480,6 +480,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>>   			     int flags);
>>   #endif
>>   
>> +#define INVALID_PHYS_ADDR	-1
>> +
>>   static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>>   				       enum pgtable_type pgtable_type)
>>   {
>> @@ -487,7 +489,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>>   	struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
>>   	phys_addr_t pa;
>>   
>> -	BUG_ON(!ptdesc);
>> +	if (!ptdesc)
>> +		return INVALID_PHYS_ADDR;
>> +
>>   	pa = page_to_phys(ptdesc_page(ptdesc));
>>   
>>   	switch (pgtable_type) {
>> @@ -509,15 +513,29 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>>   }
>>   
>>   static phys_addr_t __maybe_unused
>> -pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>> +split_pgtable_alloc(enum pgtable_type pgtable_type)
>>   {
>>   	return __pgd_pgtable_alloc(&init_mm, pgtable_type);
>>   }
>>   
>> +static phys_addr_t __maybe_unused
>> +pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>> +{
>> +	phys_addr_t pa;
>> +
>> +	pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
>> +	BUG_ON(pa == INVALID_PHYS_ADDR);
>> +	return pa;
>> +}
>> +
>>   static phys_addr_t
>>   pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
>>   {
>> -	return __pgd_pgtable_alloc(NULL, pgtable_type);
>> +	phys_addr_t pa;
>> +
>> +	pa = __pgd_pgtable_alloc(NULL, pgtable_type);
>> +	BUG_ON(pa == INVALID_PHYS_ADDR);
>> +	return pa;
>>   }
> The allocation all looks clean to me now. Thanks.

Thank you for the suggestion.

>
>>   
>>   /*
>> @@ -552,6 +570,254 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>>   			     pgd_pgtable_alloc_special_mm, flags);
>>   }
>>   
>> +static DEFINE_MUTEX(pgtable_split_lock);
>> +
>> +static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
>> +{
>> +	pte_t *ptep;
>> +	unsigned long next;
>> +	unsigned int nr;
>> +	unsigned long span;
>> +
>> +	ptep = pte_offset_kernel(pmdp, addr);
>> +
>> +	do {
>> +		pte_t *_ptep;
>> +
>> +		nr = 0;
>> +		next = pte_cont_addr_end(addr, end);
>> +		if (next < end)
>> +			nr = max(nr, ((end - next) / CONT_PTE_SIZE));
>> +		span = nr * CONT_PTE_SIZE;
>> +
>> +		_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
>> +		ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
>> +
>> +		if (((addr | next) & ~CONT_PTE_MASK) == 0)
>> +			continue;
>> +
>> +		if (!pte_cont(__ptep_get(_ptep)))
>> +			continue;
>> +
>> +		for (int i = 0; i < CONT_PTES; i++, _ptep++)
>> +			__set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
>> +	} while (addr = next + span, addr != end);
>> +
>> +	return 0;
>> +}
>> +
>> +static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
>> +{
>> +	unsigned long next;
>> +	unsigned int nr;
>> +	unsigned long span;
>> +	int ret = 0;
>> +
>> +	do {
>> +		pmd_t pmd;
>> +
>> +		nr = 1;
>> +		next = pmd_addr_end(addr, end);
>> +		if (next < end)
>> +			nr = max(nr, ((end - next) / PMD_SIZE));
>> +		span = (nr - 1) * PMD_SIZE;
>> +
>> +		if (((addr | next) & ~PMD_MASK) == 0)
>> +			continue;
>> +
>> +		pmd = pmdp_get(pmdp);
>> +		if (pmd_leaf(pmd)) {
>> +			phys_addr_t pte_phys;
>> +			pte_t *ptep;
>> +			pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN |
>> +					  PMD_TABLE_AF;
>> +			unsigned long pfn = pmd_pfn(pmd);
>> +			pgprot_t prot = pmd_pgprot(pmd);
>> +
>> +			pte_phys = split_pgtable_alloc(TABLE_PTE);
>> +			if (pte_phys == INVALID_PHYS_ADDR)
>> +				return -ENOMEM;
>> +
>> +			if (pgprot_val(prot) & PMD_SECT_PXN)
>> +				pmdval |= PMD_TABLE_PXN;
>> +
>> +			prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
>> +					PTE_TYPE_PAGE);
>> +			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +			ptep = (pte_t *)phys_to_virt(pte_phys);
>> +			for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
>> +				__set_pte(ptep, pfn_pte(pfn, prot));
>> +
>> +			dsb(ishst);
>> +
>> +			__pmd_populate(pmdp, pte_phys, pmdval);
>> +		}
>> +
>> +		ret = split_cont_pte(pmdp, addr, next);
>> +		if (ret)
>> +			break;
>> +	} while (pmdp += nr, addr = next + span, addr != end);
>> +
>> +	return ret;
>> +}
>> +
>> +static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
>> +{
>> +	pmd_t *pmdp;
>> +	unsigned long next;
>> +	unsigned int nr;
>> +	unsigned long span;
>> +	int ret = 0;
>> +
>> +	pmdp = pmd_offset(pudp, addr);
>> +
>> +	do {
>> +		pmd_t *_pmdp;
>> +
>> +		nr = 0;
>> +		next = pmd_cont_addr_end(addr, end);
>> +		if (next < end)
>> +			nr = max(nr, ((end - next) / CONT_PMD_SIZE));
>> +		span = nr * CONT_PMD_SIZE;
>> +
>> +		if (((addr | next) & ~CONT_PMD_MASK) == 0) {
>> +			pmdp += pmd_index(next) - pmd_index(addr) +
>> +				nr * CONT_PMDS;
>> +			continue;
>> +		}
>> +
>> +		_pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
>> +		if (!pmd_cont(pmdp_get(_pmdp)))
>> +			goto split;
>> +
>> +		for (int i = 0; i < CONT_PMDS; i++, _pmdp++)
>> +			set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
>> +
>> +split:
>> +		ret = split_pmd(pmdp, addr, next);
>> +		if (ret)
>> +			break;
>> +
>> +		pmdp += pmd_index(next) - pmd_index(addr) + nr * CONT_PMDS;
>> +	} while (addr = next + span, addr != end);
>> +
>> +	return ret;
>> +}
>> +
>> +static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
>> +{
>> +	pud_t *pudp;
>> +	unsigned long next;
>> +	unsigned int nr;
>> +	unsigned long span;
>> +	int ret = 0;
>> +
>> +	pudp = pud_offset(p4dp, addr);
>> +
>> +	do {
>> +		pud_t pud;
>> +
>> +		nr = 1;
>> +		next = pud_addr_end(addr, end);
>> +		if (next < end)
>> +			nr = max(nr, ((end - next) / PUD_SIZE));
>> +		span = (nr - 1) * PUD_SIZE;
>> +
>> +		if (((addr | next) & ~PUD_MASK) == 0)
>> +			continue;
>> +
>> +		pud = pudp_get(pudp);
>> +		if (pud_leaf(pud)) {
>> +			phys_addr_t pmd_phys;
>> +			pmd_t *pmdp;
>> +			pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN |
>> +					  PUD_TABLE_AF;
>> +			unsigned long pfn = pud_pfn(pud);
>> +			pgprot_t prot = pud_pgprot(pud);
>> +			unsigned int step = PMD_SIZE >> PAGE_SHIFT;
>> +
>> +			pmd_phys = split_pgtable_alloc(TABLE_PMD);
>> +			if (pmd_phys == INVALID_PHYS_ADDR)
>> +				return -ENOMEM;
>> +
>> +			if (pgprot_val(prot) & PMD_SECT_PXN)
>> +				pudval |= PUD_TABLE_PXN;
>> +
>> +			prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
>> +					PMD_TYPE_SECT);
>> +			prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +			pmdp = (pmd_t *)phys_to_virt(pmd_phys);
>> +			for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
>> +				set_pmd(pmdp, pfn_pmd(pfn, prot));
>> +				pfn += step;
>> +			}
>> +
>> +			dsb(ishst);
>> +
>> +			__pud_populate(pudp, pmd_phys, pudval);
>> +		}
>> +
>> +		ret = split_cont_pmd(pudp, addr, next);
>> +		if (ret)
>> +			break;
>> +	} while (pudp += nr, addr = next + span, addr != end);
>> +
>> +	return ret;
>> +}
>> +
>> +static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
>> +{
>> +	p4d_t *p4dp;
>> +	unsigned long next;
>> +	int ret = 0;
>> +
>> +	p4dp = p4d_offset(pgdp, addr);
>> +
>> +	do {
>> +		next = p4d_addr_end(addr, end);
>> +
>> +		ret = split_pud(p4dp, addr, next);
>> +		if (ret)
>> +			break;
>> +	} while (p4dp++, addr = next, addr != end);
>> +
>> +	return ret;
>> +}
>> +
>> +static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
>> +{
>> +	unsigned long next;
>> +	int ret = 0;
>> +
>> +	do {
>> +		next = pgd_addr_end(addr, end);
>> +		ret = split_p4d(pgdp, addr, next);
>> +		if (ret)
>> +			break;
>> +	} while (pgdp++, addr = next, addr != end);
>> +
>> +	return ret;
>> +}
>> +
>> +int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
>> +{
>> +	int ret;
>> +
>> +	if (!system_supports_bbml2_noabort())
>> +		return 0;
>> +
>> +	if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
>> +		return -EINVAL;
>> +
>> +	mutex_lock(&pgtable_split_lock);
> On reflection, I agree this lock approach is simpler than my suggestion to do it
> locklessly and I doubt this will become a bottleneck. Given x86 does it this
> way, I guess it's fine.
>
>> +	arch_enter_lazy_mmu_mode();
>> +	ret = split_pgd(pgd_offset_k(start), start, end);
> My instinct still remains that it would be better not to iterate over the range
> here, but instead call a "split(start); split(end);" since we just want to split
> the start and end. So the code would be simpler and probably more performant if
> we get rid of all the iteration.

It should be more performant for splitting a large range, especially when 
the range includes leaf mappings at different levels. But this version has 
an optimization to skip leaf mappings, so it should be close to your 
implementation from a performance perspective. And it walks the page table 
once instead of twice, which should be more efficient for a small split, 
for example, 4K.

>
> But I'm guessing you are going to enhance this infra in the next patch to
> support splitting all entries in the range for the system-doesn't-support-bbml case?

Yes. I added a "flags" parameter in patch #4. When repainting the page 
table, NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS is passed in to tell 
split_pgd() to split the page table all the way down to PTEs.
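
For illustration, the two call sites then look roughly like this (the exact
hunks live in patch #4; this just shows the intent of the flag):

	/* repaint path: force every block/cont mapping down to PTEs */
	ret = split_kernel_pgtable_mapping(vstart, vend,
					   NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);

	/* permission-change path: only split around the range boundaries */
	ret = split_kernel_pgtable_mapping(start, start + size, 0);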

>
> Anyway, I'll take a look at the next patch then come back to review the details
> of split_pgd().
>
>> +	arch_leave_lazy_mmu_mode();
>> +	mutex_unlock(&pgtable_split_lock);
>> +
>> +	return ret;
>> +}
>> +
>>   static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
>>   				phys_addr_t size, pgprot_t prot)
>>   {
>> @@ -639,6 +905,20 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>>   
>>   #endif /* CONFIG_KFENCE */
>>   
>> +bool linear_map_requires_bbml2;
> Until the next patch, you're only using this variable in this file, so at the
> very least, it should be static for now. But I'm proposing below it should be
> removed entirely.
>
>> +
>> +static inline bool force_pte_mapping(void)
>> +{
>> +	/*
>> +	 * Can't use cpufeature API to determine whether BBM level 2
>> +	 * is supported or not since cpufeature have not been
>> +	 * finalized yet.
>> +	 */
>> +	return (!bbml2_noabort_available() && (rodata_full ||
>> +		arm64_kfence_can_set_direct_map() || is_realm_world())) ||
>> +		debug_pagealloc_enabled();
>> +}
>> +
>>   static void __init map_mem(pgd_t *pgdp)
>>   {
>>   	static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
>> @@ -664,7 +944,9 @@ static void __init map_mem(pgd_t *pgdp)
>>   
>>   	early_kfence_pool = arm64_kfence_alloc_pool();
>>   
>> -	if (can_set_direct_map())
>> +	linear_map_requires_bbml2 = !force_pte_mapping() && rodata_full;
> This looks wrong; what about the kfence_can_set_direct_map and is_realm_world
> conditions?
>
> perhaps:
>
> 	linear_map_requires_bbml2 = !force_pte_mapping() && can_set_direct_map()

Thanks for figuring this out.

>
>> +
>> +	if (force_pte_mapping())
>>   		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>   
>>   	/*
>> @@ -1366,7 +1648,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>   
>>   	VM_BUG_ON(!mhp_range_allowed(start, size, true));
>>   
>> -	if (can_set_direct_map())
>> +	if (force_pte_mapping() ||
>> +	    (linear_map_requires_bbml2 && !system_supports_bbml2_noabort()))
> So force_pte_mapping() isn't actually returning what it sounds like it is; it's
> returning whether you would have to force pte mapping based on the current cpu's
> support for bbml2. Perhaps it would be better to implement force_pte_mapping()  as:
>
> static inline bool force_pte_mapping(void)
> {
> 	bool bbml2 = (system_capabilities_finalized() &&
> 			system_supports_bbml2_noabort()) ||
> 			bbml2_noabort_available();
>
> 	return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
> 			   is_realm_world())) ||
> 		debug_pagealloc_enabled();
> }
>
> Then we can just use force_pte_mapping() in both the boot and runtime paths
> without any adjustment based on linear_map_requires_bbml2. So you could drop
> linear_map_requires_bbml2 entirely, until the next patch at least.

Good idea.

>
>>   		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>   
>>   	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index c6a85000fa0e..6566aa9d8abb 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -140,6 +140,10 @@ static int update_range_prot(unsigned long start, unsigned long size,
>>   	data.set_mask = set_mask;
>>   	data.clear_mask = clear_mask;
>>   
>> +	ret = split_kernel_pgtable_mapping(start, start + size);
>> +	if (WARN_ON_ONCE(ret))
> I'm on the fence as to whether this warning is desirable. Would we normally want
> to warn in an OOM situation, or just unwind and carry on?

I don't have a strong preference on this. It returns an errno anyway, so 
the operation (for example, insmod) will fail if the split fails.

Thanks,
Yang

>
> Thanks,
> Ryan
>
>
>> +		return ret;
>> +
>>   	arch_enter_lazy_mmu_mode();
>>   
>>   	/*



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
  2025-08-01 16:14   ` Ryan Roberts
@ 2025-08-05 18:59     ` Yang Shi
  0 siblings, 0 replies; 20+ messages in thread
From: Yang Shi @ 2025-08-05 18:59 UTC (permalink / raw)
  To: Ryan Roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
	dev.jain, scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 8/1/25 9:14 AM, Ryan Roberts wrote:
> On 24/07/2025 23:11, Yang Shi wrote:
>> The kernel linear mapping is painted in very early stage of system boot.
>> The cpufeature has not been finalized yet at this point.  So the linear
>> mapping is determined by the capability of boot CPU.  If the boot CPU
>> supports BBML2, large block mapping will be used for linear mapping.
>>
>> But the secondary CPUs may not support BBML2, so repaint the linear mapping
>> if large block mapping is used and the secondary CPUs don't support BBML2
>> once cpufeature is finalized on all CPUs.
>>
>> If the boot CPU doesn't support BBML2 or the secondary CPUs have the
>> same BBML2 capability with the boot CPU, repainting the linear mapping
>> is not needed.
>>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>>   arch/arm64/include/asm/mmu.h   |   6 +-
>>   arch/arm64/kernel/cpufeature.c |   8 ++
>>   arch/arm64/mm/mmu.c            | 173 +++++++++++++++++++++++++++------
>>   arch/arm64/mm/pageattr.c       |   2 +-
>>   arch/arm64/mm/proc.S           |  57 ++++++++---
>>   5 files changed, 196 insertions(+), 50 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
>> index 57f4b25e6f33..9bf50e8897e2 100644
>> --- a/arch/arm64/include/asm/mmu.h
>> +++ b/arch/arm64/include/asm/mmu.h
>> @@ -56,6 +56,8 @@ typedef struct {
>>    */
>>   #define ASID(mm)	(atomic64_read(&(mm)->context.id) & 0xffff)
>>   
>> +extern bool linear_map_requires_bbml2;
>> +
>>   static inline bool arm64_kernel_unmapped_at_el0(void)
>>   {
>>   	return alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0);
>> @@ -71,7 +73,9 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>>   			       pgprot_t prot, bool page_mappings_only);
>>   extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>>   extern void mark_linear_text_alias_ro(void);
>> -extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);
>> +extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end,
>> +					unsigned int flags);
>> +extern int linear_map_split_to_ptes(void *__unused);
>>   
>>   /*
>>    * This check is triggered during the early boot before the cpufeature
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 1c96016a7a41..23c01d679c40 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -85,6 +85,7 @@
>>   #include <asm/insn.h>
>>   #include <asm/kvm_host.h>
>>   #include <asm/mmu_context.h>
>> +#include <asm/mmu.h>
>>   #include <asm/mte.h>
>>   #include <asm/hypervisor.h>
>>   #include <asm/processor.h>
>> @@ -2009,6 +2010,12 @@ static int __init __kpti_install_ng_mappings(void *__unused)
>>   	return 0;
>>   }
>>   
>> +static void __init linear_map_maybe_split_to_ptes(void)
>> +{
>> +	if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort())
>> +		stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);
>> +}
>> +
>>   static void __init kpti_install_ng_mappings(void)
>>   {
>>   	/* Check whether KPTI is going to be used */
>> @@ -3855,6 +3862,7 @@ void __init setup_system_features(void)
>>   {
>>   	setup_system_capabilities();
>>   
>> +	linear_map_maybe_split_to_ptes();
>>   	kpti_install_ng_mappings();
>>   
>>   	sve_setup();
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index f63b39613571..22f2d0869fdd 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -482,11 +482,11 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>>   
>>   #define INVALID_PHYS_ADDR	-1
>>   
> [...]
>
> I'll review the actual walker separately (I've run out of time today).
>
>
>>   
>> +extern u32 repaint_done;
>> +
>> +int __init linear_map_split_to_ptes(void *__unused)
>> +{
>> +	typedef void (repaint_wait_fn)(void);
>> +	extern repaint_wait_fn bbml2_wait_for_repainting;
>> +	repaint_wait_fn *wait_fn;
>> +
>> +	int cpu = smp_processor_id();
>> +
>> +	wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
>> +
>> +	/*
>> +	 * Repainting just can be run on CPU 0 because we just can be sure
>> +	 * CPU 0 supports BBML2.
>> +	 */
>> +	if (!cpu) {
>> +		phys_addr_t kernel_start = __pa_symbol(_stext);
>> +		phys_addr_t kernel_end = __pa_symbol(__init_begin);
>> +		phys_addr_t start, end;
>> +		unsigned long vstart, vend;
>> +		int flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>> +		u64 i;
>> +		int ret;
>> +
>> +		/*
>> +		 * Wait for all secondary CPUs get prepared for repainting
>> +		 * the linear mapping.
>> +		 */
>> +		smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());
> Is this correct? I would have thought the primary is waiting for the
> secondaries, so "VAL == num_online_cpus() - 1" ?

It is correct because repaint_done is initialized to 1: with N online CPUs, 
the N - 1 secondaries each increment it once, so the counter reaches N 
exactly when all of them are parked in the idmap wait loop.

>
>> +
>> +		memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
>> +		/* Split the whole linear mapping */
>> +		for_each_mem_range(i, &start, &end) {
> I think I asked this in the last round; but I just want to double check;
> memblock is definitely still valid here and we are definitely going to get
> exactly the same regions out as we did in map_mem()? I wonder if it's possible
> between then and now that some other component has reserved some memory? In that
> case we wouldn't walk that region?

I think it is kept unchanged. ptdump shows no block or cont mappings for 
the linear mapping if I force a repaint of the page table.

>
> Perhaps it would be safer (and simpler) to just walk all of [PAGE_OFFSET,
> _stext) and [__init_begin, PAGE_END) and ignore the holes?

This may walk some holes, particularly on multi-socket machines. It doesn't 
cause a correctness issue, it is just not very efficient.

>
>> +			if (start >= end)
>> +				return -EINVAL;
>> +
>> +			vstart = __phys_to_virt(start);
>> +			vend = __phys_to_virt(end);
>> +			ret = split_kernel_pgtable_mapping(vstart, vend, flags);
>> +			if (ret)
>> +				panic("Failed to split linear mappings\n");
>> +
>> +			flush_tlb_kernel_range(vstart, vend);
>> +		}
>> +		memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
>> +
>> +		/*
>> +		 * Relies on dsb in flush_tlb_kernel_range() to avoid
>> +		 * reordering before any page table split operations.
>> +		 */
>> +		WRITE_ONCE(repaint_done, 0);
>> +	} else {
>> +		/*
>> +		 * The secondary CPUs can't run in the same address space
>> +		 * with CPU 0 because accessing the linear mapping address
>> +		 * when CPU 0 is repainting it is not safe.
>> +		 *
>> +		 * Let the secondary CPUs run busy loop in idmap address
>> +		 * space when repainting is ongoing.
>> +		 */
>> +		cpu_install_idmap();
>> +		wait_fn();
>> +		cpu_uninstall_idmap();
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>>   #ifdef CONFIG_KFENCE
>>   
>>   bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
>> @@ -1079,7 +1174,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
>>   		    int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
>>   
>>   static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
>> -	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>> +	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
>> +	  bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>>   
>>   static void __init create_idmap(void)
>>   {
>> @@ -1104,6 +1200,19 @@ static void __init create_idmap(void)
>>   			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>>   			       __phys_to_virt(ptep) - ptep);
>>   	}
>> +
>> +	/*
>> +	 * Setup idmap mapping for repaint_done flag.  It will be used if
>> +	 * repainting the linear mapping is needed later.
>> +	 */
>> +	if (linear_map_requires_bbml2) {
>> +		u64 pa = __pa_symbol(&repaint_done);
>> +		ptep = __pa_symbol(bbml2_ptes);
>> +
>> +		__pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
>> +			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>> +			       __phys_to_virt(ptep) - ptep);
>> +	}
>>   }
>>   
>>   void __init paging_init(void)
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 6566aa9d8abb..4471d7e510a1 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -140,7 +140,7 @@ static int update_range_prot(unsigned long start, unsigned long size,
>>   	data.set_mask = set_mask;
>>   	data.clear_mask = clear_mask;
>>   
>> -	ret = split_kernel_pgtable_mapping(start, start + size);
>> +	ret = split_kernel_pgtable_mapping(start, start + size, 0);
>>   	if (WARN_ON_ONCE(ret))
>>   		return ret;
>>   
>> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
>> index 80d470aa469d..f0f9c49a4466 100644
>> --- a/arch/arm64/mm/proc.S
>> +++ b/arch/arm64/mm/proc.S
>> @@ -239,6 +239,25 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
>>   	dsb	nshst
>>   	.endm
>>   
>> +	.macro wait_for_boot_cpu, tmp1, tmp2, tmp3, tmp4
>> +	/* Increment the flag to let the boot CPU know we're ready */
>> +1:	ldxr	\tmp3, [\tmp2]
>> +	add	\tmp3, \tmp3, #1
>> +	stxr	\tmp4, \tmp3, [\tmp2]
>> +	cbnz	\tmp4, 1b
>> +
>> +	/* Wait for the boot CPU to finish its job */
>> +	sevl
>> +1:	wfe
>> +	ldxr	\tmp3, [\tmp2]
>> +	cbnz	\tmp3, 1b
>> +
>> +	/* All done, act like nothing happened */
>> +	msr	ttbr1_el1, \tmp1
>> +	isb
>> +	ret
>> +	.endm
> You've defined the macro within "#ifdef CONFIG_UNMAP_KERNEL_AT_EL0" but then
> need to use it outside of that scope.
>
> But I don't think this needs to be a macro; I think it would be better as a
> function (as I suggested in the last round). Then the text only needs to appear
> once in the image and it can be used from both places (see below).
>
>> +
>>   /*
>>    * void __kpti_install_ng_mappings(int cpu, int num_secondaries, phys_addr_t temp_pgd,
>>    *				   unsigned long temp_pte_va)
>> @@ -416,29 +435,35 @@ alternative_else_nop_endif
>>   __idmap_kpti_secondary:
>>   	/* Uninstall swapper before surgery begins */
>>   	__idmap_cpu_set_reserved_ttbr1 x16, x17
>> +	wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>>   
>> -	/* Increment the flag to let the boot CPU we're ready */
>> -1:	ldxr	w16, [flag_ptr]
>> -	add	w16, w16, #1
>> -	stxr	w17, w16, [flag_ptr]
>> -	cbnz	w17, 1b
>> +	.unreq	swapper_ttb
>> +	.unreq	flag_ptr
>> +SYM_FUNC_END(idmap_kpti_install_ng_mappings)
>> +	.popsection
>> +#endif
>>   
>> -	/* Wait for the boot CPU to finish messing around with swapper */
>> -	sevl
>> -1:	wfe
>> -	ldxr	w16, [flag_ptr]
>> -	cbnz	w16, 1b
>> +/*
>> + * Wait for repainting is done. Run on secondary CPUs
>> + * only.
>> + */
>> +	.pushsection	".data", "aw", %progbits
>> +SYM_DATA(repaint_done, .long 1)
>> +	.popsection
>>   
>> -	/* All done, act like nothing happened */
>> -	msr	ttbr1_el1, swapper_ttb
>> -	isb
>> -	ret
>> +	.pushsection ".idmap.text", "a"
>> +SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
>> +	swapper_ttb	.req	x0
>> +	flag_ptr	.req	x1
>> +	mrs     swapper_ttb, ttbr1_el1
>> +	adr_l   flag_ptr, repaint_done
>> +	__idmap_cpu_set_reserved_ttbr1 x16, x17
>> +	wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>>   
>>   	.unreq	swapper_ttb
>>   	.unreq	flag_ptr
>> -SYM_FUNC_END(idmap_kpti_install_ng_mappings)
>> +SYM_FUNC_END(bbml2_wait_for_repainting)
>>   	.popsection
>> -#endif
> How about this instead?

Looks good to me.

Thanks,
Yang


>
> ---8<---
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 8c75965afc9e..a116b2b8ad59 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -416,8 +416,30 @@ alternative_else_nop_endif
>   __idmap_kpti_secondary:
>   	/* Uninstall swapper before surgery begins */
>   	__idmap_cpu_set_reserved_ttbr1 x16, x17
> +	b scondary_cpu_wait
> +
> +	.unreq	swapper_ttb
> +	.unreq	flag_ptr
> +SYM_FUNC_END(idmap_kpti_install_ng_mappings)
> +	.popsection
> +#endif
> +
> +	.pushsection	".data", "aw", %progbits
> +SYM_DATA(repaint_done, .long 1)
> +	.popsection
> +
> +	.pushsection ".idmap.text", "a"
> +SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
> +	/* Must be same registers as in idmap_kpti_install_ng_mappings */
> +	swapper_ttb	.req	x3
> +	flag_ptr	.req	x4
> +
> +	mrs     swapper_ttb, ttbr1_el1
> +	adr_l   flag_ptr, repaint_done
> +	__idmap_cpu_set_reserved_ttbr1 x16, x17
>
>   	/* Increment the flag to let the boot CPU we're ready */
> +scondary_cpu_wait:
>   1:	ldxr	w16, [flag_ptr]
>   	add	w16, w16, #1
>   	stxr	w17, w16, [flag_ptr]
> @@ -436,9 +458,8 @@ __idmap_kpti_secondary:
>
>   	.unreq	swapper_ttb
>   	.unreq	flag_ptr
> -SYM_FUNC_END(idmap_kpti_install_ng_mappings)
> +SYM_FUNC_END(bbml2_wait_for_repainting)
>   	.popsection
> -#endif
>
>   /*
>    *	__cpu_setup
> ---8<---
>
> Thanks,
> Ryan
>



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-07-29 12:34   ` Dev Jain
@ 2025-08-05 21:28     ` Yang Shi
  2025-08-06  0:10       ` Yang Shi
  0 siblings, 1 reply; 20+ messages in thread
From: Yang Shi @ 2025-08-05 21:28 UTC (permalink / raw)
  To: Dev Jain, ryan.roberts, will, catalin.marinas, akpm,
	Miko.Lenczewski, scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 7/29/25 5:34 AM, Dev Jain wrote:
>
> On 25/07/25 3:41 am, Yang Shi wrote:
>> [----- snip -----]
>>     #ifdef CONFIG_ARM64_PAN
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 3d5fb37424ab..f63b39613571 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -480,6 +480,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, 
>> phys_addr_t phys, unsigned long virt,
>>                    int flags);
>>   #endif
>>   +#define INVALID_PHYS_ADDR    -1
>> +
>>   static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>>                          enum pgtable_type pgtable_type)
>>   {
>> @@ -487,7 +489,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct 
>> mm_struct *mm,
>>       struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & 
>> ~__GFP_ZERO, 0);
>>       phys_addr_t pa;
>>   -    BUG_ON(!ptdesc);
>> +    if (!ptdesc)
>> +        return INVALID_PHYS_ADDR;
>> +
>>       pa = page_to_phys(ptdesc_page(ptdesc));
>>         switch (pgtable_type) {
>> @@ -509,15 +513,29 @@ static phys_addr_t __pgd_pgtable_alloc(struct 
>> mm_struct *mm,
>>   }
>>     static phys_addr_t __maybe_unused
>> -pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>> +split_pgtable_alloc(enum pgtable_type pgtable_type)
>>   {
>>       return __pgd_pgtable_alloc(&init_mm, pgtable_type);
>>   }
>>   +static phys_addr_t __maybe_unused
>> +pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>> +{
>> +    phys_addr_t pa;
>> +
>> +    pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
>> +    BUG_ON(pa == INVALID_PHYS_ADDR);
>> +    return pa;
>> +}
>> +
>>   static phys_addr_t
>>   pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
>>   {
>> -    return __pgd_pgtable_alloc(NULL, pgtable_type);
>> +    phys_addr_t pa;
>> +
>> +    pa = __pgd_pgtable_alloc(NULL, pgtable_type);
>> +    BUG_ON(pa == INVALID_PHYS_ADDR);
>> +    return pa;
>>   }
>>     /*
>> @@ -552,6 +570,254 @@ void __init create_pgd_mapping(struct mm_struct 
>> *mm, phys_addr_t phys,
>>                    pgd_pgtable_alloc_special_mm, flags);
>>   }
>>   +static DEFINE_MUTEX(pgtable_split_lock);
>
> Thanks for taking a separate lock.
>
>> +
>> +static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned 
>> long end)
>> +{
>> +    pte_t *ptep;
>> +    unsigned long next;
>> +    unsigned int nr;
>> +    unsigned long span;
>> +
>> +    ptep = pte_offset_kernel(pmdp, addr);
>> +
>> +    do {
>> +        pte_t *_ptep;
>> +
>> +        nr = 0;
>> +        next = pte_cont_addr_end(addr, end);
>> +        if (next < end)
>> +            nr = max(nr, ((end - next) / CONT_PTE_SIZE));
>> +        span = nr * CONT_PTE_SIZE;
>> +
>> +        _ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
>> +        ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
>> +
>> +        if (((addr | next) & ~CONT_PTE_MASK) == 0)
>> +            continue;
>> +
>> +        if (!pte_cont(__ptep_get(_ptep)))
>> +            continue;
>> +
>> +        for (int i = 0; i < CONT_PTES; i++, _ptep++)
>> +            __set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
>> +    } while (addr = next + span, addr != end);
>> +
>> +    return 0;
>> +}
>> +
>> +static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long 
>> end)
>> +{
>> +    unsigned long next;
>> +    unsigned int nr;
>> +    unsigned long span;
>> +    int ret = 0;
>> +
>> +    do {
>> +        pmd_t pmd;
>> +
>> +        nr = 1;
>> +        next = pmd_addr_end(addr, end);
>> +        if (next < end)
>> +            nr = max(nr, ((end - next) / PMD_SIZE));
>> +        span = (nr - 1) * PMD_SIZE;
>> +
>> +        if (((addr | next) & ~PMD_MASK) == 0)
>> +            continue;
>> +
>> +        pmd = pmdp_get(pmdp);
>> +        if (pmd_leaf(pmd)) {
>> +            phys_addr_t pte_phys;
>> +            pte_t *ptep;
>> +            pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN |
>> +                      PMD_TABLE_AF;
>> +            unsigned long pfn = pmd_pfn(pmd);
>> +            pgprot_t prot = pmd_pgprot(pmd);
>> +
>> +            pte_phys = split_pgtable_alloc(TABLE_PTE);
>> +            if (pte_phys == INVALID_PHYS_ADDR)
>> +                return -ENOMEM;
>> +
>> +            if (pgprot_val(prot) & PMD_SECT_PXN)
>> +                pmdval |= PMD_TABLE_PXN;
>> +
>> +            prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
>> +                    PTE_TYPE_PAGE);
>> +            prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +            ptep = (pte_t *)phys_to_virt(pte_phys);
>> +            for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
>> +                __set_pte(ptep, pfn_pte(pfn, prot));
>> +
>> +            dsb(ishst);
>> +
>> +            __pmd_populate(pmdp, pte_phys, pmdval);
>> +        }
>> +
>> +        ret = split_cont_pte(pmdp, addr, next);
>> +        if (ret)
>> +            break;
>> +    } while (pmdp += nr, addr = next + span, addr != end);
>> +
>> +    return ret;
>> +}
>> +
>> +static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned 
>> long end)
>> +{
>> +    pmd_t *pmdp;
>> +    unsigned long next;
>> +    unsigned int nr;
>> +    unsigned long span;
>> +    int ret = 0;
>> +
>> +    pmdp = pmd_offset(pudp, addr);
>> +
>> +    do {
>> +        pmd_t *_pmdp;
>> +
>> +        nr = 0;
>> +        next = pmd_cont_addr_end(addr, end);
>> +        if (next < end)
>> +            nr = max(nr, ((end - next) / CONT_PMD_SIZE));
>> +        span = nr * CONT_PMD_SIZE;
>> +
>> +        if (((addr | next) & ~CONT_PMD_MASK) == 0) {
>> +            pmdp += pmd_index(next) - pmd_index(addr) +
>> +                nr * CONT_PMDS;
>> +            continue;
>> +        }
>> +
>> +        _pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
>> +        if (!pmd_cont(pmdp_get(_pmdp)))
>> +            goto split;
>> +
>> +        for (int i = 0; i < CONT_PMDS; i++, _pmdp++)
>> +            set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
>> +
>> +split:
>> +        ret = split_pmd(pmdp, addr, next);
>> +        if (ret)
>> +            break;
>> +
>> +        pmdp += pmd_index(next) - pmd_index(addr) + nr * CONT_PMDS;
>> +    } while (addr = next + span, addr != end);
>> +
>> +    return ret;
>> +}
>> +
>> +static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long 
>> end)
>> +{
>> +    pud_t *pudp;
>> +    unsigned long next;
>> +    unsigned int nr;
>> +    unsigned long span;
>> +    int ret = 0;
>> +
>> +    pudp = pud_offset(p4dp, addr);
>> +
>> +    do {
>> +        pud_t pud;
>> +
>> +        nr = 1;
>> +        next = pud_addr_end(addr, end);
>> +        if (next < end)
>> +            nr = max(nr, ((end - next) / PUD_SIZE));
>> +        span = (nr - 1) * PUD_SIZE;
>> +
>> +        if (((addr | next) & ~PUD_MASK) == 0)
>> +            continue;
>> +
>> +        pud = pudp_get(pudp);
>> +        if (pud_leaf(pud)) {
>> +            phys_addr_t pmd_phys;
>> +            pmd_t *pmdp;
>> +            pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN |
>> +                      PUD_TABLE_AF;
>> +            unsigned long pfn = pud_pfn(pud);
>> +            pgprot_t prot = pud_pgprot(pud);
>> +            unsigned int step = PMD_SIZE >> PAGE_SHIFT;
>> +
>> +            pmd_phys = split_pgtable_alloc(TABLE_PMD);
>> +            if (pmd_phys == INVALID_PHYS_ADDR)
>> +                return -ENOMEM;
>> +
>> +            if (pgprot_val(prot) & PMD_SECT_PXN)
>> +                pudval |= PUD_TABLE_PXN;
>> +
>> +            prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
>> +                    PMD_TYPE_SECT);
>> +            prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +            pmdp = (pmd_t *)phys_to_virt(pmd_phys);
>> +            for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
>> +                set_pmd(pmdp, pfn_pmd(pfn, prot));
>> +                pfn += step;
>> +            }
>> +
>> +            dsb(ishst);
>> +
>> +            __pud_populate(pudp, pmd_phys, pudval);
>> +        }
>> +
>> +        ret = split_cont_pmd(pudp, addr, next);
>> +        if (ret)
>> +            break;
>> +    } while (pudp += nr, addr = next + span, addr != end);
>> +
>> +    return ret;
>> +}
>> +
>> +static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long 
>> end)
>> +{
>> +    p4d_t *p4dp;
>> +    unsigned long next;
>> +    int ret = 0;
>> +
>> +    p4dp = p4d_offset(pgdp, addr);
>> +
>> +    do {
>> +        next = p4d_addr_end(addr, end);
>> +
>> +        ret = split_pud(p4dp, addr, next);
>> +        if (ret)
>> +            break;
>> +    } while (p4dp++, addr = next, addr != end);
>> +
>> +    return ret;
>> +}
>> +
>> +static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long 
>> end)
>> +{
>> +    unsigned long next;
>> +    int ret = 0;
>> +
>> +    do {
>> +        next = pgd_addr_end(addr, end);
>> +        ret = split_p4d(pgdp, addr, next);
>> +        if (ret)
>> +            break;
>> +    } while (pgdp++, addr = next, addr != end);
>> +
>> +    return ret;
>> +}
>> +
>> +int split_kernel_pgtable_mapping(unsigned long start, unsigned long 
>> end)
>> +{
>> +    int ret;
>> +
>> +    if (!system_supports_bbml2_noabort())
>> +        return 0;
>> +
>> +    if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
>> +        return -EINVAL;
>> +
>> +    mutex_lock(&pgtable_split_lock);
>> +    arch_enter_lazy_mmu_mode();
>> +    ret = split_pgd(pgd_offset_k(start), start, end);
>> +    arch_leave_lazy_mmu_mode();
>> +    mutex_unlock(&pgtable_split_lock);
>> +
>> +    return ret;
>> +}
>> +
>>       /*
>
> --- snip ---
>
> I'm afraid I'll have to agree with Ryan :) Looking at the signature of 
> split_kernel_pgtable_mapping,
> one would expect that this function splits all block mappings in this 
> region. But that is just a
> nit; it does not seem right to me that we are iterating over the 
> entire space when we know *exactly* where
> we have to make the split, just to save on pgd/p4d/pud loads - the 
> effect of which is probably cancelled
> out by unnecessary iterations your approach takes to skip the 
> intermediate blocks.

The implementation is aimed at reusing the split code for repainting: we 
have to split all leaf mappings down to PTEs when repainting.

Now that Ryan has suggested using the pgtable walk API for repainting, the 
duplicate-code problem goes away. We had some discussion about this in the 
other series.

>
> If we are concerned that most change_memory_common() operations are 
> for a single page, then
> we can do something like:
>
> unsigned long size = end - start;
> bool end_split, start_split = false;
>
> if (start not aligned to block mapping)
>     start_split = split(start);
>
> /*
>  * split the end only if the start wasn't split, or
>  * if it cannot be guaranteed that start and end lie
>  * on the same contig block
>  */
> if (!start_split || (size > CONT_PTE_SIZE))
>     end_split = split(end);

Thanks for the suggestion. It works for some cases, but IIUC it can't work 
when the range crosses page tables. For example, start may be in one PMD 
while end is in another PMD.

Regards,
Yang

>
>
>




* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-08-05 21:28     ` Yang Shi
@ 2025-08-06  0:10       ` Yang Shi
  0 siblings, 0 replies; 20+ messages in thread
From: Yang Shi @ 2025-08-06  0:10 UTC (permalink / raw)
  To: Dev Jain, ryan.roberts, will, catalin.marinas, akpm,
	Miko.Lenczewski, scott, cl
  Cc: linux-arm-kernel, linux-kernel

>>
>> --- snip ---
>>
>> I'm afraid I'll have to agree with Ryan :) Looking at the signature 
>> of split_kernel_pgtable_mapping,
>> one would expect that this function splits all block mappings in this 
>> region. But that is just a
>> nit; it does not seem right to me that we are iterating over the 
>> entire space when we know *exactly* where
>> we have to make the split, just to save on pgd/p4d/pud loads - the 
>> effect of which is probably cancelled
>> out by unnecessary iterations your approach takes to skip the 
>> intermediate blocks.
>
> The implementation is aimed at reusing the split code for repainting: we 
> have to split all leaf mappings down to PTEs when repainting.
>
> Now that Ryan has suggested using the pgtable walk API for repainting, the 
> duplicate-code problem goes away. We had some discussion about this in the 
> other series.
>
>>
>> If we are concerned that most change_memory_common() operations are 
>> for a single page, then
>> we can do something like:
>>
>> unsigned long size = end - start;
>> bool end_split, start_split = false;
>>
>> if (start not aligned to block mapping)
>>     start_split = split(start);
>>
>> /*
>>  * split the end only if the start wasn't split, or
>>  * if it cannot be guaranteed that start and end lie
>>  * on the same contig block
>>  */
>> if (!start_split || (size > CONT_PTE_SIZE))
>>     end_split = split(end);
>
> Thanks for the suggestion. It works for some cases, but IIUC it can't 
> work when the range crosses page tables. For example, start may be in 
> one PMD while end is in another PMD.

I think we only have to call split_mapping(end) if the size is greater than 
page size, i.e.

split_mapping(start);
if (size > PAGE_SIZE)
     split_mapping(end);

This avoids walking the page table twice for a page-sized range, which 
should be the most common case in the current kernel.
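
Just to illustrate the shape (this is only a sketch: split_mapping() stands 
for a per-address split helper and split_range_edges() is a made-up wrapper 
name, neither is a function defined in this series):

static int split_range_edges(unsigned long start, unsigned long end)
{
        int ret;

        /* Make sure no block/contig mapping straddles the start boundary. */
        ret = split_mapping(start);
        if (ret)
                return ret;

        /*
         * A single-page range is fully handled by the split around start,
         * so only walk the table again when the range is larger than a
         * page.
         */
        if (end - start > PAGE_SIZE)
                ret = split_mapping(end);

        return ret;
}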

Thanks,
Yang

>
> Regards,
> Yang
>
>>
>>
>>
>




* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-08-05 18:53     ` Yang Shi
@ 2025-08-06  7:20       ` Ryan Roberts
  2025-08-07  0:44         ` Yang Shi
  0 siblings, 1 reply; 20+ messages in thread
From: Ryan Roberts @ 2025-08-06  7:20 UTC (permalink / raw)
  To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
	scott, cl
  Cc: linux-arm-kernel, linux-kernel

On 05/08/2025 19:53, Yang Shi wrote:

[...]

>>> +    arch_enter_lazy_mmu_mode();
>>> +    ret = split_pgd(pgd_offset_k(start), start, end);
>> My instinct still remains that it would be better not to iterate over the range
>> here, but instead call a "split(start); split(end);" since we just want to split
>> the start and end. So the code would be simpler and probably more performant if
>> we get rid of all the iteration.
> 
> It should be more performant for splitting a large range, especially when the
> range includes leaf mappings at different levels. But I had some optimization to
> skip leaf mappings in this version, so it should be close to your implementation
> from a performance perspective. And it just walks the page table once instead of
> twice, so it should be more efficient for a small split, for example 4K.

I guess this is the crux of our disagreement. I think the "walks the table once
for 4K" is a micro-optimization whose effect I doubt we would see in any
benchmark results. In the absence of data, I'd prefer the simpler, smaller,
easier-to-understand version.

Both implementations are on list now; perhaps the maintainers can steer us.

Thanks,
Ryan



* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
  2025-08-06  7:20       ` Ryan Roberts
@ 2025-08-07  0:44         ` Yang Shi
  0 siblings, 0 replies; 20+ messages in thread
From: Yang Shi @ 2025-08-07  0:44 UTC (permalink / raw)
  To: Ryan Roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
	dev.jain, scott, cl
  Cc: linux-arm-kernel, linux-kernel



On 8/6/25 12:20 AM, Ryan Roberts wrote:
> On 05/08/2025 19:53, Yang Shi wrote:
>
> [...]
>
>>>> +    arch_enter_lazy_mmu_mode();
>>>> +    ret = split_pgd(pgd_offset_k(start), start, end);
>>> My instinct still remains that it would be better not to iterate over the range
>>> here, but instead call a "split(start); split(end);" since we just want to split
>>> the start and end. So the code would be simpler and probably more performant if
>>> we get rid of all the iteration.
>> It should be more performant for splitting a large range, especially when the
>> range includes leaf mappings at different levels. But I had some optimization to
>> skip leaf mappings in this version, so it should be close to your implementation
>> from a performance perspective. And it just walks the page table once instead of
>> twice, so it should be more efficient for a small split, for example 4K.
> I guess this is the crux of our disagreement. I think the "walks the table once
> for 4K" is a micro-optimization whose effect I doubt we would see in any
> benchmark results. In the absence of data, I'd prefer the simpler, smaller,
> easier-to-understand version.

I did a simple benchmark with the module stressor from stress-ng, using the 
command line below:

stress-ng --module 1 --module-name loop --module-ops 1000

It basically loads the loop module 1000 times. I saw a slight slowdown (2% - 
3%, averaging the time spent over 5 iterations) with your implementation on 
my AmpereOne machine. Per this data, it shouldn't result in any noticeable 
slowdown for real-life workloads.

Thanks,
Yang

>
> Both implementations are on list now; perhaps the maintainers can steer us.
>
> Thanks,
> Ryan




Thread overview: 20+ messages
2025-07-24 22:11 [v5 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
2025-07-24 22:11 ` [PATCH 1/4] arm64: Enable permission change on arm64 kernel block mappings Yang Shi
2025-07-24 22:11 ` [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
2025-08-01 14:36   ` Ryan Roberts
2025-08-04 23:20   ` Christoph Lameter (Ampere)
2025-07-24 22:11 ` [PATCH 3/4] arm64: mm: support large block mapping when rodata=full Yang Shi
2025-07-29 12:34   ` Dev Jain
2025-08-05 21:28     ` Yang Shi
2025-08-06  0:10       ` Yang Shi
2025-08-01 14:35   ` Ryan Roberts
2025-08-04 10:07     ` Ryan Roberts
2025-08-05 18:53     ` Yang Shi
2025-08-06  7:20       ` Ryan Roberts
2025-08-07  0:44         ` Yang Shi
2025-07-24 22:11 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
2025-07-26 11:10   ` kernel test robot
2025-08-01 16:14   ` Ryan Roberts
2025-08-05 18:59     ` Yang Shi
2025-08-05  7:54   ` Ryan Roberts
2025-08-05 17:45     ` Yang Shi
