* [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-05-31 2:41 [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
@ 2025-05-31 2:41 ` Yang Shi
2025-06-16 11:58 ` Ryan Roberts
2025-06-16 16:24 ` Ryan Roberts
0 siblings, 2 replies; 33+ messages in thread
From: Yang Shi @ 2025-05-31 2:41 UTC (permalink / raw)
To: ryan.roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
scott, cl
Cc: yang, linux-arm-kernel, linux-kernel
When rodata=full is specified, the kernel linear mapping has to be mapped at
PTE level since large block mappings can't be split due to the
break-before-make rule on ARM64.
This resulted in a couple of problems:
- performance degradation
- more TLB pressure
- memory waste for kernel page tables
With FEAT_BBM level 2 support, splitting a large block mapping into smaller
ones no longer requires making the page table entry invalid first. This
allows the kernel to split large block mappings on the fly.
Add kernel page table split support and use large block mappings by default
with rodata=full when FEAT_BBM level 2 is supported. When changing
permissions on the kernel linear mapping, the page table is split to a
smaller size as needed.
Machines without FEAT_BBM level 2 fall back to a PTE-mapped kernel linear
mapping when rodata=full.
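For illustration (not part of the patch itself), a typical path that now
triggers an on-the-fly split is a permission change on a vmalloc'd buffer,
whose linear-map alias is also updated when rodata=full; a minimal sketch:

/*
 * Illustrative sketch only: set_memory_ro() on a vmalloc'd buffer also
 * updates the linear-map alias of its backing pages, which now splits the
 * covering block/cont mapping on demand instead of relying on the linear
 * map being PTE-mapped up front.
 */
void *buf = vmalloc(PAGE_SIZE);

set_memory_ro((unsigned long)buf, 1);
set_memory_rw((unsigned long)buf, 1);
vfree(buf);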
With this we saw a significant performance boost in some benchmarks and much
lower memory consumption on my AmpereOne machine (192 cores, 1P) with 256GB
of memory.
* Memory use after boot
Before:
MemTotal: 258988984 kB
MemFree: 254821700 kB
After:
MemTotal: 259505132 kB
MemFree: 255410264 kB
Around 500MB more memory is free to use. The larger the machine, the more
memory is saved.
* Memcached
We saw performance degradation when running the Memcached benchmark with
rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
With this patchset, ops/sec increased by around 3.5% and P99 latency was
reduced by around 9.6%.
The gain mainly came from reduced kernel TLB misses: the kernel TLB MPKI was
reduced by 28.5%.
The benchmark data is now on par with rodata=on too.
* Disk encryption (dm-crypt) benchmark
Ran the fio benchmark with the command below on a 128G ramdisk (ext4) with
disk encryption (dm-crypt).
fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
--status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
--iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
--name=iops-test-job --eta-newline=1 --size 100G
IOPS increased by 90% - 150% (the variance is high, but the worst result of
the good case is still around 90% higher than the best result of the bad case).
Bandwidth increased and the average completion latency (clat) dropped
proportionally.
* Sequential file read
Read a 100G file sequentially on XFS (xfs_io read with the page cache
populated). Bandwidth increased by 150%.
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
arch/arm64/include/asm/cpufeature.h | 26 +++
arch/arm64/include/asm/mmu.h | 1 +
arch/arm64/include/asm/pgtable.h | 12 +-
arch/arm64/kernel/cpufeature.c | 2 +-
arch/arm64/mm/mmu.c | 269 +++++++++++++++++++++++++---
arch/arm64/mm/pageattr.c | 37 +++-
6 files changed, 319 insertions(+), 28 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 8f36ffa16b73..a95806980298 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -1053,6 +1053,32 @@ static inline bool cpu_has_lpa2(void)
#endif
}
+bool cpu_has_bbml2_noabort(unsigned int cpu_midr);
+
+static inline bool has_nobbml2_override(void)
+{
+ u64 mmfr2;
+ unsigned int bbm;
+
+ mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
+ mmfr2 &= ~id_aa64mmfr2_override.mask;
+ mmfr2 |= id_aa64mmfr2_override.val;
+ bbm = cpuid_feature_extract_unsigned_field(mmfr2,
+ ID_AA64MMFR2_EL1_BBM_SHIFT);
+ return bbm == 0;
+}
+
+/*
+ * Called at early boot stage on boot CPU before cpu info and cpu feature
+ * are ready.
+ */
+static inline bool bbml2_noabort_available(void)
+{
+ return IS_ENABLED(CONFIG_ARM64_BBML2_NOABORT) &&
+ cpu_has_bbml2_noabort(read_cpuid_id()) &&
+ !has_nobbml2_override();
+}
+
#endif /* __ASSEMBLY__ */
#endif
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 6e8aa8e72601..2693d63bf837 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
pgprot_t prot, bool page_mappings_only);
extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
extern void mark_linear_text_alias_ro(void);
+extern int split_linear_mapping(unsigned long start, unsigned long end);
/*
* This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d3b538be1500..bf3cef31d243 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -293,6 +293,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
}
+static inline pmd_t pmd_mknoncont(pmd_t pmd)
+{
+ return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
+}
+
static inline pte_t pte_mkdevmap(pte_t pte)
{
return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
@@ -769,7 +774,7 @@ static inline bool in_swapper_pgdir(void *addr)
((unsigned long)swapper_pg_dir & PAGE_MASK);
}
-static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
+static inline void __set_pmd_nosync(pmd_t *pmdp, pmd_t pmd)
{
#ifdef __PAGETABLE_PMD_FOLDED
if (in_swapper_pgdir(pmdp)) {
@@ -779,6 +784,11 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
#endif /* __PAGETABLE_PMD_FOLDED */
WRITE_ONCE(*pmdp, pmd);
+}
+
+static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
+{
+ __set_pmd_nosync(pmdp, pmd);
if (pmd_valid(pmd)) {
dsb(ishst);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e879bfcf853b..5fc2a4a804de 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2209,7 +2209,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
}
-static bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
+bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
{
/*
* We want to allow usage of bbml2 in as wide a range of kernel contexts
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 775c0536b194..4c5d3aa35d62 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -45,6 +45,7 @@
#define NO_BLOCK_MAPPINGS BIT(0)
#define NO_CONT_MAPPINGS BIT(1)
#define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */
+#define SPLIT_MAPPINGS BIT(3)
u64 kimage_voffset __ro_after_init;
EXPORT_SYMBOL(kimage_voffset);
@@ -166,12 +167,91 @@ static void init_clear_pgtable(void *table)
dsb(ishst);
}
+static void split_cont_pte(pte_t *ptep)
+{
+ pte_t *_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
+ pte_t _pte;
+
+ for (int i = 0; i < CONT_PTES; i++, _ptep++) {
+ _pte = READ_ONCE(*_ptep);
+ _pte = pte_mknoncont(_pte);
+ __set_pte_nosync(_ptep, _pte);
+ }
+
+ dsb(ishst);
+ isb();
+}
+
+static void split_cont_pmd(pmd_t *pmdp)
+{
+ pmd_t *_pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
+ pmd_t _pmd;
+
+ for (int i = 0; i < CONT_PMDS; i++, _pmdp++) {
+ _pmd = READ_ONCE(*_pmdp);
+ _pmd = pmd_mknoncont(_pmd);
+ set_pmd(_pmdp, _pmd);
+ }
+}
+
+static void split_pmd(pmd_t pmd, phys_addr_t pte_phys, int flags)
+{
+ pte_t *ptep;
+ unsigned long pfn;
+ pgprot_t prot;
+
+ pfn = pmd_pfn(pmd);
+ prot = pmd_pgprot(pmd);
+ prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PTE_TYPE_PAGE);
+
+ ptep = (pte_t *)phys_to_virt(pte_phys);
+
+ /* It must be naturally aligned if PMD is leaf */
+ if ((flags & NO_CONT_MAPPINGS) == 0)
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
+ __set_pte_nosync(ptep, pfn_pte(pfn, prot));
+
+ dsb(ishst);
+}
+
+static void split_pud(pud_t pud, phys_addr_t pmd_phys, int flags)
+{
+ pmd_t *pmdp;
+ unsigned long pfn;
+ pgprot_t prot;
+ unsigned int step = PMD_SIZE >> PAGE_SHIFT;
+
+ pfn = pud_pfn(pud);
+ prot = pud_pgprot(pud);
+ pmdp = (pmd_t *)phys_to_virt(pmd_phys);
+
+ /* It must be naturally aligned if PUD is leaf */
+ if ((flags & NO_CONT_MAPPINGS) == 0)
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
+ __set_pmd_nosync(pmdp, pfn_pmd(pfn, prot));
+ pfn += step;
+ }
+
+ dsb(ishst);
+}
+
static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
- phys_addr_t phys, pgprot_t prot)
+ phys_addr_t phys, pgprot_t prot, int flags)
{
do {
pte_t old_pte = __ptep_get(ptep);
+ if (flags & SPLIT_MAPPINGS) {
+ if (pte_cont(old_pte))
+ split_cont_pte(ptep);
+
+ continue;
+ }
+
/*
* Required barriers to make this visible to the table walker
* are deferred to the end of alloc_init_cont_pte().
@@ -199,11 +279,20 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
pmd_t pmd = READ_ONCE(*pmdp);
pte_t *ptep;
int ret = 0;
+ bool split = flags & SPLIT_MAPPINGS;
+ pmdval_t pmdval;
+ phys_addr_t pte_phys;
- BUG_ON(pmd_sect(pmd));
- if (pmd_none(pmd)) {
- pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
- phys_addr_t pte_phys;
+ if (!split)
+ BUG_ON(pmd_sect(pmd));
+
+ if (pmd_none(pmd) && split) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (pmd_none(pmd) || (split && pmd_leaf(pmd))) {
+ pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
if (flags & NO_EXEC_MAPPINGS)
pmdval |= PMD_TABLE_PXN;
@@ -213,6 +302,18 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
ret = -ENOMEM;
goto out;
}
+ }
+
+ if (split) {
+ if (pmd_leaf(pmd)) {
+ split_pmd(pmd, pte_phys, flags);
+ __pmd_populate(pmdp, pte_phys, pmdval);
+ }
+ ptep = pte_offset_kernel(pmdp, addr);
+ goto split_pgtable;
+ }
+
+ if (pmd_none(pmd)) {
ptep = pte_set_fixmap(pte_phys);
init_clear_pgtable(ptep);
ptep += pte_index(addr);
@@ -222,17 +323,28 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
ptep = pte_set_fixmap_offset(pmdp, addr);
}
+split_pgtable:
do {
pgprot_t __prot = prot;
next = pte_cont_addr_end(addr, end);
+ if (split) {
+ pte_t pteval = READ_ONCE(*ptep);
+ bool cont = pte_cont(pteval);
+
+ if (cont &&
+ ((addr | next) & ~CONT_PTE_MASK) == 0 &&
+ (flags & NO_CONT_MAPPINGS) == 0)
+ continue;
+ }
+
/* use a contiguous mapping if the range is suitably aligned */
if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) &&
(flags & NO_CONT_MAPPINGS) == 0)
__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
- init_pte(ptep, addr, next, phys, __prot);
+ init_pte(ptep, addr, next, phys, __prot, flags);
ptep += pte_index(next) - pte_index(addr);
phys += next - addr;
@@ -243,7 +355,8 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
* ensure that all previous pgtable writes are visible to the table
* walker.
*/
- pte_clear_fixmap();
+ if (!split)
+ pte_clear_fixmap();
out:
return ret;
@@ -255,15 +368,29 @@ static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
{
unsigned long next;
int ret = 0;
+ bool split = flags & SPLIT_MAPPINGS;
+ bool cont;
do {
pmd_t old_pmd = READ_ONCE(*pmdp);
next = pmd_addr_end(addr, end);
+ if (split && pmd_leaf(old_pmd)) {
+ cont = pgprot_val(pmd_pgprot(old_pmd)) & PTE_CONT;
+ if (cont)
+ split_cont_pmd(pmdp);
+
+ /* The PMD is fully contained in the range */
+ if (((addr | next) & ~PMD_MASK) == 0 &&
+ (flags & NO_BLOCK_MAPPINGS) == 0)
+ continue;
+ }
+
/* try section mapping first */
if (((addr | next | phys) & ~PMD_MASK) == 0 &&
- (flags & NO_BLOCK_MAPPINGS) == 0) {
+ (flags & NO_BLOCK_MAPPINGS) == 0 &&
+ (flags & SPLIT_MAPPINGS) == 0) {
pmd_set_huge(pmdp, phys, prot);
/*
@@ -278,7 +405,7 @@ static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
if (ret)
break;
- BUG_ON(pmd_val(old_pmd) != 0 &&
+ BUG_ON(!split && pmd_val(old_pmd) != 0 &&
pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
}
phys += next - addr;
@@ -296,14 +423,23 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
int ret = 0;
pud_t pud = READ_ONCE(*pudp);
pmd_t *pmdp;
+ bool split = flags & SPLIT_MAPPINGS;
+ pudval_t pudval;
+ phys_addr_t pmd_phys;
/*
* Check for initial section mappings in the pgd/pud.
*/
- BUG_ON(pud_sect(pud));
- if (pud_none(pud)) {
- pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
- phys_addr_t pmd_phys;
+ if (!split)
+ BUG_ON(pud_sect(pud));
+
+ if (pud_none(pud) && split) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (pud_none(pud) || (split && pud_leaf(pud))) {
+ pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
if (flags & NO_EXEC_MAPPINGS)
pudval |= PUD_TABLE_PXN;
@@ -313,6 +449,18 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
ret = -ENOMEM;
goto out;
}
+ }
+
+ if (split) {
+ if (pud_leaf(pud)) {
+ split_pud(pud, pmd_phys, flags);
+ __pud_populate(pudp, pmd_phys, pudval);
+ }
+ pmdp = pmd_offset(pudp, addr);
+ goto split_pgtable;
+ }
+
+ if (pud_none(pud)) {
pmdp = pmd_set_fixmap(pmd_phys);
init_clear_pgtable(pmdp);
pmdp += pmd_index(addr);
@@ -322,11 +470,22 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
pmdp = pmd_set_fixmap_offset(pudp, addr);
}
+split_pgtable:
do {
pgprot_t __prot = prot;
next = pmd_cont_addr_end(addr, end);
+ if (split) {
+ pmd_t pmdval = READ_ONCE(*pmdp);
+ bool cont = pgprot_val(pmd_pgprot(pmdval)) & PTE_CONT;
+
+ if (cont &&
+ ((addr | next) & ~CONT_PMD_MASK) == 0 &&
+ (flags & NO_CONT_MAPPINGS) == 0)
+ continue;
+ }
+
/* use a contiguous mapping if the range is suitably aligned */
if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) &&
(flags & NO_CONT_MAPPINGS) == 0)
@@ -340,7 +499,8 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
phys += next - addr;
} while (addr = next, addr != end);
- pmd_clear_fixmap();
+ if (!split)
+ pmd_clear_fixmap();
out:
return ret;
@@ -355,6 +515,16 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
int ret = 0;
p4d_t p4d = READ_ONCE(*p4dp);
pud_t *pudp;
+ bool split = flags & SPLIT_MAPPINGS;
+
+ if (split) {
+ if (p4d_none(p4d)) {
+ ret = -EINVAL;
+ goto out;
+ }
+ pudp = pud_offset(p4dp, addr);
+ goto split_pgtable;
+ }
if (p4d_none(p4d)) {
p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN | P4D_TABLE_AF;
@@ -377,17 +547,26 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
pudp = pud_set_fixmap_offset(p4dp, addr);
}
+split_pgtable:
do {
pud_t old_pud = READ_ONCE(*pudp);
next = pud_addr_end(addr, end);
+ if (split && pud_leaf(old_pud)) {
+ /* The PUD is fully contained in the range */
+ if (((addr | next) & ~PUD_MASK) == 0 &&
+ (flags & NO_BLOCK_MAPPINGS) == 0)
+ continue;
+ }
+
/*
* For 4K granule only, attempt to put down a 1GB block
*/
if (pud_sect_supported() &&
((addr | next | phys) & ~PUD_MASK) == 0 &&
- (flags & NO_BLOCK_MAPPINGS) == 0) {
+ (flags & NO_BLOCK_MAPPINGS) == 0 &&
+ (flags & SPLIT_MAPPINGS) == 0) {
pud_set_huge(pudp, phys, prot);
/*
@@ -402,13 +581,14 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
if (ret)
break;
- BUG_ON(pud_val(old_pud) != 0 &&
+ BUG_ON(!split && pud_val(old_pud) != 0 &&
pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
}
phys += next - addr;
} while (pudp++, addr = next, addr != end);
- pud_clear_fixmap();
+ if (!split)
+ pud_clear_fixmap();
out:
return ret;
@@ -423,6 +603,16 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
int ret = 0;
pgd_t pgd = READ_ONCE(*pgdp);
p4d_t *p4dp;
+ bool split = flags & SPLIT_MAPPINGS;
+
+ if (split) {
+ if (pgd_none(pgd)) {
+ ret = -EINVAL;
+ goto out;
+ }
+ p4dp = p4d_offset(pgdp, addr);
+ goto split_pgtable;
+ }
if (pgd_none(pgd)) {
pgdval_t pgdval = PGD_TYPE_TABLE | PGD_TABLE_UXN | PGD_TABLE_AF;
@@ -445,6 +635,7 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
p4dp = p4d_set_fixmap_offset(pgdp, addr);
}
+split_pgtable:
do {
p4d_t old_p4d = READ_ONCE(*p4dp);
@@ -461,7 +652,8 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
phys += next - addr;
} while (p4dp++, addr = next, addr != end);
- p4d_clear_fixmap();
+ if (!split)
+ p4d_clear_fixmap();
out:
return ret;
@@ -557,6 +749,25 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
return pa;
}
+int split_linear_mapping(unsigned long start, unsigned long end)
+{
+ int ret = 0;
+
+ if (!system_supports_bbml2_noabort())
+ return 0;
+
+ mmap_write_lock(&init_mm);
+ /* NO_EXEC_MAPPINGS is needed when splitting linear map */
+ ret = __create_pgd_mapping_locked(init_mm.pgd, virt_to_phys((void *)start),
+ start, (end - start), __pgprot(0),
+ __pgd_pgtable_alloc,
+ NO_EXEC_MAPPINGS | SPLIT_MAPPINGS);
+ mmap_write_unlock(&init_mm);
+ flush_tlb_kernel_range(start, end);
+
+ return ret;
+}
+
/*
* This function can only be used to modify existing table entries,
* without allocating new levels of table. Note that this permits the
@@ -676,6 +887,24 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
#endif /* CONFIG_KFENCE */
+static inline bool force_pte_mapping(void)
+{
+ /*
+ * Can't use cpufeature API to determine whether BBML2 supported
+ * or not since cpufeature have not been finalized yet.
+ *
+ * Checking the boot CPU only for now. If the boot CPU has
+ * BBML2, paint linear mapping with block mapping. If it turns
+ * out the secondary CPUs don't support BBML2 once cpufeature is
+ * fininalized, the linear mapping will be repainted with PTE
+ * mapping.
+ */
+ return (rodata_full && !bbml2_noabort_available()) ||
+ debug_pagealloc_enabled() ||
+ arm64_kfence_can_set_direct_map() ||
+ is_realm_world();
+}
+
static void __init map_mem(pgd_t *pgdp)
{
static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
@@ -701,7 +930,7 @@ static void __init map_mem(pgd_t *pgdp)
early_kfence_pool = arm64_kfence_alloc_pool();
- if (can_set_direct_map())
+ if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
/*
@@ -1402,7 +1631,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
VM_BUG_ON(!mhp_range_allowed(start, size, true));
- if (can_set_direct_map())
+ if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 39fd1f7ff02a..25c068712cb5 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -10,6 +10,7 @@
#include <linux/vmalloc.h>
#include <asm/cacheflush.h>
+#include <asm/mmu.h>
#include <asm/pgtable-prot.h>
#include <asm/set_memory.h>
#include <asm/tlbflush.h>
@@ -42,6 +43,8 @@ static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
struct page_change_data *cdata = data;
pte_t pte = __ptep_get(ptep);
+ BUG_ON(pte_cont(pte));
+
pte = clear_pte_bit(pte, cdata->clear_mask);
pte = set_pte_bit(pte, cdata->set_mask);
@@ -80,8 +83,9 @@ static int change_memory_common(unsigned long addr, int numpages,
unsigned long start = addr;
unsigned long size = PAGE_SIZE * numpages;
unsigned long end = start + size;
+ unsigned long l_start;
struct vm_struct *area;
- int i;
+ int i, ret;
if (!PAGE_ALIGNED(addr)) {
start &= PAGE_MASK;
@@ -118,7 +122,12 @@ static int change_memory_common(unsigned long addr, int numpages,
if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
pgprot_val(clear_mask) == PTE_RDONLY)) {
for (i = 0; i < area->nr_pages; i++) {
- __change_memory_common((u64)page_address(area->pages[i]),
+ l_start = (u64)page_address(area->pages[i]);
+ ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
+ if (WARN_ON_ONCE(ret))
+ return ret;
+
+ __change_memory_common(l_start,
PAGE_SIZE, set_mask, clear_mask);
}
}
@@ -174,6 +183,9 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
int set_direct_map_invalid_noflush(struct page *page)
{
+ unsigned long l_start;
+ int ret;
+
struct page_change_data data = {
.set_mask = __pgprot(0),
.clear_mask = __pgprot(PTE_VALID),
@@ -182,13 +194,21 @@ int set_direct_map_invalid_noflush(struct page *page)
if (!can_set_direct_map())
return 0;
+ l_start = (unsigned long)page_address(page);
+ ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
+ if (WARN_ON_ONCE(ret))
+ return ret;
+
return apply_to_page_range(&init_mm,
- (unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ l_start, PAGE_SIZE, change_page_range,
+ &data);
}
int set_direct_map_default_noflush(struct page *page)
{
+ unsigned long l_start;
+ int ret;
+
struct page_change_data data = {
.set_mask = __pgprot(PTE_VALID | PTE_WRITE),
.clear_mask = __pgprot(PTE_RDONLY),
@@ -197,9 +217,14 @@ int set_direct_map_default_noflush(struct page *page)
if (!can_set_direct_map())
return 0;
+ l_start = (unsigned long)page_address(page);
+ ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
+ if (WARN_ON_ONCE(ret))
+ return ret;
+
return apply_to_page_range(&init_mm,
- (unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ l_start, PAGE_SIZE, change_page_range,
+ &data);
}
static int __set_memory_enc_dec(unsigned long addr,
--
2.48.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-05-31 2:41 ` [PATCH 3/4] arm64: mm: support " Yang Shi
@ 2025-06-16 11:58 ` Ryan Roberts
2025-06-16 12:33 ` Ryan Roberts
2025-06-16 16:24 ` Ryan Roberts
1 sibling, 1 reply; 33+ messages in thread
From: Ryan Roberts @ 2025-06-16 11:58 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, Miko.Lenczewski, dev.jain, scott,
cl
Cc: linux-arm-kernel, linux-kernel
On 31/05/2025 03:41, Yang Shi wrote:
> When rodata=full is specified, kernel linear mapping has to be mapped at
> PTE level since large page table can't be split due to break-before-make
> rule on ARM64.
>
> This resulted in a couple of problems:
> - performance degradation
> - more TLB pressure
> - memory waste for kernel page table
>
> With FEAT_BBM level 2 support, splitting large block page table to
> smaller ones doesn't need to make the page table entry invalid anymore.
> This allows kernel split large block mapping on the fly.
>
> Add kernel page table split support and use large block mapping by
> default when FEAT_BBM level 2 is supported for rodata=full. When
> changing permissions for kernel linear mapping, the page table will be
> split to smaller size.
>
> The machine without FEAT_BBM level 2 will fallback to have kernel linear
> mapping PTE-mapped when rodata=full.
>
> With this we saw significant performance boost with some benchmarks and
> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
> 256GB memory.
>
> * Memory use after boot
> Before:
> MemTotal: 258988984 kB
> MemFree: 254821700 kB
>
> After:
> MemTotal: 259505132 kB
> MemFree: 255410264 kB
>
> Around 500MB more memory are free to use. The larger the machine, the
> more memory saved.
>
> * Memcached
> We saw performance degradation when running Memcached benchmark with
> rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
> With this patchset we saw ops/sec is increased by around 3.5%, P99
> latency is reduced by around 9.6%.
> The gain mainly came from reduced kernel TLB misses. The kernel TLB
> MPKI is reduced by 28.5%.
>
> The benchmark data is now on par with rodata=on too.
>
> * Disk encryption (dm-crypt) benchmark
> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
> encryption (by dm-crypt).
> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
> --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
> --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
> --name=iops-test-job --eta-newline=1 --size 100G
>
> The IOPS is increased by 90% - 150% (the variance is high, but the worst
> number of good case is around 90% more than the best number of bad case).
> The bandwidth is increased and the avg clat is reduced proportionally.
>
> * Sequential file read
> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
> The bandwidth is increased by 150%.
>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
> arch/arm64/include/asm/cpufeature.h | 26 +++
> arch/arm64/include/asm/mmu.h | 1 +
> arch/arm64/include/asm/pgtable.h | 12 +-
> arch/arm64/kernel/cpufeature.c | 2 +-
> arch/arm64/mm/mmu.c | 269 +++++++++++++++++++++++++---
> arch/arm64/mm/pageattr.c | 37 +++-
> 6 files changed, 319 insertions(+), 28 deletions(-)
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 8f36ffa16b73..a95806980298 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -1053,6 +1053,32 @@ static inline bool cpu_has_lpa2(void)
> #endif
> }
>
> +bool cpu_has_bbml2_noabort(unsigned int cpu_midr);
> +
> +static inline bool has_nobbml2_override(void)
> +{
> + u64 mmfr2;
> + unsigned int bbm;
> +
> + mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
> + mmfr2 &= ~id_aa64mmfr2_override.mask;
> + mmfr2 |= id_aa64mmfr2_override.val;
> + bbm = cpuid_feature_extract_unsigned_field(mmfr2,
> + ID_AA64MMFR2_EL1_BBM_SHIFT);
> + return bbm == 0;
> +}
> +
> +/*
> + * Called at early boot stage on boot CPU before cpu info and cpu feature
> + * are ready.
> + */
> +static inline bool bbml2_noabort_available(void)
> +{
> + return IS_ENABLED(CONFIG_ARM64_BBML2_NOABORT) &&
> + cpu_has_bbml2_noabort(read_cpuid_id()) &&
> + !has_nobbml2_override();
Based on Will's feedback, the Kconfig option and the cmdline override will both
disappear in Miko's next version and we will only use the MIDR list to decide
BBML2_NOABORT status, so this will simplify significantly. Sorry about the
churn here.
> +}
> +
> #endif /* __ASSEMBLY__ */
>
> #endif
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 6e8aa8e72601..2693d63bf837 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> pgprot_t prot, bool page_mappings_only);
> extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
> extern void mark_linear_text_alias_ro(void);
> +extern int split_linear_mapping(unsigned long start, unsigned long end);
nit: Perhaps split_leaf_mapping() or split_kernel_pgtable_mapping() or something
similar would be a more generic name, which will benefit us in future when using
this for vmalloc too?
>
> /*
> * This check is triggered during the early boot before the cpufeature
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index d3b538be1500..bf3cef31d243 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -293,6 +293,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
> return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
> }
>
> +static inline pmd_t pmd_mknoncont(pmd_t pmd)
> +{
> + return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
> +}
> +
> static inline pte_t pte_mkdevmap(pte_t pte)
> {
> return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
> @@ -769,7 +774,7 @@ static inline bool in_swapper_pgdir(void *addr)
> ((unsigned long)swapper_pg_dir & PAGE_MASK);
> }
>
> -static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
> +static inline void __set_pmd_nosync(pmd_t *pmdp, pmd_t pmd)
> {
> #ifdef __PAGETABLE_PMD_FOLDED
> if (in_swapper_pgdir(pmdp)) {
> @@ -779,6 +784,11 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
> #endif /* __PAGETABLE_PMD_FOLDED */
>
> WRITE_ONCE(*pmdp, pmd);
> +}
> +
> +static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
> +{
> + __set_pmd_nosync(pmdp, pmd);
>
> if (pmd_valid(pmd)) {
> dsb(ishst);
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index e879bfcf853b..5fc2a4a804de 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2209,7 +2209,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
> return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
> }
>
> -static bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
> +bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
> {
> /*
> * We want to allow usage of bbml2 in as wide a range of kernel contexts
[...] I'll send a separate response for the mmu.c table walker changes.
>
> +int split_linear_mapping(unsigned long start, unsigned long end)
> +{
> + int ret = 0;
> +
> + if (!system_supports_bbml2_noabort())
> + return 0;
Hmm... I guess the thinking here is that for !BBML2_NOABORT you are expecting
this function should only be called in the first place if we know we are
pte-mapped. So I guess this is ok... it just means that if we are not
pte-mapped, warnings will be emitted while walking the pgtables (as is the case
today). So I think this approach is ok.
> +
> + mmap_write_lock(&init_mm);
What is the lock protecting? I was originally thinking no locking should be
needed because it's not needed for permission changes today; but I think you are
right here and we do need locking; multiple owners could share a large leaf
mapping, I guess? And in that case you could get concurrent attempts to split
from both owners.
I'm not really a fan of adding the extra locking though; it might introduce a
new bottleneck. I wonder if there is a way we could do this locklessly? i.e.
allocate the new table, then cmpxchg to insert and the loser has to free? That
doesn't work for contiguous mappings though...
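Roughly what I have in mind, as a very rough PMD-level sketch only
(pgtable_alloc()/pgtable_free() are placeholder names, the table descriptor
attribute bits are elided, and as said it doesn't help the contiguous case):

pmd_t old = READ_ONCE(*pmdp);

if (pmd_leaf(old)) {
        phys_addr_t pte_phys = pgtable_alloc(PAGE_SHIFT);       /* placeholder */

        /* Pre-populate the new PTE table from the old block entry. */
        split_pmd(old, pte_phys, flags);

        /* Concurrent splitters race; one cmpxchg wins, the loser frees. */
        if (cmpxchg(&pmd_val(*pmdp), pmd_val(old),
                    __phys_to_pmd_val(pte_phys) | PMD_TYPE_TABLE) != pmd_val(old))
                pgtable_free(pte_phys);                         /* placeholder */
}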
> + /* NO_EXEC_MAPPINGS is needed when splitting linear map */
> + ret = __create_pgd_mapping_locked(init_mm.pgd, virt_to_phys((void *)start),
> + start, (end - start), __pgprot(0),
> + __pgd_pgtable_alloc,
> + NO_EXEC_MAPPINGS | SPLIT_MAPPINGS);
> + mmap_write_unlock(&init_mm);
> + flush_tlb_kernel_range(start, end);
I don't believe we should need to flush the TLB when only changing entry sizes
when BBML2 is supported. Miko's series has a massive comment explaining the
reasoning. That applies to user space though. We should consider if this all
works safely for kernel space too, and hopefully remove the flush.
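If that reasoning does carry over, the tail of split_linear_mapping() could
become as simple as the sketch below (relying on the early return above so we
only ever get here with BBML2_NOABORT):

ret = __create_pgd_mapping_locked(init_mm.pgd, virt_to_phys((void *)start),
                                  start, end - start, __pgprot(0),
                                  __pgd_pgtable_alloc,
                                  NO_EXEC_MAPPINGS | SPLIT_MAPPINGS);
mmap_write_unlock(&init_mm);
/*
 * No flush_tlb_kernel_range(): the old block/cont entries still translate
 * the same VA->PA range with the same permissions, so per the reasoning in
 * Miko's series no invalidation should be needed here.
 */
return ret;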
> +
> + return ret;
> +}
> +
> /*
> * This function can only be used to modify existing table entries,
> * without allocating new levels of table. Note that this permits the
> @@ -676,6 +887,24 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>
> #endif /* CONFIG_KFENCE */
>
> +static inline bool force_pte_mapping(void)
> +{
> + /*
> + * Can't use cpufeature API to determine whether BBML2 supported
> + * or not since cpufeature have not been finalized yet.
> + *
> + * Checking the boot CPU only for now. If the boot CPU has
> + * BBML2, paint linear mapping with block mapping. If it turns
> + * out the secondary CPUs don't support BBML2 once cpufeature is
+ * finalized, the linear mapping will be repainted with PTE
> + * mapping.
> + */
> + return (rodata_full && !bbml2_noabort_available()) ||
So this is the case where we don't have BBML2 and need to modify protections at
page granularity - I agree we need to force pte mappings here.
> + debug_pagealloc_enabled() ||
This is the case where every page is made invalid on free and valid on
allocation, so there is no point in having block mappings: they would soon
degenerate into page mappings because we have to split on every allocation.
Agree here too.
> + arm64_kfence_can_set_direct_map() ||
After looking into how kfence works, I don't agree with this one. It has a
dedicated pool where it allocates from. That pool may be allocated early by the
arch or may be allocated late by the core code. Either way, kfence will only
modify protections within that pool. Your current approach is forcing pte
mappings if the pool allocation is late (i.e. not performed by the arch code
during boot). But I think "late" is the most common case; kfence is compiled
into the kernel but not active at boot. Certainly that's how my Ubuntu kernel is
configured. So I think we should just ignore kfence here. If it's "early" then
we map the pool with page granularity (as an optimization). If it's "late" your
splitter will degenerate the whole kfence pool to page mappings over time as
kfence_protect_page() -> set_memory_valid() is called. But the bulk of the
linear map will remain mapped with large blocks.
> + is_realm_world();
I think the only reason this requires pte mappings is for
__set_memory_enc_dec(). But that can now deal with block mappings given the
ability to split the mappings as needed. So I think this condition can be
removed too.
> +}
Additionally, for can_set_direct_map(): at minimum its comment should be tidied
up, but really I think it should return true if "BBML2_NOABORT ||
force_pte_mapping()", because those are the conditions under which we can now
safely modify the linear map.
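i.e. roughly the following sketch (assuming force_pte_mapping() keeps the shape
discussed above and is made visible to pageattr.c):

bool can_set_direct_map(void)
{
        /*
         * The linear map can be safely modified at page granularity either
         * when it is forced to be PTE-mapped, or when BBML2_NOABORT lets us
         * split block mappings live.
         */
        return force_pte_mapping() || system_supports_bbml2_noabort();
}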
> +
> static void __init map_mem(pgd_t *pgdp)
> {
> static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
> @@ -701,7 +930,7 @@ static void __init map_mem(pgd_t *pgdp)
>
> early_kfence_pool = arm64_kfence_alloc_pool();
>
> - if (can_set_direct_map())
> + if (force_pte_mapping())
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
> /*
> @@ -1402,7 +1631,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>
> VM_BUG_ON(!mhp_range_allowed(start, size, true));
>
> - if (can_set_direct_map())
> + if (force_pte_mapping())
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
> __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 39fd1f7ff02a..25c068712cb5 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -10,6 +10,7 @@
> #include <linux/vmalloc.h>
>
> #include <asm/cacheflush.h>
> +#include <asm/mmu.h>
> #include <asm/pgtable-prot.h>
> #include <asm/set_memory.h>
> #include <asm/tlbflush.h>
> @@ -42,6 +43,8 @@ static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
> struct page_change_data *cdata = data;
> pte_t pte = __ptep_get(ptep);
>
> + BUG_ON(pte_cont(pte));
I don't think this is required; we want to enable using contiguous mappings
where it makes sense. As long as we have BBML2, we can update contiguous pte
mappings in place, as long as we update all of the ptes in the contiguous block.
split_linear_mapping() should either have converted to non-cont mappings if the
contiguous block straddled the split point, or would have left them as is (or
downgraded a PMD-block to a contpte block) if fully contained within the split
range.
> +
> pte = clear_pte_bit(pte, cdata->clear_mask);
> pte = set_pte_bit(pte, cdata->set_mask);
>
> @@ -80,8 +83,9 @@ static int change_memory_common(unsigned long addr, int numpages,
> unsigned long start = addr;
> unsigned long size = PAGE_SIZE * numpages;
> unsigned long end = start + size;
> + unsigned long l_start;
> struct vm_struct *area;
> - int i;
> + int i, ret;
>
> if (!PAGE_ALIGNED(addr)) {
> start &= PAGE_MASK;
> @@ -118,7 +122,12 @@ static int change_memory_common(unsigned long addr, int numpages,
> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
> pgprot_val(clear_mask) == PTE_RDONLY)) {
> for (i = 0; i < area->nr_pages; i++) {
> - __change_memory_common((u64)page_address(area->pages[i]),
> + l_start = (u64)page_address(area->pages[i]);
> + ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
> + if (WARN_ON_ONCE(ret))
> + return ret;
I don't think this is the right place to integrate; I think the split should be
done inside __change_memory_common(). Then it caters to all possibilities (i.e.
set_memory_valid() and __set_memory_enc_dec()). This means it will run for
vmalloc too, but for now, that will be a nop because everything should already
be split as required on entry and in future we will get that for free.
Once you have integrated Dev's series, the hook becomes
___change_memory_common() (3 underscores)...
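i.e. something along these lines, as a sketch (error handling trimmed, keeping
the split_linear_mapping() name from this version; with Dev's series the hook
would sit in ___change_memory_common() instead):

static int __change_memory_common(unsigned long start, unsigned long size,
                                  pgprot_t set_mask, pgprot_t clear_mask)
{
        struct page_change_data data = {
                .set_mask = set_mask,
                .clear_mask = clear_mask,
        };
        int ret;

        /* Split any covering block/cont mappings before touching PTEs. */
        ret = split_linear_mapping(start, start + size);
        if (WARN_ON_ONCE(ret))
                return ret;

        ret = apply_to_page_range(&init_mm, start, size, change_page_range,
                                  &data);
        flush_tlb_kernel_range(start, start + size);

        return ret;
}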
> +
> + __change_memory_common(l_start,
> PAGE_SIZE, set_mask, clear_mask);
> }
> }
> @@ -174,6 +183,9 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
>
> int set_direct_map_invalid_noflush(struct page *page)
> {
> + unsigned long l_start;
> + int ret;
> +
> struct page_change_data data = {
> .set_mask = __pgprot(0),
> .clear_mask = __pgprot(PTE_VALID),
> @@ -182,13 +194,21 @@ int set_direct_map_invalid_noflush(struct page *page)
> if (!can_set_direct_map())
> return 0;
>
> + l_start = (unsigned long)page_address(page);
> + ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
> + if (WARN_ON_ONCE(ret))
> + return ret;
> +
> return apply_to_page_range(&init_mm,
> - (unsigned long)page_address(page),
> - PAGE_SIZE, change_page_range, &data);
> + l_start, PAGE_SIZE, change_page_range,
> + &data);
...and once integrated with Dev's series you don't need any changes here...
> }
>
> int set_direct_map_default_noflush(struct page *page)
> {
> + unsigned long l_start;
> + int ret;
> +
> struct page_change_data data = {
> .set_mask = __pgprot(PTE_VALID | PTE_WRITE),
> .clear_mask = __pgprot(PTE_RDONLY),
> @@ -197,9 +217,14 @@ int set_direct_map_default_noflush(struct page *page)
> if (!can_set_direct_map())
> return 0;
>
> + l_start = (unsigned long)page_address(page);
> + ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
> + if (WARN_ON_ONCE(ret))
> + return ret;
> +
> return apply_to_page_range(&init_mm,
> - (unsigned long)page_address(page),
> - PAGE_SIZE, change_page_range, &data);
> + l_start, PAGE_SIZE, change_page_range,
> + &data);
...or here.
Thanks,
Ryan
> }
>
> static int __set_memory_enc_dec(unsigned long addr,
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-06-16 11:58 ` Ryan Roberts
@ 2025-06-16 12:33 ` Ryan Roberts
2025-06-17 21:01 ` Yang Shi
0 siblings, 1 reply; 33+ messages in thread
From: Ryan Roberts @ 2025-06-16 12:33 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, Miko.Lenczewski, dev.jain, scott,
cl
Cc: linux-arm-kernel, linux-kernel
On 16/06/2025 12:58, Ryan Roberts wrote:
> On 31/05/2025 03:41, Yang Shi wrote:
>> When rodata=full is specified, kernel linear mapping has to be mapped at
>> PTE level since large page table can't be split due to break-before-make
>> rule on ARM64.
>>
>> This resulted in a couple of problems:
>> - performance degradation
>> - more TLB pressure
>> - memory waste for kernel page table
>>
>> With FEAT_BBM level 2 support, splitting large block page table to
>> smaller ones doesn't need to make the page table entry invalid anymore.
>> This allows kernel split large block mapping on the fly.
>>
>> Add kernel page table split support and use large block mapping by
>> default when FEAT_BBM level 2 is supported for rodata=full. When
>> changing permissions for kernel linear mapping, the page table will be
>> split to smaller size.
>>
>> The machine without FEAT_BBM level 2 will fallback to have kernel linear
>> mapping PTE-mapped when rodata=full.
>>
>> With this we saw significant performance boost with some benchmarks and
>> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
>> 256GB memory.
>>
>> * Memory use after boot
>> Before:
>> MemTotal: 258988984 kB
>> MemFree: 254821700 kB
>>
>> After:
>> MemTotal: 259505132 kB
>> MemFree: 255410264 kB
>>
>> Around 500MB more memory are free to use. The larger the machine, the
>> more memory saved.
>>
>> * Memcached
>> We saw performance degradation when running Memcached benchmark with
>> rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
>> With this patchset we saw ops/sec is increased by around 3.5%, P99
>> latency is reduced by around 9.6%.
>> The gain mainly came from reduced kernel TLB misses. The kernel TLB
>> MPKI is reduced by 28.5%.
>>
>> The benchmark data is now on par with rodata=on too.
>>
>> * Disk encryption (dm-crypt) benchmark
>> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
>> encryption (by dm-crypt).
>> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>> --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>> --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>> --name=iops-test-job --eta-newline=1 --size 100G
>>
>> The IOPS is increased by 90% - 150% (the variance is high, but the worst
>> number of good case is around 90% more than the best number of bad case).
>> The bandwidth is increased and the avg clat is reduced proportionally.
>>
>> * Sequential file read
>> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
>> The bandwidth is increased by 150%.
>>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>> arch/arm64/include/asm/cpufeature.h | 26 +++
>> arch/arm64/include/asm/mmu.h | 1 +
>> arch/arm64/include/asm/pgtable.h | 12 +-
>> arch/arm64/kernel/cpufeature.c | 2 +-
>> arch/arm64/mm/mmu.c | 269 +++++++++++++++++++++++++---
>> arch/arm64/mm/pageattr.c | 37 +++-
>> 6 files changed, 319 insertions(+), 28 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index 8f36ffa16b73..a95806980298 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -1053,6 +1053,32 @@ static inline bool cpu_has_lpa2(void)
>> #endif
>> }
>>
>> +bool cpu_has_bbml2_noabort(unsigned int cpu_midr);
>> +
>> +static inline bool has_nobbml2_override(void)
>> +{
>> + u64 mmfr2;
>> + unsigned int bbm;
>> +
>> + mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
>> + mmfr2 &= ~id_aa64mmfr2_override.mask;
>> + mmfr2 |= id_aa64mmfr2_override.val;
>> + bbm = cpuid_feature_extract_unsigned_field(mmfr2,
>> + ID_AA64MMFR2_EL1_BBM_SHIFT);
>> + return bbm == 0;
>> +}
>> +
>> +/*
>> + * Called at early boot stage on boot CPU before cpu info and cpu feature
>> + * are ready.
>> + */
>> +static inline bool bbml2_noabort_available(void)
>> +{
>> + return IS_ENABLED(CONFIG_ARM64_BBML2_NOABORT) &&
>> + cpu_has_bbml2_noabort(read_cpuid_id()) &&
>> + !has_nobbml2_override();
>
> Based on Will's feedback, The Kconfig and the cmdline override will both
> disappear in Miko's next version and we will only use the MIDR list to decide
> BBML2_NOABORT status, so this will significantly simplify. Sorry about the churn
> here.
>
>> +}
>> +
>> #endif /* __ASSEMBLY__ */
>>
>> #endif
>> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
>> index 6e8aa8e72601..2693d63bf837 100644
>> --- a/arch/arm64/include/asm/mmu.h
>> +++ b/arch/arm64/include/asm/mmu.h
>> @@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>> pgprot_t prot, bool page_mappings_only);
>> extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>> extern void mark_linear_text_alias_ro(void);
>> +extern int split_linear_mapping(unsigned long start, unsigned long end);
>
> nit: Perhaps split_leaf_mapping() or split_kernel_pgtable_mapping() or something
> similar is more generic which will benefit us in future when using this for
> vmalloc too?
>
>>
>> /*
>> * This check is triggered during the early boot before the cpufeature
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index d3b538be1500..bf3cef31d243 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -293,6 +293,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
>> return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
>> }
>>
>> +static inline pmd_t pmd_mknoncont(pmd_t pmd)
>> +{
>> + return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
>> +}
>> +
>> static inline pte_t pte_mkdevmap(pte_t pte)
>> {
>> return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
>> @@ -769,7 +774,7 @@ static inline bool in_swapper_pgdir(void *addr)
>> ((unsigned long)swapper_pg_dir & PAGE_MASK);
>> }
>>
>> -static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>> +static inline void __set_pmd_nosync(pmd_t *pmdp, pmd_t pmd)
>> {
>> #ifdef __PAGETABLE_PMD_FOLDED
>> if (in_swapper_pgdir(pmdp)) {
>> @@ -779,6 +784,11 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>> #endif /* __PAGETABLE_PMD_FOLDED */
>>
>> WRITE_ONCE(*pmdp, pmd);
>> +}
>> +
>> +static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>> +{
>> + __set_pmd_nosync(pmdp, pmd);
>>
>> if (pmd_valid(pmd)) {
>> dsb(ishst);
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index e879bfcf853b..5fc2a4a804de 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -2209,7 +2209,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
>> return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
>> }
>>
>> -static bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
>> +bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
>> {
>> /*
>> * We want to allow usage of bbml2 in as wide a range of kernel contexts
>
>
> [...] I'll send a separate response for the mmu.c table walker changes.
>
>>
>> +int split_linear_mapping(unsigned long start, unsigned long end)
>> +{
>> + int ret = 0;
>> +
>> + if (!system_supports_bbml2_noabort())
>> + return 0;
>
> Hmm... I guess the thinking here is that for !BBML2_NOABORT you are expecting
> this function should only be called in the first place if we know we are
> pte-mapped. So I guess this is ok... it just means that if we are not
> pte-mapped, warnings will be emitted while walking the pgtables (as is the case
> today). So I think this approach is ok.
>
>> +
>> + mmap_write_lock(&init_mm);
>
> What is the lock protecting? I was orignally thinking no locking should be
> needed because it's not needed for permission changes today; But I think you are
> right here and we do need locking; multiple owners could share a large leaf
> mapping, I guess? And in that case you could get concurrent attempts to split
> from both owners.
>
> I'm not really a fan of adding the extra locking though; It might introduce a
> new bottleneck. I wonder if there is a way we could do this locklessly? i.e.
> allocate the new table, then cmpxchg to insert and the loser has to free? That
> doesn't work for contiguous mappings though...
>
>> + /* NO_EXEC_MAPPINGS is needed when splitting linear map */
>> + ret = __create_pgd_mapping_locked(init_mm.pgd, virt_to_phys((void *)start),
>> + start, (end - start), __pgprot(0),
>> + __pgd_pgtable_alloc,
>> + NO_EXEC_MAPPINGS | SPLIT_MAPPINGS);
>> + mmap_write_unlock(&init_mm);
>> + flush_tlb_kernel_range(start, end);
>
> I don't believe we should need to flush the TLB when only changing entry sizes
> when BBML2 is supported. Miko's series has a massive comment explaining the
> reasoning. That applies to user space though. We should consider if this all
> works safely for kernel space too, and hopefully remove the flush.
>
>> +
>> + return ret;
>> +}
>> +
>> /*
>> * This function can only be used to modify existing table entries,
>> * without allocating new levels of table. Note that this permits the
>> @@ -676,6 +887,24 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>>
>> #endif /* CONFIG_KFENCE */
>>
>> +static inline bool force_pte_mapping(void)
>> +{
>> + /*
>> + * Can't use cpufeature API to determine whether BBML2 supported
>> + * or not since cpufeature have not been finalized yet.
>> + *
>> + * Checking the boot CPU only for now. If the boot CPU has
>> + * BBML2, paint linear mapping with block mapping. If it turns
>> + * out the secondary CPUs don't support BBML2 once cpufeature is
>> + * fininalized, the linear mapping will be repainted with PTE
>> + * mapping.
>> + */
>> + return (rodata_full && !bbml2_noabort_available()) ||
>
> So this is the case where we don't have BBML2 and need to modify protections at
> page granularity - I agree we need to force pte mappings here.
>
>> + debug_pagealloc_enabled() ||
>
> This is the case where every page is made invalid on free and valid on
> allocation, so no point in having block mappings because it will soon degenerate
> into page mappings because we will have to split on every allocation. Agree here
> too.
>
>> + arm64_kfence_can_set_direct_map() ||
>
> After looking into how kfence works, I don't agree with this one. It has a
> dedicated pool where it allocates from. That pool may be allocated early by the
> arch or may be allocated late by the core code. Either way, kfence will only
> modify protections within that pool. You current approach is forcing pte
> mappings if the pool allocation is late (i.e. not performed by the arch code
> during boot). But I think "late" is the most common case; kfence is compiled
> into the kernel but not active at boot. Certainly that's how my Ubuntu kernel is
> configured. So I think we should just ignore kfence here. If it's "early" then
> we map the pool with page granularity (as an optimization). If it's "late" your
> splitter will degenerate the whole kfence pool to page mappings over time as
> kfence_protect_page() -> set_memory_valid() is called. But the bulk of the
> linear map will remain mapped with large blocks.
>
>> + is_realm_world();
>
> I think the only reason this requires pte mappings is for
> __set_memory_enc_dec(). But that can now deal with block mappings given the
> ability to split the mappings as needed. So I think this condition can be
> removed too.
To clarify: the latter two would still be needed for the !BBML2_NOABORT case. So
I think the expression becomes:

        return (!bbml2_noabort_available() &&
                (rodata_full || arm64_kfence_can_set_direct_map() ||
                 is_realm_world())) ||
               debug_pagealloc_enabled();
Thanks,
Ryan
>
>> +}
>
> Additionally, for can_set_direct_map(); at minimum it's comment should be tidied
> up, but really I think it should return true if "BBML2_NOABORT ||
> force_pte_mapping()". Because they are the conditions under which we can now
> safely modify the linear map.
>
>> +
>> static void __init map_mem(pgd_t *pgdp)
>> {
>> static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
>> @@ -701,7 +930,7 @@ static void __init map_mem(pgd_t *pgdp)
>>
>> early_kfence_pool = arm64_kfence_alloc_pool();
>>
>> - if (can_set_direct_map())
>> + if (force_pte_mapping())
>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>
>> /*
>> @@ -1402,7 +1631,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>
>> VM_BUG_ON(!mhp_range_allowed(start, size, true));
>>
>> - if (can_set_direct_map())
>> + if (force_pte_mapping())
>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>
>> __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 39fd1f7ff02a..25c068712cb5 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -10,6 +10,7 @@
>> #include <linux/vmalloc.h>
>>
>> #include <asm/cacheflush.h>
>> +#include <asm/mmu.h>
>> #include <asm/pgtable-prot.h>
>> #include <asm/set_memory.h>
>> #include <asm/tlbflush.h>
>> @@ -42,6 +43,8 @@ static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
>> struct page_change_data *cdata = data;
>> pte_t pte = __ptep_get(ptep);
>>
>> + BUG_ON(pte_cont(pte));
>
> I don't think this is required; We want to enable using contiguous mappings
> where it makes sense. As long as we have BBML2, we can update contiguous pte
> mappings in place, as long as we update all of the ptes in the contiguous block.
> split_linear_map() should either have converted to non-cont mappings if the
> contiguous block straddled the split point, or would have left as is (or
> downgraded a PMD-block to a contpte block) if fully contained within the split
> range.
>
>> +
>> pte = clear_pte_bit(pte, cdata->clear_mask);
>> pte = set_pte_bit(pte, cdata->set_mask);
>>
>> @@ -80,8 +83,9 @@ static int change_memory_common(unsigned long addr, int numpages,
>> unsigned long start = addr;
>> unsigned long size = PAGE_SIZE * numpages;
>> unsigned long end = start + size;
>> + unsigned long l_start;
>> struct vm_struct *area;
>> - int i;
>> + int i, ret;
>>
>> if (!PAGE_ALIGNED(addr)) {
>> start &= PAGE_MASK;
>> @@ -118,7 +122,12 @@ static int change_memory_common(unsigned long addr, int numpages,
>> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>> pgprot_val(clear_mask) == PTE_RDONLY)) {
>> for (i = 0; i < area->nr_pages; i++) {
>> - __change_memory_common((u64)page_address(area->pages[i]),
>> + l_start = (u64)page_address(area->pages[i]);
>> + ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
>> + if (WARN_ON_ONCE(ret))
>> + return ret;
>
> I don't think this is the right place to integrate; I think the split should be
> done inside __change_memory_common(). Then it caters to all possibilities (i.e.
> set_memory_valid() and __set_memory_enc_dec()). This means it will run for
> vmalloc too, but for now, that will be a nop because everything should already
> be split as required on entry and in future we will get that for free.
>
> Once you have integrated Dev's series, the hook becomes
> ___change_memory_common() (3 underscores)...
>
>> +
>> + __change_memory_common(l_start,
>> PAGE_SIZE, set_mask, clear_mask);
>> }
>> }
>> @@ -174,6 +183,9 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
>>
>> int set_direct_map_invalid_noflush(struct page *page)
>> {
>> + unsigned long l_start;
>> + int ret;
>> +
>> struct page_change_data data = {
>> .set_mask = __pgprot(0),
>> .clear_mask = __pgprot(PTE_VALID),
>> @@ -182,13 +194,21 @@ int set_direct_map_invalid_noflush(struct page *page)
>> if (!can_set_direct_map())
>> return 0;
>>
>> + l_start = (unsigned long)page_address(page);
>> + ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
>> + if (WARN_ON_ONCE(ret))
>> + return ret;
>> +
>> return apply_to_page_range(&init_mm,
>> - (unsigned long)page_address(page),
>> - PAGE_SIZE, change_page_range, &data);
>> + l_start, PAGE_SIZE, change_page_range,
>> + &data);
>
> ...and once integrated with Dev's series you don't need any changes here...
>
>> }
>>
>> int set_direct_map_default_noflush(struct page *page)
>> {
>> + unsigned long l_start;
>> + int ret;
>> +
>> struct page_change_data data = {
>> .set_mask = __pgprot(PTE_VALID | PTE_WRITE),
>> .clear_mask = __pgprot(PTE_RDONLY),
>> @@ -197,9 +217,14 @@ int set_direct_map_default_noflush(struct page *page)
>> if (!can_set_direct_map())
>> return 0;
>>
>> + l_start = (unsigned long)page_address(page);
>> + ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
>> + if (WARN_ON_ONCE(ret))
>> + return ret;
>> +
>> return apply_to_page_range(&init_mm,
>> - (unsigned long)page_address(page),
>> - PAGE_SIZE, change_page_range, &data);
>> + l_start, PAGE_SIZE, change_page_range,
>> + &data);
>
> ...or here.
>
> Thanks,
> Ryan
>
>> }
>>
>> static int __set_memory_enc_dec(unsigned long addr,
>
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-05-31 2:41 ` [PATCH 3/4] arm64: mm: support " Yang Shi
2025-06-16 11:58 ` Ryan Roberts
@ 2025-06-16 16:24 ` Ryan Roberts
2025-06-17 21:09 ` Yang Shi
1 sibling, 1 reply; 33+ messages in thread
From: Ryan Roberts @ 2025-06-16 16:24 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, Miko.Lenczewski, dev.jain, scott,
cl
Cc: linux-arm-kernel, linux-kernel
On 31/05/2025 03:41, Yang Shi wrote:
> When rodata=full is specified, kernel linear mapping has to be mapped at
> PTE level since large page table can't be split due to break-before-make
> rule on ARM64.
>
> This resulted in a couple of problems:
> - performance degradation
> - more TLB pressure
> - memory waste for kernel page table
>
> With FEAT_BBM level 2 support, splitting large block page table to
> smaller ones doesn't need to make the page table entry invalid anymore.
> This allows kernel split large block mapping on the fly.
>
> Add kernel page table split support and use large block mapping by
> default when FEAT_BBM level 2 is supported for rodata=full. When
> changing permissions for kernel linear mapping, the page table will be
> split to smaller size.
>
> The machine without FEAT_BBM level 2 will fallback to have kernel linear
> mapping PTE-mapped when rodata=full.
>
> With this we saw significant performance boost with some benchmarks and
> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
> 256GB memory.
>
> * Memory use after boot
> Before:
> MemTotal: 258988984 kB
> MemFree: 254821700 kB
>
> After:
> MemTotal: 259505132 kB
> MemFree: 255410264 kB
>
> Around 500MB more memory are free to use. The larger the machine, the
> more memory saved.
>
> * Memcached
> We saw performance degradation when running Memcached benchmark with
> rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
> With this patchset we saw ops/sec is increased by around 3.5%, P99
> latency is reduced by around 9.6%.
> The gain mainly came from reduced kernel TLB misses. The kernel TLB
> MPKI is reduced by 28.5%.
>
> The benchmark data is now on par with rodata=on too.
>
> * Disk encryption (dm-crypt) benchmark
> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
> encryption (by dm-crypt).
> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
> --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
> --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
> --name=iops-test-job --eta-newline=1 --size 100G
>
> The IOPS is increased by 90% - 150% (the variance is high, but the worst
> number of good case is around 90% more than the best number of bad case).
> The bandwidth is increased and the avg clat is reduced proportionally.
>
> * Sequential file read
> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
> The bandwidth is increased by 150%.
>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
> arch/arm64/include/asm/cpufeature.h | 26 +++
> arch/arm64/include/asm/mmu.h | 1 +
> arch/arm64/include/asm/pgtable.h | 12 +-
> arch/arm64/kernel/cpufeature.c | 2 +-
> arch/arm64/mm/mmu.c | 269 +++++++++++++++++++++++++---
> arch/arm64/mm/pageattr.c | 37 +++-
> 6 files changed, 319 insertions(+), 28 deletions(-)
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 8f36ffa16b73..a95806980298 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -1053,6 +1053,32 @@ static inline bool cpu_has_lpa2(void)
> #endif
> }
>
[...] (I gave comments on this part in the previous reply)
I'm focussing on the table walker in mmu.c here - i.e. the implementation of
split_linear_mapping()...
> index 775c0536b194..4c5d3aa35d62 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -45,6 +45,7 @@
> #define NO_BLOCK_MAPPINGS BIT(0)
> #define NO_CONT_MAPPINGS BIT(1)
> #define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */
> +#define SPLIT_MAPPINGS BIT(3)
>
> u64 kimage_voffset __ro_after_init;
> EXPORT_SYMBOL(kimage_voffset);
> @@ -166,12 +167,91 @@ static void init_clear_pgtable(void *table)
> dsb(ishst);
> }
>
> +static void split_cont_pte(pte_t *ptep)
> +{
> + pte_t *_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
> + pte_t _pte;
> +
> + for (int i = 0; i < CONT_PTES; i++, _ptep++) {
> + _pte = READ_ONCE(*_ptep);
> + _pte = pte_mknoncont(_pte);
> + __set_pte_nosync(_ptep, _pte);
This is not atomic but I don't think that matters for kernel mappings since we
don't care about HW-modified access/dirty bits.
> + }
> +
> + dsb(ishst);
> + isb();
I think we can use lazy_mmu_mode here to potentially batch the barriers for
multiple levels. This also avoids the need for adding __set_pmd_nosync().
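For example, something like this (rough, untested sketch): drop the explicit
barriers from the leaf helper and let lazy_mmu batch them:

static void split_cont_pte(pte_t *ptep)
{
	pte_t *_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);

	/* No dsb/isb here; lazy_mmu mode defers and batches the barriers. */
	for (int i = 0; i < CONT_PTES; i++, _ptep++)
		__set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
}

with arch_enter_lazy_mmu_mode()/arch_leave_lazy_mmu_mode() wrapped around the
whole walk in split_linear_mapping(), so the barriers are only emitted once on
leaving the mode, however many levels end up being split.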
> +}
> +
> +static void split_cont_pmd(pmd_t *pmdp)
> +{
> + pmd_t *_pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
> + pmd_t _pmd;
> +
> + for (int i = 0; i < CONT_PMDS; i++, _pmdp++) {
> + _pmd = READ_ONCE(*_pmdp);
> + _pmd = pmd_mknoncont(_pmd);
> + set_pmd(_pmdp, _pmd);
Without lazy_mmu_mode this is issuing barriers per entry. With lazy_mmu_mode
this will defer the barriers until we exit the mode so this will get a bit
faster. (In practice it will be a bit like what you have done for contpte but
potentially even better because we can batch across levels.)
> + }
> +}
> +
> +static void split_pmd(pmd_t pmd, phys_addr_t pte_phys, int flags)
> +{
> + pte_t *ptep;
> + unsigned long pfn;
> + pgprot_t prot;
> +
> + pfn = pmd_pfn(pmd);
> + prot = pmd_pgprot(pmd);
> + prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PTE_TYPE_PAGE);
> +
> + ptep = (pte_t *)phys_to_virt(pte_phys);
> +
> + /* It must be naturally aligned if PMD is leaf */
> + if ((flags & NO_CONT_MAPPINGS) == 0)
I'm not sure we have a use case for avoiding CONT mappings? Suggest doing it
unconditionally.
> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +
> + for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
> + __set_pte_nosync(ptep, pfn_pte(pfn, prot));
> +
> + dsb(ishst);
> +}
> +
> +static void split_pud(pud_t pud, phys_addr_t pmd_phys, int flags)
> +{
> + pmd_t *pmdp;
> + unsigned long pfn;
> + pgprot_t prot;
> + unsigned int step = PMD_SIZE >> PAGE_SHIFT;
> +
> + pfn = pud_pfn(pud);
> + prot = pud_pgprot(pud);
> + pmdp = (pmd_t *)phys_to_virt(pmd_phys);
> +
> + /* It must be naturally aligned if PUD is leaf */
> + if ((flags & NO_CONT_MAPPINGS) == 0)
> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +
> + for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
> + __set_pmd_nosync(pmdp, pfn_pmd(pfn, prot));
> + pfn += step;
> + }
> +
> + dsb(ishst);
> +}
> +
> static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
> - phys_addr_t phys, pgprot_t prot)
> + phys_addr_t phys, pgprot_t prot, int flags)
> {
> do {
> pte_t old_pte = __ptep_get(ptep);
>
> + if (flags & SPLIT_MAPPINGS) {
> + if (pte_cont(old_pte))
> + split_cont_pte(ptep);
> +
> + continue;
> + }
> +
> /*
> * Required barriers to make this visible to the table walker
> * are deferred to the end of alloc_init_cont_pte().
> @@ -199,11 +279,20 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
> pmd_t pmd = READ_ONCE(*pmdp);
> pte_t *ptep;
> int ret = 0;
> + bool split = flags & SPLIT_MAPPINGS;
> + pmdval_t pmdval;
> + phys_addr_t pte_phys;
>
> - BUG_ON(pmd_sect(pmd));
> - if (pmd_none(pmd)) {
> - pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
> - phys_addr_t pte_phys;
> + if (!split)
> + BUG_ON(pmd_sect(pmd));
> +
> + if (pmd_none(pmd) && split) {
> + ret = -EINVAL;
> + goto out;
> + }
> +
> + if (pmd_none(pmd) || (split && pmd_leaf(pmd))) {
> + pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
>
> if (flags & NO_EXEC_MAPPINGS)
> pmdval |= PMD_TABLE_PXN;
> @@ -213,6 +302,18 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
> ret = -ENOMEM;
> goto out;
> }
> + }
> +
> + if (split) {
> + if (pmd_leaf(pmd)) {
> + split_pmd(pmd, pte_phys, flags);
> + __pmd_populate(pmdp, pte_phys, pmdval);
> + }
> + ptep = pte_offset_kernel(pmdp, addr);
> + goto split_pgtable;
> + }
> +
> + if (pmd_none(pmd)) {
> ptep = pte_set_fixmap(pte_phys);
> init_clear_pgtable(ptep);
> ptep += pte_index(addr);
> @@ -222,17 +323,28 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
> ptep = pte_set_fixmap_offset(pmdp, addr);
> }
>
> +split_pgtable:
> do {
> pgprot_t __prot = prot;
>
> next = pte_cont_addr_end(addr, end);
>
> + if (split) {
> + pte_t pteval = READ_ONCE(*ptep);
> + bool cont = pte_cont(pteval);
> +
> + if (cont &&
> + ((addr | next) & ~CONT_PTE_MASK) == 0 &&
> + (flags & NO_CONT_MAPPINGS) == 0)
> + continue;
> + }
> +
> /* use a contiguous mapping if the range is suitably aligned */
> if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) &&
> (flags & NO_CONT_MAPPINGS) == 0)
> __prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>
> - init_pte(ptep, addr, next, phys, __prot);
> + init_pte(ptep, addr, next, phys, __prot, flags);
>
> ptep += pte_index(next) - pte_index(addr);
> phys += next - addr;
> @@ -243,7 +355,8 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
> * ensure that all previous pgtable writes are visible to the table
> * walker.
> */
> - pte_clear_fixmap();
> + if (!split)
> + pte_clear_fixmap();
>
> out:
> return ret;
> @@ -255,15 +368,29 @@ static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
> {
> unsigned long next;
> int ret = 0;
> + bool split = flags & SPLIT_MAPPINGS;
> + bool cont;
>
> do {
> pmd_t old_pmd = READ_ONCE(*pmdp);
>
> next = pmd_addr_end(addr, end);
>
> + if (split && pmd_leaf(old_pmd)) {
> + cont = pgprot_val(pmd_pgprot(old_pmd)) & PTE_CONT;
> + if (cont)
> + split_cont_pmd(pmdp);
> +
> + /* The PMD is fully contained in the range */
> + if (((addr | next) & ~PMD_MASK) == 0 &&
> + (flags & NO_BLOCK_MAPPINGS) == 0)
> + continue;
> + }
> +
> /* try section mapping first */
> if (((addr | next | phys) & ~PMD_MASK) == 0 &&
> - (flags & NO_BLOCK_MAPPINGS) == 0) {
> + (flags & NO_BLOCK_MAPPINGS) == 0 &&
> + (flags & SPLIT_MAPPINGS) == 0) {
> pmd_set_huge(pmdp, phys, prot);
>
> /*
> @@ -278,7 +405,7 @@ static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
> if (ret)
> break;
>
> - BUG_ON(pmd_val(old_pmd) != 0 &&
> + BUG_ON(!split && pmd_val(old_pmd) != 0 &&
> pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
> }
> phys += next - addr;
> @@ -296,14 +423,23 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
> int ret = 0;
> pud_t pud = READ_ONCE(*pudp);
> pmd_t *pmdp;
> + bool split = flags & SPLIT_MAPPINGS;
> + pudval_t pudval;
> + phys_addr_t pmd_phys;
>
> /*
> * Check for initial section mappings in the pgd/pud.
> */
> - BUG_ON(pud_sect(pud));
> - if (pud_none(pud)) {
> - pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
> - phys_addr_t pmd_phys;
> + if (!split)
> + BUG_ON(pud_sect(pud));
> +
> + if (pud_none(pud) && split) {
> + ret = -EINVAL;
> + goto out;
> + }
> +
> + if (pud_none(pud) || (split && pud_leaf(pud))) {
> + pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
>
> if (flags & NO_EXEC_MAPPINGS)
> pudval |= PUD_TABLE_PXN;
> @@ -313,6 +449,18 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
> ret = -ENOMEM;
> goto out;
> }
> + }
> +
> + if (split) {
> + if (pud_leaf(pud)) {
> + split_pud(pud, pmd_phys, flags);
> + __pud_populate(pudp, pmd_phys, pudval);
> + }
> + pmdp = pmd_offset(pudp, addr);
> + goto split_pgtable;
> + }
> +
> + if (pud_none(pud)) {
> pmdp = pmd_set_fixmap(pmd_phys);
> init_clear_pgtable(pmdp);
> pmdp += pmd_index(addr);
> @@ -322,11 +470,22 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
> pmdp = pmd_set_fixmap_offset(pudp, addr);
> }
>
> +split_pgtable:
> do {
> pgprot_t __prot = prot;
>
> next = pmd_cont_addr_end(addr, end);
>
> + if (split) {
> + pmd_t pmdval = READ_ONCE(*pmdp);
> + bool cont = pgprot_val(pmd_pgprot(pmdval)) & PTE_CONT;
> +
> + if (cont &&
> + ((addr | next) & ~CONT_PMD_MASK) == 0 &&
> + (flags & NO_CONT_MAPPINGS) == 0)
> + continue;
> + }
> +
> /* use a contiguous mapping if the range is suitably aligned */
> if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) &&
> (flags & NO_CONT_MAPPINGS) == 0)
> @@ -340,7 +499,8 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
> phys += next - addr;
> } while (addr = next, addr != end);
>
> - pmd_clear_fixmap();
> + if (!split)
> + pmd_clear_fixmap();
>
> out:
> return ret;
> @@ -355,6 +515,16 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
> int ret = 0;
> p4d_t p4d = READ_ONCE(*p4dp);
> pud_t *pudp;
> + bool split = flags & SPLIT_MAPPINGS;
> +
> + if (split) {
> + if (p4d_none(p4d)) {
> + ret= -EINVAL;
> + goto out;
> + }
> + pudp = pud_offset(p4dp, addr);
> + goto split_pgtable;
> + }
>
> if (p4d_none(p4d)) {
> p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN | P4D_TABLE_AF;
> @@ -377,17 +547,26 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
> pudp = pud_set_fixmap_offset(p4dp, addr);
> }
>
> +split_pgtable:
> do {
> pud_t old_pud = READ_ONCE(*pudp);
>
> next = pud_addr_end(addr, end);
>
> + if (split && pud_leaf(old_pud)) {
> + /* The PUD is fully contained in the range */
> + if (((addr | next) & ~PUD_MASK) == 0 &&
> + (flags & NO_BLOCK_MAPPINGS) == 0)
> + continue;
> + }
> +
> /*
> * For 4K granule only, attempt to put down a 1GB block
> */
> if (pud_sect_supported() &&
> ((addr | next | phys) & ~PUD_MASK) == 0 &&
> - (flags & NO_BLOCK_MAPPINGS) == 0) {
> + (flags & NO_BLOCK_MAPPINGS) == 0 &&
> + (flags & SPLIT_MAPPINGS) == 0) {
> pud_set_huge(pudp, phys, prot);
>
> /*
> @@ -402,13 +581,14 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
> if (ret)
> break;
>
> - BUG_ON(pud_val(old_pud) != 0 &&
> + BUG_ON(!split && pud_val(old_pud) != 0 &&
> pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
> }
> phys += next - addr;
> } while (pudp++, addr = next, addr != end);
>
> - pud_clear_fixmap();
> + if (!split)
> + pud_clear_fixmap();
>
> out:
> return ret;
> @@ -423,6 +603,16 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
> int ret = 0;
> pgd_t pgd = READ_ONCE(*pgdp);
> p4d_t *p4dp;
> + bool split = flags & SPLIT_MAPPINGS;
> +
> + if (split) {
> + if (pgd_none(pgd)) {
> + ret = -EINVAL;
> + goto out;
> + }
> + p4dp = p4d_offset(pgdp, addr);
> + goto split_pgtable;
> + }
I really don't like the way the split logic has been added to the existing table
walker; there are so many conditionals, it's not clear that there is really any
advantage. I know I proposed it originally, but I changed my mind last cycle and
made the case for keeping it separate. That's still my opinion I'm afraid; I'm
proposing a patch below showing how I would prefer to see this implemented.
>
> if (pgd_none(pgd)) {
> pgdval_t pgdval = PGD_TYPE_TABLE | PGD_TABLE_UXN | PGD_TABLE_AF;
> @@ -445,6 +635,7 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
> p4dp = p4d_set_fixmap_offset(pgdp, addr);
> }
>
> +split_pgtable:
> do {
> p4d_t old_p4d = READ_ONCE(*p4dp);
>
> @@ -461,7 +652,8 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
> phys += next - addr;
> } while (p4dp++, addr = next, addr != end);
>
> - p4d_clear_fixmap();
> + if (!split)
> + p4d_clear_fixmap();
>
> out:
> return ret;
> @@ -557,6 +749,25 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
> return pa;
> }
>
> +int split_linear_mapping(unsigned long start, unsigned long end)
> +{
> + int ret = 0;
> +
> + if (!system_supports_bbml2_noabort())
> + return 0;
> +
> + mmap_write_lock(&init_mm);
> + /* NO_EXEC_MAPPINGS is needed when splitting linear map */
> + ret = __create_pgd_mapping_locked(init_mm.pgd, virt_to_phys((void *)start),
> + start, (end - start), __pgprot(0),
> + __pgd_pgtable_alloc,
> + NO_EXEC_MAPPINGS | SPLIT_MAPPINGS);
Implementing this on top of __create_pgd_mapping_locked() is problematic because
(I think) it assumes that the virtual range is physically contiguous? That's
fine for the linear map, but I'd like to reuse this primitive for vmalloc too.
> + mmap_write_unlock(&init_mm);
As already mentioned, I don't like this locking. I think we can make it work
locklessly as long as we are only ever splitting and not collapsing.
> + flush_tlb_kernel_range(start, end);
> +
> + return ret;
> +}
I had a go at creating a version to try to illustrate how I have been thinking
about this. What do you think? I've only compile tested it (and it fails because
I don't have pmd_mknoncont() and system_supports_bbml2_noabort() in my tree -
but the rest looks ok). It's on top of v6.16-rc1, where the pgtable allocation
functions have changed a bit. And I don't think you need patch 2 from your
series with this change. I haven't implemented the cmpxchg part that I think
would make it safe to be used locklessly yet, but I've marked the sites up with
TODO. Once implemented, the idea is that concurrent threads trying to split on
addresses that all lie within the same block/contig mapping should be safe.
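The cmpxchg part would conceptually be something like this (hand-wavy sketch;
the helper name is made up, and it assumes the new pte table has already been
fully initialised and made visible with dsb(ishst)):

static void install_split_table(pmd_t *pmdp, pmd_t old_pmd,
				phys_addr_t pte_phys, pmdval_t tableprot)
{
	pmdval_t table = __phys_to_pmd_val(pte_phys) | tableprot;
	pmdval_t old;

	old = cmpxchg_relaxed(&pmd_val(*pmdp), pmd_val(old_pmd), table);
	if (old != pmd_val(old_pmd)) {
		/* Lost the race; someone else already split this pmd. */
		pagetable_free(virt_to_ptdesc(phys_to_virt(pte_phys)));
	}
}

(The pud level would follow the same shape.)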
---8<---
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 8fcf59ba39db..22a09cc7a2aa 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -480,6 +480,9 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys,
unsigned long virt,
int flags);
#endif
+/* Sentinel used to represent failure to allocate for phys_addr_t type. */
+#define INVALID_PHYS_ADDR -1
+
static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
enum pgtable_type pgtable_type)
{
@@ -487,7 +490,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
phys_addr_t pa;
- BUG_ON(!ptdesc);
+ if (!ptdesc)
+ return INVALID_PHYS_ADDR;
+
pa = page_to_phys(ptdesc_page(ptdesc));
switch (pgtable_type) {
@@ -509,15 +514,27 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
}
static phys_addr_t __maybe_unused
-pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
{
return __pgd_pgtable_alloc(&init_mm, pgtable_type);
}
+static phys_addr_t __maybe_unused
+pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+{
+ phys_addr_t pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
+
+ BUG_ON(pa == INVALID_PHYS_ADDR);
+ return pa;
+}
+
static phys_addr_t
pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
{
- return __pgd_pgtable_alloc(NULL, pgtable_type);
+ phys_addr_t pa = __pgd_pgtable_alloc(NULL, pgtable_type);
+
+ BUG_ON(pa == INVALID_PHYS_ADDR);
+ return pa;
}
/*
@@ -1616,3 +1633,202 @@ int arch_set_user_pkey_access(struct task_struct *tsk,
int pkey, unsigned long i
return 0;
}
#endif
+
+static void split_contpte(pte_t *ptep)
+{
+ int i;
+
+ ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
+ for (i = 0; i < CONT_PTES; i++, ptep++)
+ __set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
+}
+
+static int split_pmd(pmd_t *pmdp, pmd_t pmd)
+{
+ pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
+ unsigned long pfn = pmd_pfn(pmd);
+ pgprot_t prot = pmd_pgprot(pmd);
+ phys_addr_t pte_phys;
+ pte_t *ptep;
+ int i;
+
+ pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
+ if (pte_phys == INVALID_PHYS_ADDR)
+ return -ENOMEM;
+ ptep = (pte_t *)phys_to_virt(pte_phys);
+
+ if (pgprot_val(prot) & PMD_SECT_PXN)
+ tableprot |= PMD_TABLE_PXN;
+
+ prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
+ __set_pte(ptep, pfn_pte(pfn, prot));
+
+ /*
+ * Ensure the pte entries are visible to the table walker by the time
+ * the pmd entry that points to the ptes is visible.
+ */
+ dsb(ishst);
+
+ // TODO: THIS NEEDS TO BE CMPXCHG THEN FREE THE TABLE IF WE LOST.
+ __pmd_populate(pmdp, pte_phys, tableprot);
+
+ return 0;
+}
+
+static void split_contpmd(pmd_t *pmdp)
+{
+ int i;
+
+ pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
+ for (i = 0; i < CONT_PMDS; i++, pmdp++)
+ set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
+}
+
+static int split_pud(pud_t *pudp, pud_t pud)
+{
+ pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
+ unsigned int step = PMD_SIZE >> PAGE_SHIFT;
+ unsigned long pfn = pud_pfn(pud);
+ pgprot_t prot = pud_pgprot(pud);
+ phys_addr_t pmd_phys;
+ pmd_t *pmdp;
+ int i;
+
+ pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
+ if (pmd_phys == INVALID_PHYS_ADDR)
+ return -ENOMEM;
+ pmdp = (pmd_t *)phys_to_virt(pmd_phys);
+
+ if (pgprot_val(prot) & PMD_SECT_PXN)
+ tableprot |= PUD_TABLE_PXN;
+
+ prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
+ set_pmd(pmdp, pfn_pmd(pfn, prot));
+
+ /*
+ * Ensure the pmd entries are visible to the table walker by the time
+ * the pud entry that points to the pmds is visible.
+ */
+ dsb(ishst);
+
+ // TODO: THIS NEEDS TO BE CMPXCHG THEN FREE THE TABLE IF WE LOST.
+ __pud_populate(pudp, pmd_phys, tableprot);
+
+ return 0;
+}
+
+int split_leaf_mapping(unsigned long addr)
+{
+ pgd_t *pgdp, pgd;
+ p4d_t *p4dp, p4d;
+ pud_t *pudp, pud;
+ pmd_t *pmdp, pmd;
+ pte_t *ptep, pte;
+ int ret = 0;
+
+ /*
+ * !BBML2_NOABORT systems should not be trying to change permissions on
+ * anything that is not pte-mapped in the first place. Just return early
+ * and let the permission change code raise a warning if not already
+ * pte-mapped.
+ */
+ if (!system_supports_bbml2_noabort())
+ return 0;
+
+ /*
+ * Ensure addr is at least page-aligned since this is the finest
+ * granularity we can split to.
+ */
+ if (addr != PAGE_ALIGN(addr))
+ return -EINVAL;
+
+ arch_enter_lazy_mmu_mode();
+
+ /*
+ * PGD: If addr is PGD aligned then addr already describes a leaf
+ * boundary. If not present then there is nothing to split.
+ */
+ if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
+ goto out;
+ pgdp = pgd_offset_k(addr);
+ pgd = pgdp_get(pgdp);
+ if (!pgd_present(pgd))
+ goto out;
+
+ /*
+ * P4D: If addr is P4D aligned then addr already describes a leaf
+ * boundary. If not present then there is nothing to split.
+ */
+ if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
+ goto out;
+ p4dp = p4d_offset(pgdp, addr);
+ p4d = p4dp_get(p4dp);
+ if (!p4d_present(p4d))
+ goto out;
+
+ /*
+ * PUD: If addr is PUD aligned then addr already describes a leaf
+ * boundary. If not present then there is nothing to split. Otherwise,
+ * if we have a pud leaf, split to contpmd.
+ */
+ if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
+ goto out;
+ pudp = pud_offset(p4dp, addr);
+ pud = pudp_get(pudp);
+ if (!pud_present(pud))
+ goto out;
+ if (pud_leaf(pud)) {
+ ret = split_pud(pudp, pud);
+ if (ret)
+ goto out;
+ }
+
+ /*
+ * CONTPMD: If addr is CONTPMD aligned then addr already describes a
+ * leaf boundary. If not present then there is nothing to split.
+ * Otherwise, if we have a contpmd leaf, split to pmd.
+ */
+ if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
+ goto out;
+ pmdp = pmd_offset(pudp, addr);
+ pmd = pmdp_get(pmdp);
+ if (!pmd_present(pmd))
+ goto out;
+ if (pmd_leaf(pmd)) {
+ if (pmd_cont(pmd))
+ split_contpmd(pmdp);
+ /*
+ * PMD: If addr is PMD aligned then addr already describes a
+ * leaf boundary. Otherwise, split to contpte.
+ */
+ if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
+ goto out;
+ ret = split_pmd(pmdp, pmd);
+ if (ret)
+ goto out;
+ }
+
+ /*
+ * CONTPTE: If addr is CONTPTE aligned then addr already describes a
+ * leaf boundary. If not present then there is nothing to split.
+ * Otherwise, if we have a contpte leaf, split to pte.
+ */
+ if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
+ goto out;
+ ptep = pte_offset_kernel(pmdp, addr);
+ pte = __ptep_get(ptep);
+ if (!pte_present(pte))
+ goto out;
+ if (pte_cont(pte))
+ split_contpte(ptep);
+
+out:
+ arch_leave_lazy_mmu_mode();
+ return ret;
+}
---8<---
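For a [start, end) range the caller would then just split at both boundaries,
something like this (the wrapper name is made up):

static int split_leaf_mapping_range(unsigned long start, unsigned long end)
{
	int ret;

	/* Make start and end leaf boundaries; everything in between stays large. */
	ret = split_leaf_mapping(start);
	if (ret)
		return ret;

	return split_leaf_mapping(end);
}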
Thanks,
Ryan
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-06-16 12:33 ` Ryan Roberts
@ 2025-06-17 21:01 ` Yang Shi
0 siblings, 0 replies; 33+ messages in thread
From: Yang Shi @ 2025-06-17 21:01 UTC (permalink / raw)
To: Ryan Roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 6/16/25 5:33 AM, Ryan Roberts wrote:
> On 16/06/2025 12:58, Ryan Roberts wrote:
>> On 31/05/2025 03:41, Yang Shi wrote:
>>> When rodata=full is specified, kernel linear mapping has to be mapped at
>>> PTE level since large page table can't be split due to break-before-make
>>> rule on ARM64.
>>>
>>> This resulted in a couple of problems:
>>> - performance degradation
>>> - more TLB pressure
>>> - memory waste for kernel page table
>>>
>>> With FEAT_BBM level 2 support, splitting large block page table to
>>> smaller ones doesn't need to make the page table entry invalid anymore.
>>> This allows kernel split large block mapping on the fly.
>>>
>>> Add kernel page table split support and use large block mapping by
>>> default when FEAT_BBM level 2 is supported for rodata=full. When
>>> changing permissions for kernel linear mapping, the page table will be
>>> split to smaller size.
>>>
>>> The machine without FEAT_BBM level 2 will fallback to have kernel linear
>>> mapping PTE-mapped when rodata=full.
>>>
>>> With this we saw significant performance boost with some benchmarks and
>>> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
>>> 256GB memory.
>>>
>>> * Memory use after boot
>>> Before:
>>> MemTotal: 258988984 kB
>>> MemFree: 254821700 kB
>>>
>>> After:
>>> MemTotal: 259505132 kB
>>> MemFree: 255410264 kB
>>>
>>> Around 500MB more memory are free to use. The larger the machine, the
>>> more memory saved.
>>>
>>> * Memcached
>>> We saw performance degradation when running Memcached benchmark with
>>> rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
>>> With this patchset we saw ops/sec is increased by around 3.5%, P99
>>> latency is reduced by around 9.6%.
>>> The gain mainly came from reduced kernel TLB misses. The kernel TLB
>>> MPKI is reduced by 28.5%.
>>>
>>> The benchmark data is now on par with rodata=on too.
>>>
>>> * Disk encryption (dm-crypt) benchmark
>>> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
>>> encryption (by dm-crypt).
>>> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>>> --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>>> --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>>> --name=iops-test-job --eta-newline=1 --size 100G
>>>
>>> The IOPS is increased by 90% - 150% (the variance is high, but the worst
>>> number of good case is around 90% more than the best number of bad case).
>>> The bandwidth is increased and the avg clat is reduced proportionally.
>>>
>>> * Sequential file read
>>> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
>>> The bandwidth is increased by 150%.
>>>
>>> Signed-off-by: Yang Shi<yang@os.amperecomputing.com>
>>> ---
>>> arch/arm64/include/asm/cpufeature.h | 26 +++
>>> arch/arm64/include/asm/mmu.h | 1 +
>>> arch/arm64/include/asm/pgtable.h | 12 +-
>>> arch/arm64/kernel/cpufeature.c | 2 +-
>>> arch/arm64/mm/mmu.c | 269 +++++++++++++++++++++++++---
>>> arch/arm64/mm/pageattr.c | 37 +++-
>>> 6 files changed, 319 insertions(+), 28 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>>> index 8f36ffa16b73..a95806980298 100644
>>> --- a/arch/arm64/include/asm/cpufeature.h
>>> +++ b/arch/arm64/include/asm/cpufeature.h
>>> @@ -1053,6 +1053,32 @@ static inline bool cpu_has_lpa2(void)
>>> #endif
>>> }
>>>
>>> +bool cpu_has_bbml2_noabort(unsigned int cpu_midr);
>>> +
>>> +static inline bool has_nobbml2_override(void)
>>> +{
>>> + u64 mmfr2;
>>> + unsigned int bbm;
>>> +
>>> + mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
>>> + mmfr2 &= ~id_aa64mmfr2_override.mask;
>>> + mmfr2 |= id_aa64mmfr2_override.val;
>>> + bbm = cpuid_feature_extract_unsigned_field(mmfr2,
>>> + ID_AA64MMFR2_EL1_BBM_SHIFT);
>>> + return bbm == 0;
>>> +}
>>> +
>>> +/*
>>> + * Called at early boot stage on boot CPU before cpu info and cpu feature
>>> + * are ready.
>>> + */
>>> +static inline bool bbml2_noabort_available(void)
>>> +{
>>> + return IS_ENABLED(CONFIG_ARM64_BBML2_NOABORT) &&
>>> + cpu_has_bbml2_noabort(read_cpuid_id()) &&
>>> + !has_nobbml2_override();
>> Based on Will's feedback, The Kconfig and the cmdline override will both
>> disappear in Miko's next version and we will only use the MIDR list to decide
>> BBML2_NOABORT status, so this will significantly simplify. Sorry about the churn
>> here.
Good news! I just saw Miko's v7 series, will rebase onto it.
>>> +}
>>> +
>>> #endif /* __ASSEMBLY__ */
>>>
>>> #endif
>>> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
>>> index 6e8aa8e72601..2693d63bf837 100644
>>> --- a/arch/arm64/include/asm/mmu.h
>>> +++ b/arch/arm64/include/asm/mmu.h
>>> @@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>>> pgprot_t prot, bool page_mappings_only);
>>> extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>>> extern void mark_linear_text_alias_ro(void);
>>> +extern int split_linear_mapping(unsigned long start, unsigned long end);
>> nit: Perhaps split_leaf_mapping() or split_kernel_pgtable_mapping() or something
>> similar is more generic which will benefit us in future when using this for
>> vmalloc too?
Yeah, sure. Will use split_kernel_pgtable_mapping().
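i.e. the declaration in mmu.h would become something like:

extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);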
>>>
>>> /*
>>> * This check is triggered during the early boot before the cpufeature
>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>> index d3b538be1500..bf3cef31d243 100644
>>> --- a/arch/arm64/include/asm/pgtable.h
>>> +++ b/arch/arm64/include/asm/pgtable.h
>>> @@ -293,6 +293,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
>>> return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
>>> }
>>>
>>> +static inline pmd_t pmd_mknoncont(pmd_t pmd)
>>> +{
>>> + return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
>>> +}
>>> +
>>> static inline pte_t pte_mkdevmap(pte_t pte)
>>> {
>>> return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
>>> @@ -769,7 +774,7 @@ static inline bool in_swapper_pgdir(void *addr)
>>> ((unsigned long)swapper_pg_dir & PAGE_MASK);
>>> }
>>>
>>> -static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>>> +static inline void __set_pmd_nosync(pmd_t *pmdp, pmd_t pmd)
>>> {
>>> #ifdef __PAGETABLE_PMD_FOLDED
>>> if (in_swapper_pgdir(pmdp)) {
>>> @@ -779,6 +784,11 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>>> #endif /* __PAGETABLE_PMD_FOLDED */
>>>
>>> WRITE_ONCE(*pmdp, pmd);
>>> +}
>>> +
>>> +static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>>> +{
>>> + __set_pmd_nosync(pmdp, pmd);
>>>
>>> if (pmd_valid(pmd)) {
>>> dsb(ishst);
>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>>> index e879bfcf853b..5fc2a4a804de 100644
>>> --- a/arch/arm64/kernel/cpufeature.c
>>> +++ b/arch/arm64/kernel/cpufeature.c
>>> @@ -2209,7 +2209,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
>>> return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
>>> }
>>>
>>> -static bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
>>> +bool cpu_has_bbml2_noabort(unsigned int cpu_midr)
>>> {
>>> /*
>>> * We want to allow usage of bbml2 in as wide a range of kernel contexts
>> [...] I'll send a separate response for the mmu.c table walker changes.
>>
>>>
>>> +int split_linear_mapping(unsigned long start, unsigned long end)
>>> +{
>>> + int ret = 0;
>>> +
>>> + if (!system_supports_bbml2_noabort())
>>> + return 0;
>> Hmm... I guess the thinking here is that for !BBML2_NOABORT you are expecting
>> this function should only be called in the first place if we know we are
>> pte-mapped. So I guess this is ok... it just means that if we are not
>> pte-mapped, warnings will be emitted while walking the pgtables (as is the case
>> today). So I think this approach is ok.
Yes, we can't split kernel page tables on the fly w/o BBML2_NOABORT, and the
kernel linear mapping should always be PTE-mapped in that case, so it returns
0 instead of an errno.
>>> +
>>> + mmap_write_lock(&init_mm);
>> What is the lock protecting? I was originally thinking no locking should be
>> needed because it's not needed for permission changes today; But I think you are
>> right here and we do need locking; multiple owners could share a large leaf
>> mapping, I guess? And in that case you could get concurrent attempts to split
>> from both owners.
Yes, for example, [addr, addr + 4K) and [addr + 4K, addr + 8K) may
belong to two different owners. There may be a race when both owners
want to split the page table.
>> I'm not really a fan of adding the extra locking though; It might introduce a
>> new bottleneck. I wonder if there is a way we could do this locklessly? i.e.
>> allocate the new table, then cmpxchg to insert and the loser has to free? That
>> doesn't work for contiguous mappings though...
I'm not sure whether it is going to be a real bottleneck or not. I saw that
x86 uses a dedicated lock, called cpa_lock, so it seems lock contention is
not a real problem there.
I used the init_mm mmap_lock instead of inventing a new lock. My thought is
we can start with something simple and optimize it later if it turns out to
be a problem.
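If a dedicated lock similar to x86's cpa_lock turns out to be preferable, it
would be a small change on top, roughly (sketch only; the lock name is made up):

static DEFINE_MUTEX(split_lock);

int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
{
	int ret;

	if (!system_supports_bbml2_noabort())
		return 0;

	mutex_lock(&split_lock);
	/* NO_EXEC_MAPPINGS is needed when splitting linear map */
	ret = __create_pgd_mapping_locked(init_mm.pgd, virt_to_phys((void *)start),
					  start, end - start, __pgprot(0),
					  __pgd_pgtable_alloc,
					  NO_EXEC_MAPPINGS | SPLIT_MAPPINGS);
	mutex_unlock(&split_lock);
	flush_tlb_kernel_range(start, end);

	return ret;
}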
>>> + /* NO_EXEC_MAPPINGS is needed when splitting linear map */
>>> + ret = __create_pgd_mapping_locked(init_mm.pgd, virt_to_phys((void *)start),
>>> + start, (end - start), __pgprot(0),
>>> + __pgd_pgtable_alloc,
>>> + NO_EXEC_MAPPINGS | SPLIT_MAPPINGS);
>>> + mmap_write_unlock(&init_mm);
>>> + flush_tlb_kernel_range(start, end);
>> I don't believe we should need to flush the TLB when only changing entry sizes
>> when BBML2 is supported. Miko's series has a massive comment explaining the
>> reasoning. That applies to user space though. We should consider if this all
>> works safely for kernel space too, and hopefully remove the flush.
I think it should be the same as userspace. The point is that the hardware
will handle TLB conflicts gracefully with BBML2_NOABORT, and it does the same
thing for userspace and kernel addresses, so the TLB flush should not be
necessary. TLB pressure or the implicit invalidation of conflicting TLB
entries will do the job.
>>> +
>>> + return ret;
>>> +}
>>> +
>>> /*
>>> * This function can only be used to modify existing table entries,
>>> * without allocating new levels of table. Note that this permits the
>>> @@ -676,6 +887,24 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>>>
>>> #endif /* CONFIG_KFENCE */
>>>
>>> +static inline bool force_pte_mapping(void)
>>> +{
>>> + /*
>>> + * Can't use cpufeature API to determine whether BBML2 supported
>>> + * or not since cpufeature have not been finalized yet.
>>> + *
>>> + * Checking the boot CPU only for now. If the boot CPU has
>>> + * BBML2, paint linear mapping with block mapping. If it turns
>>> + * out the secondary CPUs don't support BBML2 once cpufeature is
>>> + * fininalized, the linear mapping will be repainted with PTE
>>> + * mapping.
>>> + */
>>> + return (rodata_full && !bbml2_noabort_available()) ||
>> So this is the case where we don't have BBML2 and need to modify protections at
>> page granularity - I agree we need to force pte mappings here.
>>
>>> + debug_pagealloc_enabled() ||
>> This is the case where every page is made invalid on free and valid on
>> allocation, so no point in having block mappings because it will soon degenerate
>> into page mappings because we will have to split on every allocation. Agree here
>> too.
>>
>>> + arm64_kfence_can_set_direct_map() ||
>> After looking into how kfence works, I don't agree with this one. It has a
>> dedicated pool where it allocates from. That pool may be allocated early by the
>> arch or may be allocated late by the core code. Either way, kfence will only
>> modify protections within that pool. You current approach is forcing pte
>> mappings if the pool allocation is late (i.e. not performed by the arch code
>> during boot). But I think "late" is the most common case; kfence is compiled
>> into the kernel but not active at boot. Certainly that's how my Ubuntu kernel is
>> configured. So I think we should just ignore kfence here. If it's "early" then
>> we map the pool with page granularity (as an optimization). If it's "late" your
>> splitter will degenerate the whole kfence pool to page mappings over time as
>> kfence_protect_page() -> set_memory_valid() is called. But the bulk of the
>> linear map will remain mapped with large blocks.
OK, thanks for looking into this. I misunderstood how the late pool works.
>>> + is_realm_world();
>> I think the only reason this requires pte mappings is for
>> __set_memory_enc_dec(). But that can now deal with block mappings given the
>> ability to split the mappings as needed. So I think this condition can be
>> removed too.
Sure
> To clarify; the latter 2 would still be needed for the !BBML2_NOABORT case. So I
> think the expression becomes:
>
> return (!bbml2_noabort_available() && (rodata_full ||
> arm64_kfence_can_set_direct_map() || is_realm_world())) ||
> debug_pagealloc_enabled();
Thanks for coming up with this expression.
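So the helper would end up looking something like this (sketch, assuming
bbml2_noabort_available() keeps its current name):

static inline bool force_pte_mapping(void)
{
	/* Without BBML2_NOABORT we cannot split live mappings, so force PTEs. */
	return (!bbml2_noabort_available() &&
		(rodata_full || arm64_kfence_can_set_direct_map() ||
		 is_realm_world())) ||
	       debug_pagealloc_enabled();
}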
Thanks,
Yang
> Thanks,
> Ryan
>
>>> +}
>> Additionally, for can_set_direct_map(): at minimum its comment should be tidied
>> up, but really I think it should return true if "BBML2_NOABORT ||
>> force_pte_mapping()". Because they are the conditions under which we can now
>> safely modify the linear map.
>>
>>> +
>>> static void __init map_mem(pgd_t *pgdp)
>>> {
>>> static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
>>> @@ -701,7 +930,7 @@ static void __init map_mem(pgd_t *pgdp)
>>>
>>> early_kfence_pool = arm64_kfence_alloc_pool();
>>>
>>> - if (can_set_direct_map())
>>> + if (force_pte_mapping())
>>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>>
>>> /*
>>> @@ -1402,7 +1631,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>>
>>> VM_BUG_ON(!mhp_range_allowed(start, size, true));
>>>
>>> - if (can_set_direct_map())
>>> + if (force_pte_mapping())
>>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>>
>>> __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
>>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>>> index 39fd1f7ff02a..25c068712cb5 100644
>>> --- a/arch/arm64/mm/pageattr.c
>>> +++ b/arch/arm64/mm/pageattr.c
>>> @@ -10,6 +10,7 @@
>>> #include <linux/vmalloc.h>
>>>
>>> #include <asm/cacheflush.h>
>>> +#include <asm/mmu.h>
>>> #include <asm/pgtable-prot.h>
>>> #include <asm/set_memory.h>
>>> #include <asm/tlbflush.h>
>>> @@ -42,6 +43,8 @@ static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
>>> struct page_change_data *cdata = data;
>>> pte_t pte = __ptep_get(ptep);
>>>
>>> + BUG_ON(pte_cont(pte));
>> I don't think this is required; We want to enable using contiguous mappings
>> where it makes sense. As long as we have BBML2, we can update contiguous pte
>> mappings in place, as long as we update all of the ptes in the contiguous block.
>> split_linear_map() should either have converted to non-cont mappings if the
>> contiguous block straddled the split point, or would have left as is (or
>> downgraded a PMD-block to a contpte block) if fully contained within the split
>> range.
>>
>>> +
>>> pte = clear_pte_bit(pte, cdata->clear_mask);
>>> pte = set_pte_bit(pte, cdata->set_mask);
>>>
>>> @@ -80,8 +83,9 @@ static int change_memory_common(unsigned long addr, int numpages,
>>> unsigned long start = addr;
>>> unsigned long size = PAGE_SIZE * numpages;
>>> unsigned long end = start + size;
>>> + unsigned long l_start;
>>> struct vm_struct *area;
>>> - int i;
>>> + int i, ret;
>>>
>>> if (!PAGE_ALIGNED(addr)) {
>>> start &= PAGE_MASK;
>>> @@ -118,7 +122,12 @@ static int change_memory_common(unsigned long addr, int numpages,
>>> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>>> pgprot_val(clear_mask) == PTE_RDONLY)) {
>>> for (i = 0; i < area->nr_pages; i++) {
>>> - __change_memory_common((u64)page_address(area->pages[i]),
>>> + l_start = (u64)page_address(area->pages[i]);
>>> + ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
>>> + if (WARN_ON_ONCE(ret))
>>> + return ret;
>> I don't think this is the right place to integrate; I think the split should be
>> done inside __change_memory_common(). Then it caters to all possibilities (i.e.
>> set_memory_valid() and __set_memory_enc_dec()). This means it will run for
>> vmalloc too, but for now, that will be a nop because everything should already
>> be split as required on entry and in future we will get that for free.
>>
>> Once you have integrated Dev's series, the hook becomes
>> ___change_memory_common() (3 underscores)...
>>
>>> +
>>> + __change_memory_common(l_start,
>>> PAGE_SIZE, set_mask, clear_mask);
>>> }
>>> }
>>> @@ -174,6 +183,9 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
>>>
>>> int set_direct_map_invalid_noflush(struct page *page)
>>> {
>>> + unsigned long l_start;
>>> + int ret;
>>> +
>>> struct page_change_data data = {
>>> .set_mask = __pgprot(0),
>>> .clear_mask = __pgprot(PTE_VALID),
>>> @@ -182,13 +194,21 @@ int set_direct_map_invalid_noflush(struct page *page)
>>> if (!can_set_direct_map())
>>> return 0;
>>>
>>> + l_start = (unsigned long)page_address(page);
>>> + ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
>>> + if (WARN_ON_ONCE(ret))
>>> + return ret;
>>> +
>>> return apply_to_page_range(&init_mm,
>>> - (unsigned long)page_address(page),
>>> - PAGE_SIZE, change_page_range, &data);
>>> + l_start, PAGE_SIZE, change_page_range,
>>> + &data);
>> ...and once integrated with Dev's series you don't need any changes here...
>>
>>> }
>>>
>>> int set_direct_map_default_noflush(struct page *page)
>>> {
>>> + unsigned long l_start;
>>> + int ret;
>>> +
>>> struct page_change_data data = {
>>> .set_mask = __pgprot(PTE_VALID | PTE_WRITE),
>>> .clear_mask = __pgprot(PTE_RDONLY),
>>> @@ -197,9 +217,14 @@ int set_direct_map_default_noflush(struct page *page)
>>> if (!can_set_direct_map())
>>> return 0;
>>>
>>> + l_start = (unsigned long)page_address(page);
>>> + ret = split_linear_mapping(l_start, l_start + PAGE_SIZE);
>>> + if (WARN_ON_ONCE(ret))
>>> + return ret;
>>> +
>>> return apply_to_page_range(&init_mm,
>>> - (unsigned long)page_address(page),
>>> - PAGE_SIZE, change_page_range, &data);
>>> + l_start, PAGE_SIZE, change_page_range,
>>> + &data);
>> ...or here.
>>
>> Thanks,
>> Ryan
>>
>>> }
>>>
>>> static int __set_memory_enc_dec(unsigned long addr,
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-06-16 16:24 ` Ryan Roberts
@ 2025-06-17 21:09 ` Yang Shi
2025-06-23 13:26 ` Ryan Roberts
0 siblings, 1 reply; 33+ messages in thread
From: Yang Shi @ 2025-06-17 21:09 UTC (permalink / raw)
To: Ryan Roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 6/16/25 9:24 AM, Ryan Roberts wrote:
> On 31/05/2025 03:41, Yang Shi wrote:
>> When rodata=full is specified, kernel linear mapping has to be mapped at
>> PTE level since large page table can't be split due to break-before-make
>> rule on ARM64.
>>
>> This resulted in a couple of problems:
>> - performance degradation
>> - more TLB pressure
>> - memory waste for kernel page table
>>
>> With FEAT_BBM level 2 support, splitting large block page table to
>> smaller ones doesn't need to make the page table entry invalid anymore.
>> This allows kernel split large block mapping on the fly.
>>
>> Add kernel page table split support and use large block mapping by
>> default when FEAT_BBM level 2 is supported for rodata=full. When
>> changing permissions for kernel linear mapping, the page table will be
>> split to smaller size.
>>
>> The machine without FEAT_BBM level 2 will fallback to have kernel linear
>> mapping PTE-mapped when rodata=full.
>>
>> With this we saw significant performance boost with some benchmarks and
>> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
>> 256GB memory.
>>
>> * Memory use after boot
>> Before:
>> MemTotal: 258988984 kB
>> MemFree: 254821700 kB
>>
>> After:
>> MemTotal: 259505132 kB
>> MemFree: 255410264 kB
>>
>> Around 500MB more memory are free to use. The larger the machine, the
>> more memory saved.
>>
>> * Memcached
>> We saw performance degradation when running Memcached benchmark with
>> rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
>> With this patchset we saw ops/sec is increased by around 3.5%, P99
>> latency is reduced by around 9.6%.
>> The gain mainly came from reduced kernel TLB misses. The kernel TLB
>> MPKI is reduced by 28.5%.
>>
>> The benchmark data is now on par with rodata=on too.
>>
>> * Disk encryption (dm-crypt) benchmark
>> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
>> encryption (by dm-crypt).
>> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>> --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>> --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>> --name=iops-test-job --eta-newline=1 --size 100G
>>
>> The IOPS is increased by 90% - 150% (the variance is high, but the worst
>> number of good case is around 90% more than the best number of bad case).
>> The bandwidth is increased and the avg clat is reduced proportionally.
>>
>> * Sequential file read
>> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
>> The bandwidth is increased by 150%.
>>
>> Signed-off-by: Yang Shi<yang@os.amperecomputing.com>
>> ---
>> arch/arm64/include/asm/cpufeature.h | 26 +++
>> arch/arm64/include/asm/mmu.h | 1 +
>> arch/arm64/include/asm/pgtable.h | 12 +-
>> arch/arm64/kernel/cpufeature.c | 2 +-
>> arch/arm64/mm/mmu.c | 269 +++++++++++++++++++++++++---
>> arch/arm64/mm/pageattr.c | 37 +++-
>> 6 files changed, 319 insertions(+), 28 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index 8f36ffa16b73..a95806980298 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -1053,6 +1053,32 @@ static inline bool cpu_has_lpa2(void)
>> #endif
>> }
>>
> [...] (I gave comments on this part in the previous reply)
>
> I'm focussing on the table walker in mmu.c here - i.e. the implementation of
> split_linear_mapping()...
>
>> index 775c0536b194..4c5d3aa35d62 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -45,6 +45,7 @@
>> #define NO_BLOCK_MAPPINGS BIT(0)
>> #define NO_CONT_MAPPINGS BIT(1)
>> #define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */
>> +#define SPLIT_MAPPINGS BIT(3)
>>
>> u64 kimage_voffset __ro_after_init;
>> EXPORT_SYMBOL(kimage_voffset);
>> @@ -166,12 +167,91 @@ static void init_clear_pgtable(void *table)
>> dsb(ishst);
>> }
>>
>> +static void split_cont_pte(pte_t *ptep)
>> +{
>> + pte_t *_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
>> + pte_t _pte;
>> +
>> + for (int i = 0; i < CONT_PTES; i++, _ptep++) {
>> + _pte = READ_ONCE(*_ptep);
>> + _pte = pte_mknoncont(_pte);
>> + __set_pte_nosync(_ptep, _pte);
> This is not atomic but I don't think that matters for kernel mappings since we
> don't care about HW-modified access/dirty bits.
Yes, the access bit should always be set for kernel mappings if I remember
correctly. Kernel mappings don't care about the dirty bit.
>> + }
>> +
>> + dsb(ishst);
>> + isb();
> I think we can use lazy_mmu_mode here to potentially batch the barriers for
> multiple levels. This also avoids the need for adding __set_pmd_nosync().
>
>> +}
>> +
>> +static void split_cont_pmd(pmd_t *pmdp)
>> +{
>> + pmd_t *_pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
>> + pmd_t _pmd;
>> +
>> + for (int i = 0; i < CONT_PMDS; i++, _pmdp++) {
>> + _pmd = READ_ONCE(*_pmdp);
>> + _pmd = pmd_mknoncont(_pmd);
>> + set_pmd(_pmdp, _pmd);
> Without lazy_mmu_mode this is issuing barriers per entry. With lazy_mmu_mode
> this will defer the barriers until we exit the mode so this will get a bit
> faster. (In practice it will be a bit like what you have done for contpte but
> potentially even better because we can batch across levels.)
OK, lazy mmu should work IIUC. A concurrent page table walker should see
either the old entry or the new entry. Both point to the same physical
address and have the same permissions, just a different page size.
BBML2_NOABORT can handle this gracefully.
>> + }
>> +}
>> +
>> +static void split_pmd(pmd_t pmd, phys_addr_t pte_phys, int flags)
>> +{
>> + pte_t *ptep;
>> + unsigned long pfn;
>> + pgprot_t prot;
>> +
>> + pfn = pmd_pfn(pmd);
>> + prot = pmd_pgprot(pmd);
>> + prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PTE_TYPE_PAGE);
>> +
>> + ptep = (pte_t *)phys_to_virt(pte_phys);
>> +
>> + /* It must be naturally aligned if PMD is leaf */
>> + if ((flags & NO_CONT_MAPPINGS) == 0)
> I'm not sure we have a use case for avoiding CONT mappings? Suggest doing it
> unconditionally.
Repainting the linear mapping does avoid CONT mappings.
>> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +
>> + for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
>> + __set_pte_nosync(ptep, pfn_pte(pfn, prot));
>> +
>> + dsb(ishst);
>> +}
>> +
>> +static void split_pud(pud_t pud, phys_addr_t pmd_phys, int flags)
>> +{
>> + pmd_t *pmdp;
>> + unsigned long pfn;
>> + pgprot_t prot;
>> + unsigned int step = PMD_SIZE >> PAGE_SHIFT;
>> +
>> + pfn = pud_pfn(pud);
>> + prot = pud_pgprot(pud);
>> + pmdp = (pmd_t *)phys_to_virt(pmd_phys);
>> +
>> + /* It must be naturally aligned if PUD is leaf */
>> + if ((flags & NO_CONT_MAPPINGS) == 0)
>> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +
>> + for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
>> + __set_pmd_nosync(pmdp, pfn_pmd(pfn, prot));
>> + pfn += step;
>> + }
>> +
>> + dsb(ishst);
>> +}
>> +
>> static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
>> - phys_addr_t phys, pgprot_t prot)
>> + phys_addr_t phys, pgprot_t prot, int flags)
>> {
>> do {
>> pte_t old_pte = __ptep_get(ptep);
>>
>> + if (flags & SPLIT_MAPPINGS) {
>> + if (pte_cont(old_pte))
>> + split_cont_pte(ptep);
>> +
>> + continue;
>> + }
>> +
>> /*
>> * Required barriers to make this visible to the table walker
>> * are deferred to the end of alloc_init_cont_pte().
>> @@ -199,11 +279,20 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>> pmd_t pmd = READ_ONCE(*pmdp);
>> pte_t *ptep;
>> int ret = 0;
>> + bool split = flags & SPLIT_MAPPINGS;
>> + pmdval_t pmdval;
>> + phys_addr_t pte_phys;
>>
>> - BUG_ON(pmd_sect(pmd));
>> - if (pmd_none(pmd)) {
>> - pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
>> - phys_addr_t pte_phys;
>> + if (!split)
>> + BUG_ON(pmd_sect(pmd));
>> +
>> + if (pmd_none(pmd) && split) {
>> + ret = -EINVAL;
>> + goto out;
>> + }
>> +
>> + if (pmd_none(pmd) || (split && pmd_leaf(pmd))) {
>> + pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
>>
>> if (flags & NO_EXEC_MAPPINGS)
>> pmdval |= PMD_TABLE_PXN;
>> @@ -213,6 +302,18 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>> ret = -ENOMEM;
>> goto out;
>> }
>> + }
>> +
>> + if (split) {
>> + if (pmd_leaf(pmd)) {
>> + split_pmd(pmd, pte_phys, flags);
>> + __pmd_populate(pmdp, pte_phys, pmdval);
>> + }
>> + ptep = pte_offset_kernel(pmdp, addr);
>> + goto split_pgtable;
>> + }
>> +
>> + if (pmd_none(pmd)) {
>> ptep = pte_set_fixmap(pte_phys);
>> init_clear_pgtable(ptep);
>> ptep += pte_index(addr);
>> @@ -222,17 +323,28 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>> ptep = pte_set_fixmap_offset(pmdp, addr);
>> }
>>
>> +split_pgtable:
>> do {
>> pgprot_t __prot = prot;
>>
>> next = pte_cont_addr_end(addr, end);
>>
>> + if (split) {
>> + pte_t pteval = READ_ONCE(*ptep);
>> + bool cont = pte_cont(pteval);
>> +
>> + if (cont &&
>> + ((addr | next) & ~CONT_PTE_MASK) == 0 &&
>> + (flags & NO_CONT_MAPPINGS) == 0)
>> + continue;
>> + }
>> +
>> /* use a contiguous mapping if the range is suitably aligned */
>> if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) &&
>> (flags & NO_CONT_MAPPINGS) == 0)
>> __prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>>
>> - init_pte(ptep, addr, next, phys, __prot);
>> + init_pte(ptep, addr, next, phys, __prot, flags);
>>
>> ptep += pte_index(next) - pte_index(addr);
>> phys += next - addr;
>> @@ -243,7 +355,8 @@ static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>> * ensure that all previous pgtable writes are visible to the table
>> * walker.
>> */
>> - pte_clear_fixmap();
>> + if (!split)
>> + pte_clear_fixmap();
>>
>> out:
>> return ret;
>> @@ -255,15 +368,29 @@ static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
>> {
>> unsigned long next;
>> int ret = 0;
>> + bool split = flags & SPLIT_MAPPINGS;
>> + bool cont;
>>
>> do {
>> pmd_t old_pmd = READ_ONCE(*pmdp);
>>
>> next = pmd_addr_end(addr, end);
>>
>> + if (split && pmd_leaf(old_pmd)) {
>> + cont = pgprot_val(pmd_pgprot(old_pmd)) & PTE_CONT;
>> + if (cont)
>> + split_cont_pmd(pmdp);
>> +
>> + /* The PMD is fully contained in the range */
>> + if (((addr | next) & ~PMD_MASK) == 0 &&
>> + (flags & NO_BLOCK_MAPPINGS) == 0)
>> + continue;
>> + }
>> +
>> /* try section mapping first */
>> if (((addr | next | phys) & ~PMD_MASK) == 0 &&
>> - (flags & NO_BLOCK_MAPPINGS) == 0) {
>> + (flags & NO_BLOCK_MAPPINGS) == 0 &&
>> + (flags & SPLIT_MAPPINGS) == 0) {
>> pmd_set_huge(pmdp, phys, prot);
>>
>> /*
>> @@ -278,7 +405,7 @@ static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
>> if (ret)
>> break;
>>
>> - BUG_ON(pmd_val(old_pmd) != 0 &&
>> + BUG_ON(!split && pmd_val(old_pmd) != 0 &&
>> pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
>> }
>> phys += next - addr;
>> @@ -296,14 +423,23 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>> int ret = 0;
>> pud_t pud = READ_ONCE(*pudp);
>> pmd_t *pmdp;
>> + bool split = flags & SPLIT_MAPPINGS;
>> + pudval_t pudval;
>> + phys_addr_t pmd_phys;
>>
>> /*
>> * Check for initial section mappings in the pgd/pud.
>> */
>> - BUG_ON(pud_sect(pud));
>> - if (pud_none(pud)) {
>> - pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
>> - phys_addr_t pmd_phys;
>> + if (!split)
>> + BUG_ON(pud_sect(pud));
>> +
>> + if (pud_none(pud) && split) {
>> + ret = -EINVAL;
>> + goto out;
>> + }
>> +
>> + if (pud_none(pud) || (split && pud_leaf(pud))) {
>> + pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
>>
>> if (flags & NO_EXEC_MAPPINGS)
>> pudval |= PUD_TABLE_PXN;
>> @@ -313,6 +449,18 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>> ret = -ENOMEM;
>> goto out;
>> }
>> + }
>> +
>> + if (split) {
>> + if (pud_leaf(pud)) {
>> + split_pud(pud, pmd_phys, flags);
>> + __pud_populate(pudp, pmd_phys, pudval);
>> + }
>> + pmdp = pmd_offset(pudp, addr);
>> + goto split_pgtable;
>> + }
>> +
>> + if (pud_none(pud)) {
>> pmdp = pmd_set_fixmap(pmd_phys);
>> init_clear_pgtable(pmdp);
>> pmdp += pmd_index(addr);
>> @@ -322,11 +470,22 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>> pmdp = pmd_set_fixmap_offset(pudp, addr);
>> }
>>
>> +split_pgtable:
>> do {
>> pgprot_t __prot = prot;
>>
>> next = pmd_cont_addr_end(addr, end);
>>
>> + if (split) {
>> + pmd_t pmdval = READ_ONCE(*pmdp);
>> + bool cont = pgprot_val(pmd_pgprot(pmdval)) & PTE_CONT;
>> +
>> + if (cont &&
>> + ((addr | next) & ~CONT_PMD_MASK) == 0 &&
>> + (flags & NO_CONT_MAPPINGS) == 0)
>> + continue;
>> + }
>> +
>> /* use a contiguous mapping if the range is suitably aligned */
>> if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) &&
>> (flags & NO_CONT_MAPPINGS) == 0)
>> @@ -340,7 +499,8 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>> phys += next - addr;
>> } while (addr = next, addr != end);
>>
>> - pmd_clear_fixmap();
>> + if (!split)
>> + pmd_clear_fixmap();
>>
>> out:
>> return ret;
>> @@ -355,6 +515,16 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>> int ret = 0;
>> p4d_t p4d = READ_ONCE(*p4dp);
>> pud_t *pudp;
>> + bool split = flags & SPLIT_MAPPINGS;
>> +
>> + if (split) {
>> + if (p4d_none(p4d)) {
>> + ret= -EINVAL;
>> + goto out;
>> + }
>> + pudp = pud_offset(p4dp, addr);
>> + goto split_pgtable;
>> + }
>>
>> if (p4d_none(p4d)) {
>> p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN | P4D_TABLE_AF;
>> @@ -377,17 +547,26 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>> pudp = pud_set_fixmap_offset(p4dp, addr);
>> }
>>
>> +split_pgtable:
>> do {
>> pud_t old_pud = READ_ONCE(*pudp);
>>
>> next = pud_addr_end(addr, end);
>>
>> + if (split && pud_leaf(old_pud)) {
>> + /* The PUD is fully contained in the range */
>> + if (((addr | next) & ~PUD_MASK) == 0 &&
>> + (flags & NO_BLOCK_MAPPINGS) == 0)
>> + continue;
>> + }
>> +
>> /*
>> * For 4K granule only, attempt to put down a 1GB block
>> */
>> if (pud_sect_supported() &&
>> ((addr | next | phys) & ~PUD_MASK) == 0 &&
>> - (flags & NO_BLOCK_MAPPINGS) == 0) {
>> + (flags & NO_BLOCK_MAPPINGS) == 0 &&
>> + (flags & SPLIT_MAPPINGS) == 0) {
>> pud_set_huge(pudp, phys, prot);
>>
>> /*
>> @@ -402,13 +581,14 @@ static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
>> if (ret)
>> break;
>>
>> - BUG_ON(pud_val(old_pud) != 0 &&
>> + BUG_ON(!split && pud_val(old_pud) != 0 &&
>> pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
>> }
>> phys += next - addr;
>> } while (pudp++, addr = next, addr != end);
>>
>> - pud_clear_fixmap();
>> + if (!split)
>> + pud_clear_fixmap();
>>
>> out:
>> return ret;
>> @@ -423,6 +603,16 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>> int ret = 0;
>> pgd_t pgd = READ_ONCE(*pgdp);
>> p4d_t *p4dp;
>> + bool split = flags & SPLIT_MAPPINGS;
>> +
>> + if (split) {
>> + if (pgd_none(pgd)) {
>> + ret = -EINVAL;
>> + goto out;
>> + }
>> + p4dp = p4d_offset(pgdp, addr);
>> + goto split_pgtable;
>> + }
> I really don't like the way the split logic has been added to the existing table
> walker; there are so many conditionals, it's not clear that there is really any
> advantage. I know I proposed it originally, but I changed my mind last cycle and
> made the case for keeping it separate. That's still my opinion I'm afraid; I'm
> proposing a patch below showing how I would prefer to see this implemented.
Yes, it added a lot of conditionals because split does the reverse operation,
so we need conditionals to tell whether it is a create or a split. But I don't
recall you mentioning this in the v3 discussion. Maybe I misunderstood you or
missed the point because our discussion was focused on keeping block mappings
instead of splitting all the way down to PTE all the time.
But anyway, this basically rolls back to my v2 implementation, which had
dedicated separate functions for split.
>>
>> if (pgd_none(pgd)) {
>> pgdval_t pgdval = PGD_TYPE_TABLE | PGD_TABLE_UXN | PGD_TABLE_AF;
>> @@ -445,6 +635,7 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>> p4dp = p4d_set_fixmap_offset(pgdp, addr);
>> }
>>
>> +split_pgtable:
>> do {
>> p4d_t old_p4d = READ_ONCE(*p4dp);
>>
>> @@ -461,7 +652,8 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
>> phys += next - addr;
>> } while (p4dp++, addr = next, addr != end);
>>
>> - p4d_clear_fixmap();
>> + if (!split)
>> + p4d_clear_fixmap();
>>
>> out:
>> return ret;
>> @@ -557,6 +749,25 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
>> return pa;
>> }
>>
>> +int split_linear_mapping(unsigned long start, unsigned long end)
>> +{
>> + int ret = 0;
>> +
>> + if (!system_supports_bbml2_noabort())
>> + return 0;
>> +
>> + mmap_write_lock(&init_mm);
>> + /* NO_EXEC_MAPPINGS is needed when splitting linear map */
>> + ret = __create_pgd_mapping_locked(init_mm.pgd, virt_to_phys((void *)start),
>> + start, (end - start), __pgprot(0),
>> + __pgd_pgtable_alloc,
>> + NO_EXEC_MAPPINGS | SPLIT_MAPPINGS);
> Implementing this on top of __create_pgd_mapping_locked() is problematic because
> (I think) it assumes that the virtual range is physically contiguous? That's
> fine for the linear map, but I'd like to reuse this primitive for vmalloc too.
That assumption applies to creating page tables. But split doesn't care
whether the range is physically contiguous or not; the phys is not
actually used by the split primitive.
>> + mmap_write_unlock(&init_mm);
> As already mentioned, I don't like this locking. I think we can make it work
> locklessly as long as we are only ever splitting and not collapsing.
I don't disagree that lockless is good. I'm just not sure whether it is
worth the extra complexity.
>> + flush_tlb_kernel_range(start, end);
>> +
>> + return ret;
>> +}
> I had a go at creating a version to try to illustrate how I have been thinking
> about this. What do you think? I've only compile tested it (and it fails because
> I don't have pmd_mknoncont() and system_supports_bbml2_noabort() in my tree -
> but the rest looks ok). It's on top of v6.16-rc1, where the pgtable allocation
> functions have changed a bit. And I don't think you need patch 2 from your
> series with this change. I haven't implemented the cmpxchg part that I think
Yes, patch #2 is no longer needed for this series if we have the split
primitive in a separate function. But it should be a prerequisite for
fixing the memory hotplug bug.
> would make it safe to be used locklessly yet, but I've marked the sites up with
> TODO. Once implemented, the idea is that concurrent threads trying to split on
> addresses that all lie within the same block/contig mapping should be safe.
>
> ---8<---
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 8fcf59ba39db..22a09cc7a2aa 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -480,6 +480,9 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys,
> unsigned long virt,
> int flags);
> #endif
>
> +/* Sentinel used to represent failure to allocate for phys_addr_t type. */
> +#define INVALID_PHYS_ADDR -1
> +
> static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
> enum pgtable_type pgtable_type)
> {
> @@ -487,7 +490,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
> struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
> phys_addr_t pa;
>
> - BUG_ON(!ptdesc);
> + if (!ptdesc)
> + return INVALID_PHYS_ADDR;
> +
> pa = page_to_phys(ptdesc_page(ptdesc));
>
> switch (pgtable_type) {
> @@ -509,15 +514,27 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
> }
>
> static phys_addr_t __maybe_unused
> -pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> {
> return __pgd_pgtable_alloc(&init_mm, pgtable_type);
> }
>
> +static phys_addr_t __maybe_unused
> +pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +{
> + phys_addr_t pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
> +
> + BUG_ON(pa == INVALID_PHYS_ADDR);
> + return pa;
> +}
> +
> static phys_addr_t
> pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
> {
> - return __pgd_pgtable_alloc(NULL, pgtable_type);
> + phys_addr_t pa = __pgd_pgtable_alloc(NULL, pgtable_type);
> +
> + BUG_ON(pa == INVALID_PHYS_ADDR);
> + return pa;
> }
>
> /*
> @@ -1616,3 +1633,202 @@ int arch_set_user_pkey_access(struct task_struct *tsk,
> int pkey, unsigned long i
> return 0;
> }
> #endif
> +
> +static void split_contpte(pte_t *ptep)
> +{
> + int i;
> +
> + ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
> + for (i = 0; i < CONT_PTES; i++, ptep++)
> + __set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
> +}
> +
> +static int split_pmd(pmd_t *pmdp, pmd_t pmd)
> +{
> + pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
> + unsigned long pfn = pmd_pfn(pmd);
> + pgprot_t prot = pmd_pgprot(pmd);
> + phys_addr_t pte_phys;
> + pte_t *ptep;
> + int i;
> +
> + pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
> + if (pte_phys == INVALID_PHYS_ADDR)
> + return -ENOMEM;
> + ptep = (pte_t *)phys_to_virt(pte_phys);
> +
> + if (pgprot_val(prot) & PMD_SECT_PXN)
> + tableprot |= PMD_TABLE_PXN;
> +
> + prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +
> + for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
> + __set_pte(ptep, pfn_pte(pfn, prot));
> +
> + /*
> + * Ensure the pte entries are visible to the table walker by the time
> + * the pmd entry that points to the ptes is visible.
> + */
> + dsb(ishst);
> +
> + // TODO: THIS NEEDS TO BE CMPXCHG THEN FREE THE TABLE IF WE LOST.
> + __pmd_populate(pmdp, pte_phys, tableprot);
> +
> + return 0;
> +}
> +
> +static void split_contpmd(pmd_t *pmdp)
> +{
> + int i;
> +
> + pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
> + for (i = 0; i < CONT_PMDS; i++, pmdp++)
> + set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
> +}
> +
> +static int split_pud(pud_t *pudp, pud_t pud)
> +{
> + pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
> + unsigned int step = PMD_SIZE >> PAGE_SHIFT;
> + unsigned long pfn = pud_pfn(pud);
> + pgprot_t prot = pud_pgprot(pud);
> + phys_addr_t pmd_phys;
> + pmd_t *pmdp;
> + int i;
> +
> + pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
> + if (pmd_phys == INVALID_PHYS_ADDR)
> + return -ENOMEM;
> + pmdp = (pmd_t *)phys_to_virt(pmd_phys);
> +
> + if (pgprot_val(prot) & PMD_SECT_PXN)
> + tableprot |= PUD_TABLE_PXN;
> +
> + prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +
> + for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
> + set_pmd(pmdp, pfn_pmd(pfn, prot));
> +
> + /*
> + * Ensure the pmd entries are visible to the table walker by the time
> + * the pud entry that points to the pmds is visible.
> + */
> + dsb(ishst);
> +
> + // TODO: THIS NEEDS TO BE CMPXCHG THEN FREE THE TABLE IF WE LOST.
> + __pud_populate(pudp, pmd_phys, tableprot);
> +
> + return 0;
> +}
> +
> +int split_leaf_mapping(unsigned long addr)
Thanks for coming up with the code. It does help to understand your
idea. Now I see why you suggested the "split_mapping(start);
split_mapping(end);" model. It does make the implementation easier
because we don't need a loop anymore. But this may have a couple of
problems:
1. We need to walk the page table twice instead of once. It sounds
expensive.
2. How should we handle repainting? For repainting we need to split all
the page tables between start and end all the way down to PTEs rather
than keeping block mappings. This model doesn't work for that, right?
For example, repaint a 2G block: the first 1G is mapped by a PUD, the
second 1G is mapped by 511 PMDs and 512 PTEs. split_mapping(start) will
split the first 1G, but split_mapping(end) will do nothing, so the 511
PMDs are kept intact. In addition, I think we would also prefer to reuse
the split primitive for repainting instead of inventing another one.
Thanks,
Yang
> +{
> + pgd_t *pgdp, pgd;
> + p4d_t *p4dp, p4d;
> + pud_t *pudp, pud;
> + pmd_t *pmdp, pmd;
> + pte_t *ptep, pte;
> + int ret = 0;
> +
> + /*
> + * !BBML2_NOABORT systems should not be trying to change permissions on
> + * anything that is not pte-mapped in the first place. Just return early
> + * and let the permission change code raise a warning if not already
> + * pte-mapped.
> + */
> + if (!system_supports_bbml2_noabort())
> + return 0;
> +
> + /*
> + * Ensure addr is at least page-aligned since this is the finest
> + * granularity we can split to.
> + */
> + if (addr != PAGE_ALIGN(addr))
> + return -EINVAL;
> +
> + arch_enter_lazy_mmu_mode();
> +
> + /*
> + * PGD: If addr is PGD aligned then addr already describes a leaf
> + * boundary. If not present then there is nothing to split.
> + */
> + if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
> + goto out;
> + pgdp = pgd_offset_k(addr);
> + pgd = pgdp_get(pgdp);
> + if (!pgd_present(pgd))
> + goto out;
> +
> + /*
> + * P4D: If addr is P4D aligned then addr already describes a leaf
> + * boundary. If not present then there is nothing to split.
> + */
> + if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
> + goto out;
> + p4dp = p4d_offset(pgdp, addr);
> + p4d = p4dp_get(p4dp);
> + if (!p4d_present(p4d))
> + goto out;
> +
> + /*
> + * PUD: If addr is PUD aligned then addr already describes a leaf
> + * boundary. If not present then there is nothing to split. Otherwise,
> + * if we have a pud leaf, split to contpmd.
> + */
> + if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
> + goto out;
> + pudp = pud_offset(p4dp, addr);
> + pud = pudp_get(pudp);
> + if (!pud_present(pud))
> + goto out;
> + if (pud_leaf(pud)) {
> + ret = split_pud(pudp, pud);
> + if (ret)
> + goto out;
> + }
> +
> + /*
> + * CONTPMD: If addr is CONTPMD aligned then addr already describes a
> + * leaf boundary. If not present then there is nothing to split.
> + * Otherwise, if we have a contpmd leaf, split to pmd.
> + */
> + if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
> + goto out;
> + pmdp = pmd_offset(pudp, addr);
> + pmd = pmdp_get(pmdp);
> + if (!pmd_present(pmd))
> + goto out;
> + if (pmd_leaf(pmd)) {
> + if (pmd_cont(pmd))
> + split_contpmd(pmdp);
> + /*
> + * PMD: If addr is PMD aligned then addr already describes a
> + * leaf boundary. Otherwise, split to contpte.
> + */
> + if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
> + goto out;
> + ret = split_pmd(pmdp, pmd);
> + if (ret)
> + goto out;
> + }
> +
> + /*
> + * CONTPTE: If addr is CONTPTE aligned then addr already describes a
> + * leaf boundary. If not present then there is nothing to split.
> + * Otherwise, if we have a contpte leaf, split to pte.
> + */
> + if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
> + goto out;
> + ptep = pte_offset_kernel(pmdp, addr);
> + pte = __ptep_get(ptep);
> + if (!pte_present(pte))
> + goto out;
> + if (pte_cont(pte))
> + split_contpte(ptep);
> +
> +out:
> + arch_leave_lazy_mmu_mode();
> + return ret;
> +}
> ---8<---
>
> Thanks,
> Ryan
>
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-06-17 21:09 ` Yang Shi
@ 2025-06-23 13:26 ` Ryan Roberts
2025-06-23 19:12 ` Yang Shi
` (2 more replies)
0 siblings, 3 replies; 33+ messages in thread
From: Ryan Roberts @ 2025-06-23 13:26 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, Miko.Lenczewski, dev.jain, scott,
cl
Cc: linux-arm-kernel, linux-kernel
[...]
>> +
>> +int split_leaf_mapping(unsigned long addr)
>
> Thanks for coming up with the code. It does help to understand your idea. Now I
> see why you suggested "split_mapping(start); split_mapping(end);" model. It does
> make the implementation easier because we don't need a loop anymore. But this
> may have a couple of problems:
> 1. We need walk the page table twice instead of once. It sounds expensive.
Yes we need to walk twice. That may be more expensive or less expensive,
depending on the size of the range that you are splitting. If the range is large
then your approach loops through every leaf mapping between the start and end
which will be more expensive than just doing 2 walks. If the range is small then
your approach can avoid the second walk, but at the expense of all the extra
loop overhead.
My suggestion requires 5 loads (assuming the maximum of 5 levels of lookup).
Personally I think this is probably acceptable? Perhaps we need some other
voices here.
> 2. How should we handle repainting? We need split all the page tables all the
> way down to PTE for repainting between start and end rather than keeping block
> mappings. This model doesn't work, right? For example, repaint a 2G block. The
> first 1G is mapped by a PUD, the second 1G is mapped by 511 PMD and 512 PTEs.
> split_mapping(start) will split the first 1G, but split_mapping(end) will do
> nothing, the 511 PMDs are kept intact. In addition, I think we also prefer reuse
> the split primitive for repainting instead of inventing another one.
I agree my approach doesn't work for the repainting case. But I think what I'm
trying to say is that the 2 things are different operations;
split_leaf_mapping() is just trying to ensure that the start and end of a region
are on leaf boundaries. Repainting is trying to ensure that all leaf mappings
within a range are PTE-sized. I've implemented the former and you've implemented
the latter. Your implementation looks like it meets the former's requirements
because you are only testing it for the case where the range is 1 page. But
actually it is splitting everything in the range to PTEs.
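To spell out the difference in intended usage in pseudo-code (just a
sketch; split_to_ptes() is a placeholder name, not a proposed API):

        /*
         * Permission change on [start, end): only the two edges can fall
         * inside a larger leaf, so ensuring leaf boundaries at the edges
         * is enough; the interior leaves stay as they are.
         */
        split_leaf_mapping(start);
        split_leaf_mapping(end);
        __change_memory_common(start, end - start, set_mask, clear_mask);

        /*
         * Repainting [start, end): every leaf inside the range must end
         * up PTE-sized, so a walk over the whole range is required.
         */
        split_to_ptes(start, end);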
Thanks,
Ryan
>
> Thanks,
> Yang
>
>> +{
>> + pgd_t *pgdp, pgd;
>> + p4d_t *p4dp, p4d;
>> + pud_t *pudp, pud;
>> + pmd_t *pmdp, pmd;
>> + pte_t *ptep, pte;
>> + int ret = 0;
>> +
>> + /*
>> + * !BBML2_NOABORT systems should not be trying to change permissions on
>> + * anything that is not pte-mapped in the first place. Just return early
>> + * and let the permission change code raise a warning if not already
>> + * pte-mapped.
>> + */
>> + if (!system_supports_bbml2_noabort())
>> + return 0;
>> +
>> + /*
>> + * Ensure addr is at least page-aligned since this is the finest
>> + * granularity we can split to.
>> + */
>> + if (addr != PAGE_ALIGN(addr))
>> + return -EINVAL;
>> +
>> + arch_enter_lazy_mmu_mode();
>> +
>> + /*
>> + * PGD: If addr is PGD aligned then addr already describes a leaf
>> + * boundary. If not present then there is nothing to split.
>> + */
>> + if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
>> + goto out;
>> + pgdp = pgd_offset_k(addr);
>> + pgd = pgdp_get(pgdp);
>> + if (!pgd_present(pgd))
>> + goto out;
>> +
>> + /*
>> + * P4D: If addr is P4D aligned then addr already describes a leaf
>> + * boundary. If not present then there is nothing to split.
>> + */
>> + if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
>> + goto out;
>> + p4dp = p4d_offset(pgdp, addr);
>> + p4d = p4dp_get(p4dp);
>> + if (!p4d_present(p4d))
>> + goto out;
>> +
>> + /*
>> + * PUD: If addr is PUD aligned then addr already describes a leaf
>> + * boundary. If not present then there is nothing to split. Otherwise,
>> + * if we have a pud leaf, split to contpmd.
>> + */
>> + if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
>> + goto out;
>> + pudp = pud_offset(p4dp, addr);
>> + pud = pudp_get(pudp);
>> + if (!pud_present(pud))
>> + goto out;
>> + if (pud_leaf(pud)) {
>> + ret = split_pud(pudp, pud);
>> + if (ret)
>> + goto out;
>> + }
>> +
>> + /*
>> + * CONTPMD: If addr is CONTPMD aligned then addr already describes a
>> + * leaf boundary. If not present then there is nothing to split.
>> + * Otherwise, if we have a contpmd leaf, split to pmd.
>> + */
>> + if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
>> + goto out;
>> + pmdp = pmd_offset(pudp, addr);
>> + pmd = pmdp_get(pmdp);
>> + if (!pmd_present(pmd))
>> + goto out;
>> + if (pmd_leaf(pmd)) {
>> + if (pmd_cont(pmd))
>> + split_contpmd(pmdp);
>> + /*
>> + * PMD: If addr is PMD aligned then addr already describes a
>> + * leaf boundary. Otherwise, split to contpte.
>> + */
>> + if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
>> + goto out;
>> + ret = split_pmd(pmdp, pmd);
>> + if (ret)
>> + goto out;
>> + }
>> +
>> + /*
>> + * CONTPTE: If addr is CONTPTE aligned then addr already describes a
>> + * leaf boundary. If not present then there is nothing to split.
>> + * Otherwise, if we have a contpte leaf, split to pte.
>> + */
>> + if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
>> + goto out;
>> + ptep = pte_offset_kernel(pmdp, addr);
>> + pte = __ptep_get(ptep);
>> + if (!pte_present(pte))
>> + goto out;
>> + if (pte_cont(pte))
>> + split_contpte(ptep);
>> +
>> +out:
>> + arch_leave_lazy_mmu_mode();
>> + return ret;
>> +}
>> ---8<---
>>
>> Thanks,
>> Ryan
>>
>
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-06-23 13:26 ` Ryan Roberts
@ 2025-06-23 19:12 ` Yang Shi
2025-06-26 22:39 ` Yang Shi
2025-07-23 17:38 ` Dev Jain
2 siblings, 0 replies; 33+ messages in thread
From: Yang Shi @ 2025-06-23 19:12 UTC (permalink / raw)
To: Ryan Roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 6/23/25 6:26 AM, Ryan Roberts wrote:
> [...]
>
>>> +
>>> +int split_leaf_mapping(unsigned long addr)
>> Thanks for coming up with the code. It does help to understand your idea. Now I
>> see why you suggested "split_mapping(start); split_mapping(end);" model. It does
>> make the implementation easier because we don't need a loop anymore. But this
>> may have a couple of problems:
>> 1. We need walk the page table twice instead of once. It sounds expensive.
> Yes we need to walk twice. That may be more expensive or less expensive,
> depending on the size of the range that you are splitting. If the range is large
> then your approach loops through every leaf mapping between the start and end
> which will be more expensive than just doing 2 walks. If the range is small then
> your approach can avoid the second walk, but at the expense of all the extra
> loop overhead.
Yes, it depends on the page table layout (the more fragmented, the more
loads) and the range passed in by the callers. But AFAICT, most existing
callers just try to change permissions on a per-page basis. I know you
are looking at adding more block/cont mapping support for vmalloc, but
will the large range case dominate?
>
> My suggestion requires 5 loads (assuming the maximum of 5 levels of lookup).
> Personally I think this is probably acceptable? Perhaps we need some other
> voices here.
Doesn't it require 10 loads for both start and end together? The 5 loads
for end may be fast since they are likely cached if they fall into the
same PGD/P4D/PUD/PMD.
>
>
>> 2. How should we handle repainting? We need split all the page tables all the
>> way down to PTE for repainting between start and end rather than keeping block
>> mappings. This model doesn't work, right? For example, repaint a 2G block. The
>> first 1G is mapped by a PUD, the second 1G is mapped by 511 PMD and 512 PTEs.
>> split_mapping(start) will split the first 1G, but split_mapping(end) will do
>> nothing, the 511 PMDs are kept intact. In addition, I think we also prefer reuse
>> the split primitive for repainting instead of inventing another one.
> I agree my approach doesn't work for the repainting case. But I think what I'm
> trying to say is that the 2 things are different operations;
> split_leaf_mapping() is just trying to ensure that the start and end of a ragion
> are on leaf boundaries. Repainting is trying to ensure that all leaf mappings
> within a range are PTE-size. I've implemented the former and you've implemented
> that latter. Your implementation looks like meets the former's requirements
> because you are only testing it for the case where the range is 1 page. But
> actually it is splitting everything in the range to PTEs.
I can understand why you see them as two different operations. And
repainting is basically a one-off thing. However, they share a lot of
common logic (for example, allocating page tables, populating new page
table entries, etc.) from a code point of view. From this perspective,
repainting is just a special case of split (no block and cont mappings).
If we implement them separately, I can see there will be a lot of
duplicate code. I'm not sure whether that is preferred or not.
Thanks,
Yang
>
> Thanks,
> Ryan
>
>> Thanks,
>> Yang
>>
>>> +{
>>> + pgd_t *pgdp, pgd;
>>> + p4d_t *p4dp, p4d;
>>> + pud_t *pudp, pud;
>>> + pmd_t *pmdp, pmd;
>>> + pte_t *ptep, pte;
>>> + int ret = 0;
>>> +
>>> + /*
>>> + * !BBML2_NOABORT systems should not be trying to change permissions on
>>> + * anything that is not pte-mapped in the first place. Just return early
>>> + * and let the permission change code raise a warning if not already
>>> + * pte-mapped.
>>> + */
>>> + if (!system_supports_bbml2_noabort())
>>> + return 0;
>>> +
>>> + /*
>>> + * Ensure addr is at least page-aligned since this is the finest
>>> + * granularity we can split to.
>>> + */
>>> + if (addr != PAGE_ALIGN(addr))
>>> + return -EINVAL;
>>> +
>>> + arch_enter_lazy_mmu_mode();
>>> +
>>> + /*
>>> + * PGD: If addr is PGD aligned then addr already describes a leaf
>>> + * boundary. If not present then there is nothing to split.
>>> + */
>>> + if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
>>> + goto out;
>>> + pgdp = pgd_offset_k(addr);
>>> + pgd = pgdp_get(pgdp);
>>> + if (!pgd_present(pgd))
>>> + goto out;
>>> +
>>> + /*
>>> + * P4D: If addr is P4D aligned then addr already describes a leaf
>>> + * boundary. If not present then there is nothing to split.
>>> + */
>>> + if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
>>> + goto out;
>>> + p4dp = p4d_offset(pgdp, addr);
>>> + p4d = p4dp_get(p4dp);
>>> + if (!p4d_present(p4d))
>>> + goto out;
>>> +
>>> + /*
>>> + * PUD: If addr is PUD aligned then addr already describes a leaf
>>> + * boundary. If not present then there is nothing to split. Otherwise,
>>> + * if we have a pud leaf, split to contpmd.
>>> + */
>>> + if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
>>> + goto out;
>>> + pudp = pud_offset(p4dp, addr);
>>> + pud = pudp_get(pudp);
>>> + if (!pud_present(pud))
>>> + goto out;
>>> + if (pud_leaf(pud)) {
>>> + ret = split_pud(pudp, pud);
>>> + if (ret)
>>> + goto out;
>>> + }
>>> +
>>> + /*
>>> + * CONTPMD: If addr is CONTPMD aligned then addr already describes a
>>> + * leaf boundary. If not present then there is nothing to split.
>>> + * Otherwise, if we have a contpmd leaf, split to pmd.
>>> + */
>>> + if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
>>> + goto out;
>>> + pmdp = pmd_offset(pudp, addr);
>>> + pmd = pmdp_get(pmdp);
>>> + if (!pmd_present(pmd))
>>> + goto out;
>>> + if (pmd_leaf(pmd)) {
>>> + if (pmd_cont(pmd))
>>> + split_contpmd(pmdp);
>>> + /*
>>> + * PMD: If addr is PMD aligned then addr already describes a
>>> + * leaf boundary. Otherwise, split to contpte.
>>> + */
>>> + if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
>>> + goto out;
>>> + ret = split_pmd(pmdp, pmd);
>>> + if (ret)
>>> + goto out;
>>> + }
>>> +
>>> + /*
>>> + * CONTPTE: If addr is CONTPTE aligned then addr already describes a
>>> + * leaf boundary. If not present then there is nothing to split.
>>> + * Otherwise, if we have a contpte leaf, split to pte.
>>> + */
>>> + if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
>>> + goto out;
>>> + ptep = pte_offset_kernel(pmdp, addr);
>>> + pte = __ptep_get(ptep);
>>> + if (!pte_present(pte))
>>> + goto out;
>>> + if (pte_cont(pte))
>>> + split_contpte(ptep);
>>> +
>>> +out:
>>> + arch_leave_lazy_mmu_mode();
>>> + return ret;
>>> +}
>>> ---8<---
>>>
>>> Thanks,
>>> Ryan
>>>
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-06-23 13:26 ` Ryan Roberts
2025-06-23 19:12 ` Yang Shi
@ 2025-06-26 22:39 ` Yang Shi
2025-07-23 17:38 ` Dev Jain
2 siblings, 0 replies; 33+ messages in thread
From: Yang Shi @ 2025-06-26 22:39 UTC (permalink / raw)
To: Ryan Roberts, will, catalin.marinas, Miko.Lenczewski, dev.jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 6/23/25 6:26 AM, Ryan Roberts wrote:
> [...]
>
>>> +
>>> +int split_leaf_mapping(unsigned long addr)
>> Thanks for coming up with the code. It does help to understand your idea. Now I
>> see why you suggested "split_mapping(start); split_mapping(end);" model. It does
>> make the implementation easier because we don't need a loop anymore. But this
>> may have a couple of problems:
>> 1. We need walk the page table twice instead of once. It sounds expensive.
> Yes we need to walk twice. That may be more expensive or less expensive,
> depending on the size of the range that you are splitting. If the range is large
> then your approach loops through every leaf mapping between the start and end
> which will be more expensive than just doing 2 walks. If the range is small then
> your approach can avoid the second walk, but at the expense of all the extra
> loop overhead.
Thinking about this further: although there is some extra loop overhead,
there should be no extra loads. We can check whether the start and end
are properly aligned; if they are aligned, we just continue the loop
without loading the page table entry.
And we can optimize the loop by advancing by multiple PUD/PMD/CONT sizes
at a time instead of one at a time. The pseudo code (for example, at the
pmd level) looks like:
do {
        next = pmd_addr_end(start, end);
        if (next < end)
                nr = ((end - next) / PMD_SIZE) + 1;
        if (((start | next) & ~PMD_MASK) == 0)
                continue;
        split_pmd(start, next);
} while (pmdp += nr, start = next * nr, start != end)
For the repainting case, we just need to do:
do {
        nr = 1;
        next = pmd_addr_end(start, end);
        if (next < end && !repainting)
                nr = ((end - next) / PMD_SIZE) + 1;
        if (((start | next) & ~PMD_MASK) == 0 && !repainting)
                continue;
        split_pmd(start, next);
} while (pmdp += nr, start = next * nr, start != end)
This should reduce the loop overhead and avoid duplicating code for
repainting.
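For what it's worth, here is a small self-contained user-space model of
the skipping logic (plain integers rather than real page table entries;
a 2M PMD size is assumed and walk_pmd_range() is just an illustrative
name):

#include <stdint.h>
#include <stdio.h>

#define PMD_SHIFT       21
#define PMD_SIZE        (1ULL << PMD_SHIFT)
#define PMD_MASK        (~(PMD_SIZE - 1))

/*
 * Walk [start, end) in PMD-sized steps. Entries that the range only
 * partially covers must be split; entries that are fully covered can be
 * kept as block mappings (unless we are repainting), and we jump over
 * all of them in one step instead of visiting each one.
 */
static void walk_pmd_range(uint64_t start, uint64_t end, int repaint)
{
        uint64_t next;

        do {
                /* end of the current entry, clamped to the range end */
                next = (start & PMD_MASK) + PMD_SIZE;
                if (next > end)
                        next = end;

                if (((start | next) & ~PMD_MASK) == 0 && !repaint) {
                        /* Fully covered: skip ahead to the last, possibly
                         * partial, entry of the range. */
                        uint64_t last = end & PMD_MASK;

                        if (last > next)
                                next = last;
                } else {
                        printf("split entry at 0x%llx\n",
                               (unsigned long long)(start & PMD_MASK));
                }
                start = next;
        } while (start != end);
}

int main(void)
{
        /* Range starting and ending mid-PMD: only the two edge entries
         * are reported, the interior blocks are skipped in one step. */
        walk_pmd_range(0x201000, 0x802000, 0);
        return 0;
}

Running it for [0x201000, 0x802000) reports only the entries at 0x200000
and 0x800000; with repaint=1 it would report every entry in the range,
which matches the intended difference between the two cases.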
Thanks,
Yang
>
> My suggestion requires 5 loads (assuming the maximum of 5 levels of lookup).
> Personally I think this is probably acceptable? Perhaps we need some other
> voices here.
>
>
>> 2. How should we handle repainting? We need split all the page tables all the
>> way down to PTE for repainting between start and end rather than keeping block
>> mappings. This model doesn't work, right? For example, repaint a 2G block. The
>> first 1G is mapped by a PUD, the second 1G is mapped by 511 PMD and 512 PTEs.
>> split_mapping(start) will split the first 1G, but split_mapping(end) will do
>> nothing, the 511 PMDs are kept intact. In addition, I think we also prefer reuse
>> the split primitive for repainting instead of inventing another one.
> I agree my approach doesn't work for the repainting case. But I think what I'm
> trying to say is that the 2 things are different operations;
> split_leaf_mapping() is just trying to ensure that the start and end of a ragion
> are on leaf boundaries. Repainting is trying to ensure that all leaf mappings
> within a range are PTE-size. I've implemented the former and you've implemented
> that latter. Your implementation looks like meets the former's requirements
> because you are only testing it for the case where the range is 1 page. But
> actually it is splitting everything in the range to PTEs.
>
> Thanks,
> Ryan
>
>> Thanks,
>> Yang
>>
>>> +{
>>> + pgd_t *pgdp, pgd;
>>> + p4d_t *p4dp, p4d;
>>> + pud_t *pudp, pud;
>>> + pmd_t *pmdp, pmd;
>>> + pte_t *ptep, pte;
>>> + int ret = 0;
>>> +
>>> + /*
>>> + * !BBML2_NOABORT systems should not be trying to change permissions on
>>> + * anything that is not pte-mapped in the first place. Just return early
>>> + * and let the permission change code raise a warning if not already
>>> + * pte-mapped.
>>> + */
>>> + if (!system_supports_bbml2_noabort())
>>> + return 0;
>>> +
>>> + /*
>>> + * Ensure addr is at least page-aligned since this is the finest
>>> + * granularity we can split to.
>>> + */
>>> + if (addr != PAGE_ALIGN(addr))
>>> + return -EINVAL;
>>> +
>>> + arch_enter_lazy_mmu_mode();
>>> +
>>> + /*
>>> + * PGD: If addr is PGD aligned then addr already describes a leaf
>>> + * boundary. If not present then there is nothing to split.
>>> + */
>>> + if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
>>> + goto out;
>>> + pgdp = pgd_offset_k(addr);
>>> + pgd = pgdp_get(pgdp);
>>> + if (!pgd_present(pgd))
>>> + goto out;
>>> +
>>> + /*
>>> + * P4D: If addr is P4D aligned then addr already describes a leaf
>>> + * boundary. If not present then there is nothing to split.
>>> + */
>>> + if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
>>> + goto out;
>>> + p4dp = p4d_offset(pgdp, addr);
>>> + p4d = p4dp_get(p4dp);
>>> + if (!p4d_present(p4d))
>>> + goto out;
>>> +
>>> + /*
>>> + * PUD: If addr is PUD aligned then addr already describes a leaf
>>> + * boundary. If not present then there is nothing to split. Otherwise,
>>> + * if we have a pud leaf, split to contpmd.
>>> + */
>>> + if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
>>> + goto out;
>>> + pudp = pud_offset(p4dp, addr);
>>> + pud = pudp_get(pudp);
>>> + if (!pud_present(pud))
>>> + goto out;
>>> + if (pud_leaf(pud)) {
>>> + ret = split_pud(pudp, pud);
>>> + if (ret)
>>> + goto out;
>>> + }
>>> +
>>> + /*
>>> + * CONTPMD: If addr is CONTPMD aligned then addr already describes a
>>> + * leaf boundary. If not present then there is nothing to split.
>>> + * Otherwise, if we have a contpmd leaf, split to pmd.
>>> + */
>>> + if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
>>> + goto out;
>>> + pmdp = pmd_offset(pudp, addr);
>>> + pmd = pmdp_get(pmdp);
>>> + if (!pmd_present(pmd))
>>> + goto out;
>>> + if (pmd_leaf(pmd)) {
>>> + if (pmd_cont(pmd))
>>> + split_contpmd(pmdp);
>>> + /*
>>> + * PMD: If addr is PMD aligned then addr already describes a
>>> + * leaf boundary. Otherwise, split to contpte.
>>> + */
>>> + if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
>>> + goto out;
>>> + ret = split_pmd(pmdp, pmd);
>>> + if (ret)
>>> + goto out;
>>> + }
>>> +
>>> + /*
>>> + * CONTPTE: If addr is CONTPTE aligned then addr already describes a
>>> + * leaf boundary. If not present then there is nothing to split.
>>> + * Otherwise, if we have a contpte leaf, split to pte.
>>> + */
>>> + if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
>>> + goto out;
>>> + ptep = pte_offset_kernel(pmdp, addr);
>>> + pte = __ptep_get(ptep);
>>> + if (!pte_present(pte))
>>> + goto out;
>>> + if (pte_cont(pte))
>>> + split_contpte(ptep);
>>> +
>>> +out:
>>> + arch_leave_lazy_mmu_mode();
>>> + return ret;
>>> +}
>>> ---8<---
>>>
>>> Thanks,
>>> Ryan
>>>
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-06-23 13:26 ` Ryan Roberts
2025-06-23 19:12 ` Yang Shi
2025-06-26 22:39 ` Yang Shi
@ 2025-07-23 17:38 ` Dev Jain
2025-07-23 20:51 ` Yang Shi
2 siblings, 1 reply; 33+ messages in thread
From: Dev Jain @ 2025-07-23 17:38 UTC (permalink / raw)
To: Ryan Roberts, Yang Shi, will, catalin.marinas, Miko.Lenczewski,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 23/06/25 6:56 pm, Ryan Roberts wrote:
> [...]
>
>>> +
>>> +int split_leaf_mapping(unsigned long addr)
>> Thanks for coming up with the code. It does help to understand your idea. Now I
>> see why you suggested "split_mapping(start); split_mapping(end);" model. It does
>> make the implementation easier because we don't need a loop anymore. But this
>> may have a couple of problems:
>> 1. We need walk the page table twice instead of once. It sounds expensive.
> Yes we need to walk twice. That may be more expensive or less expensive,
> depending on the size of the range that you are splitting. If the range is large
> then your approach loops through every leaf mapping between the start and end
> which will be more expensive than just doing 2 walks. If the range is small then
> your approach can avoid the second walk, but at the expense of all the extra
> loop overhead.
>
> My suggestion requires 5 loads (assuming the maximum of 5 levels of lookup).
> Personally I think this is probably acceptable? Perhaps we need some other
> voices here.
Hello all,
I am starting to implement vmalloc-huge by default with BBML2 no-abort on arm64.
I see that there is some disagreement related to the way the splitting needs to
be implemented - I skimmed through the discussions and it will require some work
to understand what is going on :) hopefully I'll be back soon to give some of
my opinions.
>
>
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-07-23 17:38 ` Dev Jain
@ 2025-07-23 20:51 ` Yang Shi
2025-07-24 11:43 ` Dev Jain
0 siblings, 1 reply; 33+ messages in thread
From: Yang Shi @ 2025-07-23 20:51 UTC (permalink / raw)
To: Dev Jain, Ryan Roberts, will, catalin.marinas, Miko.Lenczewski,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 7/23/25 10:38 AM, Dev Jain wrote:
>
> On 23/06/25 6:56 pm, Ryan Roberts wrote:
>> [...]
>>
>>>> +
>>>> +int split_leaf_mapping(unsigned long addr)
>>> Thanks for coming up with the code. It does help to understand your
>>> idea. Now I
>>> see why you suggested "split_mapping(start); split_mapping(end);"
>>> model. It does
>>> make the implementation easier because we don't need a loop anymore.
>>> But this
>>> may have a couple of problems:
>>> 1. We need walk the page table twice instead of once. It sounds
>>> expensive.
>> Yes we need to walk twice. That may be more expensive or less expensive,
>> depending on the size of the range that you are splitting. If the
>> range is large
>> then your approach loops through every leaf mapping between the start
>> and end
>> which will be more expensive than just doing 2 walks. If the range is
>> small then
>> your approach can avoid the second walk, but at the expense of all
>> the extra
>> loop overhead.
>>
>> My suggestion requires 5 loads (assuming the maximum of 5 levels of
>> lookup).
>> Personally I think this is probably acceptable? Perhaps we need some
>> other
>> voices here.
>
> Hello all,
>
> I am starting to implement vmalloc-huge by default with BBML2 no-abort
> on arm64.
> I see that there is some disagreement related to the way the splitting
> needs to
> be implemented - I skimmed through the discussions and it will require
> some work
> to understand what is going on :) hopefully I'll be back soon to give
> some of
> my opinions.
Hi Dev,
Thanks for the heads up.
In the last email I suggested skipping the leaf mappings in the split
range in order to reduce the page table walk overhead for
split_mapping(start, end). In this way we can achieve:
- reusing most of the split code for repainting (repainting just needs
the NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS flags to split the page
table down to PTEs)
- walking the page table just once
- similar page table walk overhead to
split_mapping(start)/split_mapping(end) if the split range is large
I'm basically done with a new spin that implements this and addresses
all the review comments from v4. I should be able to post it by the
end of this week.
Regards,
Yang
>
>>
>>
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-07-23 20:51 ` Yang Shi
@ 2025-07-24 11:43 ` Dev Jain
2025-07-24 17:59 ` Yang Shi
0 siblings, 1 reply; 33+ messages in thread
From: Dev Jain @ 2025-07-24 11:43 UTC (permalink / raw)
To: Yang Shi, Ryan Roberts, will, catalin.marinas, Miko.Lenczewski,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 24/07/25 2:21 am, Yang Shi wrote:
>
>
> On 7/23/25 10:38 AM, Dev Jain wrote:
>>
>> On 23/06/25 6:56 pm, Ryan Roberts wrote:
>>> [...]
>>>
>>>>> +
>>>>> +int split_leaf_mapping(unsigned long addr)
>>>> Thanks for coming up with the code. It does help to understand your
>>>> idea. Now I
>>>> see why you suggested "split_mapping(start); split_mapping(end);"
>>>> model. It does
>>>> make the implementation easier because we don't need a loop
>>>> anymore. But this
>>>> may have a couple of problems:
>>>> 1. We need walk the page table twice instead of once. It sounds
>>>> expensive.
>>> Yes we need to walk twice. That may be more expensive or less
>>> expensive,
>>> depending on the size of the range that you are splitting. If the
>>> range is large
>>> then your approach loops through every leaf mapping between the
>>> start and end
>>> which will be more expensive than just doing 2 walks. If the range
>>> is small then
>>> your approach can avoid the second walk, but at the expense of all
>>> the extra
>>> loop overhead.
>>>
>>> My suggestion requires 5 loads (assuming the maximum of 5 levels of
>>> lookup).
>>> Personally I think this is probably acceptable? Perhaps we need some
>>> other
>>> voices here.
>>
>> Hello all,
>>
>> I am starting to implement vmalloc-huge by default with BBML2
>> no-abort on arm64.
>> I see that there is some disagreement related to the way the
>> splitting needs to
>> be implemented - I skimmed through the discussions and it will
>> require some work
>> to understand what is going on :) hopefully I'll be back soon to give
>> some of
>> my opinions.
>
> Hi Dev,
>
> Thanks for the heads up.
>
> In the last email I suggested skip the leaf mappings in the split
> range in order to reduce page table walk overhead for
> split_mapping(start, end). In this way we can achieve:
> - reuse the most split code for repainting (just need
> NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS flag for repainting to split page
> table to PTEs)
> - just walk page table once
> - have similar page table walk overhead with
> split_mapping(start)/split_mapping(end) if the split range is large
>
> I'm basically done on a new spin to implement it and solve all the
> review comments from v4. I should be able to post the new spin by the
> end of this week.
Great! As Catalin noted on my huge-perm change series, that series
doesn't have any user, so it does not make sense for it to go in without
your series - can you merge that series into your work for the new
version?
>
> Regards,
> Yang
>
>>
>>>
>>>
>
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-07-24 11:43 ` Dev Jain
@ 2025-07-24 17:59 ` Yang Shi
0 siblings, 0 replies; 33+ messages in thread
From: Yang Shi @ 2025-07-24 17:59 UTC (permalink / raw)
To: Dev Jain, Ryan Roberts, will, catalin.marinas, Miko.Lenczewski,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 7/24/25 4:43 AM, Dev Jain wrote:
>
> On 24/07/25 2:21 am, Yang Shi wrote:
>>
>>
>> On 7/23/25 10:38 AM, Dev Jain wrote:
>>>
>>> On 23/06/25 6:56 pm, Ryan Roberts wrote:
>>>> [...]
>>>>
>>>>>> +
>>>>>> +int split_leaf_mapping(unsigned long addr)
>>>>> Thanks for coming up with the code. It does help to understand
>>>>> your idea. Now I
>>>>> see why you suggested "split_mapping(start); split_mapping(end);"
>>>>> model. It does
>>>>> make the implementation easier because we don't need a loop
>>>>> anymore. But this
>>>>> may have a couple of problems:
>>>>> 1. We need walk the page table twice instead of once. It sounds
>>>>> expensive.
>>>> Yes we need to walk twice. That may be more expensive or less
>>>> expensive,
>>>> depending on the size of the range that you are splitting. If the
>>>> range is large
>>>> then your approach loops through every leaf mapping between the
>>>> start and end
>>>> which will be more expensive than just doing 2 walks. If the range
>>>> is small then
>>>> your approach can avoid the second walk, but at the expense of all
>>>> the extra
>>>> loop overhead.
>>>>
>>>> My suggestion requires 5 loads (assuming the maximum of 5 levels of
>>>> lookup).
>>>> Personally I think this is probably acceptable? Perhaps we need
>>>> some other
>>>> voices here.
>>>
>>> Hello all,
>>>
>>> I am starting to implement vmalloc-huge by default with BBML2
>>> no-abort on arm64.
>>> I see that there is some disagreement related to the way the
>>> splitting needs to
>>> be implemented - I skimmed through the discussions and it will
>>> require some work
>>> to understand what is going on :) hopefully I'll be back soon to
>>> give some of
>>> my opinions.
>>
>> Hi Dev,
>>
>> Thanks for the heads up.
>>
>> In the last email I suggested skip the leaf mappings in the split
>> range in order to reduce page table walk overhead for
>> split_mapping(start, end). In this way we can achieve:
>> - reuse the most split code for repainting (just need
>> NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS flag for repainting to split
>> page table to PTEs)
>> - just walk page table once
>> - have similar page table walk overhead with
>> split_mapping(start)/split_mapping(end) if the split range is large
>>
>> I'm basically done on a new spin to implement it and solve all the
>> review comments from v4. I should be able to post the new spin by the
>> end of this week.
>
> Great! As Catalin notes on my huge-perm change series, that series
> doesn't have any user so it does not make sense for that to go in
> without your
>
> series - can you merge that series into your work for the new version?
Yeah, sure. But I saw Andrew had some nits on the generic mm code part.
The other way is to have your patch applied on top of my series so that
we don't block each other. Anyway, I will still keep it as a
prerequisite of my series for now.
Yang
>
>
>>
>> Regards,
>> Yang
>>
>>>
>>>>
>>>>
>>
* [v5 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
@ 2025-07-24 22:11 Yang Shi
2025-07-24 22:11 ` [PATCH 1/4] arm64: Enable permission change on arm64 kernel block mappings Yang Shi
` (3 more replies)
0 siblings, 4 replies; 33+ messages in thread
From: Yang Shi @ 2025-07-24 22:11 UTC (permalink / raw)
To: ryan.roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
dev.jain, scott, cl
Cc: yang, linux-arm-kernel, linux-kernel
Changelog
=========
v5:
* Rebased to v6.16 rc.
* Based on Miko's v8 BBML2 cpufeature patch (https://lore.kernel.org/lkml/20250625113435.26849-3-miko.lenczewski@arm.com/).
* Based on Dev's "v4 enable permission change on kernel block mapping" patch (https://lore.kernel.org/lkml/20250703151441.60325-1-dev.jain@arm.com/),
also included this patch in this series per the suggestion from Catalin.
Andrew had some nits on the generic mm part for that patch, but I didn't change anything.
* Implemented split in dedicated function per Ryan.
* Dropped "make __create_pgd_mapping() and helpers non-void" patch. It is not
needed anymore.
* Optimized page table walk to reduce loop overhead if the range spans
multiple leaf mappings.
* Used dedicated lock to serialize split instead of init_mm's mmap_lock
per Ryan.
* Allowed split page table for kfence and realm per Ryan.
* Used lazy mmu to reduce unnecessary barriers per Ryan.
* Removed unnecessary TLB flush per Ryan.
* Cleaned up "wait for boot cpu" logic for repainting per Ryan.
* Misc clean up.
v4:
* Rebased to v6.15-rc4.
* Based on Miko's latest BBML2 cpufeature patch (https://lore.kernel.org/linux-arm-kernel/20250428153514.55772-4-miko.lenczewski@arm.com/).
* Keep block mappings rather than splitting to PTEs if it is fully contained
per Ryan.
* Return -EINVAL if page table allocation failed instead of BUG_ON per Ryan.
* When page table allocation failed, return -1 instead of 0 per Ryan.
* Allocate page table with GFP_ATOMIC for repainting per Ryan.
* Use idmap to wait for repainting is done per Ryan.
* Some minor fixes per the discussion for v3.
* Some clean up to reduce redundant code.
v3:
* Rebased to v6.14-rc4.
* Based on Miko's BBML2 cpufeature patch (https://lore.kernel.org/linux-arm-kernel/20250228182403.6269-3-miko.lenczewski@arm.com/).
Also included in this series in order to have the complete patchset.
* Enhanced __create_pgd_mapping() to handle split as well per Ryan.
* Supported CONT mappings per Ryan.
* Supported asymmetric system by splitting kernel linear mapping if such
system is detected per Ryan. I don't have such system to test, so the
testing is done by hacking kernel to call linear mapping repainting
unconditionally. The linear mapping doesn't have any block and cont
mappings after booting.
RFC v2:
* Used allowlist to advertise BBM lv2 on the CPUs which can handle TLB
conflict gracefully per Will Deacon
* Rebased onto v6.13-rc5
* https://lore.kernel.org/linux-arm-kernel/20250103011822.1257189-1-yang@os.amperecomputing.com/
v4: https://lore.kernel.org/lkml/20250531024545.1101304-1-yang@os.amperecomputing.com/
v3: https://lore.kernel.org/linux-arm-kernel/20250304222018.615808-1-yang@os.amperecomputing.com/
RFC v2: https://lore.kernel.org/linux-arm-kernel/20250103011822.1257189-1-yang@os.amperecomputing.com/
RFC v1: https://lore.kernel.org/lkml/20241118181711.962576-1-yang@os.amperecomputing.com/
Description
===========
When rodata=full, the kernel linear mapping is mapped at PTE level due
to arm's break-before-make rule.
A number of performance issues arise when the kernel linear map is using
PTE entries:
- performance degradation
- more TLB pressure
- memory waste for kernel page table
These issues can be avoided by specifying rodata=on on the kernel
command line, but this disables the alias checks on page table
permissions and therefore compromises security somewhat.
With FEAT_BBM level 2 support it is no longer necessary to invalidate the
page table entry when changing page sizes. This allows the kernel to
split large mappings after boot is complete.
This patch adds support for splitting large mappings when FEAT_BBM level 2
is available and rodata=full is used. This functionality will be used
when modifying page permissions for individual page frames.
Without FEAT_BBM level 2 we will keep the kernel linear map using PTEs
only.
If the system is asymmetric, the kernel linear mapping may be repainted once
the BBML2 capability is finalized on all CPUs. See patch #4 for more details.
We saw significant performance increases in some benchmarks with
rodata=full without compromising the security features of the kernel.
Testing
=======
The tests were done on an AmpereOne machine (192 cores, 1P) with 256GB
memory and 4K page size + 48-bit VA.
Function test (4K/16K/64K page size)
- Kernel boot. The kernel needs to change kernel linear mapping
permissions at boot stage; if the patch didn't work, the kernel
typically didn't boot.
- Module stress from stress-ng. Kernel module loading changes permissions
for the linear mapping.
- A test kernel module which allocates 80% of total memory via vmalloc(),
then changes the vmalloc area permission to RO (this also changes the
linear mapping permission to RO), then changes it back before vfree().
Then launch a VM which consumes almost all physical memory.
- VM with the patchset applied in the guest kernel too.
- Kernel build in a VM whose guest kernel has this series applied.
- rodata=on. Make sure the other rodata modes are not broken.
- Boot on a machine which doesn't support BBML2.
Performance
===========
Memory consumption
Before:
MemTotal: 258988984 kB
MemFree: 254821700 kB
After:
MemTotal: 259505132 kB
MemFree: 255410264 kB
Around 500MB more memory is free to use. The larger the machine, the
more memory is saved.
Performance benchmarking
* Memcached
We saw performance degradation when running Memcached benchmark with
rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
With this patchset we saw ops/sec is increased by around 3.5%, P99
latency is reduced by around 9.6%.
The gain mainly came from reduced kernel TLB misses. The kernel TLB
MPKI is reduced by 28.5%.
The benchmark data is now on par with rodata=on too.
* Disk encryption (dm-crypt) benchmark
Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
encryption (by dm-crypt with no read/write workqueue).
fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
--status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
--iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
--name=iops-test-job --eta-newline=1 --size 100G
The IOPS is increased by 90% - 150% (the variance is high, but the worst
number of good case is around 90% more than the best number of bad case).
The bandwidth is increased and the avg clat is reduced proportionally.
* Sequential file read
Read 100G file sequentially on XFS (xfs_io read with page cache populated).
The bandwidth is increased by 150%.
Dev Jain (1):
arm64: Enable permission change on arm64 kernel block mappings
Yang Shi (3):
arm64: cpufeature: add AmpereOne to BBML2 allow list
arm64: mm: support large block mapping when rodata=full
arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
arch/arm64/include/asm/cpufeature.h | 34 ++++++++++++++
arch/arm64/include/asm/mmu.h | 5 +++
arch/arm64/include/asm/pgtable.h | 5 +++
arch/arm64/kernel/cpufeature.c | 37 ++++-----------
arch/arm64/mm/mmu.c | 408 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++----
arch/arm64/mm/pageattr.c | 159 +++++++++++++++++++++++++++++++++++++++++++++++++++-------------
arch/arm64/mm/proc.S | 57 ++++++++++++++++-------
include/linux/pagewalk.h | 3 ++
mm/pagewalk.c | 24 ++++++++++
9 files changed, 648 insertions(+), 84 deletions(-)
* [PATCH 1/4] arm64: Enable permission change on arm64 kernel block mappings
2025-07-24 22:11 [v5 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
@ 2025-07-24 22:11 ` Yang Shi
2025-07-24 22:11 ` [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
` (2 subsequent siblings)
3 siblings, 0 replies; 33+ messages in thread
From: Yang Shi @ 2025-07-24 22:11 UTC (permalink / raw)
To: ryan.roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
dev.jain, scott, cl
Cc: yang, linux-arm-kernel, linux-kernel
From: Dev Jain <dev.jain@arm.com>
This patch paves the path to enable huge mappings in vmalloc space and
linear map space by default on arm64. For this we must ensure that we can
handle any permission games on the kernel (init_mm) pagetable. Currently,
__change_memory_common() uses apply_to_page_range() which does not support
changing permissions for block mappings. We attempt to move away from this
by using the pagewalk API, similar to what riscv does right now; however,
it is the responsibility of the caller to ensure that we do not pass a
range overlapping a partial block mapping or cont mapping; in such a case,
the system must be able to support range splitting.
This patch is tied with Yang Shi's attempt [1] at using huge mappings
in the linear mapping in case the system supports BBML2, in which case
we will be able to split the linear mapping if needed without
break-before-make. Thus, Yang's series, IIUC, will be one such user of my
patch; suppose we are changing permissions on a range of the linear map
backed by PMD-hugepages, then the sequence of operations should look
like the following:
split_range(start);
split_range(end);
__change_memory_common(start, end);
However, this patch can be used independently of Yang's; since currently
permission games are being played only on pte mappings (due to
apply_to_page_range not supporting otherwise), this patch provides the
mechanism for enabling huge mappings for various kernel mappings
like linear map and vmalloc.
---------------------
Implementation
---------------------
arm64 currently changes permissions on vmalloc objects locklessly, via
apply_to_page_range, whose limitation is to deny changing permissions for
block mappings. Therefore, we move away to use the generic pagewalk API,
thus paving the path for enabling huge mappings by default on kernel space
mappings, thus leading to more efficient TLB usage. However, the API
currently enforces the init_mm.mmap_lock to be held. To avoid the
unnecessary bottleneck of the mmap_lock for our usecase, this patch
extends this generic API to be used locklessly, so as to retain the
existing behaviour for changing permissions. Apart from this reason, it is
noted at [2] that KFENCE can manipulate kernel pgtable entries during
softirqs. It does this by calling set_memory_valid() -> __change_memory_common().
This being a non-sleepable context, we cannot take the init_mm mmap lock.
Add comments to highlight the conditions under which we can use the
lockless variant - no underlying VMA, and the user having exclusive control
over the range, thus guaranteeing no concurrent access.
We require that the start and end of a given range do not partially overlap
block mappings, or cont mappings. Return -EINVAL in case a partial block
mapping is detected in any of the PGD/P4D/PUD/PMD levels; add a
corresponding comment in update_range_prot() to warn that eliminating
such a condition is the responsibility of the caller.
Note that, the pte level callback may change permissions for a whole
contpte block, and that will be done one pte at a time, as opposed to
an atomic operation for the block mappings. This is fine as any access
will decode either the old or the new permission until the TLBI.
apply_to_page_range() currently performs all pte level callbacks while in
lazy mmu mode. Since arm64 can optimize performance by batching barriers
when modifying kernel pgtables in lazy mmu mode, we would like to continue
to benefit from this optimisation. Unfortunately walk_kernel_page_table_range()
does not use lazy mmu mode. However, since the pagewalk framework is not
allocating any memory, we can safely bracket the whole operation inside
lazy mmu mode ourselves. Therefore, wrap the call to
walk_kernel_page_table_range() with the lazy MMU helpers.
[1] https://lore.kernel.org/all/20250304222018.615808-1-yang@os.amperecomputing.com/
[2] https://lore.kernel.org/linux-arm-kernel/89d0ad18-4772-4d8f-ae8a-7c48d26a927e@arm.com/
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
arch/arm64/mm/pageattr.c | 155 +++++++++++++++++++++++++++++++--------
include/linux/pagewalk.h | 3 +
mm/pagewalk.c | 24 ++++++
3 files changed, 150 insertions(+), 32 deletions(-)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 04d4a8f676db..c6a85000fa0e 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -8,6 +8,7 @@
#include <linux/mem_encrypt.h>
#include <linux/sched.h>
#include <linux/vmalloc.h>
+#include <linux/pagewalk.h>
#include <asm/cacheflush.h>
#include <asm/pgtable-prot.h>
@@ -20,6 +21,99 @@ struct page_change_data {
pgprot_t clear_mask;
};
+static ptdesc_t set_pageattr_masks(ptdesc_t val, struct mm_walk *walk)
+{
+ struct page_change_data *masks = walk->private;
+
+ val &= ~(pgprot_val(masks->clear_mask));
+ val |= (pgprot_val(masks->set_mask));
+
+ return val;
+}
+
+static int pageattr_pgd_entry(pgd_t *pgd, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pgd_t val = pgdp_get(pgd);
+
+ if (pgd_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != PGDIR_SIZE))
+ return -EINVAL;
+ val = __pgd(set_pageattr_masks(pgd_val(val), walk));
+ set_pgd(pgd, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_p4d_entry(p4d_t *p4d, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ p4d_t val = p4dp_get(p4d);
+
+ if (p4d_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != P4D_SIZE))
+ return -EINVAL;
+ val = __p4d(set_pageattr_masks(p4d_val(val), walk));
+ set_p4d(p4d, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_pud_entry(pud_t *pud, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pud_t val = pudp_get(pud);
+
+ if (pud_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != PUD_SIZE))
+ return -EINVAL;
+ val = __pud(set_pageattr_masks(pud_val(val), walk));
+ set_pud(pud, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_pmd_entry(pmd_t *pmd, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pmd_t val = pmdp_get(pmd);
+
+ if (pmd_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != PMD_SIZE))
+ return -EINVAL;
+ val = __pmd(set_pageattr_masks(pmd_val(val), walk));
+ set_pmd(pmd, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_pte_entry(pte_t *pte, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pte_t val = __ptep_get(pte);
+
+ val = __pte(set_pageattr_masks(pte_val(val), walk));
+ __set_pte(pte, val);
+
+ return 0;
+}
+
+static const struct mm_walk_ops pageattr_ops = {
+ .pgd_entry = pageattr_pgd_entry,
+ .p4d_entry = pageattr_p4d_entry,
+ .pud_entry = pageattr_pud_entry,
+ .pmd_entry = pageattr_pmd_entry,
+ .pte_entry = pageattr_pte_entry,
+};
+
bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED);
bool can_set_direct_map(void)
@@ -37,33 +131,35 @@ bool can_set_direct_map(void)
arm64_kfence_can_set_direct_map() || is_realm_world();
}
-static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
+static int update_range_prot(unsigned long start, unsigned long size,
+ pgprot_t set_mask, pgprot_t clear_mask)
{
- struct page_change_data *cdata = data;
- pte_t pte = __ptep_get(ptep);
+ struct page_change_data data;
+ int ret;
- pte = clear_pte_bit(pte, cdata->clear_mask);
- pte = set_pte_bit(pte, cdata->set_mask);
+ data.set_mask = set_mask;
+ data.clear_mask = clear_mask;
- __set_pte(ptep, pte);
- return 0;
+ arch_enter_lazy_mmu_mode();
+
+ /*
+ * The caller must ensure that the range we are operating on does not
+ * partially overlap a block mapping, or a cont mapping. Any such case
+ * must be eliminated by splitting the mapping.
+ */
+ ret = walk_kernel_page_table_range_lockless(start, start + size,
+ &pageattr_ops, &data);
+ arch_leave_lazy_mmu_mode();
+
+ return ret;
}
-/*
- * This function assumes that the range is mapped with PAGE_SIZE pages.
- */
static int __change_memory_common(unsigned long start, unsigned long size,
- pgprot_t set_mask, pgprot_t clear_mask)
+ pgprot_t set_mask, pgprot_t clear_mask)
{
- struct page_change_data data;
int ret;
- data.set_mask = set_mask;
- data.clear_mask = clear_mask;
-
- ret = apply_to_page_range(&init_mm, start, size, change_page_range,
- &data);
-
+ ret = update_range_prot(start, size, set_mask, clear_mask);
/*
* If the memory is being made valid without changing any other bits
* then a TLBI isn't required as a non-valid entry cannot be cached in
@@ -71,6 +167,7 @@ static int __change_memory_common(unsigned long start, unsigned long size,
*/
if (pgprot_val(set_mask) != PTE_VALID || pgprot_val(clear_mask))
flush_tlb_kernel_range(start, start + size);
+
return ret;
}
@@ -174,32 +271,26 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
int set_direct_map_invalid_noflush(struct page *page)
{
- struct page_change_data data = {
- .set_mask = __pgprot(0),
- .clear_mask = __pgprot(PTE_VALID),
- };
+ pgprot_t clear_mask = __pgprot(PTE_VALID);
+ pgprot_t set_mask = __pgprot(0);
if (!can_set_direct_map())
return 0;
- return apply_to_page_range(&init_mm,
- (unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ return update_range_prot((unsigned long)page_address(page),
+ PAGE_SIZE, set_mask, clear_mask);
}
int set_direct_map_default_noflush(struct page *page)
{
- struct page_change_data data = {
- .set_mask = __pgprot(PTE_VALID | PTE_WRITE),
- .clear_mask = __pgprot(PTE_RDONLY),
- };
+ pgprot_t set_mask = __pgprot(PTE_VALID | PTE_WRITE);
+ pgprot_t clear_mask = __pgprot(PTE_RDONLY);
if (!can_set_direct_map())
return 0;
- return apply_to_page_range(&init_mm,
- (unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ return update_range_prot((unsigned long)page_address(page),
+ PAGE_SIZE, set_mask, clear_mask);
}
static int __set_memory_enc_dec(unsigned long addr,
diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index 682472c15495..8212e8f2d2d5 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -134,6 +134,9 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
int walk_kernel_page_table_range(unsigned long start,
unsigned long end, const struct mm_walk_ops *ops,
pgd_t *pgd, void *private);
+int walk_kernel_page_table_range_lockless(unsigned long start,
+ unsigned long end, const struct mm_walk_ops *ops,
+ void *private);
int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
unsigned long end, const struct mm_walk_ops *ops,
void *private);
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 648038247a8d..18a675ab87cf 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -633,6 +633,30 @@ int walk_kernel_page_table_range(unsigned long start, unsigned long end,
return walk_pgd_range(start, end, &walk);
}
+/*
+ * Use this function to walk the kernel page tables locklessly. The caller
+ * must have exclusive access to the range being operated on, i.e. there
+ * must be no concurrent access - for example, when changing permissions
+ * for vmalloc objects.
+ */
+int walk_kernel_page_table_range_lockless(unsigned long start, unsigned long end,
+ const struct mm_walk_ops *ops, void *private)
+{
+ struct mm_walk walk = {
+ .ops = ops,
+ .mm = &init_mm,
+ .private = private,
+ .no_vma = true
+ };
+
+ if (start >= end)
+ return -EINVAL;
+ if (!check_ops_valid(ops))
+ return -EINVAL;
+
+ return walk_pgd_range(start, end, &walk);
+}
+
/**
* walk_page_range_debug - walk a range of pagetables not backed by a vma
* @mm: mm_struct representing the target process of page table walk
--
2.50.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list
2025-07-24 22:11 [v5 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
2025-07-24 22:11 ` [PATCH 1/4] arm64: Enable permission change on arm64 kernel block mappings Yang Shi
@ 2025-07-24 22:11 ` Yang Shi
2025-08-01 14:36 ` Ryan Roberts
2025-08-04 23:20 ` Christoph Lameter (Ampere)
2025-07-24 22:11 ` [PATCH 3/4] arm64: mm: support large block mapping when rodata=full Yang Shi
2025-07-24 22:11 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
3 siblings, 2 replies; 33+ messages in thread
From: Yang Shi @ 2025-07-24 22:11 UTC (permalink / raw)
To: ryan.roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
dev.jain, scott, cl
Cc: yang, linux-arm-kernel, linux-kernel
AmpereOne supports BBML2 without conflict abort, so add it to the allow list.
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
arch/arm64/kernel/cpufeature.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7a3c404288d5..d2a8a509a58e 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2230,6 +2230,8 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
static const struct midr_range supports_bbml2_noabort_list[] = {
MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
+ MIDR_ALL_VERSIONS(MIDR_AMPERE1),
+ MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
{}
};
--
2.50.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-07-24 22:11 [v5 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
2025-07-24 22:11 ` [PATCH 1/4] arm64: Enable permission change on arm64 kernel block mappings Yang Shi
2025-07-24 22:11 ` [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
@ 2025-07-24 22:11 ` Yang Shi
2025-07-29 12:34 ` Dev Jain
2025-08-01 14:35 ` Ryan Roberts
2025-07-24 22:11 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
3 siblings, 2 replies; 33+ messages in thread
From: Yang Shi @ 2025-07-24 22:11 UTC (permalink / raw)
To: ryan.roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
dev.jain, scott, cl
Cc: yang, linux-arm-kernel, linux-kernel
When rodata=full is specified, kernel linear mapping has to be mapped at
PTE level since large page table can't be split due to break-before-make
rule on ARM64.
This resulted in a couple of problems:
- performance degradation
- more TLB pressure
- memory waste for kernel page table
With FEAT_BBM level 2 support, splitting large block page table to
smaller ones doesn't need to make the page table entry invalid anymore.
This allows kernel split large block mapping on the fly.
Add kernel page table split support and use large block mapping by
default when FEAT_BBM level 2 is supported for rodata=full. When
changing permissions for kernel linear mapping, the page table will be
split to smaller size.
The machine without FEAT_BBM level 2 will fallback to have kernel linear
mapping PTE-mapped when rodata=full.
With this we saw significant performance boost with some benchmarks and
much less memory consumption on my AmpereOne machine (192 cores, 1P) with
256GB memory.
* Memory use after boot
Before:
MemTotal: 258988984 kB
MemFree: 254821700 kB
After:
MemTotal: 259505132 kB
MemFree: 255410264 kB
Around 500MB more memory are free to use. The larger the machine, the
more memory saved.
* Memcached
We saw performance degradation when running Memcached benchmark with
rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
With this patchset we saw ops/sec is increased by around 3.5%, P99
latency is reduced by around 9.6%.
The gain mainly came from reduced kernel TLB misses. The kernel TLB
MPKI is reduced by 28.5%.
The benchmark data is now on par with rodata=on too.
* Disk encryption (dm-crypt) benchmark
Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
encryption (by dm-crypt).
fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
--status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
--iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
--name=iops-test-job --eta-newline=1 --size 100G
The IOPS is increased by 90% - 150% (the variance is high, but the worst
number of good case is around 90% more than the best number of bad case).
The bandwidth is increased and the avg clat is reduced proportionally.
* Sequential file read
Read 100G file sequentially on XFS (xfs_io read with page cache populated).
The bandwidth is increased by 150%.
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
arch/arm64/include/asm/cpufeature.h | 34 ++++
arch/arm64/include/asm/mmu.h | 1 +
arch/arm64/include/asm/pgtable.h | 5 +
arch/arm64/kernel/cpufeature.c | 31 +--
arch/arm64/mm/mmu.c | 293 +++++++++++++++++++++++++++-
arch/arm64/mm/pageattr.c | 4 +
6 files changed, 333 insertions(+), 35 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index bf13d676aae2..d0d394cc837d 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -871,6 +871,40 @@ static inline bool system_supports_pmuv3(void)
return cpus_have_final_cap(ARM64_HAS_PMUV3);
}
+static inline bool bbml2_noabort_available(void)
+{
+ /*
+ * We want to allow usage of BBML2 in as wide a range of kernel contexts
+ * as possible. This list is therefore an allow-list of known-good
+ * implementations that both support BBML2 and additionally, fulfill the
+ * extra constraint of never generating TLB conflict aborts when using
+ * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
+ * kernel contexts difficult to prove safe against recursive aborts).
+ *
+ * Note that implementations can only be considered "known-good" if their
+ * implementors attest to the fact that the implementation never raises
+ * TLB conflict aborts for BBML2 mapping granularity changes.
+ */
+ static const struct midr_range supports_bbml2_noabort_list[] = {
+ MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
+ MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
+ MIDR_ALL_VERSIONS(MIDR_AMPERE1),
+ MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
+ {}
+ };
+
+ /* Does our cpu guarantee to never raise TLB conflict aborts? */
+ if (!is_midr_in_range_list(supports_bbml2_noabort_list))
+ return false;
+
+ /*
+ * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
+ * about whether the MIDR check passes.
+ */
+
+ return true;
+}
+
static inline bool system_supports_bbml2_noabort(void)
{
return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 6e8aa8e72601..57f4b25e6f33 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
pgprot_t prot, bool page_mappings_only);
extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
extern void mark_linear_text_alias_ro(void);
+extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);
/*
* This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index ba63c8736666..ad2a6a7e86b0 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -371,6 +371,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
}
+static inline pmd_t pmd_mknoncont(pmd_t pmd)
+{
+ return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
+}
+
#ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
static inline int pte_uffd_wp(pte_t pte)
{
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d2a8a509a58e..1c96016a7a41 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2215,36 +2215,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
{
- /*
- * We want to allow usage of BBML2 in as wide a range of kernel contexts
- * as possible. This list is therefore an allow-list of known-good
- * implementations that both support BBML2 and additionally, fulfill the
- * extra constraint of never generating TLB conflict aborts when using
- * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
- * kernel contexts difficult to prove safe against recursive aborts).
- *
- * Note that implementations can only be considered "known-good" if their
- * implementors attest to the fact that the implementation never raises
- * TLB conflict aborts for BBML2 mapping granularity changes.
- */
- static const struct midr_range supports_bbml2_noabort_list[] = {
- MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
- MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
- MIDR_ALL_VERSIONS(MIDR_AMPERE1),
- MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
- {}
- };
-
- /* Does our cpu guarantee to never raise TLB conflict aborts? */
- if (!is_midr_in_range_list(supports_bbml2_noabort_list))
- return false;
-
- /*
- * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
- * about whether the MIDR check passes.
- */
-
- return true;
+ return bbml2_noabort_available();
}
#ifdef CONFIG_ARM64_PAN
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 3d5fb37424ab..f63b39613571 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -480,6 +480,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
int flags);
#endif
+#define INVALID_PHYS_ADDR -1
+
static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
enum pgtable_type pgtable_type)
{
@@ -487,7 +489,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
phys_addr_t pa;
- BUG_ON(!ptdesc);
+ if (!ptdesc)
+ return INVALID_PHYS_ADDR;
+
pa = page_to_phys(ptdesc_page(ptdesc));
switch (pgtable_type) {
@@ -509,15 +513,29 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
}
static phys_addr_t __maybe_unused
-pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+split_pgtable_alloc(enum pgtable_type pgtable_type)
{
return __pgd_pgtable_alloc(&init_mm, pgtable_type);
}
+static phys_addr_t __maybe_unused
+pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+{
+ phys_addr_t pa;
+
+ pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ BUG_ON(pa == INVALID_PHYS_ADDR);
+ return pa;
+}
+
static phys_addr_t
pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
{
- return __pgd_pgtable_alloc(NULL, pgtable_type);
+ phys_addr_t pa;
+
+ pa = __pgd_pgtable_alloc(NULL, pgtable_type);
+ BUG_ON(pa == INVALID_PHYS_ADDR);
+ return pa;
}
/*
@@ -552,6 +570,254 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
pgd_pgtable_alloc_special_mm, flags);
}
+static DEFINE_MUTEX(pgtable_split_lock);
+
+static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
+{
+ pte_t *ptep;
+ unsigned long next;
+ unsigned int nr;
+ unsigned long span;
+
+ ptep = pte_offset_kernel(pmdp, addr);
+
+ do {
+ pte_t *_ptep;
+
+ nr = 0;
+ next = pte_cont_addr_end(addr, end);
+ if (next < end)
+ nr = max(nr, ((end - next) / CONT_PTE_SIZE));
+ span = nr * CONT_PTE_SIZE;
+
+ _ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
+ ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
+
+ if (((addr | next) & ~CONT_PTE_MASK) == 0)
+ continue;
+
+ if (!pte_cont(__ptep_get(_ptep)))
+ continue;
+
+ for (int i = 0; i < CONT_PTES; i++, _ptep++)
+ __set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
+ } while (addr = next + span, addr != end);
+
+ return 0;
+}
+
+static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
+{
+ unsigned long next;
+ unsigned int nr;
+ unsigned long span;
+ int ret = 0;
+
+ do {
+ pmd_t pmd;
+
+ nr = 1;
+ next = pmd_addr_end(addr, end);
+ if (next < end)
+ nr = max(nr, ((end - next) / PMD_SIZE));
+ span = (nr - 1) * PMD_SIZE;
+
+ if (((addr | next) & ~PMD_MASK) == 0)
+ continue;
+
+ pmd = pmdp_get(pmdp);
+ if (pmd_leaf(pmd)) {
+ phys_addr_t pte_phys;
+ pte_t *ptep;
+ pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN |
+ PMD_TABLE_AF;
+ unsigned long pfn = pmd_pfn(pmd);
+ pgprot_t prot = pmd_pgprot(pmd);
+
+ pte_phys = split_pgtable_alloc(TABLE_PTE);
+ if (pte_phys == INVALID_PHYS_ADDR)
+ return -ENOMEM;
+
+ if (pgprot_val(prot) & PMD_SECT_PXN)
+ pmdval |= PMD_TABLE_PXN;
+
+ prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
+ PTE_TYPE_PAGE);
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+ ptep = (pte_t *)phys_to_virt(pte_phys);
+ for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
+ __set_pte(ptep, pfn_pte(pfn, prot));
+
+ dsb(ishst);
+
+ __pmd_populate(pmdp, pte_phys, pmdval);
+ }
+
+ ret = split_cont_pte(pmdp, addr, next);
+ if (ret)
+ break;
+ } while (pmdp += nr, addr = next + span, addr != end);
+
+ return ret;
+}
+
+static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
+{
+ pmd_t *pmdp;
+ unsigned long next;
+ unsigned int nr;
+ unsigned long span;
+ int ret = 0;
+
+ pmdp = pmd_offset(pudp, addr);
+
+ do {
+ pmd_t *_pmdp;
+
+ nr = 0;
+ next = pmd_cont_addr_end(addr, end);
+ if (next < end)
+ nr = max(nr, ((end - next) / CONT_PMD_SIZE));
+ span = nr * CONT_PMD_SIZE;
+
+ if (((addr | next) & ~CONT_PMD_MASK) == 0) {
+ pmdp += pmd_index(next) - pmd_index(addr) +
+ nr * CONT_PMDS;
+ continue;
+ }
+
+ _pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
+ if (!pmd_cont(pmdp_get(_pmdp)))
+ goto split;
+
+ for (int i = 0; i < CONT_PMDS; i++, _pmdp++)
+ set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
+
+split:
+ ret = split_pmd(pmdp, addr, next);
+ if (ret)
+ break;
+
+ pmdp += pmd_index(next) - pmd_index(addr) + nr * CONT_PMDS;
+ } while (addr = next + span, addr != end);
+
+ return ret;
+}
+
+static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
+{
+ pud_t *pudp;
+ unsigned long next;
+ unsigned int nr;
+ unsigned long span;
+ int ret = 0;
+
+ pudp = pud_offset(p4dp, addr);
+
+ do {
+ pud_t pud;
+
+ nr = 1;
+ next = pud_addr_end(addr, end);
+ if (next < end)
+ nr = max(nr, ((end - next) / PUD_SIZE));
+ span = (nr - 1) * PUD_SIZE;
+
+ if (((addr | next) & ~PUD_MASK) == 0)
+ continue;
+
+ pud = pudp_get(pudp);
+ if (pud_leaf(pud)) {
+ phys_addr_t pmd_phys;
+ pmd_t *pmdp;
+ pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN |
+ PUD_TABLE_AF;
+ unsigned long pfn = pud_pfn(pud);
+ pgprot_t prot = pud_pgprot(pud);
+ unsigned int step = PMD_SIZE >> PAGE_SHIFT;
+
+ pmd_phys = split_pgtable_alloc(TABLE_PMD);
+ if (pmd_phys == INVALID_PHYS_ADDR)
+ return -ENOMEM;
+
+ if (pgprot_val(prot) & PMD_SECT_PXN)
+ pudval |= PUD_TABLE_PXN;
+
+ prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
+ PMD_TYPE_SECT);
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+ pmdp = (pmd_t *)phys_to_virt(pmd_phys);
+ for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
+ set_pmd(pmdp, pfn_pmd(pfn, prot));
+ pfn += step;
+ }
+
+ dsb(ishst);
+
+ __pud_populate(pudp, pmd_phys, pudval);
+ }
+
+ ret = split_cont_pmd(pudp, addr, next);
+ if (ret)
+ break;
+ } while (pudp += nr, addr = next + span, addr != end);
+
+ return ret;
+}
+
+static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
+{
+ p4d_t *p4dp;
+ unsigned long next;
+ int ret = 0;
+
+ p4dp = p4d_offset(pgdp, addr);
+
+ do {
+ next = p4d_addr_end(addr, end);
+
+ ret = split_pud(p4dp, addr, next);
+ if (ret)
+ break;
+ } while (p4dp++, addr = next, addr != end);
+
+ return ret;
+}
+
+static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
+{
+ unsigned long next;
+ int ret = 0;
+
+ do {
+ next = pgd_addr_end(addr, end);
+ ret = split_p4d(pgdp, addr, next);
+ if (ret)
+ break;
+ } while (pgdp++, addr = next, addr != end);
+
+ return ret;
+}
+
+int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
+{
+ int ret;
+
+ if (!system_supports_bbml2_noabort())
+ return 0;
+
+ if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
+ return -EINVAL;
+
+ mutex_lock(&pgtable_split_lock);
+ arch_enter_lazy_mmu_mode();
+ ret = split_pgd(pgd_offset_k(start), start, end);
+ arch_leave_lazy_mmu_mode();
+ mutex_unlock(&pgtable_split_lock);
+
+ return ret;
+}
+
static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
phys_addr_t size, pgprot_t prot)
{
@@ -639,6 +905,20 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
#endif /* CONFIG_KFENCE */
+bool linear_map_requires_bbml2;
+
+static inline bool force_pte_mapping(void)
+{
+ /*
+ * Can't use cpufeature API to determine whether BBM level 2
+ * is supported or not since cpufeature has not been
+ * finalized yet.
+ */
+ return (!bbml2_noabort_available() && (rodata_full ||
+ arm64_kfence_can_set_direct_map() || is_realm_world())) ||
+ debug_pagealloc_enabled();
+}
+
static void __init map_mem(pgd_t *pgdp)
{
static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
@@ -664,7 +944,9 @@ static void __init map_mem(pgd_t *pgdp)
early_kfence_pool = arm64_kfence_alloc_pool();
- if (can_set_direct_map())
+ linear_map_requires_bbml2 = !force_pte_mapping() && rodata_full;
+
+ if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
/*
@@ -1366,7 +1648,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
VM_BUG_ON(!mhp_range_allowed(start, size, true));
- if (can_set_direct_map())
+ if (force_pte_mapping() ||
+ (linear_map_requires_bbml2 && !system_supports_bbml2_noabort()))
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index c6a85000fa0e..6566aa9d8abb 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -140,6 +140,10 @@ static int update_range_prot(unsigned long start, unsigned long size,
data.set_mask = set_mask;
data.clear_mask = clear_mask;
+ ret = split_kernel_pgtable_mapping(start, start + size);
+ if (WARN_ON_ONCE(ret))
+ return ret;
+
arch_enter_lazy_mmu_mode();
/*
--
2.50.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
2025-07-24 22:11 [v5 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
` (2 preceding siblings ...)
2025-07-24 22:11 ` [PATCH 3/4] arm64: mm: support large block mapping when rodata=full Yang Shi
@ 2025-07-24 22:11 ` Yang Shi
2025-07-26 11:10 ` kernel test robot
` (2 more replies)
3 siblings, 3 replies; 33+ messages in thread
From: Yang Shi @ 2025-07-24 22:11 UTC (permalink / raw)
To: ryan.roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
dev.jain, scott, cl
Cc: yang, linux-arm-kernel, linux-kernel
The kernel linear mapping is painted at a very early stage of system boot,
before cpufeature has been finalized, so the mapping is determined by the
capability of the boot CPU alone. If the boot CPU supports BBML2, large
block mappings are used for the linear mapping.
But the secondary CPUs may not support BBML2, so once cpufeature is
finalized on all CPUs, repaint the linear mapping if large block mappings
were used and the secondary CPUs don't support BBML2.
If the boot CPU doesn't support BBML2, or the secondary CPUs have the same
BBML2 capability as the boot CPU, repainting the linear mapping is not
needed.
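Condensed to code, the gate added at cpufeature finalization amounts to the
sketch below (it mirrors linear_map_maybe_split_to_ptes() in the diff that
follows):

	if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort())
		stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);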
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
arch/arm64/include/asm/mmu.h | 6 +-
arch/arm64/kernel/cpufeature.c | 8 ++
arch/arm64/mm/mmu.c | 173 +++++++++++++++++++++++++++------
arch/arm64/mm/pageattr.c | 2 +-
arch/arm64/mm/proc.S | 57 ++++++++---
5 files changed, 196 insertions(+), 50 deletions(-)
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 57f4b25e6f33..9bf50e8897e2 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -56,6 +56,8 @@ typedef struct {
*/
#define ASID(mm) (atomic64_read(&(mm)->context.id) & 0xffff)
+extern bool linear_map_requires_bbml2;
+
static inline bool arm64_kernel_unmapped_at_el0(void)
{
return alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0);
@@ -71,7 +73,9 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
pgprot_t prot, bool page_mappings_only);
extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
extern void mark_linear_text_alias_ro(void);
-extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);
+extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end,
+ unsigned int flags);
+extern int linear_map_split_to_ptes(void *__unused);
/*
* This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 1c96016a7a41..23c01d679c40 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -85,6 +85,7 @@
#include <asm/insn.h>
#include <asm/kvm_host.h>
#include <asm/mmu_context.h>
+#include <asm/mmu.h>
#include <asm/mte.h>
#include <asm/hypervisor.h>
#include <asm/processor.h>
@@ -2009,6 +2010,12 @@ static int __init __kpti_install_ng_mappings(void *__unused)
return 0;
}
+static void __init linear_map_maybe_split_to_ptes(void)
+{
+ if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort())
+ stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);
+}
+
static void __init kpti_install_ng_mappings(void)
{
/* Check whether KPTI is going to be used */
@@ -3855,6 +3862,7 @@ void __init setup_system_features(void)
{
setup_system_capabilities();
+ linear_map_maybe_split_to_ptes();
kpti_install_ng_mappings();
sve_setup();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f63b39613571..22f2d0869fdd 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -482,11 +482,11 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
#define INVALID_PHYS_ADDR -1
-static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
+static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm, gfp_t gfp,
enum pgtable_type pgtable_type)
{
/* Page is zeroed by init_clear_pgtable() so don't duplicate effort. */
- struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
+ struct ptdesc *ptdesc = pagetable_alloc(gfp, 0);
phys_addr_t pa;
if (!ptdesc)
@@ -513,9 +513,16 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
}
static phys_addr_t __maybe_unused
-split_pgtable_alloc(enum pgtable_type pgtable_type)
+split_pgtable_alloc(enum pgtable_type pgtable_type, int flags)
{
- return __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ gfp_t gfp;
+
+ if ((flags & (NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS)) == 0)
+ gfp = GFP_PGTABLE_KERNEL & ~__GFP_ZERO;
+ else
+ gfp = GFP_ATOMIC;
+
+ return __pgd_pgtable_alloc(&init_mm, gfp, pgtable_type);
}
static phys_addr_t __maybe_unused
@@ -523,7 +530,8 @@ pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
{
phys_addr_t pa;
- pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ pa = __pgd_pgtable_alloc(&init_mm, GFP_PGTABLE_KERNEL & ~__GFP_ZERO,
+ pgtable_type);
BUG_ON(pa == INVALID_PHYS_ADDR);
return pa;
}
@@ -533,7 +541,8 @@ pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
{
phys_addr_t pa;
- pa = __pgd_pgtable_alloc(NULL, pgtable_type);
+ pa = __pgd_pgtable_alloc(NULL, GFP_PGTABLE_KERNEL & ~__GFP_ZERO,
+ pgtable_type);
BUG_ON(pa == INVALID_PHYS_ADDR);
return pa;
}
@@ -572,7 +581,8 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
static DEFINE_MUTEX(pgtable_split_lock);
-static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
+static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ unsigned int flags)
{
pte_t *ptep;
unsigned long next;
@@ -586,14 +596,16 @@ static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
nr = 0;
next = pte_cont_addr_end(addr, end);
- if (next < end)
+ if (next < end &&
+ (flags & NO_CONT_MAPPINGS) == 0)
nr = max(nr, ((end - next) / CONT_PTE_SIZE));
span = nr * CONT_PTE_SIZE;
_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
- if (((addr | next) & ~CONT_PTE_MASK) == 0)
+ if (((addr | next) & ~CONT_PTE_MASK) == 0 &&
+ (flags & NO_CONT_MAPPINGS) == 0)
continue;
if (!pte_cont(__ptep_get(_ptep)))
@@ -606,7 +618,8 @@ static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
return 0;
}
-static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
+static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ unsigned int flags)
{
unsigned long next;
unsigned int nr;
@@ -618,11 +631,13 @@ static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
nr = 1;
next = pmd_addr_end(addr, end);
- if (next < end)
+ if (next < end &&
+ (flags & NO_BLOCK_MAPPINGS) == 0)
nr = max(nr, ((end - next) / PMD_SIZE));
span = (nr - 1) * PMD_SIZE;
- if (((addr | next) & ~PMD_MASK) == 0)
+ if (((addr | next) & ~PMD_MASK) == 0 &&
+ (flags & NO_BLOCK_MAPPINGS) == 0)
continue;
pmd = pmdp_get(pmdp);
@@ -634,7 +649,7 @@ static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
unsigned long pfn = pmd_pfn(pmd);
pgprot_t prot = pmd_pgprot(pmd);
- pte_phys = split_pgtable_alloc(TABLE_PTE);
+ pte_phys = split_pgtable_alloc(TABLE_PTE, flags);
if (pte_phys == INVALID_PHYS_ADDR)
return -ENOMEM;
@@ -643,7 +658,8 @@ static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
PTE_TYPE_PAGE);
- prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+ if ((flags & NO_CONT_MAPPINGS) == 0)
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
ptep = (pte_t *)phys_to_virt(pte_phys);
for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
__set_pte(ptep, pfn_pte(pfn, prot));
@@ -653,7 +669,7 @@ static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
__pmd_populate(pmdp, pte_phys, pmdval);
}
- ret = split_cont_pte(pmdp, addr, next);
+ ret = split_cont_pte(pmdp, addr, next, flags);
if (ret)
break;
} while (pmdp += nr, addr = next + span, addr != end);
@@ -661,7 +677,8 @@ static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
return ret;
}
-static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
+static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
+ unsigned int flags)
{
pmd_t *pmdp;
unsigned long next;
@@ -676,11 +693,13 @@ static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
nr = 0;
next = pmd_cont_addr_end(addr, end);
- if (next < end)
+ if (next < end &&
+ (flags & NO_CONT_MAPPINGS) == 0)
nr = max(nr, ((end - next) / CONT_PMD_SIZE));
span = nr * CONT_PMD_SIZE;
- if (((addr | next) & ~CONT_PMD_MASK) == 0) {
+ if (((addr | next) & ~CONT_PMD_MASK) == 0 &&
+ (flags & NO_CONT_MAPPINGS) == 0) {
pmdp += pmd_index(next) - pmd_index(addr) +
nr * CONT_PMDS;
continue;
@@ -694,7 +713,7 @@ static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
split:
- ret = split_pmd(pmdp, addr, next);
+ ret = split_pmd(pmdp, addr, next, flags);
if (ret)
break;
@@ -704,7 +723,8 @@ static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
return ret;
}
-static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
+static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
+ unsigned int flags)
{
pud_t *pudp;
unsigned long next;
@@ -719,11 +739,13 @@ static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
nr = 1;
next = pud_addr_end(addr, end);
- if (next < end)
+ if (next < end &&
+ (flags & NO_BLOCK_MAPPINGS) == 0)
nr = max(nr, ((end - next) / PUD_SIZE));
span = (nr - 1) * PUD_SIZE;
- if (((addr | next) & ~PUD_MASK) == 0)
+ if (((addr | next) & ~PUD_MASK) == 0 &&
+ (flags & NO_BLOCK_MAPPINGS) == 0)
continue;
pud = pudp_get(pudp);
@@ -736,7 +758,7 @@ static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
pgprot_t prot = pud_pgprot(pud);
unsigned int step = PMD_SIZE >> PAGE_SHIFT;
- pmd_phys = split_pgtable_alloc(TABLE_PMD);
+ pmd_phys = split_pgtable_alloc(TABLE_PMD, flags);
if (pmd_phys == INVALID_PHYS_ADDR)
return -ENOMEM;
@@ -745,7 +767,8 @@ static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
PMD_TYPE_SECT);
- prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+ if ((flags & NO_CONT_MAPPINGS) == 0)
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
pmdp = (pmd_t *)phys_to_virt(pmd_phys);
for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
set_pmd(pmdp, pfn_pmd(pfn, prot));
@@ -757,7 +780,7 @@ static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
__pud_populate(pudp, pmd_phys, pudval);
}
- ret = split_cont_pmd(pudp, addr, next);
+ ret = split_cont_pmd(pudp, addr, next, flags);
if (ret)
break;
} while (pudp += nr, addr = next + span, addr != end);
@@ -765,7 +788,8 @@ static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
return ret;
}
-static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
+static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
+ unsigned int flags)
{
p4d_t *p4dp;
unsigned long next;
@@ -776,7 +800,7 @@ static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
do {
next = p4d_addr_end(addr, end);
- ret = split_pud(p4dp, addr, next);
+ ret = split_pud(p4dp, addr, next, flags);
if (ret)
break;
} while (p4dp++, addr = next, addr != end);
@@ -784,14 +808,15 @@ static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
return ret;
}
-static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
+static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end,
+ unsigned int flags)
{
unsigned long next;
int ret = 0;
do {
next = pgd_addr_end(addr, end);
- ret = split_p4d(pgdp, addr, next);
+ ret = split_p4d(pgdp, addr, next, flags);
if (ret)
break;
} while (pgdp++, addr = next, addr != end);
@@ -799,7 +824,8 @@ static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
return ret;
}
-int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
+int split_kernel_pgtable_mapping(unsigned long start, unsigned long end,
+ unsigned int flags)
{
int ret;
@@ -811,7 +837,7 @@ int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
mutex_lock(&pgtable_split_lock);
arch_enter_lazy_mmu_mode();
- ret = split_pgd(pgd_offset_k(start), start, end);
+ ret = split_pgd(pgd_offset_k(start), start, end, flags);
arch_leave_lazy_mmu_mode();
mutex_unlock(&pgtable_split_lock);
@@ -851,6 +877,75 @@ void __init mark_linear_text_alias_ro(void)
PAGE_KERNEL_RO);
}
+extern u32 repaint_done;
+
+int __init linear_map_split_to_ptes(void *__unused)
+{
+ typedef void (repaint_wait_fn)(void);
+ extern repaint_wait_fn bbml2_wait_for_repainting;
+ repaint_wait_fn *wait_fn;
+
+ int cpu = smp_processor_id();
+
+ wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
+
+ /*
+ * Repainting can only run on CPU 0 because CPU 0 is the only CPU we
+ * can be sure supports BBML2.
+ */
+ if (!cpu) {
+ phys_addr_t kernel_start = __pa_symbol(_stext);
+ phys_addr_t kernel_end = __pa_symbol(__init_begin);
+ phys_addr_t start, end;
+ unsigned long vstart, vend;
+ int flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
+ u64 i;
+ int ret;
+
+ /*
+ * Wait for all secondary CPUs to be ready for repainting
+ * the linear mapping.
+ */
+ smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());
+
+ memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
+ /* Split the whole linear mapping */
+ for_each_mem_range(i, &start, &end) {
+ if (start >= end)
+ return -EINVAL;
+
+ vstart = __phys_to_virt(start);
+ vend = __phys_to_virt(end);
+ ret = split_kernel_pgtable_mapping(vstart, vend, flags);
+ if (ret)
+ panic("Failed to split linear mappings\n");
+
+ flush_tlb_kernel_range(vstart, vend);
+ }
+ memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
+
+ /*
+ * Relies on dsb in flush_tlb_kernel_range() to avoid
+ * reordering before any page table split operations.
+ */
+ WRITE_ONCE(repaint_done, 0);
+ } else {
+ /*
+ * The secondary CPUs can't run in the same address space as
+ * CPU 0 because accessing the linear mapping while CPU 0 is
+ * repainting it is not safe.
+ *
+ * Let the secondary CPUs run a busy loop in the idmap address
+ * space while the repainting is ongoing.
+ */
+ cpu_install_idmap();
+ wait_fn();
+ cpu_uninstall_idmap();
+ }
+
+ return 0;
+}
+
#ifdef CONFIG_KFENCE
bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
@@ -1079,7 +1174,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
- kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
+ kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
+ bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
static void __init create_idmap(void)
{
@@ -1104,6 +1200,19 @@ static void __init create_idmap(void)
IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
__phys_to_virt(ptep) - ptep);
}
+
+ /*
+ * Set up the idmap mapping for the repaint_done flag. It will be
+ * used if repainting the linear mapping is needed later.
+ */
+ if (linear_map_requires_bbml2) {
+ u64 pa = __pa_symbol(&repaint_done);
+ ptep = __pa_symbol(bbml2_ptes);
+
+ __pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
+ IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
+ __phys_to_virt(ptep) - ptep);
+ }
}
void __init paging_init(void)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 6566aa9d8abb..4471d7e510a1 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -140,7 +140,7 @@ static int update_range_prot(unsigned long start, unsigned long size,
data.set_mask = set_mask;
data.clear_mask = clear_mask;
- ret = split_kernel_pgtable_mapping(start, start + size);
+ ret = split_kernel_pgtable_mapping(start, start + size, 0);
if (WARN_ON_ONCE(ret))
return ret;
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 80d470aa469d..f0f9c49a4466 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -239,6 +239,25 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
dsb nshst
.endm
+ .macro wait_for_boot_cpu, tmp1, tmp2, tmp3, tmp4
+ /* Increment the flag to let the boot CPU know we're ready */
+1: ldxr \tmp3, [\tmp2]
+ add \tmp3, \tmp3, #1
+ stxr \tmp4, \tmp3, [\tmp2]
+ cbnz \tmp4, 1b
+
+ /* Wait for the boot CPU to finish its job */
+ sevl
+1: wfe
+ ldxr \tmp3, [\tmp2]
+ cbnz \tmp3, 1b
+
+ /* All done, act like nothing happened */
+ msr ttbr1_el1, \tmp1
+ isb
+ ret
+ .endm
+
/*
* void __kpti_install_ng_mappings(int cpu, int num_secondaries, phys_addr_t temp_pgd,
* unsigned long temp_pte_va)
@@ -416,29 +435,35 @@ alternative_else_nop_endif
__idmap_kpti_secondary:
/* Uninstall swapper before surgery begins */
__idmap_cpu_set_reserved_ttbr1 x16, x17
+ wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
- /* Increment the flag to let the boot CPU we're ready */
-1: ldxr w16, [flag_ptr]
- add w16, w16, #1
- stxr w17, w16, [flag_ptr]
- cbnz w17, 1b
+ .unreq swapper_ttb
+ .unreq flag_ptr
+SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+ .popsection
+#endif
- /* Wait for the boot CPU to finish messing around with swapper */
- sevl
-1: wfe
- ldxr w16, [flag_ptr]
- cbnz w16, 1b
+/*
+ * Wait until the repainting is done. Runs on secondary CPUs
+ * only.
+ */
+ .pushsection ".data", "aw", %progbits
+SYM_DATA(repaint_done, .long 1)
+ .popsection
- /* All done, act like nothing happened */
- msr ttbr1_el1, swapper_ttb
- isb
- ret
+ .pushsection ".idmap.text", "a"
+SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
+ swapper_ttb .req x0
+ flag_ptr .req x1
+ mrs swapper_ttb, ttbr1_el1
+ adr_l flag_ptr, repaint_done
+ __idmap_cpu_set_reserved_ttbr1 x16, x17
+ wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
.unreq swapper_ttb
.unreq flag_ptr
-SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+SYM_FUNC_END(bbml2_wait_for_repainting)
.popsection
-#endif
/*
* __cpu_setup
--
2.50.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* Re: [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
2025-07-24 22:11 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
@ 2025-07-26 11:10 ` kernel test robot
2025-08-01 16:14 ` Ryan Roberts
2025-08-05 7:54 ` Ryan Roberts
2 siblings, 0 replies; 33+ messages in thread
From: kernel test robot @ 2025-07-26 11:10 UTC (permalink / raw)
To: Yang Shi, ryan.roberts, will, catalin.marinas, akpm,
Miko.Lenczewski, dev.jain, scott, cl
Cc: oe-kbuild-all, yang, linux-arm-kernel, linux-kernel
Hi Yang,
kernel test robot noticed the following build warnings:
[auto build test WARNING on next-20250724]
[cannot apply to arm64/for-next/core akpm-mm/mm-everything v6.16-rc7 v6.16-rc6 v6.16-rc5 linus/master v6.16-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Yang-Shi/arm64-Enable-permission-change-on-arm64-kernel-block-mappings/20250725-061534
base: next-20250724
patch link: https://lore.kernel.org/r/20250724221216.1998696-5-yang%40os.amperecomputing.com
patch subject: [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
config: arm64-randconfig-001-20250726 (https://download.01.org/0day-ci/archive/20250726/202507261822.ikaBRFsG-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 8.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250726/202507261822.ikaBRFsG-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202507261822.ikaBRFsG-lkp@intel.com/
All warnings (new ones prefixed by >>):
arch/arm64/mm/mmu.o: in function `linear_map_split_to_ptes':
mmu.c:(.init.text+0x23c): relocation truncated to fit: R_AARCH64_LDST32_ABS_LO12_NC against symbol `repaint_done' defined in .data section in arch/arm64/mm/proc.o
>> aarch64-linux-ld: mmu.c:(.init.text+0x23c): warning: one possible cause of this error is that the symbol is being referenced in the indicated code as if it had a larger alignment than was declared where it was defined
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-07-24 22:11 ` [PATCH 3/4] arm64: mm: support large block mapping when rodata=full Yang Shi
@ 2025-07-29 12:34 ` Dev Jain
2025-08-05 21:28 ` Yang Shi
2025-08-01 14:35 ` Ryan Roberts
1 sibling, 1 reply; 33+ messages in thread
From: Dev Jain @ 2025-07-29 12:34 UTC (permalink / raw)
To: Yang Shi, ryan.roberts, will, catalin.marinas, akpm,
Miko.Lenczewski, scott, cl
Cc: linux-arm-kernel, linux-kernel
On 25/07/25 3:41 am, Yang Shi wrote:
> [----- snip -----]
>
> #ifdef CONFIG_ARM64_PAN
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 3d5fb37424ab..f63b39613571 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -480,6 +480,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
> int flags);
> #endif
>
> +#define INVALID_PHYS_ADDR -1
> +
> static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
> enum pgtable_type pgtable_type)
> {
> @@ -487,7 +489,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
> struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
> phys_addr_t pa;
>
> - BUG_ON(!ptdesc);
> + if (!ptdesc)
> + return INVALID_PHYS_ADDR;
> +
> pa = page_to_phys(ptdesc_page(ptdesc));
>
> switch (pgtable_type) {
> @@ -509,15 +513,29 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
> }
>
> static phys_addr_t __maybe_unused
> -pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +split_pgtable_alloc(enum pgtable_type pgtable_type)
> {
> return __pgd_pgtable_alloc(&init_mm, pgtable_type);
> }
>
> +static phys_addr_t __maybe_unused
> +pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +{
> + phys_addr_t pa;
> +
> + pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
> + BUG_ON(pa == INVALID_PHYS_ADDR);
> + return pa;
> +}
> +
> static phys_addr_t
> pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
> {
> - return __pgd_pgtable_alloc(NULL, pgtable_type);
> + phys_addr_t pa;
> +
> + pa = __pgd_pgtable_alloc(NULL, pgtable_type);
> + BUG_ON(pa == INVALID_PHYS_ADDR);
> + return pa;
> }
>
> /*
> @@ -552,6 +570,254 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> pgd_pgtable_alloc_special_mm, flags);
> }
>
> +static DEFINE_MUTEX(pgtable_split_lock);
Thanks for taking a separate lock.
> +
> +static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
> +{
> + pte_t *ptep;
> + unsigned long next;
> + unsigned int nr;
> + unsigned long span;
> +
> + ptep = pte_offset_kernel(pmdp, addr);
> +
> + do {
> + pte_t *_ptep;
> +
> + nr = 0;
> + next = pte_cont_addr_end(addr, end);
> + if (next < end)
> + nr = max(nr, ((end - next) / CONT_PTE_SIZE));
> + span = nr * CONT_PTE_SIZE;
> +
> + _ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
> + ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
> +
> + if (((addr | next) & ~CONT_PTE_MASK) == 0)
> + continue;
> +
> + if (!pte_cont(__ptep_get(_ptep)))
> + continue;
> +
> + for (int i = 0; i < CONT_PTES; i++, _ptep++)
> + __set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
> + } while (addr = next + span, addr != end);
> +
> + return 0;
> +}
> +
> +static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
> +{
> + unsigned long next;
> + unsigned int nr;
> + unsigned long span;
> + int ret = 0;
> +
> + do {
> + pmd_t pmd;
> +
> + nr = 1;
> + next = pmd_addr_end(addr, end);
> + if (next < end)
> + nr = max(nr, ((end - next) / PMD_SIZE));
> + span = (nr - 1) * PMD_SIZE;
> +
> + if (((addr | next) & ~PMD_MASK) == 0)
> + continue;
> +
> + pmd = pmdp_get(pmdp);
> + if (pmd_leaf(pmd)) {
> + phys_addr_t pte_phys;
> + pte_t *ptep;
> + pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN |
> + PMD_TABLE_AF;
> + unsigned long pfn = pmd_pfn(pmd);
> + pgprot_t prot = pmd_pgprot(pmd);
> +
> + pte_phys = split_pgtable_alloc(TABLE_PTE);
> + if (pte_phys == INVALID_PHYS_ADDR)
> + return -ENOMEM;
> +
> + if (pgprot_val(prot) & PMD_SECT_PXN)
> + pmdval |= PMD_TABLE_PXN;
> +
> + prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
> + PTE_TYPE_PAGE);
> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> + ptep = (pte_t *)phys_to_virt(pte_phys);
> + for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
> + __set_pte(ptep, pfn_pte(pfn, prot));
> +
> + dsb(ishst);
> +
> + __pmd_populate(pmdp, pte_phys, pmdval);
> + }
> +
> + ret = split_cont_pte(pmdp, addr, next);
> + if (ret)
> + break;
> + } while (pmdp += nr, addr = next + span, addr != end);
> +
> + return ret;
> +}
> +
> +static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
> +{
> + pmd_t *pmdp;
> + unsigned long next;
> + unsigned int nr;
> + unsigned long span;
> + int ret = 0;
> +
> + pmdp = pmd_offset(pudp, addr);
> +
> + do {
> + pmd_t *_pmdp;
> +
> + nr = 0;
> + next = pmd_cont_addr_end(addr, end);
> + if (next < end)
> + nr = max(nr, ((end - next) / CONT_PMD_SIZE));
> + span = nr * CONT_PMD_SIZE;
> +
> + if (((addr | next) & ~CONT_PMD_MASK) == 0) {
> + pmdp += pmd_index(next) - pmd_index(addr) +
> + nr * CONT_PMDS;
> + continue;
> + }
> +
> + _pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
> + if (!pmd_cont(pmdp_get(_pmdp)))
> + goto split;
> +
> + for (int i = 0; i < CONT_PMDS; i++, _pmdp++)
> + set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
> +
> +split:
> + ret = split_pmd(pmdp, addr, next);
> + if (ret)
> + break;
> +
> + pmdp += pmd_index(next) - pmd_index(addr) + nr * CONT_PMDS;
> + } while (addr = next + span, addr != end);
> +
> + return ret;
> +}
> +
> +static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
> +{
> + pud_t *pudp;
> + unsigned long next;
> + unsigned int nr;
> + unsigned long span;
> + int ret = 0;
> +
> + pudp = pud_offset(p4dp, addr);
> +
> + do {
> + pud_t pud;
> +
> + nr = 1;
> + next = pud_addr_end(addr, end);
> + if (next < end)
> + nr = max(nr, ((end - next) / PUD_SIZE));
> + span = (nr - 1) * PUD_SIZE;
> +
> + if (((addr | next) & ~PUD_MASK) == 0)
> + continue;
> +
> + pud = pudp_get(pudp);
> + if (pud_leaf(pud)) {
> + phys_addr_t pmd_phys;
> + pmd_t *pmdp;
> + pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN |
> + PUD_TABLE_AF;
> + unsigned long pfn = pud_pfn(pud);
> + pgprot_t prot = pud_pgprot(pud);
> + unsigned int step = PMD_SIZE >> PAGE_SHIFT;
> +
> + pmd_phys = split_pgtable_alloc(TABLE_PMD);
> + if (pmd_phys == INVALID_PHYS_ADDR)
> + return -ENOMEM;
> +
> + if (pgprot_val(prot) & PMD_SECT_PXN)
> + pudval |= PUD_TABLE_PXN;
> +
> + prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
> + PMD_TYPE_SECT);
> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> + pmdp = (pmd_t *)phys_to_virt(pmd_phys);
> + for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
> + set_pmd(pmdp, pfn_pmd(pfn, prot));
> + pfn += step;
> + }
> +
> + dsb(ishst);
> +
> + __pud_populate(pudp, pmd_phys, pudval);
> + }
> +
> + ret = split_cont_pmd(pudp, addr, next);
> + if (ret)
> + break;
> + } while (pudp += nr, addr = next + span, addr != end);
> +
> + return ret;
> +}
> +
> +static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
> +{
> + p4d_t *p4dp;
> + unsigned long next;
> + int ret = 0;
> +
> + p4dp = p4d_offset(pgdp, addr);
> +
> + do {
> + next = p4d_addr_end(addr, end);
> +
> + ret = split_pud(p4dp, addr, next);
> + if (ret)
> + break;
> + } while (p4dp++, addr = next, addr != end);
> +
> + return ret;
> +}
> +
> +static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
> +{
> + unsigned long next;
> + int ret = 0;
> +
> + do {
> + next = pgd_addr_end(addr, end);
> + ret = split_p4d(pgdp, addr, next);
> + if (ret)
> + break;
> + } while (pgdp++, addr = next, addr != end);
> +
> + return ret;
> +}
> +
> +int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
> +{
> + int ret;
> +
> + if (!system_supports_bbml2_noabort())
> + return 0;
> +
> + if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
> + return -EINVAL;
> +
> + mutex_lock(&pgtable_split_lock);
> + arch_enter_lazy_mmu_mode();
> + ret = split_pgd(pgd_offset_k(start), start, end);
> + arch_leave_lazy_mmu_mode();
> + mutex_unlock(&pgtable_split_lock);
> +
> + return ret;
> +}
> +
> /*
--- snip ---
I'm afraid I'll have to agree with Ryan :) Looking at the signature of split_kernel_pgtable_mapping,
one would expect that this function splits all block mappings in this region. But that is just a
nit; it does not seem right to me that we are iterating over the entire space when we know *exactly* where
we have to make the split, just to save on pgd/p4d/pud loads - the effect of which is probably cancelled
out by unnecessary iterations your approach takes to skip the intermediate blocks.
If we are concerned that most change_memory_common() operations are for a single page, then
we can do something like:
unsigned long size = end - start;
bool start_split = false, end_split = false;
if (start not aligned to block mapping)
start_split = split(start);
/*
* split the end only if the start wasn't split, or
* if it cannot be guaranteed that start and end lie
* on the same contig block
*/
if (!start_split || (size > CONT_PTE_SIZE))
end_split = split(end);
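Fleshed out a little, the idea could look like the sketch below; split_mapping()
is a hypothetical helper that splits whatever block/cont mapping covers the
given address down to that boundary, and the CONT_PTE_SIZE alignment check is
only illustrative (a real version would also need to consider the PMD/PUD
block sizes):

	static int split_range_boundaries(unsigned long start, unsigned long end)
	{
		unsigned long size = end - start;
		bool start_split = false;
		int ret = 0;

		/* Split the mapping covering the start boundary, if misaligned. */
		if (!IS_ALIGNED(start, CONT_PTE_SIZE)) {
			ret = split_mapping(start);
			if (ret)
				return ret;
			start_split = true;
		}

		/*
		 * Split the end only if the start wasn't split, or if it cannot
		 * be guaranteed that start and end lie in the same contig block.
		 */
		if ((!start_split || size > CONT_PTE_SIZE) &&
		    !IS_ALIGNED(end, CONT_PTE_SIZE))
			ret = split_mapping(end);

		return ret;
	}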
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-07-24 22:11 ` [PATCH 3/4] arm64: mm: support large block mapping when rodata=full Yang Shi
2025-07-29 12:34 ` Dev Jain
@ 2025-08-01 14:35 ` Ryan Roberts
2025-08-04 10:07 ` Ryan Roberts
2025-08-05 18:53 ` Yang Shi
1 sibling, 2 replies; 33+ messages in thread
From: Ryan Roberts @ 2025-08-01 14:35 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 24/07/2025 23:11, Yang Shi wrote:
> When rodata=full is specified, kernel linear mapping has to be mapped at
> PTE level since large page table can't be split due to break-before-make
> rule on ARM64.
>
> This resulted in a couple of problems:
> - performance degradation
> - more TLB pressure
> - memory waste for kernel page table
>
> With FEAT_BBM level 2 support, splitting large block page table to
> smaller ones doesn't need to make the page table entry invalid anymore.
> This allows kernel split large block mapping on the fly.
>
> Add kernel page table split support and use large block mapping by
> default when FEAT_BBM level 2 is supported for rodata=full. When
> changing permissions for kernel linear mapping, the page table will be
> split to smaller size.
>
> The machine without FEAT_BBM level 2 will fallback to have kernel linear
> mapping PTE-mapped when rodata=full.
>
> With this we saw significant performance boost with some benchmarks and
> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
> 256GB memory.
>
> * Memory use after boot
> Before:
> MemTotal: 258988984 kB
> MemFree: 254821700 kB
>
> After:
> MemTotal: 259505132 kB
> MemFree: 255410264 kB
>
> Around 500MB more memory are free to use. The larger the machine, the
> more memory saved.
>
> * Memcached
> We saw performance degradation when running Memcached benchmark with
> rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
> With this patchset we saw ops/sec is increased by around 3.5%, P99
> latency is reduced by around 9.6%.
> The gain mainly came from reduced kernel TLB misses. The kernel TLB
> MPKI is reduced by 28.5%.
>
> The benchmark data is now on par with rodata=on too.
>
> * Disk encryption (dm-crypt) benchmark
> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
> encryption (by dm-crypt).
> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
> --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
> --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
> --name=iops-test-job --eta-newline=1 --size 100G
>
> The IOPS is increased by 90% - 150% (the variance is high, but the worst
> number of good case is around 90% more than the best number of bad case).
> The bandwidth is increased and the avg clat is reduced proportionally.
>
> * Sequential file read
> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
> The bandwidth is increased by 150%.
>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
> arch/arm64/include/asm/cpufeature.h | 34 ++++
> arch/arm64/include/asm/mmu.h | 1 +
> arch/arm64/include/asm/pgtable.h | 5 +
> arch/arm64/kernel/cpufeature.c | 31 +--
> arch/arm64/mm/mmu.c | 293 +++++++++++++++++++++++++++-
> arch/arm64/mm/pageattr.c | 4 +
> 6 files changed, 333 insertions(+), 35 deletions(-)
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index bf13d676aae2..d0d394cc837d 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -871,6 +871,40 @@ static inline bool system_supports_pmuv3(void)
> return cpus_have_final_cap(ARM64_HAS_PMUV3);
> }
>
> +static inline bool bbml2_noabort_available(void)
> +{
> + /*
> + * We want to allow usage of BBML2 in as wide a range of kernel contexts
> + * as possible. This list is therefore an allow-list of known-good
> + * implementations that both support BBML2 and additionally, fulfill the
> + * extra constraint of never generating TLB conflict aborts when using
> + * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
> + * kernel contexts difficult to prove safe against recursive aborts).
> + *
> + * Note that implementations can only be considered "known-good" if their
> + * implementors attest to the fact that the implementation never raises
> + * TLB conflict aborts for BBML2 mapping granularity changes.
> + */
> + static const struct midr_range supports_bbml2_noabort_list[] = {
> + MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
> + MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
> + MIDR_ALL_VERSIONS(MIDR_AMPERE1),
> + MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
> + {}
> + };
> +
> + /* Does our cpu guarantee to never raise TLB conflict aborts? */
> + if (!is_midr_in_range_list(supports_bbml2_noabort_list))
> + return false;
> +
> + /*
> + * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
> + * about whether the MIDR check passes.
> + */
> +
> + return true;
> +}
I don't think this function should be inline. Won't we end up duplicating the
midr list everywhere? Suggest moving back to cpufeature.c.
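Roughly this shape, i.e. keep only the declaration in the header (a sketch, and
obviously keep the comment block that explains the allow-list):

	/* cpufeature.h */
	bool bbml2_noabort_available(void);

	/* cpufeature.c */
	bool bbml2_noabort_available(void)
	{
		static const struct midr_range supports_bbml2_noabort_list[] = {
			MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
			MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
			MIDR_ALL_VERSIONS(MIDR_AMPERE1),
			MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
			{}
		};

		/* Does our cpu guarantee to never raise TLB conflict aborts? */
		return is_midr_in_range_list(supports_bbml2_noabort_list);
	}

That way the MIDR table is emitted once, rather than in every translation unit
that happens to call the helper.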
> +
> static inline bool system_supports_bbml2_noabort(void)
> {
> return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 6e8aa8e72601..57f4b25e6f33 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> pgprot_t prot, bool page_mappings_only);
> extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
> extern void mark_linear_text_alias_ro(void);
> +extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);
>
> /*
> * This check is triggered during the early boot before the cpufeature
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index ba63c8736666..ad2a6a7e86b0 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -371,6 +371,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
> return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
> }
>
> +static inline pmd_t pmd_mknoncont(pmd_t pmd)
> +{
> + return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
> +}
> +
> #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
> static inline int pte_uffd_wp(pte_t pte)
> {
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index d2a8a509a58e..1c96016a7a41 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2215,36 +2215,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
>
> static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
> {
> - /*
> - * We want to allow usage of BBML2 in as wide a range of kernel contexts
> - * as possible. This list is therefore an allow-list of known-good
> - * implementations that both support BBML2 and additionally, fulfill the
> - * extra constraint of never generating TLB conflict aborts when using
> - * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
> - * kernel contexts difficult to prove safe against recursive aborts).
> - *
> - * Note that implementations can only be considered "known-good" if their
> - * implementors attest to the fact that the implementation never raises
> - * TLB conflict aborts for BBML2 mapping granularity changes.
> - */
> - static const struct midr_range supports_bbml2_noabort_list[] = {
> - MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
> - MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
> - MIDR_ALL_VERSIONS(MIDR_AMPERE1),
> - MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
> - {}
> - };
> -
> - /* Does our cpu guarantee to never raise TLB conflict aborts? */
> - if (!is_midr_in_range_list(supports_bbml2_noabort_list))
> - return false;
> -
> - /*
> - * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
> - * about whether the MIDR check passes.
> - */
> -
> - return true;
> + return bbml2_noabort_available();
> }
>
> #ifdef CONFIG_ARM64_PAN
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 3d5fb37424ab..f63b39613571 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -480,6 +480,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
> int flags);
> #endif
>
> +#define INVALID_PHYS_ADDR -1
> +
> static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
> enum pgtable_type pgtable_type)
> {
> @@ -487,7 +489,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
> struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
> phys_addr_t pa;
>
> - BUG_ON(!ptdesc);
> + if (!ptdesc)
> + return INVALID_PHYS_ADDR;
> +
> pa = page_to_phys(ptdesc_page(ptdesc));
>
> switch (pgtable_type) {
> @@ -509,15 +513,29 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
> }
>
> static phys_addr_t __maybe_unused
> -pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +split_pgtable_alloc(enum pgtable_type pgtable_type)
> {
> return __pgd_pgtable_alloc(&init_mm, pgtable_type);
> }
>
> +static phys_addr_t __maybe_unused
> +pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +{
> + phys_addr_t pa;
> +
> + pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
> + BUG_ON(pa == INVALID_PHYS_ADDR);
> + return pa;
> +}
> +
> static phys_addr_t
> pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
> {
> - return __pgd_pgtable_alloc(NULL, pgtable_type);
> + phys_addr_t pa;
> +
> + pa = __pgd_pgtable_alloc(NULL, pgtable_type);
> + BUG_ON(pa == INVALID_PHYS_ADDR);
> + return pa;
> }
The allocation all looks clean to me now. Thanks.
>
> /*
> @@ -552,6 +570,254 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> pgd_pgtable_alloc_special_mm, flags);
> }
>
> +static DEFINE_MUTEX(pgtable_split_lock);
> +
> +static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
> +{
> + pte_t *ptep;
> + unsigned long next;
> + unsigned int nr;
> + unsigned long span;
> +
> + ptep = pte_offset_kernel(pmdp, addr);
> +
> + do {
> + pte_t *_ptep;
> +
> + nr = 0;
> + next = pte_cont_addr_end(addr, end);
> + if (next < end)
> + nr = max(nr, ((end - next) / CONT_PTE_SIZE));
> + span = nr * CONT_PTE_SIZE;
> +
> + _ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
> + ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
> +
> + if (((addr | next) & ~CONT_PTE_MASK) == 0)
> + continue;
> +
> + if (!pte_cont(__ptep_get(_ptep)))
> + continue;
> +
> + for (int i = 0; i < CONT_PTES; i++, _ptep++)
> + __set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
> + } while (addr = next + span, addr != end);
> +
> + return 0;
> +}
> +
> +static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
> +{
> + unsigned long next;
> + unsigned int nr;
> + unsigned long span;
> + int ret = 0;
> +
> + do {
> + pmd_t pmd;
> +
> + nr = 1;
> + next = pmd_addr_end(addr, end);
> + if (next < end)
> + nr = max(nr, ((end - next) / PMD_SIZE));
> + span = (nr - 1) * PMD_SIZE;
> +
> + if (((addr | next) & ~PMD_MASK) == 0)
> + continue;
> +
> + pmd = pmdp_get(pmdp);
> + if (pmd_leaf(pmd)) {
> + phys_addr_t pte_phys;
> + pte_t *ptep;
> + pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN |
> + PMD_TABLE_AF;
> + unsigned long pfn = pmd_pfn(pmd);
> + pgprot_t prot = pmd_pgprot(pmd);
> +
> + pte_phys = split_pgtable_alloc(TABLE_PTE);
> + if (pte_phys == INVALID_PHYS_ADDR)
> + return -ENOMEM;
> +
> + if (pgprot_val(prot) & PMD_SECT_PXN)
> + pmdval |= PMD_TABLE_PXN;
> +
> + prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
> + PTE_TYPE_PAGE);
> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> + ptep = (pte_t *)phys_to_virt(pte_phys);
> + for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
> + __set_pte(ptep, pfn_pte(pfn, prot));
> +
> + dsb(ishst);
> +
> + __pmd_populate(pmdp, pte_phys, pmdval);
> + }
> +
> + ret = split_cont_pte(pmdp, addr, next);
> + if (ret)
> + break;
> + } while (pmdp += nr, addr = next + span, addr != end);
> +
> + return ret;
> +}
> +
> +static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
> +{
> + pmd_t *pmdp;
> + unsigned long next;
> + unsigned int nr;
> + unsigned long span;
> + int ret = 0;
> +
> + pmdp = pmd_offset(pudp, addr);
> +
> + do {
> + pmd_t *_pmdp;
> +
> + nr = 0;
> + next = pmd_cont_addr_end(addr, end);
> + if (next < end)
> + nr = max(nr, ((end - next) / CONT_PMD_SIZE));
> + span = nr * CONT_PMD_SIZE;
> +
> + if (((addr | next) & ~CONT_PMD_MASK) == 0) {
> + pmdp += pmd_index(next) - pmd_index(addr) +
> + nr * CONT_PMDS;
> + continue;
> + }
> +
> + _pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
> + if (!pmd_cont(pmdp_get(_pmdp)))
> + goto split;
> +
> + for (int i = 0; i < CONT_PMDS; i++, _pmdp++)
> + set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
> +
> +split:
> + ret = split_pmd(pmdp, addr, next);
> + if (ret)
> + break;
> +
> + pmdp += pmd_index(next) - pmd_index(addr) + nr * CONT_PMDS;
> + } while (addr = next + span, addr != end);
> +
> + return ret;
> +}
> +
> +static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
> +{
> + pud_t *pudp;
> + unsigned long next;
> + unsigned int nr;
> + unsigned long span;
> + int ret = 0;
> +
> + pudp = pud_offset(p4dp, addr);
> +
> + do {
> + pud_t pud;
> +
> + nr = 1;
> + next = pud_addr_end(addr, end);
> + if (next < end)
> + nr = max(nr, ((end - next) / PUD_SIZE));
> + span = (nr - 1) * PUD_SIZE;
> +
> + if (((addr | next) & ~PUD_MASK) == 0)
> + continue;
> +
> + pud = pudp_get(pudp);
> + if (pud_leaf(pud)) {
> + phys_addr_t pmd_phys;
> + pmd_t *pmdp;
> + pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN |
> + PUD_TABLE_AF;
> + unsigned long pfn = pud_pfn(pud);
> + pgprot_t prot = pud_pgprot(pud);
> + unsigned int step = PMD_SIZE >> PAGE_SHIFT;
> +
> + pmd_phys = split_pgtable_alloc(TABLE_PMD);
> + if (pmd_phys == INVALID_PHYS_ADDR)
> + return -ENOMEM;
> +
> + if (pgprot_val(prot) & PMD_SECT_PXN)
> + pudval |= PUD_TABLE_PXN;
> +
> + prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
> + PMD_TYPE_SECT);
> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> + pmdp = (pmd_t *)phys_to_virt(pmd_phys);
> + for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
> + set_pmd(pmdp, pfn_pmd(pfn, prot));
> + pfn += step;
> + }
> +
> + dsb(ishst);
> +
> + __pud_populate(pudp, pmd_phys, pudval);
> + }
> +
> + ret = split_cont_pmd(pudp, addr, next);
> + if (ret)
> + break;
> + } while (pudp += nr, addr = next + span, addr != end);
> +
> + return ret;
> +}
> +
> +static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
> +{
> + p4d_t *p4dp;
> + unsigned long next;
> + int ret = 0;
> +
> + p4dp = p4d_offset(pgdp, addr);
> +
> + do {
> + next = p4d_addr_end(addr, end);
> +
> + ret = split_pud(p4dp, addr, next);
> + if (ret)
> + break;
> + } while (p4dp++, addr = next, addr != end);
> +
> + return ret;
> +}
> +
> +static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
> +{
> + unsigned long next;
> + int ret = 0;
> +
> + do {
> + next = pgd_addr_end(addr, end);
> + ret = split_p4d(pgdp, addr, next);
> + if (ret)
> + break;
> + } while (pgdp++, addr = next, addr != end);
> +
> + return ret;
> +}
> +
> +int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
> +{
> + int ret;
> +
> + if (!system_supports_bbml2_noabort())
> + return 0;
> +
> + if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
> + return -EINVAL;
> +
> + mutex_lock(&pgtable_split_lock);
On reflection, I agree this lock approach is simpler than my suggestion to do it
locklessly and I doubt this will become a bottleneck. Given x86 does it this
way, I guess it's fine.
> + arch_enter_lazy_mmu_mode();
> + ret = split_pgd(pgd_offset_k(start), start, end);
My instinct still remains that it would be better not to iterate over the range
here, but instead call a "split(start); split(end);" since we just want to split
the start and end. So the code would be simpler and probably more performant if
we get rid of all the iteration.
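i.e. something with roughly this shape (split_mapping_at() is a made-up name,
just to illustrate the idea; it would split whatever single leaf entry covers
the given address):

	int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
	{
		int ret;

		if (!system_supports_bbml2_noabort())
			return 0;

		mutex_lock(&pgtable_split_lock);
		ret = split_mapping_at(start);
		if (!ret)
			ret = split_mapping_at(end);
		mutex_unlock(&pgtable_split_lock);

		return ret;
	}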
But I'm guessing you are going to enhance this infra in the next patch to
support splitting all entries in the range for the system-doesn't-support-bbml case?
Anyway, I'll take a look at the next patch then come back to review the details
of split_pgd().
> + arch_leave_lazy_mmu_mode();
> + mutex_unlock(&pgtable_split_lock);
> +
> + return ret;
> +}
> +
> static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
> phys_addr_t size, pgprot_t prot)
> {
> @@ -639,6 +905,20 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>
> #endif /* CONFIG_KFENCE */
>
> +bool linear_map_requires_bbml2;
Until the next patch, you're only using this variable in this file, so at the
very least, it should be static for now. But I'm proposing below it should be
removed entirely.
> +
> +static inline bool force_pte_mapping(void)
> +{
> + /*
> + * Can't use cpufeature API to determine whether BBM level 2
> + * is supported or not since cpufeature have not been
> + * finalized yet.
> + */
> + return (!bbml2_noabort_available() && (rodata_full ||
> + arm64_kfence_can_set_direct_map() || is_realm_world())) ||
> + debug_pagealloc_enabled();
> +}
> +
> static void __init map_mem(pgd_t *pgdp)
> {
> static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
> @@ -664,7 +944,9 @@ static void __init map_mem(pgd_t *pgdp)
>
> early_kfence_pool = arm64_kfence_alloc_pool();
>
> - if (can_set_direct_map())
> + linear_map_requires_bbml2 = !force_pte_mapping() && rodata_full;
This looks wrong; what about the kfence_can_set_direct_map and is_realm_world
conditions?
perhaps:
linear_map_requires_bbml2 = !force_pte_mapping() && can_set_direct_map()
> +
> + if (force_pte_mapping())
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
> /*
> @@ -1366,7 +1648,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
>
> VM_BUG_ON(!mhp_range_allowed(start, size, true));
>
> - if (can_set_direct_map())
> + if (force_pte_mapping() ||
> + (linear_map_requires_bbml2 && !system_supports_bbml2_noabort()))
So force_pte_mapping() isn't actually returning what it sounds like it is; it's
returning whether you would have to force pte mapping based on the current cpu's
support for bbml2. Perhaps it would be better to implement force_pte_mapping() as:
static inline bool force_pte_mapping(void)
{
bool bbml2 = (system_capabilities_finalized() &&
system_supports_bbml2_noabort()) ||
bbml2_noabort_available();
return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
is_realm_world())) ||
debug_pagealloc_enabled();
}
Then we can just use force_pte_mapping() in both the boot and runtime paths
without any adjustment based on linear_map_requires_bbml2. So you could drop
linear_map_requires_bbml2 entirely, until the next patch at least.
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
> __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index c6a85000fa0e..6566aa9d8abb 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -140,6 +140,10 @@ static int update_range_prot(unsigned long start, unsigned long size,
> data.set_mask = set_mask;
> data.clear_mask = clear_mask;
>
> + ret = split_kernel_pgtable_mapping(start, start + size);
> + if (WARN_ON_ONCE(ret))
I'm on the fence as to whether this warning is desirable. Would we normally want
to warn in an OOM situation, or just unwind and carry on?
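i.e. the alternative would simply be:

	ret = split_kernel_pgtable_mapping(start, start + size);
	if (ret)
		return ret;

and let the -ENOMEM propagate to the caller like any other failure.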
Thanks,
Ryan
> + return ret;
> +
> arch_enter_lazy_mmu_mode();
>
> /*
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list
2025-07-24 22:11 ` [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
@ 2025-08-01 14:36 ` Ryan Roberts
2025-08-04 23:20 ` Christoph Lameter (Ampere)
1 sibling, 0 replies; 33+ messages in thread
From: Ryan Roberts @ 2025-08-01 14:36 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 24/07/2025 23:11, Yang Shi wrote:
> AmpereOne supports BBML2 without conflict abort, add to the allow list.
>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> arch/arm64/kernel/cpufeature.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 7a3c404288d5..d2a8a509a58e 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2230,6 +2230,8 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
> static const struct midr_range supports_bbml2_noabort_list[] = {
> MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
> MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
> + MIDR_ALL_VERSIONS(MIDR_AMPERE1),
> + MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
> {}
> };
>
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
2025-07-24 22:11 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
2025-07-26 11:10 ` kernel test robot
@ 2025-08-01 16:14 ` Ryan Roberts
2025-08-05 18:59 ` Yang Shi
2025-08-05 7:54 ` Ryan Roberts
2 siblings, 1 reply; 33+ messages in thread
From: Ryan Roberts @ 2025-08-01 16:14 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 24/07/2025 23:11, Yang Shi wrote:
> The kernel linear mapping is painted in very early stage of system boot.
> The cpufeature has not been finalized yet at this point. So the linear
> mapping is determined by the capability of boot CPU. If the boot CPU
> supports BBML2, large block mapping will be used for linear mapping.
>
> But the secondary CPUs may not support BBML2, so repaint the linear mapping
> if large block mapping is used and the secondary CPUs don't support BBML2
> once cpufeature is finalized on all CPUs.
>
> If the boot CPU doesn't support BBML2 or the secondary CPUs have the
> same BBML2 capability with the boot CPU, repainting the linear mapping
> is not needed.
>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
> arch/arm64/include/asm/mmu.h | 6 +-
> arch/arm64/kernel/cpufeature.c | 8 ++
> arch/arm64/mm/mmu.c | 173 +++++++++++++++++++++++++++------
> arch/arm64/mm/pageattr.c | 2 +-
> arch/arm64/mm/proc.S | 57 ++++++++---
> 5 files changed, 196 insertions(+), 50 deletions(-)
>
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 57f4b25e6f33..9bf50e8897e2 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -56,6 +56,8 @@ typedef struct {
> */
> #define ASID(mm) (atomic64_read(&(mm)->context.id) & 0xffff)
>
> +extern bool linear_map_requires_bbml2;
> +
> static inline bool arm64_kernel_unmapped_at_el0(void)
> {
> return alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0);
> @@ -71,7 +73,9 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> pgprot_t prot, bool page_mappings_only);
> extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
> extern void mark_linear_text_alias_ro(void);
> -extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);
> +extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end,
> + unsigned int flags);
> +extern int linear_map_split_to_ptes(void *__unused);
>
> /*
> * This check is triggered during the early boot before the cpufeature
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 1c96016a7a41..23c01d679c40 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -85,6 +85,7 @@
> #include <asm/insn.h>
> #include <asm/kvm_host.h>
> #include <asm/mmu_context.h>
> +#include <asm/mmu.h>
> #include <asm/mte.h>
> #include <asm/hypervisor.h>
> #include <asm/processor.h>
> @@ -2009,6 +2010,12 @@ static int __init __kpti_install_ng_mappings(void *__unused)
> return 0;
> }
>
> +static void __init linear_map_maybe_split_to_ptes(void)
> +{
> + if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort())
> + stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);
> +}
> +
> static void __init kpti_install_ng_mappings(void)
> {
> /* Check whether KPTI is going to be used */
> @@ -3855,6 +3862,7 @@ void __init setup_system_features(void)
> {
> setup_system_capabilities();
>
> + linear_map_maybe_split_to_ptes();
> kpti_install_ng_mappings();
>
> sve_setup();
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index f63b39613571..22f2d0869fdd 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -482,11 +482,11 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>
> #define INVALID_PHYS_ADDR -1
>
[...]
I'll review the actual walker separately (I've run out of time today).
>
> +extern u32 repaint_done;
> +
> +int __init linear_map_split_to_ptes(void *__unused)
> +{
> + typedef void (repaint_wait_fn)(void);
> + extern repaint_wait_fn bbml2_wait_for_repainting;
> + repaint_wait_fn *wait_fn;
> +
> + int cpu = smp_processor_id();
> +
> + wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
> +
> + /*
> + * Repainting just can be run on CPU 0 because we just can be sure
> + * CPU 0 supports BBML2.
> + */
> + if (!cpu) {
> + phys_addr_t kernel_start = __pa_symbol(_stext);
> + phys_addr_t kernel_end = __pa_symbol(__init_begin);
> + phys_addr_t start, end;
> + unsigned long vstart, vend;
> + int flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> + u64 i;
> + int ret;
> +
> + /*
> + * Wait for all secondary CPUs get prepared for repainting
> + * the linear mapping.
> + */
> + smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());
Is this correct? I would have thought the primary is waiting for the
secondaries, so "VAL == num_online_cpus() - 1" ?
> +
> + memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
> + /* Split the whole linear mapping */
> + for_each_mem_range(i, &start, &end) {
I think I asked this in the last round, but I just want to double check:
memblock is definitely still valid here, and we are definitely going to get
exactly the same regions out as we did in map_mem()? I wonder if it's possible
that, between then and now, some other component has reserved some memory? In
that case we wouldn't walk that region.
Perhaps it would be safer (and simpler) to just walk all of [PAGE_OFFSET,
_stext) and [__init_begin, PAGE_END) and ignore the holes?
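i.e. something like (sketch only, keeping your flags/TLB handling as is, and
assuming the splitter can just skip any non-present entries in the holes):

	ret = split_kernel_pgtable_mapping(PAGE_OFFSET,
			(unsigned long)lm_alias(_stext), flags);
	if (!ret)
		ret = split_kernel_pgtable_mapping(
			(unsigned long)lm_alias(__init_begin),
			PAGE_END, flags);

Then there is no dependency on memblock still being in sync at this point.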
> + if (start >= end)
> + return -EINVAL;
> +
> + vstart = __phys_to_virt(start);
> + vend = __phys_to_virt(end);
> + ret = split_kernel_pgtable_mapping(vstart, vend, flags);
> + if (ret)
> + panic("Failed to split linear mappings\n");
> +
> + flush_tlb_kernel_range(vstart, vend);
> + }
> + memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
> +
> + /*
> + * Relies on dsb in flush_tlb_kernel_range() to avoid
> + * reordering before any page table split operations.
> + */
> + WRITE_ONCE(repaint_done, 0);
> + } else {
> + /*
> + * The secondary CPUs can't run in the same address space
> + * with CPU 0 because accessing the linear mapping address
> + * when CPU 0 is repainting it is not safe.
> + *
> + * Let the secondary CPUs run busy loop in idmap address
> + * space when repainting is ongoing.
> + */
> + cpu_install_idmap();
> + wait_fn();
> + cpu_uninstall_idmap();
> + }
> +
> + return 0;
> +}
> +
> #ifdef CONFIG_KFENCE
>
> bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
> @@ -1079,7 +1174,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
> int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
>
> static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
> - kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
> + kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
> + bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>
> static void __init create_idmap(void)
> {
> @@ -1104,6 +1200,19 @@ static void __init create_idmap(void)
> IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
> __phys_to_virt(ptep) - ptep);
> }
> +
> + /*
> + * Setup idmap mapping for repaint_done flag. It will be used if
> + * repainting the linear mapping is needed later.
> + */
> + if (linear_map_requires_bbml2) {
> + u64 pa = __pa_symbol(&repaint_done);
> + ptep = __pa_symbol(bbml2_ptes);
> +
> + __pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
> + IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
> + __phys_to_virt(ptep) - ptep);
> + }
> }
>
> void __init paging_init(void)
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 6566aa9d8abb..4471d7e510a1 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -140,7 +140,7 @@ static int update_range_prot(unsigned long start, unsigned long size,
> data.set_mask = set_mask;
> data.clear_mask = clear_mask;
>
> - ret = split_kernel_pgtable_mapping(start, start + size);
> + ret = split_kernel_pgtable_mapping(start, start + size, 0);
> if (WARN_ON_ONCE(ret))
> return ret;
>
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 80d470aa469d..f0f9c49a4466 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -239,6 +239,25 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
> dsb nshst
> .endm
>
> + .macro wait_for_boot_cpu, tmp1, tmp2, tmp3, tmp4
> + /* Increment the flag to let the boot CPU know we're ready */
> +1: ldxr \tmp3, [\tmp2]
> + add \tmp3, \tmp3, #1
> + stxr \tmp4, \tmp3, [\tmp2]
> + cbnz \tmp4, 1b
> +
> + /* Wait for the boot CPU to finish its job */
> + sevl
> +1: wfe
> + ldxr \tmp3, [\tmp2]
> + cbnz \tmp3, 1b
> +
> + /* All done, act like nothing happened */
> + msr ttbr1_el1, \tmp1
> + isb
> + ret
> + .endm
You've defined the macro within "#ifdef CONFIG_UNMAP_KERNEL_AT_EL0" but then
need to use it outside of that scope.
But I don't think this needs to be a macro; I think it would be better as a
function (as I suggested in the last round). Then the text only needs to appear
once in the image and it can be used from both places (see below).
> +
> /*
> * void __kpti_install_ng_mappings(int cpu, int num_secondaries, phys_addr_t temp_pgd,
> * unsigned long temp_pte_va)
> @@ -416,29 +435,35 @@ alternative_else_nop_endif
> __idmap_kpti_secondary:
> /* Uninstall swapper before surgery begins */
> __idmap_cpu_set_reserved_ttbr1 x16, x17
> + wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>
> - /* Increment the flag to let the boot CPU we're ready */
> -1: ldxr w16, [flag_ptr]
> - add w16, w16, #1
> - stxr w17, w16, [flag_ptr]
> - cbnz w17, 1b
> + .unreq swapper_ttb
> + .unreq flag_ptr
> +SYM_FUNC_END(idmap_kpti_install_ng_mappings)
> + .popsection
> +#endif
>
> - /* Wait for the boot CPU to finish messing around with swapper */
> - sevl
> -1: wfe
> - ldxr w16, [flag_ptr]
> - cbnz w16, 1b
> +/*
> + * Wait for repainting is done. Run on secondary CPUs
> + * only.
> + */
> + .pushsection ".data", "aw", %progbits
> +SYM_DATA(repaint_done, .long 1)
> + .popsection
>
> - /* All done, act like nothing happened */
> - msr ttbr1_el1, swapper_ttb
> - isb
> - ret
> + .pushsection ".idmap.text", "a"
> +SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
> + swapper_ttb .req x0
> + flag_ptr .req x1
> + mrs swapper_ttb, ttbr1_el1
> + adr_l flag_ptr, repaint_done
> + __idmap_cpu_set_reserved_ttbr1 x16, x17
> + wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>
> .unreq swapper_ttb
> .unreq flag_ptr
> -SYM_FUNC_END(idmap_kpti_install_ng_mappings)
> +SYM_FUNC_END(bbml2_wait_for_repainting)
> .popsection
> -#endif
How about this instead?
---8<---
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 8c75965afc9e..a116b2b8ad59 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -416,8 +416,30 @@ alternative_else_nop_endif
__idmap_kpti_secondary:
/* Uninstall swapper before surgery begins */
__idmap_cpu_set_reserved_ttbr1 x16, x17
+ b secondary_cpu_wait
+
+ .unreq swapper_ttb
+ .unreq flag_ptr
+SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+ .popsection
+#endif
+
+ .pushsection ".data", "aw", %progbits
+SYM_DATA(repaint_done, .long 1)
+ .popsection
+
+ .pushsection ".idmap.text", "a"
+SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
+ /* Must be same registers as in idmap_kpti_install_ng_mappings */
+ swapper_ttb .req x3
+ flag_ptr .req x4
+
+ mrs swapper_ttb, ttbr1_el1
+ adr_l flag_ptr, repaint_done
+ __idmap_cpu_set_reserved_ttbr1 x16, x17
/* Increment the flag to let the boot CPU we're ready */
+secondary_cpu_wait:
1: ldxr w16, [flag_ptr]
add w16, w16, #1
stxr w17, w16, [flag_ptr]
@@ -436,9 +458,8 @@ __idmap_kpti_secondary:
.unreq swapper_ttb
.unreq flag_ptr
-SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+SYM_FUNC_END(bbml2_wait_for_repainting)
.popsection
-#endif
/*
* __cpu_setup
---8<---
Thanks,
Ryan
^ permalink raw reply related [flat|nested] 33+ messages in thread
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-08-01 14:35 ` Ryan Roberts
@ 2025-08-04 10:07 ` Ryan Roberts
2025-08-05 18:53 ` Yang Shi
1 sibling, 0 replies; 33+ messages in thread
From: Ryan Roberts @ 2025-08-04 10:07 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 01/08/2025 15:35, Ryan Roberts wrote:
[...]
>> @@ -1366,7 +1648,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>
>> VM_BUG_ON(!mhp_range_allowed(start, size, true));
>>
>> - if (can_set_direct_map())
>> + if (force_pte_mapping() ||
>> + (linear_map_requires_bbml2 && !system_supports_bbml2_noabort()))
>
> So force_pte_mapping() isn't actually returning what it sounds like it is; it's
> returning whether you would have to force pte mapping based on the current cpu's
> support for bbml2. Perhaps it would be better to implement force_pte_mapping() as:
>
> static inline bool force_pte_mapping(void)
> {
> bool bbml2 = (system_capabilities_finalized() &&
> system_supports_bbml2_noabort()) ||
> bbml2_noabort_available();
Sorry that should have been:
bool bbml2 = system_capabilities_finalized() ?
system_supports_bbml2_noabort() : bbml2_noabort_available();
>
> return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
> is_realm_world())) ||
> debug_pagealloc_enabled();
> }
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list
2025-07-24 22:11 ` [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
2025-08-01 14:36 ` Ryan Roberts
@ 2025-08-04 23:20 ` Christoph Lameter (Ampere)
1 sibling, 0 replies; 33+ messages in thread
From: Christoph Lameter (Ampere) @ 2025-08-04 23:20 UTC (permalink / raw)
To: Yang Shi
Cc: ryan.roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
dev.jain, scott, linux-arm-kernel, linux-kernel
On Thu, 24 Jul 2025, Yang Shi wrote:
> AmpereOne supports BBML2 without conflict abort, add to the allow list.
Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org>
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
2025-07-24 22:11 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
2025-07-26 11:10 ` kernel test robot
2025-08-01 16:14 ` Ryan Roberts
@ 2025-08-05 7:54 ` Ryan Roberts
2025-08-05 17:45 ` Yang Shi
2 siblings, 1 reply; 33+ messages in thread
From: Ryan Roberts @ 2025-08-05 7:54 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel
Hi Yang,
On 24/07/2025 23:11, Yang Shi wrote:
> The kernel linear mapping is painted in very early stage of system boot.
> The cpufeature has not been finalized yet at this point. So the linear
> mapping is determined by the capability of boot CPU. If the boot CPU
> supports BBML2, large block mapping will be used for linear mapping.
>
> But the secondary CPUs may not support BBML2, so repaint the linear mapping
> if large block mapping is used and the secondary CPUs don't support BBML2
> once cpufeature is finalized on all CPUs.
>
> If the boot CPU doesn't support BBML2 or the secondary CPUs have the
> same BBML2 capability with the boot CPU, repainting the linear mapping
> is not needed.
>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
> arch/arm64/include/asm/mmu.h | 6 +-
> arch/arm64/kernel/cpufeature.c | 8 ++
> arch/arm64/mm/mmu.c | 173 +++++++++++++++++++++++++++------
> arch/arm64/mm/pageattr.c | 2 +-
> arch/arm64/mm/proc.S | 57 ++++++++---
> 5 files changed, 196 insertions(+), 50 deletions(-)
[...]
> +int __init linear_map_split_to_ptes(void *__unused)
> +{
> + typedef void (repaint_wait_fn)(void);
> + extern repaint_wait_fn bbml2_wait_for_repainting;
> + repaint_wait_fn *wait_fn;
> +
> + int cpu = smp_processor_id();
> +
> + wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
> +
> + /*
> + * Repainting just can be run on CPU 0 because we just can be sure
> + * CPU 0 supports BBML2.
> + */
> + if (!cpu) {
> + phys_addr_t kernel_start = __pa_symbol(_stext);
> + phys_addr_t kernel_end = __pa_symbol(__init_begin);
> + phys_addr_t start, end;
> + unsigned long vstart, vend;
> + int flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> + u64 i;
> + int ret;
> +
> + /*
> + * Wait for all secondary CPUs get prepared for repainting
> + * the linear mapping.
> + */
> + smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());
> +
> + memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
> + /* Split the whole linear mapping */
> + for_each_mem_range(i, &start, &end) {
> + if (start >= end)
> + return -EINVAL;
> +
> + vstart = __phys_to_virt(start);
> + vend = __phys_to_virt(end);
> + ret = split_kernel_pgtable_mapping(vstart, vend, flags);
I've been thinking about this; I think the best approach is to use the pagewalk
API here, then you don't need to implement your own pgtable walker; you can just
implement the pud, pmd and pte callbacks to do the splitting and they can reuse
common split helper functions. This reduces code size quite a bit I think. And
also means that for split_kernel_pgtable_mapping() you can just pass a single
address and don't need to iterate over every entry.
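Very roughly, something like this (sketch only: the split_*_leaf() helpers are
made up, I'm glossing over locking, and the exact walker entry point may differ
- walk_page_range_novma() or whatever kernel-pagetable variant is current):

	static int split_pud_entry(pud_t *pudp, unsigned long addr,
				   unsigned long next, struct mm_walk *walk)
	{
		pud_t pud = pudp_get(pudp);

		if (pud_leaf(pud))
			return split_pud_leaf(pudp, pud);
		return 0;
	}

	static int split_pmd_entry(pmd_t *pmdp, unsigned long addr,
				   unsigned long next, struct mm_walk *walk)
	{
		pmd_t pmd = pmdp_get(pmdp);

		if (pmd_leaf(pmd))
			return split_pmd_leaf(pmdp, pmd);
		return 0;
	}

	static const struct mm_walk_ops split_to_ptes_ops = {
		.pud_entry	= split_pud_entry,
		.pmd_entry	= split_pmd_entry,
		/* plus a .pte_entry to clear the contig bit, elided here */
	};

	/* per region: */
	ret = walk_page_range_novma(&init_mm, vstart, vend,
				    &split_to_ptes_ops, NULL, NULL);

The same split_*_leaf() helpers can then back both the permission-change path
and the repainting path.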
I started prototyping this to prove to myself that it is possible and ended up
with quite a clean implementation. I'm going to post that as a v6 RFC shortly -
I hope that's ok. I've retained you as primary author since it's all based on
your work. I'm hoping that the posting will speed up review so we can hopefully
get this into 6.18.
Thanks,
Ryan
> + if (ret)
> + panic("Failed to split linear mappings\n");
> +
> + flush_tlb_kernel_range(vstart, vend);
> + }
> + memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
> +
> + /*
> + * Relies on dsb in flush_tlb_kernel_range() to avoid
> + * reordering before any page table split operations.
> + */
> + WRITE_ONCE(repaint_done, 0);
> + } else {
> + /*
> + * The secondary CPUs can't run in the same address space
> + * with CPU 0 because accessing the linear mapping address
> + * when CPU 0 is repainting it is not safe.
> + *
> + * Let the secondary CPUs run busy loop in idmap address
> + * space when repainting is ongoing.
> + */
> + cpu_install_idmap();
> + wait_fn();
> + cpu_uninstall_idmap();
> + }
> +
> + return 0;
> +}
> +
> #ifdef CONFIG_KFENCE
>
> bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
> @@ -1079,7 +1174,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
> int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
>
> static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
> - kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
> + kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
> + bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>
> static void __init create_idmap(void)
> {
> @@ -1104,6 +1200,19 @@ static void __init create_idmap(void)
> IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
> __phys_to_virt(ptep) - ptep);
> }
> +
> + /*
> + * Setup idmap mapping for repaint_done flag. It will be used if
> + * repainting the linear mapping is needed later.
> + */
> + if (linear_map_requires_bbml2) {
> + u64 pa = __pa_symbol(&repaint_done);
> + ptep = __pa_symbol(bbml2_ptes);
> +
> + __pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
> + IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
> + __phys_to_virt(ptep) - ptep);
> + }
> }
>
> void __init paging_init(void)
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 6566aa9d8abb..4471d7e510a1 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -140,7 +140,7 @@ static int update_range_prot(unsigned long start, unsigned long size,
> data.set_mask = set_mask;
> data.clear_mask = clear_mask;
>
> - ret = split_kernel_pgtable_mapping(start, start + size);
> + ret = split_kernel_pgtable_mapping(start, start + size, 0);
> if (WARN_ON_ONCE(ret))
> return ret;
>
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 80d470aa469d..f0f9c49a4466 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -239,6 +239,25 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
> dsb nshst
> .endm
>
> + .macro wait_for_boot_cpu, tmp1, tmp2, tmp3, tmp4
> + /* Increment the flag to let the boot CPU know we're ready */
> +1: ldxr \tmp3, [\tmp2]
> + add \tmp3, \tmp3, #1
> + stxr \tmp4, \tmp3, [\tmp2]
> + cbnz \tmp4, 1b
> +
> + /* Wait for the boot CPU to finish its job */
> + sevl
> +1: wfe
> + ldxr \tmp3, [\tmp2]
> + cbnz \tmp3, 1b
> +
> + /* All done, act like nothing happened */
> + msr ttbr1_el1, \tmp1
> + isb
> + ret
> + .endm
> +
> /*
> * void __kpti_install_ng_mappings(int cpu, int num_secondaries, phys_addr_t temp_pgd,
> * unsigned long temp_pte_va)
> @@ -416,29 +435,35 @@ alternative_else_nop_endif
> __idmap_kpti_secondary:
> /* Uninstall swapper before surgery begins */
> __idmap_cpu_set_reserved_ttbr1 x16, x17
> + wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>
> - /* Increment the flag to let the boot CPU we're ready */
> -1: ldxr w16, [flag_ptr]
> - add w16, w16, #1
> - stxr w17, w16, [flag_ptr]
> - cbnz w17, 1b
> + .unreq swapper_ttb
> + .unreq flag_ptr
> +SYM_FUNC_END(idmap_kpti_install_ng_mappings)
> + .popsection
> +#endif
>
> - /* Wait for the boot CPU to finish messing around with swapper */
> - sevl
> -1: wfe
> - ldxr w16, [flag_ptr]
> - cbnz w16, 1b
> +/*
> + * Wait for repainting is done. Run on secondary CPUs
> + * only.
> + */
> + .pushsection ".data", "aw", %progbits
> +SYM_DATA(repaint_done, .long 1)
> + .popsection
>
> - /* All done, act like nothing happened */
> - msr ttbr1_el1, swapper_ttb
> - isb
> - ret
> + .pushsection ".idmap.text", "a"
> +SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
> + swapper_ttb .req x0
> + flag_ptr .req x1
> + mrs swapper_ttb, ttbr1_el1
> + adr_l flag_ptr, repaint_done
> + __idmap_cpu_set_reserved_ttbr1 x16, x17
> + wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>
> .unreq swapper_ttb
> .unreq flag_ptr
> -SYM_FUNC_END(idmap_kpti_install_ng_mappings)
> +SYM_FUNC_END(bbml2_wait_for_repainting)
> .popsection
> -#endif
>
> /*
> * __cpu_setup
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
2025-08-05 7:54 ` Ryan Roberts
@ 2025-08-05 17:45 ` Yang Shi
0 siblings, 0 replies; 33+ messages in thread
From: Yang Shi @ 2025-08-05 17:45 UTC (permalink / raw)
To: Ryan Roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
dev.jain, scott, cl
Cc: linux-arm-kernel, linux-kernel
On 8/5/25 12:54 AM, Ryan Roberts wrote:
> Hi Yang,
>
> On 24/07/2025 23:11, Yang Shi wrote:
>> The kernel linear mapping is painted in very early stage of system boot.
>> The cpufeature has not been finalized yet at this point. So the linear
>> mapping is determined by the capability of boot CPU. If the boot CPU
>> supports BBML2, large block mapping will be used for linear mapping.
>>
>> But the secondary CPUs may not support BBML2, so repaint the linear mapping
>> if large block mapping is used and the secondary CPUs don't support BBML2
>> once cpufeature is finalized on all CPUs.
>>
>> If the boot CPU doesn't support BBML2 or the secondary CPUs have the
>> same BBML2 capability with the boot CPU, repainting the linear mapping
>> is not needed.
>>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>> arch/arm64/include/asm/mmu.h | 6 +-
>> arch/arm64/kernel/cpufeature.c | 8 ++
>> arch/arm64/mm/mmu.c | 173 +++++++++++++++++++++++++++------
>> arch/arm64/mm/pageattr.c | 2 +-
>> arch/arm64/mm/proc.S | 57 ++++++++---
>> 5 files changed, 196 insertions(+), 50 deletions(-)
> [...]
>
>> +int __init linear_map_split_to_ptes(void *__unused)
>> +{
>> + typedef void (repaint_wait_fn)(void);
>> + extern repaint_wait_fn bbml2_wait_for_repainting;
>> + repaint_wait_fn *wait_fn;
>> +
>> + int cpu = smp_processor_id();
>> +
>> + wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
>> +
>> + /*
>> + * Repainting just can be run on CPU 0 because we just can be sure
>> + * CPU 0 supports BBML2.
>> + */
>> + if (!cpu) {
>> + phys_addr_t kernel_start = __pa_symbol(_stext);
>> + phys_addr_t kernel_end = __pa_symbol(__init_begin);
>> + phys_addr_t start, end;
>> + unsigned long vstart, vend;
>> + int flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>> + u64 i;
>> + int ret;
>> +
>> + /*
>> + * Wait for all secondary CPUs get prepared for repainting
>> + * the linear mapping.
>> + */
>> + smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());
>> +
>> + memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
>> + /* Split the whole linear mapping */
>> + for_each_mem_range(i, &start, &end) {
>> + if (start >= end)
>> + return -EINVAL;
>> +
>> + vstart = __phys_to_virt(start);
>> + vend = __phys_to_virt(end);
>> + ret = split_kernel_pgtable_mapping(vstart, vend, flags);
Hi Ryan,
> I've been thinking about this; I think the best approach is to use the pagewalk
> API here, then you don't need to implement your own pgtable walker; you can just
> implement the pud, pmd and pte callbacks to do the splitting and they can reuse
> common split helper functions. This reduces code size quite a bit I think. And
> also means that for split_kernel_pgtable_mapping() you can just pass a single
> address and don't need to iterate over every entry.
Using the pgtable walker API is fine with me. My biggest concern was how to
reuse the split code for repainting, and I think this approach basically solves
that problem.
>
> I started prototyping this to prove to myself that it is possible and ended up
> with quite a clean implementation. I'm going to post that as a v6 RFC shortly -
> I hope that's ok. I've retained you as primary author since it's all based on
> your work. I'm hoping that the posting will speed up review so we can hopefully
> get this into 6.18.
Thank you for making the prototype. I will take a look at it and reply in
that series directly.
Regards,
Yang
>
> Thanks,
> Ryan
>
>> + if (ret)
>> + panic("Failed to split linear mappings\n");
>> +
>> + flush_tlb_kernel_range(vstart, vend);
>> + }
>> + memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
>> +
>> + /*
>> + * Relies on dsb in flush_tlb_kernel_range() to avoid
>> + * reordering before any page table split operations.
>> + */
>> + WRITE_ONCE(repaint_done, 0);
>> + } else {
>> + /*
>> + * The secondary CPUs can't run in the same address space
>> + * with CPU 0 because accessing the linear mapping address
>> + * when CPU 0 is repainting it is not safe.
>> + *
>> + * Let the secondary CPUs run busy loop in idmap address
>> + * space when repainting is ongoing.
>> + */
>> + cpu_install_idmap();
>> + wait_fn();
>> + cpu_uninstall_idmap();
>> + }
>> +
>> + return 0;
>> +}
>> +
>> #ifdef CONFIG_KFENCE
>>
>> bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
>> @@ -1079,7 +1174,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
>> int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
>>
>> static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
>> - kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>> + kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
>> + bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>>
>> static void __init create_idmap(void)
>> {
>> @@ -1104,6 +1200,19 @@ static void __init create_idmap(void)
>> IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>> __phys_to_virt(ptep) - ptep);
>> }
>> +
>> + /*
>> + * Setup idmap mapping for repaint_done flag. It will be used if
>> + * repainting the linear mapping is needed later.
>> + */
>> + if (linear_map_requires_bbml2) {
>> + u64 pa = __pa_symbol(&repaint_done);
>> + ptep = __pa_symbol(bbml2_ptes);
>> +
>> + __pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
>> + IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>> + __phys_to_virt(ptep) - ptep);
>> + }
>> }
>>
>> void __init paging_init(void)
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 6566aa9d8abb..4471d7e510a1 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -140,7 +140,7 @@ static int update_range_prot(unsigned long start, unsigned long size,
>> data.set_mask = set_mask;
>> data.clear_mask = clear_mask;
>>
>> - ret = split_kernel_pgtable_mapping(start, start + size);
>> + ret = split_kernel_pgtable_mapping(start, start + size, 0);
>> if (WARN_ON_ONCE(ret))
>> return ret;
>>
>> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
>> index 80d470aa469d..f0f9c49a4466 100644
>> --- a/arch/arm64/mm/proc.S
>> +++ b/arch/arm64/mm/proc.S
>> @@ -239,6 +239,25 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
>> dsb nshst
>> .endm
>>
>> + .macro wait_for_boot_cpu, tmp1, tmp2, tmp3, tmp4
>> + /* Increment the flag to let the boot CPU know we're ready */
>> +1: ldxr \tmp3, [\tmp2]
>> + add \tmp3, \tmp3, #1
>> + stxr \tmp4, \tmp3, [\tmp2]
>> + cbnz \tmp4, 1b
>> +
>> + /* Wait for the boot CPU to finish its job */
>> + sevl
>> +1: wfe
>> + ldxr \tmp3, [\tmp2]
>> + cbnz \tmp3, 1b
>> +
>> + /* All done, act like nothing happened */
>> + msr ttbr1_el1, \tmp1
>> + isb
>> + ret
>> + .endm
>> +
>> /*
>> * void __kpti_install_ng_mappings(int cpu, int num_secondaries, phys_addr_t temp_pgd,
>> * unsigned long temp_pte_va)
>> @@ -416,29 +435,35 @@ alternative_else_nop_endif
>> __idmap_kpti_secondary:
>> /* Uninstall swapper before surgery begins */
>> __idmap_cpu_set_reserved_ttbr1 x16, x17
>> + wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>>
>> - /* Increment the flag to let the boot CPU we're ready */
>> -1: ldxr w16, [flag_ptr]
>> - add w16, w16, #1
>> - stxr w17, w16, [flag_ptr]
>> - cbnz w17, 1b
>> + .unreq swapper_ttb
>> + .unreq flag_ptr
>> +SYM_FUNC_END(idmap_kpti_install_ng_mappings)
>> + .popsection
>> +#endif
>>
>> - /* Wait for the boot CPU to finish messing around with swapper */
>> - sevl
>> -1: wfe
>> - ldxr w16, [flag_ptr]
>> - cbnz w16, 1b
>> +/*
>> + * Wait for repainting is done. Run on secondary CPUs
>> + * only.
>> + */
>> + .pushsection ".data", "aw", %progbits
>> +SYM_DATA(repaint_done, .long 1)
>> + .popsection
>>
>> - /* All done, act like nothing happened */
>> - msr ttbr1_el1, swapper_ttb
>> - isb
>> - ret
>> + .pushsection ".idmap.text", "a"
>> +SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
>> + swapper_ttb .req x0
>> + flag_ptr .req x1
>> + mrs swapper_ttb, ttbr1_el1
>> + adr_l flag_ptr, repaint_done
>> + __idmap_cpu_set_reserved_ttbr1 x16, x17
>> + wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>>
>> .unreq swapper_ttb
>> .unreq flag_ptr
>> -SYM_FUNC_END(idmap_kpti_install_ng_mappings)
>> +SYM_FUNC_END(bbml2_wait_for_repainting)
>> .popsection
>> -#endif
>>
>> /*
>> * __cpu_setup
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-08-01 14:35 ` Ryan Roberts
2025-08-04 10:07 ` Ryan Roberts
@ 2025-08-05 18:53 ` Yang Shi
2025-08-06 7:20 ` Ryan Roberts
1 sibling, 1 reply; 33+ messages in thread
From: Yang Shi @ 2025-08-05 18:53 UTC (permalink / raw)
To: Ryan Roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
dev.jain, scott, cl
Cc: linux-arm-kernel, linux-kernel
On 8/1/25 7:35 AM, Ryan Roberts wrote:
> On 24/07/2025 23:11, Yang Shi wrote:
>> When rodata=full is specified, kernel linear mapping has to be mapped at
>> PTE level since large page table can't be split due to break-before-make
>> rule on ARM64.
>>
>> This resulted in a couple of problems:
>> - performance degradation
>> - more TLB pressure
>> - memory waste for kernel page table
>>
>> With FEAT_BBM level 2 support, splitting large block page table to
>> smaller ones doesn't need to make the page table entry invalid anymore.
>> This allows kernel split large block mapping on the fly.
>>
>> Add kernel page table split support and use large block mapping by
>> default when FEAT_BBM level 2 is supported for rodata=full. When
>> changing permissions for kernel linear mapping, the page table will be
>> split to smaller size.
>>
>> The machine without FEAT_BBM level 2 will fallback to have kernel linear
>> mapping PTE-mapped when rodata=full.
>>
>> With this we saw significant performance boost with some benchmarks and
>> much less memory consumption on my AmpereOne machine (192 cores, 1P) with
>> 256GB memory.
>>
>> * Memory use after boot
>> Before:
>> MemTotal: 258988984 kB
>> MemFree: 254821700 kB
>>
>> After:
>> MemTotal: 259505132 kB
>> MemFree: 255410264 kB
>>
>> Around 500MB more memory are free to use. The larger the machine, the
>> more memory saved.
>>
>> * Memcached
>> We saw performance degradation when running Memcached benchmark with
>> rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
>> With this patchset we saw ops/sec is increased by around 3.5%, P99
>> latency is reduced by around 9.6%.
>> The gain mainly came from reduced kernel TLB misses. The kernel TLB
>> MPKI is reduced by 28.5%.
>>
>> The benchmark data is now on par with rodata=on too.
>>
>> * Disk encryption (dm-crypt) benchmark
>> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with disk
>> encryption (by dm-crypt).
>> fio --directory=/data --random_generator=lfsr --norandommap --randrepeat 1 \
>> --status-interval=999 --rw=write --bs=4k --loops=1 --ioengine=sync \
>> --iodepth=1 --numjobs=1 --fsync_on_close=1 --group_reporting --thread \
>> --name=iops-test-job --eta-newline=1 --size 100G
>>
>> The IOPS is increased by 90% - 150% (the variance is high, but the worst
>> number of good case is around 90% more than the best number of bad case).
>> The bandwidth is increased and the avg clat is reduced proportionally.
>>
>> * Sequential file read
>> Read 100G file sequentially on XFS (xfs_io read with page cache populated).
>> The bandwidth is increased by 150%.
>>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>> arch/arm64/include/asm/cpufeature.h | 34 ++++
>> arch/arm64/include/asm/mmu.h | 1 +
>> arch/arm64/include/asm/pgtable.h | 5 +
>> arch/arm64/kernel/cpufeature.c | 31 +--
>> arch/arm64/mm/mmu.c | 293 +++++++++++++++++++++++++++-
>> arch/arm64/mm/pageattr.c | 4 +
>> 6 files changed, 333 insertions(+), 35 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index bf13d676aae2..d0d394cc837d 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -871,6 +871,40 @@ static inline bool system_supports_pmuv3(void)
>> return cpus_have_final_cap(ARM64_HAS_PMUV3);
>> }
>>
>> +static inline bool bbml2_noabort_available(void)
>> +{
>> + /*
>> + * We want to allow usage of BBML2 in as wide a range of kernel contexts
>> + * as possible. This list is therefore an allow-list of known-good
>> + * implementations that both support BBML2 and additionally, fulfill the
>> + * extra constraint of never generating TLB conflict aborts when using
>> + * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
>> + * kernel contexts difficult to prove safe against recursive aborts).
>> + *
>> + * Note that implementations can only be considered "known-good" if their
>> + * implementors attest to the fact that the implementation never raises
>> + * TLB conflict aborts for BBML2 mapping granularity changes.
>> + */
>> + static const struct midr_range supports_bbml2_noabort_list[] = {
>> + MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
>> + MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
>> + MIDR_ALL_VERSIONS(MIDR_AMPERE1),
>> + MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
>> + {}
>> + };
>> +
>> + /* Does our cpu guarantee to never raise TLB conflict aborts? */
>> + if (!is_midr_in_range_list(supports_bbml2_noabort_list))
>> + return false;
>> +
>> + /*
>> + * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
>> + * about whether the MIDR check passes.
>> + */
>> +
>> + return true;
>> +}
> I don't think this function should be inline. Won't we end up duplicating the
> midr list everywhere? Suggest moving back to cpufeature.c.
Yes, you are right.
>
>> +
>> static inline bool system_supports_bbml2_noabort(void)
>> {
>> return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
>> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
>> index 6e8aa8e72601..57f4b25e6f33 100644
>> --- a/arch/arm64/include/asm/mmu.h
>> +++ b/arch/arm64/include/asm/mmu.h
>> @@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>> pgprot_t prot, bool page_mappings_only);
>> extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>> extern void mark_linear_text_alias_ro(void);
>> +extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);
>>
>> /*
>> * This check is triggered during the early boot before the cpufeature
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index ba63c8736666..ad2a6a7e86b0 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -371,6 +371,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
>> return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
>> }
>>
>> +static inline pmd_t pmd_mknoncont(pmd_t pmd)
>> +{
>> + return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
>> +}
>> +
>> #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
>> static inline int pte_uffd_wp(pte_t pte)
>> {
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index d2a8a509a58e..1c96016a7a41 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -2215,36 +2215,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
>>
>> static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
>> {
>> - /*
>> - * We want to allow usage of BBML2 in as wide a range of kernel contexts
>> - * as possible. This list is therefore an allow-list of known-good
>> - * implementations that both support BBML2 and additionally, fulfill the
>> - * extra constraint of never generating TLB conflict aborts when using
>> - * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
>> - * kernel contexts difficult to prove safe against recursive aborts).
>> - *
>> - * Note that implementations can only be considered "known-good" if their
>> - * implementors attest to the fact that the implementation never raises
>> - * TLB conflict aborts for BBML2 mapping granularity changes.
>> - */
>> - static const struct midr_range supports_bbml2_noabort_list[] = {
>> - MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
>> - MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
>> - MIDR_ALL_VERSIONS(MIDR_AMPERE1),
>> - MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
>> - {}
>> - };
>> -
>> - /* Does our cpu guarantee to never raise TLB conflict aborts? */
>> - if (!is_midr_in_range_list(supports_bbml2_noabort_list))
>> - return false;
>> -
>> - /*
>> - * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
>> - * about whether the MIDR check passes.
>> - */
>> -
>> - return true;
>> + return bbml2_noabort_available();
>> }
>>
>> #ifdef CONFIG_ARM64_PAN
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 3d5fb37424ab..f63b39613571 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -480,6 +480,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>> int flags);
>> #endif
>>
>> +#define INVALID_PHYS_ADDR -1
>> +
>> static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>> enum pgtable_type pgtable_type)
>> {
>> @@ -487,7 +489,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>> struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
>> phys_addr_t pa;
>>
>> - BUG_ON(!ptdesc);
>> + if (!ptdesc)
>> + return INVALID_PHYS_ADDR;
>> +
>> pa = page_to_phys(ptdesc_page(ptdesc));
>>
>> switch (pgtable_type) {
>> @@ -509,15 +513,29 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>> }
>>
>> static phys_addr_t __maybe_unused
>> -pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>> +split_pgtable_alloc(enum pgtable_type pgtable_type)
>> {
>> return __pgd_pgtable_alloc(&init_mm, pgtable_type);
>> }
>>
>> +static phys_addr_t __maybe_unused
>> +pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>> +{
>> + phys_addr_t pa;
>> +
>> + pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
>> + BUG_ON(pa == INVALID_PHYS_ADDR);
>> + return pa;
>> +}
>> +
>> static phys_addr_t
>> pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
>> {
>> - return __pgd_pgtable_alloc(NULL, pgtable_type);
>> + phys_addr_t pa;
>> +
>> + pa = __pgd_pgtable_alloc(NULL, pgtable_type);
>> + BUG_ON(pa == INVALID_PHYS_ADDR);
>> + return pa;
>> }
> The allocation all looks clean to me now. Thanks.
Thank you for the suggestion.
>
>>
>> /*
>> @@ -552,6 +570,254 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>> pgd_pgtable_alloc_special_mm, flags);
>> }
>>
>> +static DEFINE_MUTEX(pgtable_split_lock);
>> +
>> +static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end)
>> +{
>> + pte_t *ptep;
>> + unsigned long next;
>> + unsigned int nr;
>> + unsigned long span;
>> +
>> + ptep = pte_offset_kernel(pmdp, addr);
>> +
>> + do {
>> + pte_t *_ptep;
>> +
>> + nr = 0;
>> + next = pte_cont_addr_end(addr, end);
>> + if (next < end)
>> + nr = max(nr, ((end - next) / CONT_PTE_SIZE));
>> + span = nr * CONT_PTE_SIZE;
>> +
>> + _ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
>> + ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
>> +
>> + if (((addr | next) & ~CONT_PTE_MASK) == 0)
>> + continue;
>> +
>> + if (!pte_cont(__ptep_get(_ptep)))
>> + continue;
>> +
>> + for (int i = 0; i < CONT_PTES; i++, _ptep++)
>> + __set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
>> + } while (addr = next + span, addr != end);
>> +
>> + return 0;
>> +}
>> +
>> +static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end)
>> +{
>> + unsigned long next;
>> + unsigned int nr;
>> + unsigned long span;
>> + int ret = 0;
>> +
>> + do {
>> + pmd_t pmd;
>> +
>> + nr = 1;
>> + next = pmd_addr_end(addr, end);
>> + if (next < end)
>> + nr = max(nr, ((end - next) / PMD_SIZE));
>> + span = (nr - 1) * PMD_SIZE;
>> +
>> + if (((addr | next) & ~PMD_MASK) == 0)
>> + continue;
>> +
>> + pmd = pmdp_get(pmdp);
>> + if (pmd_leaf(pmd)) {
>> + phys_addr_t pte_phys;
>> + pte_t *ptep;
>> + pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN |
>> + PMD_TABLE_AF;
>> + unsigned long pfn = pmd_pfn(pmd);
>> + pgprot_t prot = pmd_pgprot(pmd);
>> +
>> + pte_phys = split_pgtable_alloc(TABLE_PTE);
>> + if (pte_phys == INVALID_PHYS_ADDR)
>> + return -ENOMEM;
>> +
>> + if (pgprot_val(prot) & PMD_SECT_PXN)
>> + pmdval |= PMD_TABLE_PXN;
>> +
>> + prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
>> + PTE_TYPE_PAGE);
>> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> + ptep = (pte_t *)phys_to_virt(pte_phys);
>> + for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
>> + __set_pte(ptep, pfn_pte(pfn, prot));
>> +
>> + dsb(ishst);
>> +
>> + __pmd_populate(pmdp, pte_phys, pmdval);
>> + }
>> +
>> + ret = split_cont_pte(pmdp, addr, next);
>> + if (ret)
>> + break;
>> + } while (pmdp += nr, addr = next + span, addr != end);
>> +
>> + return ret;
>> +}
>> +
>> +static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end)
>> +{
>> + pmd_t *pmdp;
>> + unsigned long next;
>> + unsigned int nr;
>> + unsigned long span;
>> + int ret = 0;
>> +
>> + pmdp = pmd_offset(pudp, addr);
>> +
>> + do {
>> + pmd_t *_pmdp;
>> +
>> + nr = 0;
>> + next = pmd_cont_addr_end(addr, end);
>> + if (next < end)
>> + nr = max(nr, ((end - next) / CONT_PMD_SIZE));
>> + span = nr * CONT_PMD_SIZE;
>> +
>> + if (((addr | next) & ~CONT_PMD_MASK) == 0) {
>> + pmdp += pmd_index(next) - pmd_index(addr) +
>> + nr * CONT_PMDS;
>> + continue;
>> + }
>> +
>> + _pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
>> + if (!pmd_cont(pmdp_get(_pmdp)))
>> + goto split;
>> +
>> + for (int i = 0; i < CONT_PMDS; i++, _pmdp++)
>> + set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
>> +
>> +split:
>> + ret = split_pmd(pmdp, addr, next);
>> + if (ret)
>> + break;
>> +
>> + pmdp += pmd_index(next) - pmd_index(addr) + nr * CONT_PMDS;
>> + } while (addr = next + span, addr != end);
>> +
>> + return ret;
>> +}
>> +
>> +static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long end)
>> +{
>> + pud_t *pudp;
>> + unsigned long next;
>> + unsigned int nr;
>> + unsigned long span;
>> + int ret = 0;
>> +
>> + pudp = pud_offset(p4dp, addr);
>> +
>> + do {
>> + pud_t pud;
>> +
>> + nr = 1;
>> + next = pud_addr_end(addr, end);
>> + if (next < end)
>> + nr = max(nr, ((end - next) / PUD_SIZE));
>> + span = (nr - 1) * PUD_SIZE;
>> +
>> + if (((addr | next) & ~PUD_MASK) == 0)
>> + continue;
>> +
>> + pud = pudp_get(pudp);
>> + if (pud_leaf(pud)) {
>> + phys_addr_t pmd_phys;
>> + pmd_t *pmdp;
>> + pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN |
>> + PUD_TABLE_AF;
>> + unsigned long pfn = pud_pfn(pud);
>> + pgprot_t prot = pud_pgprot(pud);
>> + unsigned int step = PMD_SIZE >> PAGE_SHIFT;
>> +
>> + pmd_phys = split_pgtable_alloc(TABLE_PMD);
>> + if (pmd_phys == INVALID_PHYS_ADDR)
>> + return -ENOMEM;
>> +
>> + if (pgprot_val(prot) & PMD_SECT_PXN)
>> + pudval |= PUD_TABLE_PXN;
>> +
>> + prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
>> + PMD_TYPE_SECT);
>> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> + pmdp = (pmd_t *)phys_to_virt(pmd_phys);
>> + for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
>> + set_pmd(pmdp, pfn_pmd(pfn, prot));
>> + pfn += step;
>> + }
>> +
>> + dsb(ishst);
>> +
>> + __pud_populate(pudp, pmd_phys, pudval);
>> + }
>> +
>> + ret = split_cont_pmd(pudp, addr, next);
>> + if (ret)
>> + break;
>> + } while (pudp += nr, addr = next + span, addr != end);
>> +
>> + return ret;
>> +}
>> +
>> +static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end)
>> +{
>> + p4d_t *p4dp;
>> + unsigned long next;
>> + int ret = 0;
>> +
>> + p4dp = p4d_offset(pgdp, addr);
>> +
>> + do {
>> + next = p4d_addr_end(addr, end);
>> +
>> + ret = split_pud(p4dp, addr, next);
>> + if (ret)
>> + break;
>> + } while (p4dp++, addr = next, addr != end);
>> +
>> + return ret;
>> +}
>> +
>> +static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long end)
>> +{
>> + unsigned long next;
>> + int ret = 0;
>> +
>> + do {
>> + next = pgd_addr_end(addr, end);
>> + ret = split_p4d(pgdp, addr, next);
>> + if (ret)
>> + break;
>> + } while (pgdp++, addr = next, addr != end);
>> +
>> + return ret;
>> +}
>> +
>> +int split_kernel_pgtable_mapping(unsigned long start, unsigned long end)
>> +{
>> + int ret;
>> +
>> + if (!system_supports_bbml2_noabort())
>> + return 0;
>> +
>> + if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
>> + return -EINVAL;
>> +
>> + mutex_lock(&pgtable_split_lock);
> On reflection, I agree this lock approach is simpler than my suggestion to do it
> locklessly and I doubt this will become a bottleneck. Given x86 does it this
> way, I guess it's fine.
>
>> + arch_enter_lazy_mmu_mode();
>> + ret = split_pgd(pgd_offset_k(start), start, end);
> My instinct still remains that it would be better not to iterate over the range
> here, but instead call a "split(start); split(end);" since we just want to split
> the start and end. So the code would be simpler and probably more performant if
> we get rid of all the iteration.
It should be more performant for splitting a large range, especially when
the range includes leaf mappings at different levels. But I added some
optimization to skip leaf mappings in this version, so it should be close
to your implementation from a performance perspective. And it walks the
page table only once instead of twice, so it should be more efficient for
a small split, for example, 4K.
>
> But I'm guessing you are going to enhance this infra in the next patch to
> support splitting all entries in the range for the system-doesn't-support-bbml case?
Yes. I added a "flags" parameter in patch #4. When repainting the page
table, NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS is passed in to tell
split_pgd() to split the page table all the way down to PTEs.
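Roughly speaking, the early-exit check in split_pmd() gets gated on the
flags, e.g. (just a sketch, the real code is in patch #4):

    /*
     * Sketch only: when the caller passes NO_BLOCK_MAPPINGS, the
     * "range fully covers this PMD, nothing to split" shortcut is
     * disabled, so the block is split down to PTEs unconditionally.
     */
    if (!(flags & NO_BLOCK_MAPPINGS) &&
        ((addr | next) & ~PMD_MASK) == 0)
            continue;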
>
> Anyway, I'll take a look at the next patch then come back to review the details
> of split_pgd().
>
>> + arch_leave_lazy_mmu_mode();
>> + mutex_unlock(&pgtable_split_lock);
>> +
>> + return ret;
>> +}
>> +
>> static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
>> phys_addr_t size, pgprot_t prot)
>> {
>> @@ -639,6 +905,20 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>>
>> #endif /* CONFIG_KFENCE */
>>
>> +bool linear_map_requires_bbml2;
> Until the next patch, you're only using this variable in this file, so at the
> very least, it should be static for now. But I'm proposing below it should be
> removed entirely.
>
>> +
>> +static inline bool force_pte_mapping(void)
>> +{
>> + /*
>> + * Can't use cpufeature API to determine whether BBM level 2
>> + * is supported or not since cpufeature have not been
>> + * finalized yet.
>> + */
>> + return (!bbml2_noabort_available() && (rodata_full ||
>> + arm64_kfence_can_set_direct_map() || is_realm_world())) ||
>> + debug_pagealloc_enabled();
>> +}
>> +
>> static void __init map_mem(pgd_t *pgdp)
>> {
>> static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
>> @@ -664,7 +944,9 @@ static void __init map_mem(pgd_t *pgdp)
>>
>> early_kfence_pool = arm64_kfence_alloc_pool();
>>
>> - if (can_set_direct_map())
>> + linear_map_requires_bbml2 = !force_pte_mapping() && rodata_full;
> This looks wrong; what about the kfence_can_set_direct_map and is_realm_world
> conditions?
>
> perhaps:
>
> linear_map_requires_bbml2 = !force_pte_mapping() && can_set_direct_map()
Thanks for figuring this out.
>
>> +
>> + if (force_pte_mapping())
>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>
>> /*
>> @@ -1366,7 +1648,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>
>> VM_BUG_ON(!mhp_range_allowed(start, size, true));
>>
>> - if (can_set_direct_map())
>> + if (force_pte_mapping() ||
>> + (linear_map_requires_bbml2 && !system_supports_bbml2_noabort()))
> So force_pte_mapping() isn't actually returning what it sounds like it is; it's
> returning whether you would have to force pte mapping based on the current cpu's
> support for bbml2. Perhaps it would be better to implement force_pte_mapping() as:
>
> static inline bool force_pte_mapping(void)
> {
> bool bbml2 = (system_capabilities_finalized() &&
> system_supports_bbml2_noabort()) ||
> bbml2_noabort_available();
>
> return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
> is_realm_world())) ||
> debug_pagealloc_enabled();
> }
>
> Then we can just use force_pte_mapping() in both the boot and runtime paths
> without any adjustment based on linear_map_requires_bbml2. So you could drop
> linear_map_requires_bbml2 entirely, until the next patch at least.
Good idea.
>
>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>
>> __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index c6a85000fa0e..6566aa9d8abb 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -140,6 +140,10 @@ static int update_range_prot(unsigned long start, unsigned long size,
>> data.set_mask = set_mask;
>> data.clear_mask = clear_mask;
>>
>> + ret = split_kernel_pgtable_mapping(start, start + size);
>> + if (WARN_ON_ONCE(ret))
> I'm on the fence as to whether this warning is desirable. Would we normally want
> to warn in an OOM situation, or just unwind and carry on?
I don't have a strong preference on this. It returns an errno anyway, so
the operation, for example insmod, will fail if the split fails.
Thanks,
Yang
>
> Thanks,
> Ryan
>
>
>> + return ret;
>> +
>> arch_enter_lazy_mmu_mode();
>>
>> /*
* Re: [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs
2025-08-01 16:14 ` Ryan Roberts
@ 2025-08-05 18:59 ` Yang Shi
0 siblings, 0 replies; 33+ messages in thread
From: Yang Shi @ 2025-08-05 18:59 UTC (permalink / raw)
To: Ryan Roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
dev.jain, scott, cl
Cc: linux-arm-kernel, linux-kernel
On 8/1/25 9:14 AM, Ryan Roberts wrote:
> On 24/07/2025 23:11, Yang Shi wrote:
>> The kernel linear mapping is painted in very early stage of system boot.
>> The cpufeature has not been finalized yet at this point. So the linear
>> mapping is determined by the capability of boot CPU. If the boot CPU
>> supports BBML2, large block mapping will be used for linear mapping.
>>
>> But the secondary CPUs may not support BBML2, so repaint the linear mapping
>> if large block mapping is used and the secondary CPUs don't support BBML2
>> once cpufeature is finalized on all CPUs.
>>
>> If the boot CPU doesn't support BBML2 or the secondary CPUs have the
>> same BBML2 capability with the boot CPU, repainting the linear mapping
>> is not needed.
>>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>> arch/arm64/include/asm/mmu.h | 6 +-
>> arch/arm64/kernel/cpufeature.c | 8 ++
>> arch/arm64/mm/mmu.c | 173 +++++++++++++++++++++++++++------
>> arch/arm64/mm/pageattr.c | 2 +-
>> arch/arm64/mm/proc.S | 57 ++++++++---
>> 5 files changed, 196 insertions(+), 50 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
>> index 57f4b25e6f33..9bf50e8897e2 100644
>> --- a/arch/arm64/include/asm/mmu.h
>> +++ b/arch/arm64/include/asm/mmu.h
>> @@ -56,6 +56,8 @@ typedef struct {
>> */
>> #define ASID(mm) (atomic64_read(&(mm)->context.id) & 0xffff)
>>
>> +extern bool linear_map_requires_bbml2;
>> +
>> static inline bool arm64_kernel_unmapped_at_el0(void)
>> {
>> return alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0);
>> @@ -71,7 +73,9 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>> pgprot_t prot, bool page_mappings_only);
>> extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>> extern void mark_linear_text_alias_ro(void);
>> -extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end);
>> +extern int split_kernel_pgtable_mapping(unsigned long start, unsigned long end,
>> + unsigned int flags);
>> +extern int linear_map_split_to_ptes(void *__unused);
>>
>> /*
>> * This check is triggered during the early boot before the cpufeature
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 1c96016a7a41..23c01d679c40 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -85,6 +85,7 @@
>> #include <asm/insn.h>
>> #include <asm/kvm_host.h>
>> #include <asm/mmu_context.h>
>> +#include <asm/mmu.h>
>> #include <asm/mte.h>
>> #include <asm/hypervisor.h>
>> #include <asm/processor.h>
>> @@ -2009,6 +2010,12 @@ static int __init __kpti_install_ng_mappings(void *__unused)
>> return 0;
>> }
>>
>> +static void __init linear_map_maybe_split_to_ptes(void)
>> +{
>> + if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort())
>> + stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);
>> +}
>> +
>> static void __init kpti_install_ng_mappings(void)
>> {
>> /* Check whether KPTI is going to be used */
>> @@ -3855,6 +3862,7 @@ void __init setup_system_features(void)
>> {
>> setup_system_capabilities();
>>
>> + linear_map_maybe_split_to_ptes();
>> kpti_install_ng_mappings();
>>
>> sve_setup();
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index f63b39613571..22f2d0869fdd 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -482,11 +482,11 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>>
>> #define INVALID_PHYS_ADDR -1
>>
> [...]
>
> I'll review the actual walker separately (I've run out of time today).
>
>
>>
>> +extern u32 repaint_done;
>> +
>> +int __init linear_map_split_to_ptes(void *__unused)
>> +{
>> + typedef void (repaint_wait_fn)(void);
>> + extern repaint_wait_fn bbml2_wait_for_repainting;
>> + repaint_wait_fn *wait_fn;
>> +
>> + int cpu = smp_processor_id();
>> +
>> + wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
>> +
>> + /*
>> + * Repainting just can be run on CPU 0 because we just can be sure
>> + * CPU 0 supports BBML2.
>> + */
>> + if (!cpu) {
>> + phys_addr_t kernel_start = __pa_symbol(_stext);
>> + phys_addr_t kernel_end = __pa_symbol(__init_begin);
>> + phys_addr_t start, end;
>> + unsigned long vstart, vend;
>> + int flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>> + u64 i;
>> + int ret;
>> +
>> + /*
>> + * Wait for all secondary CPUs get prepared for repainting
>> + * the linear mapping.
>> + */
>> + smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());
> Is this correct? I would have thought the primary is waiting for the
> secondaries, so "VAL == num_online_cpus() - 1" ?
It is correct because repaint_done is initialized to 1.
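To spell out the rendezvous (condensed from the patch, illustrative only):

    /* boot CPU: 1 is its own "vote", so wait for all the secondaries */
    smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());
    /* ... split the linear map down to PTEs ... */
    WRITE_ONCE(repaint_done, 0);    /* release the secondaries */

    /*
     * Each secondary runs from the idmap: it atomically increments
     * repaint_done (the ldxr/stxr loop in wait_for_boot_cpu), then
     * wfe-spins until it reads 0, then restores ttbr1 and returns.
     */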
>
>> +
>> + memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
>> + /* Split the whole linear mapping */
>> + for_each_mem_range(i, &start, &end) {
> I think I asked this in the last round; but I just want to double check;
> memblock is definitely still valid here and we are definitely going to get
> exactly the same regions out as we did in map_mem()? I wonder if it's possible
> between then and now that some other component has reserved some memory? In that
> case we wouldn't walk that region?
I think it is kept unchanged. The ptdump shows no block or cont mappings
for the linear mapping if I force repainting of the page table.
>
> Perhaps it would be safer (and simpler) to just walk all of [PAGE_OFFSET,
> _stext) and [__init_begin, PAGE_END) and ignore the holes?
This may walk some holes, particularly on multi-socket machines? It
doesn't have a correctness issue, it's just not very efficient.
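For reference, the fixed-range version would be roughly the below (sketch
only; I'm assuming lm_alias() is the right way to get the linear-map
addresses of the image boundaries):

    ret = split_kernel_pgtable_mapping(PAGE_OFFSET,
                    (unsigned long)lm_alias(_stext), flags);
    if (!ret)
            ret = split_kernel_pgtable_mapping(
                    (unsigned long)lm_alias(__init_begin),
                    PAGE_END, flags);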
>
>> + if (start >= end)
>> + return -EINVAL;
>> +
>> + vstart = __phys_to_virt(start);
>> + vend = __phys_to_virt(end);
>> + ret = split_kernel_pgtable_mapping(vstart, vend, flags);
>> + if (ret)
>> + panic("Failed to split linear mappings\n");
>> +
>> + flush_tlb_kernel_range(vstart, vend);
>> + }
>> + memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
>> +
>> + /*
>> + * Relies on dsb in flush_tlb_kernel_range() to avoid
>> + * reordering before any page table split operations.
>> + */
>> + WRITE_ONCE(repaint_done, 0);
>> + } else {
>> + /*
>> + * The secondary CPUs can't run in the same address space
>> + * with CPU 0 because accessing the linear mapping address
>> + * when CPU 0 is repainting it is not safe.
>> + *
>> + * Let the secondary CPUs run busy loop in idmap address
>> + * space when repainting is ongoing.
>> + */
>> + cpu_install_idmap();
>> + wait_fn();
>> + cpu_uninstall_idmap();
>> + }
>> +
>> + return 0;
>> +}
>> +
>> #ifdef CONFIG_KFENCE
>>
>> bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
>> @@ -1079,7 +1174,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
>> int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
>>
>> static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
>> - kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>> + kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
>> + bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>>
>> static void __init create_idmap(void)
>> {
>> @@ -1104,6 +1200,19 @@ static void __init create_idmap(void)
>> IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>> __phys_to_virt(ptep) - ptep);
>> }
>> +
>> + /*
>> + * Setup idmap mapping for repaint_done flag. It will be used if
>> + * repainting the linear mapping is needed later.
>> + */
>> + if (linear_map_requires_bbml2) {
>> + u64 pa = __pa_symbol(&repaint_done);
>> + ptep = __pa_symbol(bbml2_ptes);
>> +
>> + __pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
>> + IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>> + __phys_to_virt(ptep) - ptep);
>> + }
>> }
>>
>> void __init paging_init(void)
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 6566aa9d8abb..4471d7e510a1 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -140,7 +140,7 @@ static int update_range_prot(unsigned long start, unsigned long size,
>> data.set_mask = set_mask;
>> data.clear_mask = clear_mask;
>>
>> - ret = split_kernel_pgtable_mapping(start, start + size);
>> + ret = split_kernel_pgtable_mapping(start, start + size, 0);
>> if (WARN_ON_ONCE(ret))
>> return ret;
>>
>> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
>> index 80d470aa469d..f0f9c49a4466 100644
>> --- a/arch/arm64/mm/proc.S
>> +++ b/arch/arm64/mm/proc.S
>> @@ -239,6 +239,25 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
>> dsb nshst
>> .endm
>>
>> + .macro wait_for_boot_cpu, tmp1, tmp2, tmp3, tmp4
>> + /* Increment the flag to let the boot CPU know we're ready */
>> +1: ldxr \tmp3, [\tmp2]
>> + add \tmp3, \tmp3, #1
>> + stxr \tmp4, \tmp3, [\tmp2]
>> + cbnz \tmp4, 1b
>> +
>> + /* Wait for the boot CPU to finish its job */
>> + sevl
>> +1: wfe
>> + ldxr \tmp3, [\tmp2]
>> + cbnz \tmp3, 1b
>> +
>> + /* All done, act like nothing happened */
>> + msr ttbr1_el1, \tmp1
>> + isb
>> + ret
>> + .endm
> You've defined the macro within "#ifdef CONFIG_UNMAP_KERNEL_AT_EL0" but then
> need to use it outside of that scope.
>
> But I don't think this needs to be a macro; I think it would be better as a
> function (as I suggested in the last round). Then the text only needs to appear
> once in the image and it can be used from both places (see below).
>
>> +
>> /*
>> * void __kpti_install_ng_mappings(int cpu, int num_secondaries, phys_addr_t temp_pgd,
>> * unsigned long temp_pte_va)
>> @@ -416,29 +435,35 @@ alternative_else_nop_endif
>> __idmap_kpti_secondary:
>> /* Uninstall swapper before surgery begins */
>> __idmap_cpu_set_reserved_ttbr1 x16, x17
>> + wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>>
>> - /* Increment the flag to let the boot CPU we're ready */
>> -1: ldxr w16, [flag_ptr]
>> - add w16, w16, #1
>> - stxr w17, w16, [flag_ptr]
>> - cbnz w17, 1b
>> + .unreq swapper_ttb
>> + .unreq flag_ptr
>> +SYM_FUNC_END(idmap_kpti_install_ng_mappings)
>> + .popsection
>> +#endif
>>
>> - /* Wait for the boot CPU to finish messing around with swapper */
>> - sevl
>> -1: wfe
>> - ldxr w16, [flag_ptr]
>> - cbnz w16, 1b
>> +/*
>> + * Wait for repainting is done. Run on secondary CPUs
>> + * only.
>> + */
>> + .pushsection ".data", "aw", %progbits
>> +SYM_DATA(repaint_done, .long 1)
>> + .popsection
>>
>> - /* All done, act like nothing happened */
>> - msr ttbr1_el1, swapper_ttb
>> - isb
>> - ret
>> + .pushsection ".idmap.text", "a"
>> +SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
>> + swapper_ttb .req x0
>> + flag_ptr .req x1
>> + mrs swapper_ttb, ttbr1_el1
>> + adr_l flag_ptr, repaint_done
>> + __idmap_cpu_set_reserved_ttbr1 x16, x17
>> + wait_for_boot_cpu swapper_ttb, flag_ptr, w16, w17
>>
>> .unreq swapper_ttb
>> .unreq flag_ptr
>> -SYM_FUNC_END(idmap_kpti_install_ng_mappings)
>> +SYM_FUNC_END(bbml2_wait_for_repainting)
>> .popsection
>> -#endif
> How about this instead?
Looks good to me.
Thanks,
Yang
>
> ---8<---
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 8c75965afc9e..a116b2b8ad59 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -416,8 +416,30 @@ alternative_else_nop_endif
> __idmap_kpti_secondary:
> /* Uninstall swapper before surgery begins */
> __idmap_cpu_set_reserved_ttbr1 x16, x17
> + b secondary_cpu_wait
> +
> + .unreq swapper_ttb
> + .unreq flag_ptr
> +SYM_FUNC_END(idmap_kpti_install_ng_mappings)
> + .popsection
> +#endif
> +
> + .pushsection ".data", "aw", %progbits
> +SYM_DATA(repaint_done, .long 1)
> + .popsection
> +
> + .pushsection ".idmap.text", "a"
> +SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
> + /* Must be same registers as in idmap_kpti_install_ng_mappings */
> + swapper_ttb .req x3
> + flag_ptr .req x4
> +
> + mrs swapper_ttb, ttbr1_el1
> + adr_l flag_ptr, repaint_done
> + __idmap_cpu_set_reserved_ttbr1 x16, x17
>
> /* Increment the flag to let the boot CPU we're ready */
> +secondary_cpu_wait:
> 1: ldxr w16, [flag_ptr]
> add w16, w16, #1
> stxr w17, w16, [flag_ptr]
> @@ -436,9 +458,8 @@ __idmap_kpti_secondary:
>
> .unreq swapper_ttb
> .unreq flag_ptr
> -SYM_FUNC_END(idmap_kpti_install_ng_mappings)
> +SYM_FUNC_END(bbml2_wait_for_repainting)
> .popsection
> -#endif
>
> /*
> * __cpu_setup
> ---8<---
>
> Thanks,
> Ryan
>
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-07-29 12:34 ` Dev Jain
@ 2025-08-05 21:28 ` Yang Shi
2025-08-06 0:10 ` Yang Shi
0 siblings, 1 reply; 33+ messages in thread
From: Yang Shi @ 2025-08-05 21:28 UTC (permalink / raw)
To: Dev Jain, ryan.roberts, will, catalin.marinas, akpm,
Miko.Lenczewski, scott, cl
Cc: linux-arm-kernel, linux-kernel
On 7/29/25 5:34 AM, Dev Jain wrote:
>
> On 25/07/25 3:41 am, Yang Shi wrote:
>> [----- snip -----]
>> #ifdef CONFIG_ARM64_PAN
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 3d5fb37424ab..f63b39613571 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -480,6 +480,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir,
>> phys_addr_t phys, unsigned long virt,
>> int flags);
>> #endif
>> +#define INVALID_PHYS_ADDR -1
>> +
>> static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>> enum pgtable_type pgtable_type)
>> {
>> @@ -487,7 +489,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct
>> mm_struct *mm,
>> struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL &
>> ~__GFP_ZERO, 0);
>> phys_addr_t pa;
>> - BUG_ON(!ptdesc);
>> + if (!ptdesc)
>> + return INVALID_PHYS_ADDR;
>> +
>> pa = page_to_phys(ptdesc_page(ptdesc));
>> switch (pgtable_type) {
>> @@ -509,15 +513,29 @@ static phys_addr_t __pgd_pgtable_alloc(struct
>> mm_struct *mm,
>> }
>> static phys_addr_t __maybe_unused
>> -pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>> +split_pgtable_alloc(enum pgtable_type pgtable_type)
>> {
>> return __pgd_pgtable_alloc(&init_mm, pgtable_type);
>> }
>> +static phys_addr_t __maybe_unused
>> +pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>> +{
>> + phys_addr_t pa;
>> +
>> + pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
>> + BUG_ON(pa == INVALID_PHYS_ADDR);
>> + return pa;
>> +}
>> +
>> static phys_addr_t
>> pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
>> {
>> - return __pgd_pgtable_alloc(NULL, pgtable_type);
>> + phys_addr_t pa;
>> +
>> + pa = __pgd_pgtable_alloc(NULL, pgtable_type);
>> + BUG_ON(pa == INVALID_PHYS_ADDR);
>> + return pa;
>> }
>> /*
>> @@ -552,6 +570,254 @@ void __init create_pgd_mapping(struct mm_struct
>> *mm, phys_addr_t phys,
>> pgd_pgtable_alloc_special_mm, flags);
>> }
>> +static DEFINE_MUTEX(pgtable_split_lock);
>
> Thanks for taking a separate lock.
>
>> +
>> +static int split_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned
>> long end)
>> +{
>> + pte_t *ptep;
>> + unsigned long next;
>> + unsigned int nr;
>> + unsigned long span;
>> +
>> + ptep = pte_offset_kernel(pmdp, addr);
>> +
>> + do {
>> + pte_t *_ptep;
>> +
>> + nr = 0;
>> + next = pte_cont_addr_end(addr, end);
>> + if (next < end)
>> + nr = max(nr, ((end - next) / CONT_PTE_SIZE));
>> + span = nr * CONT_PTE_SIZE;
>> +
>> + _ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
>> + ptep += pte_index(next) - pte_index(addr) + nr * CONT_PTES;
>> +
>> + if (((addr | next) & ~CONT_PTE_MASK) == 0)
>> + continue;
>> +
>> + if (!pte_cont(__ptep_get(_ptep)))
>> + continue;
>> +
>> + for (int i = 0; i < CONT_PTES; i++, _ptep++)
>> + __set_pte(_ptep, pte_mknoncont(__ptep_get(_ptep)));
>> + } while (addr = next + span, addr != end);
>> +
>> + return 0;
>> +}
>> +
>> +static int split_pmd(pmd_t *pmdp, unsigned long addr, unsigned long
>> end)
>> +{
>> + unsigned long next;
>> + unsigned int nr;
>> + unsigned long span;
>> + int ret = 0;
>> +
>> + do {
>> + pmd_t pmd;
>> +
>> + nr = 1;
>> + next = pmd_addr_end(addr, end);
>> + if (next < end)
>> + nr = max(nr, ((end - next) / PMD_SIZE));
>> + span = (nr - 1) * PMD_SIZE;
>> +
>> + if (((addr | next) & ~PMD_MASK) == 0)
>> + continue;
>> +
>> + pmd = pmdp_get(pmdp);
>> + if (pmd_leaf(pmd)) {
>> + phys_addr_t pte_phys;
>> + pte_t *ptep;
>> + pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN |
>> + PMD_TABLE_AF;
>> + unsigned long pfn = pmd_pfn(pmd);
>> + pgprot_t prot = pmd_pgprot(pmd);
>> +
>> + pte_phys = split_pgtable_alloc(TABLE_PTE);
>> + if (pte_phys == INVALID_PHYS_ADDR)
>> + return -ENOMEM;
>> +
>> + if (pgprot_val(prot) & PMD_SECT_PXN)
>> + pmdval |= PMD_TABLE_PXN;
>> +
>> + prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) |
>> + PTE_TYPE_PAGE);
>> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> + ptep = (pte_t *)phys_to_virt(pte_phys);
>> + for (int i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
>> + __set_pte(ptep, pfn_pte(pfn, prot));
>> +
>> + dsb(ishst);
>> +
>> + __pmd_populate(pmdp, pte_phys, pmdval);
>> + }
>> +
>> + ret = split_cont_pte(pmdp, addr, next);
>> + if (ret)
>> + break;
>> + } while (pmdp += nr, addr = next + span, addr != end);
>> +
>> + return ret;
>> +}
>> +
>> +static int split_cont_pmd(pud_t *pudp, unsigned long addr, unsigned
>> long end)
>> +{
>> + pmd_t *pmdp;
>> + unsigned long next;
>> + unsigned int nr;
>> + unsigned long span;
>> + int ret = 0;
>> +
>> + pmdp = pmd_offset(pudp, addr);
>> +
>> + do {
>> + pmd_t *_pmdp;
>> +
>> + nr = 0;
>> + next = pmd_cont_addr_end(addr, end);
>> + if (next < end)
>> + nr = max(nr, ((end - next) / CONT_PMD_SIZE));
>> + span = nr * CONT_PMD_SIZE;
>> +
>> + if (((addr | next) & ~CONT_PMD_MASK) == 0) {
>> + pmdp += pmd_index(next) - pmd_index(addr) +
>> + nr * CONT_PMDS;
>> + continue;
>> + }
>> +
>> + _pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
>> + if (!pmd_cont(pmdp_get(_pmdp)))
>> + goto split;
>> +
>> + for (int i = 0; i < CONT_PMDS; i++, _pmdp++)
>> + set_pmd(_pmdp, pmd_mknoncont(pmdp_get(_pmdp)));
>> +
>> +split:
>> + ret = split_pmd(pmdp, addr, next);
>> + if (ret)
>> + break;
>> +
>> + pmdp += pmd_index(next) - pmd_index(addr) + nr * CONT_PMDS;
>> + } while (addr = next + span, addr != end);
>> +
>> + return ret;
>> +}
>> +
>> +static int split_pud(p4d_t *p4dp, unsigned long addr, unsigned long
>> end)
>> +{
>> + pud_t *pudp;
>> + unsigned long next;
>> + unsigned int nr;
>> + unsigned long span;
>> + int ret = 0;
>> +
>> + pudp = pud_offset(p4dp, addr);
>> +
>> + do {
>> + pud_t pud;
>> +
>> + nr = 1;
>> + next = pud_addr_end(addr, end);
>> + if (next < end)
>> + nr = max(nr, ((end - next) / PUD_SIZE));
>> + span = (nr - 1) * PUD_SIZE;
>> +
>> + if (((addr | next) & ~PUD_MASK) == 0)
>> + continue;
>> +
>> + pud = pudp_get(pudp);
>> + if (pud_leaf(pud)) {
>> + phys_addr_t pmd_phys;
>> + pmd_t *pmdp;
>> + pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN |
>> + PUD_TABLE_AF;
>> + unsigned long pfn = pud_pfn(pud);
>> + pgprot_t prot = pud_pgprot(pud);
>> + unsigned int step = PMD_SIZE >> PAGE_SHIFT;
>> +
>> + pmd_phys = split_pgtable_alloc(TABLE_PMD);
>> + if (pmd_phys == INVALID_PHYS_ADDR)
>> + return -ENOMEM;
>> +
>> + if (pgprot_val(prot) & PMD_SECT_PXN)
>> + pudval |= PUD_TABLE_PXN;
>> +
>> + prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) |
>> + PMD_TYPE_SECT);
>> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> + pmdp = (pmd_t *)phys_to_virt(pmd_phys);
>> + for (int i = 0; i < PTRS_PER_PMD; i++, pmdp++) {
>> + set_pmd(pmdp, pfn_pmd(pfn, prot));
>> + pfn += step;
>> + }
>> +
>> + dsb(ishst);
>> +
>> + __pud_populate(pudp, pmd_phys, pudval);
>> + }
>> +
>> + ret = split_cont_pmd(pudp, addr, next);
>> + if (ret)
>> + break;
>> + } while (pudp += nr, addr = next + span, addr != end);
>> +
>> + return ret;
>> +}
>> +
>> +static int split_p4d(pgd_t *pgdp, unsigned long addr, unsigned long
>> end)
>> +{
>> + p4d_t *p4dp;
>> + unsigned long next;
>> + int ret = 0;
>> +
>> + p4dp = p4d_offset(pgdp, addr);
>> +
>> + do {
>> + next = p4d_addr_end(addr, end);
>> +
>> + ret = split_pud(p4dp, addr, next);
>> + if (ret)
>> + break;
>> + } while (p4dp++, addr = next, addr != end);
>> +
>> + return ret;
>> +}
>> +
>> +static int split_pgd(pgd_t *pgdp, unsigned long addr, unsigned long
>> end)
>> +{
>> + unsigned long next;
>> + int ret = 0;
>> +
>> + do {
>> + next = pgd_addr_end(addr, end);
>> + ret = split_p4d(pgdp, addr, next);
>> + if (ret)
>> + break;
>> + } while (pgdp++, addr = next, addr != end);
>> +
>> + return ret;
>> +}
>> +
>> +int split_kernel_pgtable_mapping(unsigned long start, unsigned long
>> end)
>> +{
>> + int ret;
>> +
>> + if (!system_supports_bbml2_noabort())
>> + return 0;
>> +
>> + if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
>> + return -EINVAL;
>> +
>> + mutex_lock(&pgtable_split_lock);
>> + arch_enter_lazy_mmu_mode();
>> + ret = split_pgd(pgd_offset_k(start), start, end);
>> + arch_leave_lazy_mmu_mode();
>> + mutex_unlock(&pgtable_split_lock);
>> +
>> + return ret;
>> +}
>> +
>> /*
>
> --- snip ---
>
> I'm afraid I'll have to agree with Ryan :) Looking at the signature of
> split_kernel_pgtable_mapping,
> one would expect that this function splits all block mappings in this
> region. But that is just a
> nit; it does not seem right to me that we are iterating over the
> entire space when we know *exactly* where
> we have to make the split, just to save on pgd/p4d/pud loads - the
> effect of which is probably cancelled
> out by unnecessary iterations your approach takes to skip the
> intermediate blocks.
The implementation aims to reuse the split code for repainting, where we
have to split all leaf mappings down to PTEs.
Now that Ryan has suggested using the pgtable walk API for repainting, the
duplicate-code problem is gone. We had some discussion in the other series.
>
> If we are concerned that most change_memory_common() operations are
> for a single page, then
> we can do something like:
>
> unsigned long size = end - start;
> bool end_split, start_split = false;
>
> if (start not aligned to block mapping)
> start_split = split(start);
>
> /*
> * split the end only if the start wasn't split, or
> * if it cannot be guaranteed that start and end lie
> * on the same contig block
> */
> if (!start_split || (size > CONT_PTE_SIZE))
> end_split = split(end);
Thanks for the suggestion. It works for some cases, but IIUC it can't
work if the range crosses page tables. For example, start is in one PMD
but end is in another PMD.
Regards,
Yang
>
>
>
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-08-05 21:28 ` Yang Shi
@ 2025-08-06 0:10 ` Yang Shi
0 siblings, 0 replies; 33+ messages in thread
From: Yang Shi @ 2025-08-06 0:10 UTC (permalink / raw)
To: Dev Jain, ryan.roberts, will, catalin.marinas, akpm,
Miko.Lenczewski, scott, cl
Cc: linux-arm-kernel, linux-kernel
>>
>> --- snip ---
>>
>> I'm afraid I'll have to agree with Ryan :) Looking at the signature
>> of split_kernel_pgtable_mapping,
>> one would expect that this function splits all block mappings in this
>> region. But that is just a
>> nit; it does not seem right to me that we are iterating over the
>> entire space when we know *exactly* where
>> we have to make the split, just to save on pgd/p4d/pud loads - the
>> effect of which is probably cancelled
>> out by unnecessary iterations your approach takes to skip the
>> intermediate blocks.
>
> The implementation is aimed to reuse the split code for repainting. We
> have to split all leaf mappings down to PTEs for repainting.
>
> Now Ryan suggested use pgtable walk API for repainting, it made the
> duplicate code problem gone. We had some discussion in the other series.
>
>>
>> If we are concerned that most change_memory_common() operations are
>> for a single page, then
>> we can do something like:
>>
>> unsigned long size = end - start;
>> bool end_split, start_split = false;
>>
>> if (start not aligned to block mapping)
>> start_split = split(start);
>>
>> /*
>> * split the end only if the start wasn't split, or
>> * if it cannot be guaranteed that start and end lie
>> * on the same contig block
>> */
>> if (!start_split || (size > CONT_PTE_SIZE))
>> end_split = split(end);
>
> Thanks for the suggestion. It works for some cases, but I don't think
> it can work if the range is cross page table IIUC. For example, start
> is in a PMD, but end is in another PMD.
I think we have to call split_mapping(end) if the size is greater than
page size, i.e.:
split_mapping(start);
if (size > PAGE_SIZE)
split_mapping(end);
This avoids walking the page table twice for a page-size range, which
should be the most common case in the current kernel.
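Spelled out with error handling, as a sketch (split_mapping() stands in
for the single-address splitter in your version):

    ret = split_mapping(start);
    if (ret)
            return ret;

    /*
     * Once the start is split, a page-sized range cannot straddle a
     * remaining block boundary, so a second walk is only needed for
     * larger ranges.
     */
    if (end - start > PAGE_SIZE)
            ret = split_mapping(end);

    return ret;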
Thanks,
Yang
>
> Regards,
> Yang
>
>>
>>
>>
>
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-08-05 18:53 ` Yang Shi
@ 2025-08-06 7:20 ` Ryan Roberts
2025-08-07 0:44 ` Yang Shi
0 siblings, 1 reply; 33+ messages in thread
From: Ryan Roberts @ 2025-08-06 7:20 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 05/08/2025 19:53, Yang Shi wrote:
[...]
>>> + arch_enter_lazy_mmu_mode();
>>> + ret = split_pgd(pgd_offset_k(start), start, end);
>> My instinct still remains that it would be better not to iterate over the range
>> here, but instead call a "split(start); split(end);" since we just want to split
>> the start and end. So the code would be simpler and probably more performant if
>> we get rid of all the iteration.
>
> It should be more performant for splitting large range, especially the range
> includes leaf mappings at different levels. But I had some optimization to skip
> leaf mappings in this version, so it should be close to your implementation from
> performance perspective. And it just walks the page table once instead of twice.
> It should be more efficient for small split, for example, 4K.
I guess this is the crux of our disagreement. I think the "walks the table once
for 4K" is a micro optimization, which I doubt we would see on any benchmark
results. In the absence of data, I'd prefer the simpler, smaller, easier to
understand version.
Both implementations are on list now; perhaps the maintainers can steer us.
Thanks,
Ryan
* Re: [PATCH 3/4] arm64: mm: support large block mapping when rodata=full
2025-08-06 7:20 ` Ryan Roberts
@ 2025-08-07 0:44 ` Yang Shi
0 siblings, 0 replies; 33+ messages in thread
From: Yang Shi @ 2025-08-07 0:44 UTC (permalink / raw)
To: Ryan Roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
dev.jain, scott, cl
Cc: linux-arm-kernel, linux-kernel
On 8/6/25 12:20 AM, Ryan Roberts wrote:
> On 05/08/2025 19:53, Yang Shi wrote:
>
> [...]
>
>>>> + arch_enter_lazy_mmu_mode();
>>>> + ret = split_pgd(pgd_offset_k(start), start, end);
>>> My instinct still remains that it would be better not to iterate over the range
>>> here, but instead call a "split(start); split(end);" since we just want to split
>>> the start and end. So the code would be simpler and probably more performant if
>>> we get rid of all the iteration.
>> It should be more performant for splitting large range, especially the range
>> includes leaf mappings at different levels. But I had some optimization to skip
>> leaf mappings in this version, so it should be close to your implementation from
>> performance perspective. And it just walks the page table once instead of twice.
>> It should be more efficient for small split, for example, 4K.
> I guess this is the crux of our disagreement. I think the "walks the table once
> for 4K" is a micro optimization, which I doubt we would see on any benchmark
> results. In the absence of data, I'd prefer the simpler, smaller, easier to
> understand version.
I did a simple benchmark with the module stressor from stress-ng, using
the below command line:

    stress-ng --module 1 --module-name loop --module-ops 1000

It basically loads the loop module 1000 times. I saw a slight slowdown
(2% - 3%, averaging the time spent over 5 iterations) with your
implementation on my AmpereOne machine. Per the data, it shouldn't result
in any noticeable slowdown for real-life workloads.
Thanks,
Yang
>
> Both implementations are on list now; perhaps the maintainers can steer us.
>
> Thanks,
> Ryan
Thread overview: 33+ messages
2025-07-24 22:11 [v5 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
2025-07-24 22:11 ` [PATCH 1/4] arm64: Enable permission change on arm64 kernel block mappings Yang Shi
2025-07-24 22:11 ` [PATCH 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
2025-08-01 14:36 ` Ryan Roberts
2025-08-04 23:20 ` Christoph Lameter (Ampere)
2025-07-24 22:11 ` [PATCH 3/4] arm64: mm: support large block mapping when rodata=full Yang Shi
2025-07-29 12:34 ` Dev Jain
2025-08-05 21:28 ` Yang Shi
2025-08-06 0:10 ` Yang Shi
2025-08-01 14:35 ` Ryan Roberts
2025-08-04 10:07 ` Ryan Roberts
2025-08-05 18:53 ` Yang Shi
2025-08-06 7:20 ` Ryan Roberts
2025-08-07 0:44 ` Yang Shi
2025-07-24 22:11 ` [PATCH 4/4] arm64: mm: split linear mapping if BBML2 is not supported on secondary CPUs Yang Shi
2025-07-26 11:10 ` kernel test robot
2025-08-01 16:14 ` Ryan Roberts
2025-08-05 18:59 ` Yang Shi
2025-08-05 7:54 ` Ryan Roberts
2025-08-05 17:45 ` Yang Shi
-- strict thread matches above, loose matches on Subject: below --
2025-05-31 2:41 [v4 PATCH 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
2025-05-31 2:41 ` [PATCH 3/4] arm64: mm: support " Yang Shi
2025-06-16 11:58 ` Ryan Roberts
2025-06-16 12:33 ` Ryan Roberts
2025-06-17 21:01 ` Yang Shi
2025-06-16 16:24 ` Ryan Roberts
2025-06-17 21:09 ` Yang Shi
2025-06-23 13:26 ` Ryan Roberts
2025-06-23 19:12 ` Yang Shi
2025-06-26 22:39 ` Yang Shi
2025-07-23 17:38 ` Dev Jain
2025-07-23 20:51 ` Yang Shi
2025-07-24 11:43 ` Dev Jain
2025-07-24 17:59 ` Yang Shi