* [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
From: Ryan Roberts @ 2025-08-29 11:52 UTC
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel, linux-mm
Hi All,
This is a new version following on from the v6 RFC at [1], which is itself based
on Yang Shi's work. On systems with BBML2_NOABORT support, it causes the linear
map to be mapped with large blocks, even when rodata=full, which leads to some
nice performance improvements.
I've tested this on an AmpereOne system (a VM with 12G RAM) in all 3 possible
modes by hacking the BBML2 feature detection code:
- mode 1: All CPUs support BBML2, so the linear map uses large mappings
- mode 2: Boot CPU does not support BBML2, so the linear map uses pte mappings
- mode 3: Boot CPU supports BBML2 but secondaries do not, so the linear map
initially uses large mappings but is then repainted to use pte mappings
In all cases, mm selftests run with no regressions observed. In all cases,
ptdump of the linear map is as expected:
Mode 1:
=======
---[ Linear Mapping start ]---
0xffff000000000000-0xffff000000200000 2M PMD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
0xffff000000200000-0xffff000000210000 64K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000000210000-0xffff000000400000 1984K PTE ro NX SHD AF UXN MEM/NORMAL
0xffff000000400000-0xffff000002400000 32M PMD ro NX SHD AF BLK UXN MEM/NORMAL
0xffff000002400000-0xffff000002550000 1344K PTE ro NX SHD AF UXN MEM/NORMAL
0xffff000002550000-0xffff000002600000 704K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000002600000-0xffff000004000000 26M PMD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
0xffff000004000000-0xffff000040000000 960M PMD RW NX SHD AF CON BLK UXN MEM/NORMAL-TAGGED
0xffff000040000000-0xffff000140000000 4G PUD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
0xffff000140000000-0xffff000142000000 32M PMD RW NX SHD AF CON BLK UXN MEM/NORMAL-TAGGED
0xffff000142000000-0xffff000142120000 1152K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000142120000-0xffff000142128000 32K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142128000-0xffff000142159000 196K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142159000-0xffff000142160000 28K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142160000-0xffff000142240000 896K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000142240000-0xffff00014224e000 56K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff00014224e000-0xffff000142250000 8K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142250000-0xffff000142260000 64K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142260000-0xffff000142280000 128K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000142280000-0xffff000142288000 32K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142288000-0xffff000142290000 32K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142290000-0xffff0001422a0000 64K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff0001422a0000-0xffff000142465000 1812K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142465000-0xffff000142470000 44K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142470000-0xffff000142600000 1600K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000142600000-0xffff000144000000 26M PMD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
0xffff000144000000-0xffff000180000000 960M PMD RW NX SHD AF CON BLK UXN MEM/NORMAL-TAGGED
0xffff000180000000-0xffff000181a00000 26M PMD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
0xffff000181a00000-0xffff000181b90000 1600K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000181b90000-0xffff000181b9d000 52K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000181b9d000-0xffff000181c80000 908K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000181c80000-0xffff000181c90000 64K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000181c90000-0xffff000181ca0000 64K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000181ca0000-0xffff000181dbd000 1140K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000181dbd000-0xffff000181dc0000 12K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000181dc0000-0xffff000181e00000 256K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000181e00000-0xffff000182000000 2M PMD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
0xffff000182000000-0xffff0001c0000000 992M PMD RW NX SHD AF CON BLK UXN MEM/NORMAL-TAGGED
0xffff0001c0000000-0xffff000300000000 5G PUD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
0xffff000300000000-0xffff008000000000 500G PUD
0xffff008000000000-0xffff800000000000 130560G PGD
---[ Linear Mapping end ]---
Mode 3:
=======
---[ Linear Mapping start ]---
0xffff000000000000-0xffff000000210000 2112K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000000210000-0xffff000000400000 1984K PTE ro NX SHD AF UXN MEM/NORMAL
0xffff000000400000-0xffff000002400000 32M PMD ro NX SHD AF BLK UXN MEM/NORMAL
0xffff000002400000-0xffff000002550000 1344K PTE ro NX SHD AF UXN MEM/NORMAL
0xffff000002550000-0xffff000143a61000 5264452K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000143a61000-0xffff000143c61000 2M PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000143c61000-0xffff000181b9a000 1015012K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000181b9a000-0xffff000181d9a000 2M PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000181d9a000-0xffff000300000000 6261144K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000300000000-0xffff008000000000 500G PUD
0xffff008000000000-0xffff800000000000 130560G PGD
---[ Linear Mapping end ]---
Performance Testing
===================
Yang Shi has gathered some compelling results, which are detailed in the commit
log for patch #3. Additionally, I have run this through a random selection of
benchmarks on AmpereOne. None show any regressions, and several show
statistically significant improvements; only those improvements are shown here:
+----------------------+----------------------------------------------------------+-------------------------+
| Benchmark | Result Class | Improvement vs 6.17-rc1 |
+======================+==========================================================+=========================+
| micromm/vmalloc | full_fit_alloc_test: p:1, h:0, l:500000 (usec) | (I) -9.00% |
| | kvfree_rcu_1_arg_vmalloc_test: p:1, h:0, l:500000 (usec) | (I) -6.93% |
| | kvfree_rcu_2_arg_vmalloc_test: p:1, h:0, l:500000 (usec) | (I) -6.77% |
| | pcpu_alloc_test: p:1, h:0, l:500000 (usec) | (I) -4.63% |
+----------------------+----------------------------------------------------------+-------------------------+
| mmtests/hackbench | process-sockets-30 (seconds) | (I) -2.96% |
+----------------------+----------------------------------------------------------+-------------------------+
| mmtests/kernbench | syst-192 (seconds) | (I) -12.77% |
+----------------------+----------------------------------------------------------+-------------------------+
| pts/perl-benchmark | Test: Interpreter (Seconds) | (I) -4.86% |
+----------------------+----------------------------------------------------------+-------------------------+
| pts/pgbench | Scale: 1 Clients: 1 Read Write (TPS) | (I) 5.07% |
| | Scale: 1 Clients: 1 Read Write - Latency (ms) | (I) -4.72% |
| | Scale: 100 Clients: 1000 Read Write (TPS) | (I) 2.58% |
| | Scale: 100 Clients: 1000 Read Write - Latency (ms) | (I) -2.52% |
+----------------------+----------------------------------------------------------+-------------------------+
| pts/sqlite-speedtest | Timed Time - Size 1,000 (Seconds) | (I) -2.68% |
+----------------------+----------------------------------------------------------+-------------------------+
Changes since v6 [1]
====================
- Patch 1: Minor refactor to implement walk_kernel_page_table_range() in terms
of walk_kernel_page_table_range_lockless(). This also led to adding the *pgd
argument to the lockless variant for consistency (per Catalin).
- Misc function/variable renames to improve clarity and consistency.
- Share the same synchronization flag between idmap_kpti_install_ng_mappings and
wait_linear_map_split_to_ptes, which allows removal of bbml2_ptes[] to save
~20K from the kernel image.
- Only take pgtable_split_lock and enter lazy mmu mode once for both splits.
- Only walk the pgtable once for the common "split single page" case.
- Bypass split to contpmd and contpte when splitting the linear map to ptes.
Applies on v6.17-rc3.
[1] https://lore.kernel.org/linux-arm-kernel/20250805081350.3854670-1-ryan.roberts@arm.com/
Thanks,
Ryan
Dev Jain (1):
arm64: Enable permission change on arm64 kernel block mappings
Ryan Roberts (3):
arm64: mm: Optimize split_kernel_leaf_mapping()
arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs
arm64: mm: Optimize linear_map_split_to_ptes()
Yang Shi (2):
arm64: cpufeature: add AmpereOne to BBML2 allow list
arm64: mm: support large block mapping when rodata=full
arch/arm64/include/asm/cpufeature.h | 2 +
arch/arm64/include/asm/mmu.h | 3 +
arch/arm64/include/asm/pgtable.h | 5 +
arch/arm64/kernel/cpufeature.c | 12 +-
arch/arm64/mm/mmu.c | 418 +++++++++++++++++++++++++++-
arch/arm64/mm/pageattr.c | 157 ++++++++---
arch/arm64/mm/proc.S | 27 +-
include/linux/pagewalk.h | 3 +
mm/pagewalk.c | 36 ++-
9 files changed, 599 insertions(+), 64 deletions(-)
--
2.43.0
* [PATCH v7 1/6] arm64: Enable permission change on arm64 kernel block mappings
From: Ryan Roberts @ 2025-08-29 11:52 UTC
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel, linux-mm
From: Dev Jain <dev.jain@arm.com>
This patch paves the way to enable huge mappings in vmalloc space and
linear map space by default on arm64. For this, we must ensure that we
can handle any permission games on the kernel (init_mm) pagetable.
Previously, __change_memory_common() used apply_to_page_range() which
does not support changing permissions for block mappings. We move away
from this by using the pagewalk API, similar to what riscv does right
now. It is the responsibility of the caller to ensure that the range
over which permissions are being changed falls on leaf mapping
boundaries. For systems with BBML2, this will be handled in future
patches by dynamically splitting the mappings when required.
Unlike apply_to_page_range(), the pagewalk API currently enforces the
init_mm.mmap_lock to be held. To avoid the unnecessary bottleneck of the
mmap_lock for our usecase, this patch extends this generic API to be
used locklessly, so as to retain the existing behaviour for changing
permissions. Apart from this reason, it is noted at [1] that KFENCE can
manipulate kernel pgtable entries during softirqs. It does this by
calling set_memory_valid() -> __change_memory_common(). This being a
non-sleepable context, we cannot take the init_mm mmap lock.
Add comments to highlight the conditions under which we can use the
lockless variant - no underlying VMA, and the user having exclusive
control over the range, thus guaranteeing no concurrent access.
We require that the start and end of a given range do not partially
overlap block mappings, or cont mappings. Return -EINVAL in case a
partial block mapping is detected in any of the PGD/P4D/PUD/PMD levels;
add a corresponding comment in update_range_prot() to warn that
eliminating such a condition is the responsibility of the caller.
Note that the pte level callback may change permissions for a whole
contpte block, and that will be done one pte at a time, as opposed to an
atomic operation for the block mappings. This is fine, as any access will
decode either the old or the new permission until the TLBI.
apply_to_page_range() currently performs all pte level callbacks while
in lazy mmu mode. Since arm64 can optimize performance by batching
barriers when modifying kernel pgtables in lazy mmu mode, we would like
to continue to benefit from this optimisation. Unfortunately
walk_kernel_page_table_range() does not use lazy mmu mode. However,
since the pagewalk framework is not allocating any memory, we can safely
bracket the whole operation inside lazy mmu mode ourselves. Therefore,
wrap the call to walk_kernel_page_table_range() with the lazy MMU
helpers.
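As an illustration of the intended usage, here is a minimal sketch of a
caller of the new lockless API. The example_make_range_ro() helper is
hypothetical (not part of this patch); it mirrors what update_range_prot()
below does, reusing pageattr_ops and struct page_change_data from this
patch:

  /* Hypothetical sketch: the caller must have exclusive access to the range. */
  static int example_make_range_ro(unsigned long start, unsigned long size)
  {
  	struct page_change_data data = {
  		.set_mask = __pgprot(PTE_RDONLY),
  		.clear_mask = __pgprot(PTE_WRITE),
  	};
  	int ret;

  	/* Batch the barriers for the per-pte updates. */
  	arch_enter_lazy_mmu_mode();
  	ret = walk_kernel_page_table_range_lockless(start, start + size,
  						    &pageattr_ops, NULL, &data);
  	arch_leave_lazy_mmu_mode();

  	return ret;
  }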
Link: https://lore.kernel.org/linux-arm-kernel/89d0ad18-4772-4d8f-ae8a-7c48d26a927e@arm.com/ [1]
Signed-off-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
arch/arm64/mm/pageattr.c | 153 +++++++++++++++++++++++++++++++--------
include/linux/pagewalk.h | 3 +
mm/pagewalk.c | 36 ++++++---
3 files changed, 149 insertions(+), 43 deletions(-)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 04d4a8f676db..6da8cbc32f46 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -8,6 +8,7 @@
#include <linux/mem_encrypt.h>
#include <linux/sched.h>
#include <linux/vmalloc.h>
+#include <linux/pagewalk.h>
#include <asm/cacheflush.h>
#include <asm/pgtable-prot.h>
@@ -20,6 +21,99 @@ struct page_change_data {
pgprot_t clear_mask;
};
+static ptdesc_t set_pageattr_masks(ptdesc_t val, struct mm_walk *walk)
+{
+ struct page_change_data *masks = walk->private;
+
+ val &= ~(pgprot_val(masks->clear_mask));
+ val |= (pgprot_val(masks->set_mask));
+
+ return val;
+}
+
+static int pageattr_pgd_entry(pgd_t *pgd, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pgd_t val = pgdp_get(pgd);
+
+ if (pgd_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != PGDIR_SIZE))
+ return -EINVAL;
+ val = __pgd(set_pageattr_masks(pgd_val(val), walk));
+ set_pgd(pgd, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_p4d_entry(p4d_t *p4d, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ p4d_t val = p4dp_get(p4d);
+
+ if (p4d_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != P4D_SIZE))
+ return -EINVAL;
+ val = __p4d(set_pageattr_masks(p4d_val(val), walk));
+ set_p4d(p4d, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_pud_entry(pud_t *pud, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pud_t val = pudp_get(pud);
+
+ if (pud_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != PUD_SIZE))
+ return -EINVAL;
+ val = __pud(set_pageattr_masks(pud_val(val), walk));
+ set_pud(pud, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_pmd_entry(pmd_t *pmd, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pmd_t val = pmdp_get(pmd);
+
+ if (pmd_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != PMD_SIZE))
+ return -EINVAL;
+ val = __pmd(set_pageattr_masks(pmd_val(val), walk));
+ set_pmd(pmd, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_pte_entry(pte_t *pte, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pte_t val = __ptep_get(pte);
+
+ val = __pte(set_pageattr_masks(pte_val(val), walk));
+ __set_pte(pte, val);
+
+ return 0;
+}
+
+static const struct mm_walk_ops pageattr_ops = {
+ .pgd_entry = pageattr_pgd_entry,
+ .p4d_entry = pageattr_p4d_entry,
+ .pud_entry = pageattr_pud_entry,
+ .pmd_entry = pageattr_pmd_entry,
+ .pte_entry = pageattr_pte_entry,
+};
+
bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED);
bool can_set_direct_map(void)
@@ -37,32 +131,35 @@ bool can_set_direct_map(void)
arm64_kfence_can_set_direct_map() || is_realm_world();
}
-static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
+static int update_range_prot(unsigned long start, unsigned long size,
+ pgprot_t set_mask, pgprot_t clear_mask)
{
- struct page_change_data *cdata = data;
- pte_t pte = __ptep_get(ptep);
+ struct page_change_data data;
+ int ret;
- pte = clear_pte_bit(pte, cdata->clear_mask);
- pte = set_pte_bit(pte, cdata->set_mask);
+ data.set_mask = set_mask;
+ data.clear_mask = clear_mask;
- __set_pte(ptep, pte);
- return 0;
+ arch_enter_lazy_mmu_mode();
+
+ /*
+ * The caller must ensure that the range we are operating on does not
+ * partially overlap a block mapping, or a cont mapping. Any such case
+ * must be eliminated by splitting the mapping.
+ */
+ ret = walk_kernel_page_table_range_lockless(start, start + size,
+ &pageattr_ops, NULL, &data);
+ arch_leave_lazy_mmu_mode();
+
+ return ret;
}
-/*
- * This function assumes that the range is mapped with PAGE_SIZE pages.
- */
static int __change_memory_common(unsigned long start, unsigned long size,
- pgprot_t set_mask, pgprot_t clear_mask)
+ pgprot_t set_mask, pgprot_t clear_mask)
{
- struct page_change_data data;
int ret;
- data.set_mask = set_mask;
- data.clear_mask = clear_mask;
-
- ret = apply_to_page_range(&init_mm, start, size, change_page_range,
- &data);
+ ret = update_range_prot(start, size, set_mask, clear_mask);
/*
* If the memory is being made valid without changing any other bits
@@ -174,32 +271,26 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
int set_direct_map_invalid_noflush(struct page *page)
{
- struct page_change_data data = {
- .set_mask = __pgprot(0),
- .clear_mask = __pgprot(PTE_VALID),
- };
+ pgprot_t clear_mask = __pgprot(PTE_VALID);
+ pgprot_t set_mask = __pgprot(0);
if (!can_set_direct_map())
return 0;
- return apply_to_page_range(&init_mm,
- (unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ return update_range_prot((unsigned long)page_address(page),
+ PAGE_SIZE, set_mask, clear_mask);
}
int set_direct_map_default_noflush(struct page *page)
{
- struct page_change_data data = {
- .set_mask = __pgprot(PTE_VALID | PTE_WRITE),
- .clear_mask = __pgprot(PTE_RDONLY),
- };
+ pgprot_t set_mask = __pgprot(PTE_VALID | PTE_WRITE);
+ pgprot_t clear_mask = __pgprot(PTE_RDONLY);
if (!can_set_direct_map())
return 0;
- return apply_to_page_range(&init_mm,
- (unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ return update_range_prot((unsigned long)page_address(page),
+ PAGE_SIZE, set_mask, clear_mask);
}
static int __set_memory_enc_dec(unsigned long addr,
diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index 682472c15495..88e18615dd72 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -134,6 +134,9 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
int walk_kernel_page_table_range(unsigned long start,
unsigned long end, const struct mm_walk_ops *ops,
pgd_t *pgd, void *private);
+int walk_kernel_page_table_range_lockless(unsigned long start,
+ unsigned long end, const struct mm_walk_ops *ops,
+ pgd_t *pgd, void *private);
int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
unsigned long end, const struct mm_walk_ops *ops,
void *private);
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 648038247a8d..936689d8bcac 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -606,10 +606,32 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
int walk_kernel_page_table_range(unsigned long start, unsigned long end,
const struct mm_walk_ops *ops, pgd_t *pgd, void *private)
{
- struct mm_struct *mm = &init_mm;
+ /*
+ * Kernel intermediate page tables are usually not freed, so the mmap
+ * read lock is sufficient. But there are some exceptions.
+ * E.g. memory hot-remove. In which case, the mmap lock is insufficient
+ * to prevent the intermediate kernel page tables belonging to the
+ * specified address range from being freed. The caller should take
+ * other actions to prevent this race.
+ */
+ mmap_assert_locked(&init_mm);
+
+ return walk_kernel_page_table_range_lockless(start, end, ops, pgd,
+ private);
+}
+
+/*
+ * Use this function to walk the kernel page tables locklessly. It should be
+ * guaranteed that the caller has exclusive access over the range they are
+ * operating on - that there should be no concurrent access, for example,
+ * changing permissions for vmalloc objects.
+ */
+int walk_kernel_page_table_range_lockless(unsigned long start, unsigned long end,
+ const struct mm_walk_ops *ops, pgd_t *pgd, void *private)
+{
struct mm_walk walk = {
.ops = ops,
- .mm = mm,
+ .mm = &init_mm,
.pgd = pgd,
.private = private,
.no_vma = true
@@ -620,16 +642,6 @@ int walk_kernel_page_table_range(unsigned long start, unsigned long end,
if (!check_ops_valid(ops))
return -EINVAL;
- /*
- * Kernel intermediate page tables are usually not freed, so the mmap
- * read lock is sufficient. But there are some exceptions.
- * E.g. memory hot-remove. In which case, the mmap lock is insufficient
- * to prevent the intermediate kernel pages tables belonging to the
- * specified address range from being freed. The caller should take
- * other actions to prevent this race.
- */
- mmap_assert_locked(mm);
-
return walk_pgd_range(start, end, &walk);
}
--
2.43.0
* [PATCH v7 2/6] arm64: cpufeature: add AmpereOne to BBML2 allow list
From: Ryan Roberts @ 2025-08-29 11:52 UTC
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel, linux-mm
From: Yang Shi <yang@os.amperecomputing.com>
AmpereOne supports BBML2 without conflict abort; add it to the allow list.
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/kernel/cpufeature.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9ad065f15f1d..b93f4ee57176 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2234,6 +2234,8 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
static const struct midr_range supports_bbml2_noabort_list[] = {
MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
+ MIDR_ALL_VERSIONS(MIDR_AMPERE1),
+ MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
{}
};
--
2.43.0
* [PATCH v7 3/6] arm64: mm: support large block mapping when rodata=full
From: Ryan Roberts @ 2025-08-29 11:52 UTC
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel, linux-mm
From: Yang Shi <yang@os.amperecomputing.com>
When rodata=full is specified, the kernel linear mapping has to be mapped
at PTE level, since large block mappings can't be split due to the
break-before-make rule on ARM64.
This resulted in a number of problems:
- performance degradation
- more TLB pressure
- memory waste for kernel page table
With FEAT_BBM level 2 support, splitting a large block mapping into
smaller ones no longer requires making the page table entry invalid.
This allows the kernel to split large block mappings on the fly.
Add kernel page table split support and use large block mapping by
default when FEAT_BBM level 2 is supported for rodata=full. When
changing permissions for kernel linear mapping, the page table will be
split to smaller size.
Machines without FEAT_BBM level 2 will fall back to a PTE-mapped kernel
linear mapping when rodata=full.
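As an illustrative example of the new flow (simplified sketch; function
names are per this series), changing the permissions of a single page
that currently lives inside a 2M block mapping proceeds roughly as:

  set_memory_ro(addr, 1)
    -> __change_memory_common(addr, PAGE_SIZE, ...)
       -> update_range_prot(addr, PAGE_SIZE, ...)
          -> split_kernel_leaf_mapping(addr, addr + PAGE_SIZE)
               /* pud -> contpmd -> pmd -> contpte -> pte around addr */
          -> walk_kernel_page_table_range_lockless(...)
               /* flips the permission bits on the now-pte-mapped page */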
With this, we saw a significant performance boost in some benchmarks and
much less memory consumption on my AmpereOne machine (192 cores, 1P)
with 256GB memory.
* Memory use after boot
Before:
MemTotal: 258988984 kB
MemFree: 254821700 kB
After:
MemTotal: 259505132 kB
MemFree: 255410264 kB
Around 500MB more memory is free to use. The larger the machine, the
more memory is saved.
* Memcached
We saw performance degradation when running the Memcached benchmark with
rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
With this patchset, ops/sec is increased by around 3.5% and P99
latency is reduced by around 9.6%.
The gain mainly came from reduced kernel TLB misses. The kernel TLB
MPKI is reduced by 28.5%.
The benchmark data is now on par with rodata=on too.
* Disk encryption (dm-crypt) benchmark
We ran the fio benchmark with the below command on a 128G ramdisk (ext4)
with disk encryption (by dm-crypt).
fio --directory=/data --random_generator=lfsr --norandommap \
--randrepeat 1 --status-interval=999 --rw=write --bs=4k --loops=1 \
--ioengine=sync --iodepth=1 --numjobs=1 --fsync_on_close=1 \
--group_reporting --thread --name=iops-test-job --eta-newline=1 \
--size 100G
The IOPS is increased by 90% - 150% (the variance is high, but the worst
result with the patches is around 90% better than the best result
without them). The bandwidth is increased and the avg clat is reduced
proportionally.
* Sequential file read
Read 100G file sequentially on XFS (xfs_io read with page cache
populated). The bandwidth is increased by 150%.
Co-developed-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
arch/arm64/include/asm/cpufeature.h | 2 +
arch/arm64/include/asm/mmu.h | 1 +
arch/arm64/include/asm/pgtable.h | 5 +
arch/arm64/kernel/cpufeature.c | 7 +-
arch/arm64/mm/mmu.c | 248 +++++++++++++++++++++++++++-
arch/arm64/mm/pageattr.c | 4 +
6 files changed, 261 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index bf13d676aae2..e223cbf350e4 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -871,6 +871,8 @@ static inline bool system_supports_pmuv3(void)
return cpus_have_final_cap(ARM64_HAS_PMUV3);
}
+bool cpu_supports_bbml2_noabort(void);
+
static inline bool system_supports_bbml2_noabort(void)
{
return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 6e8aa8e72601..56fca81f60ad 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
pgprot_t prot, bool page_mappings_only);
extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
extern void mark_linear_text_alias_ro(void);
+extern int split_kernel_leaf_mapping(unsigned long start, unsigned long end);
/*
* This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index abd2dee416b3..aa89c2e67ebc 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -371,6 +371,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
}
+static inline pmd_t pmd_mknoncont(pmd_t pmd)
+{
+ return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
+}
+
#ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
static inline int pte_uffd_wp(pte_t pte)
{
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b93f4ee57176..a8936c1023ea 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2217,7 +2217,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
}
-static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
+bool cpu_supports_bbml2_noabort(void)
{
/*
* We want to allow usage of BBML2 in as wide a range of kernel contexts
@@ -2251,6 +2251,11 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
return true;
}
+static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
+{
+ return cpu_supports_bbml2_noabort();
+}
+
#ifdef CONFIG_ARM64_PAN
static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
{
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 34e5d78af076..114b88216b0c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -481,6 +481,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
int flags);
#endif
+#define INVALID_PHYS_ADDR -1
+
static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
enum pgtable_type pgtable_type)
{
@@ -488,7 +490,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
phys_addr_t pa;
- BUG_ON(!ptdesc);
+ if (!ptdesc)
+ return INVALID_PHYS_ADDR;
+
pa = page_to_phys(ptdesc_page(ptdesc));
switch (pgtable_type) {
@@ -509,16 +513,240 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
return pa;
}
+static phys_addr_t
+try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+{
+ return __pgd_pgtable_alloc(&init_mm, pgtable_type);
+}
+
static phys_addr_t __maybe_unused
pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
{
- return __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ phys_addr_t pa;
+
+ pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ BUG_ON(pa == INVALID_PHYS_ADDR);
+ return pa;
}
static phys_addr_t
pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
{
- return __pgd_pgtable_alloc(NULL, pgtable_type);
+ phys_addr_t pa;
+
+ pa = __pgd_pgtable_alloc(NULL, pgtable_type);
+ BUG_ON(pa == INVALID_PHYS_ADDR);
+ return pa;
+}
+
+static void split_contpte(pte_t *ptep)
+{
+ int i;
+
+ ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
+ for (i = 0; i < CONT_PTES; i++, ptep++)
+ __set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
+}
+
+static int split_pmd(pmd_t *pmdp, pmd_t pmd)
+{
+ pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
+ unsigned long pfn = pmd_pfn(pmd);
+ pgprot_t prot = pmd_pgprot(pmd);
+ phys_addr_t pte_phys;
+ pte_t *ptep;
+ int i;
+
+ pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
+ if (pte_phys == INVALID_PHYS_ADDR)
+ return -ENOMEM;
+ ptep = (pte_t *)phys_to_virt(pte_phys);
+
+ if (pgprot_val(prot) & PMD_SECT_PXN)
+ tableprot |= PMD_TABLE_PXN;
+
+ prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
+ __set_pte(ptep, pfn_pte(pfn, prot));
+
+ /*
+ * Ensure the pte entries are visible to the table walker by the time
+ * the pmd entry that points to the ptes is visible.
+ */
+ dsb(ishst);
+ __pmd_populate(pmdp, pte_phys, tableprot);
+
+ return 0;
+}
+
+static void split_contpmd(pmd_t *pmdp)
+{
+ int i;
+
+ pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
+ for (i = 0; i < CONT_PMDS; i++, pmdp++)
+ set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
+}
+
+static int split_pud(pud_t *pudp, pud_t pud)
+{
+ pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
+ unsigned int step = PMD_SIZE >> PAGE_SHIFT;
+ unsigned long pfn = pud_pfn(pud);
+ pgprot_t prot = pud_pgprot(pud);
+ phys_addr_t pmd_phys;
+ pmd_t *pmdp;
+ int i;
+
+ pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
+ if (pmd_phys == INVALID_PHYS_ADDR)
+ return -ENOMEM;
+ pmdp = (pmd_t *)phys_to_virt(pmd_phys);
+
+ if (pgprot_val(prot) & PMD_SECT_PXN)
+ tableprot |= PUD_TABLE_PXN;
+
+ prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
+ set_pmd(pmdp, pfn_pmd(pfn, prot));
+
+ /*
+ * Ensure the pmd entries are visible to the table walker by the time
+ * the pud entry that points to the pmds is visible.
+ */
+ dsb(ishst);
+ __pud_populate(pudp, pmd_phys, tableprot);
+
+ return 0;
+}
+
+static int split_kernel_leaf_mapping_locked(unsigned long addr)
+{
+ pgd_t *pgdp, pgd;
+ p4d_t *p4dp, p4d;
+ pud_t *pudp, pud;
+ pmd_t *pmdp, pmd;
+ pte_t *ptep, pte;
+ int ret = 0;
+
+ /*
+ * PGD: If addr is PGD aligned then addr already describes a leaf
+ * boundary. If not present then there is nothing to split.
+ */
+ if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
+ goto out;
+ pgdp = pgd_offset_k(addr);
+ pgd = pgdp_get(pgdp);
+ if (!pgd_present(pgd))
+ goto out;
+
+ /*
+ * P4D: If addr is P4D aligned then addr already describes a leaf
+ * boundary. If not present then there is nothing to split.
+ */
+ if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
+ goto out;
+ p4dp = p4d_offset(pgdp, addr);
+ p4d = p4dp_get(p4dp);
+ if (!p4d_present(p4d))
+ goto out;
+
+ /*
+ * PUD: If addr is PUD aligned then addr already describes a leaf
+ * boundary. If not present then there is nothing to split. Otherwise,
+ * if we have a pud leaf, split to contpmd.
+ */
+ if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
+ goto out;
+ pudp = pud_offset(p4dp, addr);
+ pud = pudp_get(pudp);
+ if (!pud_present(pud))
+ goto out;
+ if (pud_leaf(pud)) {
+ ret = split_pud(pudp, pud);
+ if (ret)
+ goto out;
+ }
+
+ /*
+ * CONTPMD: If addr is CONTPMD aligned then addr already describes a
+ * leaf boundary. If not present then there is nothing to split.
+ * Otherwise, if we have a contpmd leaf, split to pmd.
+ */
+ if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
+ goto out;
+ pmdp = pmd_offset(pudp, addr);
+ pmd = pmdp_get(pmdp);
+ if (!pmd_present(pmd))
+ goto out;
+ if (pmd_leaf(pmd)) {
+ if (pmd_cont(pmd))
+ split_contpmd(pmdp);
+ /*
+ * PMD: If addr is PMD aligned then addr already describes a
+ * leaf boundary. Otherwise, split to contpte.
+ */
+ if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
+ goto out;
+ ret = split_pmd(pmdp, pmd);
+ if (ret)
+ goto out;
+ }
+
+ /*
+ * CONTPTE: If addr is CONTPTE aligned then addr already describes a
+ * leaf boundary. If not present then there is nothing to split.
+ * Otherwise, if we have a contpte leaf, split to pte.
+ */
+ if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
+ goto out;
+ ptep = pte_offset_kernel(pmdp, addr);
+ pte = __ptep_get(ptep);
+ if (!pte_present(pte))
+ goto out;
+ if (pte_cont(pte))
+ split_contpte(ptep);
+
+out:
+ return ret;
+}
+
+static DEFINE_MUTEX(pgtable_split_lock);
+
+int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
+{
+ int ret;
+
+ /*
+ * !BBML2_NOABORT systems should not be trying to change permissions on
+ * anything that is not pte-mapped in the first place. Just return early
+ * and let the permission change code raise a warning if not already
+ * pte-mapped.
+ */
+ if (!system_supports_bbml2_noabort())
+ return 0;
+
+ /*
+ * Ensure start and end are at least page-aligned since this is the
+ * finest granularity we can split to.
+ */
+ if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
+ return -EINVAL;
+
+ mutex_lock(&pgtable_split_lock);
+ arch_enter_lazy_mmu_mode();
+
+ ret = split_kernel_leaf_mapping_locked(start);
+ if (!ret)
+ ret = split_kernel_leaf_mapping_locked(end);
+
+ arch_leave_lazy_mmu_mode();
+ mutex_unlock(&pgtable_split_lock);
+ return ret;
}
/*
@@ -640,6 +868,16 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
#endif /* CONFIG_KFENCE */
+static inline bool force_pte_mapping(void)
+{
+ bool bbml2 = system_capabilities_finalized() ?
+ system_supports_bbml2_noabort() : cpu_supports_bbml2_noabort();
+
+ return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
+ is_realm_world())) ||
+ debug_pagealloc_enabled();
+}
+
static void __init map_mem(pgd_t *pgdp)
{
static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
@@ -665,7 +903,7 @@ static void __init map_mem(pgd_t *pgdp)
early_kfence_pool = arm64_kfence_alloc_pool();
- if (can_set_direct_map())
+ if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
/*
@@ -1367,7 +1605,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
VM_BUG_ON(!mhp_range_allowed(start, size, true));
- if (can_set_direct_map())
+ if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 6da8cbc32f46..0aba80a38cef 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -140,6 +140,10 @@ static int update_range_prot(unsigned long start, unsigned long size,
data.set_mask = set_mask;
data.clear_mask = clear_mask;
+ ret = split_kernel_leaf_mapping(start, start + size);
+ if (WARN_ON_ONCE(ret))
+ return ret;
+
arch_enter_lazy_mmu_mode();
/*
--
2.43.0
* [PATCH v7 4/6] arm64: mm: Optimize split_kernel_leaf_mapping()
From: Ryan Roberts @ 2025-08-29 11:52 UTC
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel, linux-mm
The common case for split_kernel_leaf_mapping() is for a single page.
Let's optimize this by only calling split_kernel_leaf_mapping_locked()
once.
Since the start and end addresses are PAGE_SIZE apart, they must be
contained within the same contpte block. Further, if start is at the
beginning of the block or end is at the end of the block, then the other
address must be in the _middle_ of the block. So if we split on this
middle-of-the-contpte-block address, it is guaranteed that the
containing contpte block is split to ptes and both start and end are
therefore mapped by pte.
This avoids the second call to split_kernel_leaf_mapping_locked()
meaning we only have to walk the pgtable once.
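As a worked example (addresses illustrative; 4K pages with 64K contpte
blocks assumed): to split out the single page [0xffff000142150000,
0xffff000142151000), end is the less aligned of the two addresses (its
lowest set bit is bit 12, vs bit 16 for start), so we split only on end.
That split takes the containing contpte block down to ptes, which makes
the 64K-aligned start a valid mapping boundary too.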
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/mm/mmu.c | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 114b88216b0c..8b5b19e1154b 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -740,9 +740,21 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
mutex_lock(&pgtable_split_lock);
arch_enter_lazy_mmu_mode();
- ret = split_kernel_leaf_mapping_locked(start);
- if (!ret)
- ret = split_kernel_leaf_mapping_locked(end);
+ /*
+ * Optimize for the common case of splitting out a single page from a
+ * larger mapping. Here we can just split on the "least aligned" of
+ * start and end and this will guarantee that there must also be a split
+ * on the more aligned address since both addresses must be in the
+ * same contpte block and it must have been split to ptes.
+ */
+ if (end - start == PAGE_SIZE) {
+ start = __ffs(start) < __ffs(end) ? start : end;
+ ret = split_kernel_leaf_mapping_locked(start);
+ } else {
+ ret = split_kernel_leaf_mapping_locked(start);
+ if (!ret)
+ ret = split_kernel_leaf_mapping_locked(end);
+ }
arch_leave_lazy_mmu_mode();
mutex_unlock(&pgtable_split_lock);
--
2.43.0
* [PATCH v7 5/6] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs
From: Ryan Roberts @ 2025-08-29 11:52 UTC
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel, linux-mm
The kernel linear mapping is painted at a very early stage of system
boot, before the cpufeatures have been finalized. So the linear mapping
is determined by the capability of the boot CPU only. If the boot CPU
supports BBML2, large block mappings will be used for the linear
mapping.
But the secondary CPUs may not support BBML2, so once cpufeatures are
finalized on all CPUs, repaint the linear mapping if large block
mappings are in use and any secondary CPU does not support BBML2.
If the boot CPU doesn't support BBML2, or the secondary CPUs have the
same BBML2 capability as the boot CPU, repainting the linear mapping
is not needed.
Repainting is implemented by the boot CPU, which we know supports BBML2,
so it is safe for the live mapping size to change for this CPU. The
linear map region is walked using the pagewalk API and any discovered
large leaf mappings are split to pte mappings using the existing helper
functions. Since the repainting is performed inside of a stop_machine(),
we must use GFP_ATOMIC to allocate the extra intermediate pgtables. But
since we are still early in boot, it is expected that there is plenty of
memory available so we will never need to sleep for reclaim, and so
GFP_ATOMIC is acceptable here.
The secondary CPUs are all put into a waiting area with the idmap in
TTBR0 and reserved map in TTBR1 while this is performed since they
cannot be allowed to observe any size changes on the live mappings. Some
of this infrastructure is reused from the kpti case. Specifically we
share the same flag (was __idmap_kpti_flag, now idmap_kpti_bbml2_flag)
since it means we don't have to reserve any extra pgtable memory to
idmap the extra flag.
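The resulting handshake on the shared flag, roughly (illustrative
summary only; the real code is in the patch body below):

  boot:        idmap_kpti_bbml2_flag = 1 (init_idmap_kpti_bbml2_flag())
  secondaries: install idmap in TTBR0 and reserved map in TTBR1;
               atomically increment the flag; spin until the flag reads
               0; then restore swapper in TTBR1
  boot CPU:    wait until flag == num_online_cpus(); split the linear
               map to ptes; flush the TLB; write 0 to the flag to
               release the secondaries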
Co-developed-by: Yang Shi <yang@os.amperecomputing.com>
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/mmu.h | 2 +
arch/arm64/kernel/cpufeature.c | 3 +
arch/arm64/mm/mmu.c | 168 +++++++++++++++++++++++++++++----
arch/arm64/mm/proc.S | 27 ++++--
4 files changed, 175 insertions(+), 25 deletions(-)
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 56fca81f60ad..2acfa7801d02 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -72,6 +72,8 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
extern void mark_linear_text_alias_ro(void);
extern int split_kernel_leaf_mapping(unsigned long start, unsigned long end);
+extern void init_idmap_kpti_bbml2_flag(void);
+extern void linear_map_maybe_split_to_ptes(void);
/*
* This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a8936c1023ea..461d286f40b1 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -85,6 +85,7 @@
#include <asm/insn.h>
#include <asm/kvm_host.h>
#include <asm/mmu_context.h>
+#include <asm/mmu.h>
#include <asm/mte.h>
#include <asm/hypervisor.h>
#include <asm/processor.h>
@@ -2027,6 +2028,7 @@ static void __init kpti_install_ng_mappings(void)
if (arm64_use_ng_mappings)
return;
+ init_idmap_kpti_bbml2_flag();
stop_machine(__kpti_install_ng_mappings, NULL, cpu_online_mask);
}
@@ -3930,6 +3932,7 @@ void __init setup_system_features(void)
{
setup_system_capabilities();
+ linear_map_maybe_split_to_ptes();
kpti_install_ng_mappings();
sve_setup();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 8b5b19e1154b..6bd0b065bd97 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -27,6 +27,8 @@
#include <linux/kfence.h>
#include <linux/pkeys.h>
#include <linux/mm_inline.h>
+#include <linux/pagewalk.h>
+#include <linux/stop_machine.h>
#include <asm/barrier.h>
#include <asm/cputype.h>
@@ -483,11 +485,11 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
#define INVALID_PHYS_ADDR -1
-static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
+static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm, gfp_t gfp,
enum pgtable_type pgtable_type)
{
/* Page is zeroed by init_clear_pgtable() so don't duplicate effort. */
- struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
+ struct ptdesc *ptdesc = pagetable_alloc(gfp & ~__GFP_ZERO, 0);
phys_addr_t pa;
if (!ptdesc)
@@ -514,9 +516,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
}
static phys_addr_t
-try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type, gfp_t gfp)
{
- return __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ return __pgd_pgtable_alloc(&init_mm, gfp, pgtable_type);
}
static phys_addr_t __maybe_unused
@@ -524,7 +526,7 @@ pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
{
phys_addr_t pa;
- pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ pa = __pgd_pgtable_alloc(&init_mm, GFP_PGTABLE_KERNEL, pgtable_type);
BUG_ON(pa == INVALID_PHYS_ADDR);
return pa;
}
@@ -534,7 +536,7 @@ pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
{
phys_addr_t pa;
- pa = __pgd_pgtable_alloc(NULL, pgtable_type);
+ pa = __pgd_pgtable_alloc(NULL, GFP_PGTABLE_KERNEL, pgtable_type);
BUG_ON(pa == INVALID_PHYS_ADDR);
return pa;
}
@@ -548,7 +550,7 @@ static void split_contpte(pte_t *ptep)
__set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
}
-static int split_pmd(pmd_t *pmdp, pmd_t pmd)
+static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
{
pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
unsigned long pfn = pmd_pfn(pmd);
@@ -557,7 +559,7 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd)
pte_t *ptep;
int i;
- pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
+ pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE, gfp);
if (pte_phys == INVALID_PHYS_ADDR)
return -ENOMEM;
ptep = (pte_t *)phys_to_virt(pte_phys);
@@ -590,7 +592,7 @@ static void split_contpmd(pmd_t *pmdp)
set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
}
-static int split_pud(pud_t *pudp, pud_t pud)
+static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
{
pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
unsigned int step = PMD_SIZE >> PAGE_SHIFT;
@@ -600,7 +602,7 @@ static int split_pud(pud_t *pudp, pud_t pud)
pmd_t *pmdp;
int i;
- pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
+ pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD, gfp);
if (pmd_phys == INVALID_PHYS_ADDR)
return -ENOMEM;
pmdp = (pmd_t *)phys_to_virt(pmd_phys);
@@ -667,7 +669,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
if (!pud_present(pud))
goto out;
if (pud_leaf(pud)) {
- ret = split_pud(pudp, pud);
+ ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL);
if (ret)
goto out;
}
@@ -692,7 +694,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
*/
if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
goto out;
- ret = split_pmd(pmdp, pmd);
+ ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL);
if (ret)
goto out;
}
@@ -761,6 +763,132 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
return ret;
}
+static int __init split_to_ptes_pud_entry(pud_t *pudp, unsigned long addr,
+ unsigned long next,
+ struct mm_walk *walk)
+{
+ pud_t pud = pudp_get(pudp);
+ int ret = 0;
+
+ if (pud_leaf(pud))
+ ret = split_pud(pudp, pud, GFP_ATOMIC);
+
+ return ret;
+}
+
+static int __init split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
+ unsigned long next,
+ struct mm_walk *walk)
+{
+ pmd_t pmd = pmdp_get(pmdp);
+ int ret = 0;
+
+ if (pmd_leaf(pmd)) {
+ if (pmd_cont(pmd))
+ split_contpmd(pmdp);
+ ret = split_pmd(pmdp, pmd, GFP_ATOMIC);
+ }
+
+ return ret;
+}
+
+static int __init split_to_ptes_pte_entry(pte_t *ptep, unsigned long addr,
+ unsigned long next,
+ struct mm_walk *walk)
+{
+ pte_t pte = __ptep_get(ptep);
+
+ if (pte_cont(pte))
+ split_contpte(ptep);
+
+ return 0;
+}
+
+static const struct mm_walk_ops split_to_ptes_ops __initconst = {
+ .pud_entry = split_to_ptes_pud_entry,
+ .pmd_entry = split_to_ptes_pmd_entry,
+ .pte_entry = split_to_ptes_pte_entry,
+};
+
+static bool linear_map_requires_bbml2 __initdata;
+
+u32 idmap_kpti_bbml2_flag;
+
+void __init init_idmap_kpti_bbml2_flag(void)
+{
+ WRITE_ONCE(idmap_kpti_bbml2_flag, 1);
+ /* Must be visible to other CPUs before stop_machine() is called. */
+ smp_mb();
+}
+
+static int __init linear_map_split_to_ptes(void *__unused)
+{
+ /*
+ * Repainting the linear map must be done by CPU0 (the boot CPU) because
+ * that's the only CPU that we know supports BBML2. The other CPUs will
+ * be held in a waiting area with the idmap active.
+ */
+ if (!smp_processor_id()) {
+ unsigned long lstart = _PAGE_OFFSET(vabits_actual);
+ unsigned long lend = PAGE_END;
+ unsigned long kstart = (unsigned long)lm_alias(_stext);
+ unsigned long kend = (unsigned long)lm_alias(__init_begin);
+ int ret;
+
+ /*
+ * Wait for all secondary CPUs to be put into the waiting area.
+ */
+ smp_cond_load_acquire(&idmap_kpti_bbml2_flag, VAL == num_online_cpus());
+
+ /*
+ * Walk all of the linear map [lstart, lend), except the kernel
+ * linear map alias [kstart, kend), and split all mappings to
+ * PTE. The kernel alias remains static throughout runtime so
+ * can continue to be safely mapped with large mappings.
+ */
+ ret = walk_kernel_page_table_range_lockless(lstart, kstart,
+ &split_to_ptes_ops, NULL, NULL);
+ if (!ret)
+ ret = walk_kernel_page_table_range_lockless(kend, lend,
+ &split_to_ptes_ops, NULL, NULL);
+ if (ret)
+ panic("Failed to split linear map\n");
+ flush_tlb_kernel_range(lstart, lend);
+
+ /*
+ * Relies on dsb in flush_tlb_kernel_range() to avoid reordering
+ * before any page table split operations.
+ */
+ WRITE_ONCE(idmap_kpti_bbml2_flag, 0);
+ } else {
+ typedef void (wait_split_fn)(void);
+ extern wait_split_fn wait_linear_map_split_to_ptes;
+ wait_split_fn *wait_fn;
+
+ wait_fn = (void *)__pa_symbol(wait_linear_map_split_to_ptes);
+
+ /*
+ * At least one secondary CPU doesn't support BBML2 so cannot
+ * tolerate the size of the live mappings changing. So have the
+ * secondary CPUs wait for the boot CPU to make the changes
+ * with the idmap active and init_mm inactive.
+ */
+ cpu_install_idmap();
+ wait_fn();
+ cpu_uninstall_idmap();
+ }
+
+ return 0;
+}
+
+void __init linear_map_maybe_split_to_ptes(void)
+{
+ if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort()) {
+ init_idmap_kpti_bbml2_flag();
+ stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);
+ }
+}
+
/*
* This function can only be used to modify existing table entries,
* without allocating new levels of table. Note that this permits the
@@ -915,6 +1043,8 @@ static void __init map_mem(pgd_t *pgdp)
early_kfence_pool = arm64_kfence_alloc_pool();
+ linear_map_requires_bbml2 = !force_pte_mapping() && can_set_direct_map();
+
if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
@@ -1048,7 +1178,7 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
- kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
+ kpti_bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
static void __init create_idmap(void)
{
@@ -1060,15 +1190,17 @@ static void __init create_idmap(void)
IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
__phys_to_virt(ptep) - ptep);
- if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) && !arm64_use_ng_mappings) {
- extern u32 __idmap_kpti_flag;
- u64 pa = __pa_symbol(&__idmap_kpti_flag);
+ if (linear_map_requires_bbml2 ||
+ (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) && !arm64_use_ng_mappings)) {
+ u64 pa = __pa_symbol(&idmap_kpti_bbml2_flag);
/*
* The KPTI G-to-nG conversion code needs a read-write mapping
- * of its synchronization flag in the ID map.
+ * of its synchronization flag in the ID map. This is also used
+ * when splitting the linear map to ptes if a secondary CPU
+ * doesn't support bbml2.
*/
- ptep = __pa_symbol(kpti_ptes);
+ ptep = __pa_symbol(kpti_bbml2_ptes);
__pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
__phys_to_virt(ptep) - ptep);
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 8c75965afc9e..86818511962b 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -245,10 +245,6 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
*
* Called exactly once from stop_machine context by each CPU found during boot.
*/
- .pushsection ".data", "aw", %progbits
-SYM_DATA(__idmap_kpti_flag, .long 1)
- .popsection
-
SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings)
cpu .req w0
temp_pte .req x0
@@ -273,7 +269,7 @@ SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings)
mov x5, x3 // preserve temp_pte arg
mrs swapper_ttb, ttbr1_el1
- adr_l flag_ptr, __idmap_kpti_flag
+ adr_l flag_ptr, idmap_kpti_bbml2_flag
cbnz cpu, __idmap_kpti_secondary
@@ -416,7 +412,25 @@ alternative_else_nop_endif
__idmap_kpti_secondary:
/* Uninstall swapper before surgery begins */
__idmap_cpu_set_reserved_ttbr1 x16, x17
+ b secondary_cpu_wait
+
+ .unreq swapper_ttb
+ .unreq flag_ptr
+SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+ .popsection
+#endif
+
+ .pushsection ".idmap.text", "a"
+SYM_TYPED_FUNC_START(wait_linear_map_split_to_ptes)
+ /* Must be same registers as in idmap_kpti_install_ng_mappings */
+ swapper_ttb .req x3
+ flag_ptr .req x4
+
+ mrs swapper_ttb, ttbr1_el1
+ adr_l flag_ptr, idmap_kpti_bbml2_flag
+ __idmap_cpu_set_reserved_ttbr1 x16, x17
+secondary_cpu_wait:
/* Increment the flag to let the boot CPU we're ready */
1: ldxr w16, [flag_ptr]
add w16, w16, #1
@@ -436,9 +450,8 @@ __idmap_kpti_secondary:
.unreq swapper_ttb
.unreq flag_ptr
-SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+SYM_FUNC_END(wait_linear_map_split_to_ptes)
.popsection
-#endif
/*
* __cpu_setup
--
2.43.0
* [PATCH v7 6/6] arm64: mm: Optimize linear_map_split_to_ptes()
From: Ryan Roberts @ 2025-08-29 11:52 UTC
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel, linux-mm
When splitting kernel leaf mappings, either via
split_kernel_leaf_mapping_locked() or linear_map_split_to_ptes(),
previously a leaf mapping was always split to the next size down, e.g.
pud -> contpmd -> pmd -> contpte -> pte. But for
linear_map_split_to_ptes() we can avoid the contpmd and contpte states
because we know we want to split all the way down to ptes.
This avoids visiting all the ptes in a table if it was created by
splitting a pmd, which is noticeable on systems with a lot of memory.
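To put rough (illustrative) numbers on it: with 4K pages, each 2M pmd
split directly to ptes fills a table of 512 ptes that we already know
are not contpte, so the pte-level callback can be skipped for all of
them. Repainting a 256GB linear map splits ~131072 pmds, avoiding on
the order of 67 million per-pte visits.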
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/mm/mmu.c | 26 ++++++++++++++++++--------
1 file changed, 18 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6bd0b065bd97..8e45cd08bf3a 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -550,7 +550,7 @@ static void split_contpte(pte_t *ptep)
__set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
}
-static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
+static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp, bool to_cont)
{
pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
unsigned long pfn = pmd_pfn(pmd);
@@ -568,7 +568,9 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
tableprot |= PMD_TABLE_PXN;
prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
- prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+ prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
+ if (to_cont)
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
__set_pte(ptep, pfn_pte(pfn, prot));
@@ -592,7 +594,7 @@ static void split_contpmd(pmd_t *pmdp)
set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
}
-static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
+static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp, bool to_cont)
{
pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
unsigned int step = PMD_SIZE >> PAGE_SHIFT;
@@ -611,7 +613,9 @@ static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
tableprot |= PUD_TABLE_PXN;
prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
- prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+ prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
+ if (to_cont)
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
set_pmd(pmdp, pfn_pmd(pfn, prot));
@@ -669,7 +673,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
if (!pud_present(pud))
goto out;
if (pud_leaf(pud)) {
- ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL);
+ ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL, true);
if (ret)
goto out;
}
@@ -694,7 +698,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
*/
if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
goto out;
- ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL);
+ ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL, true);
if (ret)
goto out;
}
@@ -771,7 +775,7 @@ static int __init split_to_ptes_pud_entry(pud_t *pudp, unsigned long addr,
int ret = 0;
if (pud_leaf(pud))
- ret = split_pud(pudp, pud, GFP_ATOMIC);
+ ret = split_pud(pudp, pud, GFP_ATOMIC, false);
return ret;
}
@@ -786,7 +790,13 @@ static int __init split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
if (pmd_leaf(pmd)) {
if (pmd_cont(pmd))
split_contpmd(pmdp);
- ret = split_pmd(pmdp, pmd, GFP_ATOMIC);
+ ret = split_pmd(pmdp, pmd, GFP_ATOMIC, false);
+
+ /*
+ * We have split the pmd directly to ptes so there is no need to
+ * visit each pte to check if they are contpte.
+ */
+ walk->action = ACTION_CONTINUE;
}
return ret;
--
2.43.0
* Re: [PATCH v7 2/6] arm64: cpufeature: add AmpereOne to BBML2 allow list
2025-08-29 11:52 ` [PATCH v7 2/6] arm64: cpufeature: add AmpereOne to BBML2 allow list Ryan Roberts
@ 2025-08-29 22:08 ` Yang Shi
2025-09-03 17:24 ` Catalin Marinas
1 sibling, 0 replies; 17+ messages in thread
From: Yang Shi @ 2025-08-29 22:08 UTC (permalink / raw)
To: Ryan Roberts, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, Dev Jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 8/29/25 4:52 AM, Ryan Roberts wrote:
> From: Yang Shi <yang@os.amperecomputing.com>
>
> AmpereOne supports BBML2 without conflict abort, add to the allow list.
>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
I saw Catalin gave his Reviewed-by on v6 of this patch; I think we can keep it.
Yang
> ---
> arch/arm64/kernel/cpufeature.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 9ad065f15f1d..b93f4ee57176 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2234,6 +2234,8 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
> static const struct midr_range supports_bbml2_noabort_list[] = {
> MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
> MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
> + MIDR_ALL_VERSIONS(MIDR_AMPERE1),
> + MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
> {}
> };
>
* Re: [PATCH v7 4/6] arm64: mm: Optimize split_kernel_leaf_mapping()
2025-08-29 11:52 ` [PATCH v7 4/6] arm64: mm: Optimize split_kernel_leaf_mapping() Ryan Roberts
@ 2025-08-29 22:11 ` Yang Shi
2025-09-03 19:20 ` Catalin Marinas
1 sibling, 0 replies; 17+ messages in thread
From: Yang Shi @ 2025-08-29 22:11 UTC (permalink / raw)
To: Ryan Roberts, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, Dev Jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 8/29/25 4:52 AM, Ryan Roberts wrote:
> The common case for split_kernel_leaf_mapping() is for a single page.
> Let's optimize this by only calling split_kernel_leaf_mapping_locked()
> once.
>
> Since the start and end address are PAGE_SIZE apart, they must be
> contained within the same contpte block. Further, if start is at the
> beginning of the block or end is at the end of the block, then the other
> address must be in the _middle_ of the block. So if we split on this
> middle-of-the-contpte-block address, it is guaranteed that the
> containing contpte block is split to ptes and both start and end are
> therefore mapped by pte.
>
> This avoids the second call to split_kernel_leaf_mapping_locked()
> meaning we only have to walk the pgtable once.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> arch/arm64/mm/mmu.c | 18 +++++++++++++++---
> 1 file changed, 15 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 114b88216b0c..8b5b19e1154b 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -740,9 +740,21 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
> mutex_lock(&pgtable_split_lock);
> arch_enter_lazy_mmu_mode();
>
> - ret = split_kernel_leaf_mapping_locked(start);
> - if (!ret)
> - ret = split_kernel_leaf_mapping_locked(end);
> + /*
> + * Optimize for the common case of splitting out a single page from a
> + * larger mapping. Here we can just split on the "least aligned" of
> + * start and end and this will guarantee that there must also be a split
> + * on the more aligned address since the both addresses must be in the
> + * same contpte block and it must have been split to ptes.
> + */
> + if (end - start == PAGE_SIZE) {
> + start = __ffs(start) < __ffs(end) ? start : end;
> + ret = split_kernel_leaf_mapping_locked(start);
This makes sense to me. I suggested the same thing in the discussion
with Dev for v5. I'd like to have this patch squashed into patch #3.
Thanks,
Yang
> + } else {
> + ret = split_kernel_leaf_mapping_locked(start);
> + if (!ret)
> + ret = split_kernel_leaf_mapping_locked(end);
> + }
>
> arch_leave_lazy_mmu_mode();
> mutex_unlock(&pgtable_split_lock);
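To make the "least aligned" argument concrete, here is a sketch (assuming
4K pages, so a contpte block is 16 pages / 64K; __ffs() returns the index
of the lowest set bit):

/*
 * start and end are PAGE_SIZE apart, so both fall in the same 64K
 * contpte block and at most one of them can sit on the block boundary.
 * A smaller __ffs() means fewer trailing zeroes, i.e. a less aligned
 * address:
 *
 *   start = 0x...40000  ->  __ffs() == 18  (64K aligned)
 *   end   = 0x...41000  ->  __ffs() == 12  (only 4K aligned)
 *
 * Splitting on the less aligned address (end, here) must split its
 * containing contpte block to ptes, which leaves the more aligned
 * address pte-mapped as well.
 */
static unsigned long least_aligned(unsigned long start, unsigned long end)
{
	return __ffs(start) < __ffs(end) ? start : end;
}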
* Re: [PATCH v7 6/6] arm64: mm: Optimize linear_map_split_to_ptes()
2025-08-29 11:52 ` [PATCH v7 6/6] arm64: mm: Optimize linear_map_split_to_ptes() Ryan Roberts
@ 2025-08-29 22:27 ` Yang Shi
0 siblings, 0 replies; 17+ messages in thread
From: Yang Shi @ 2025-08-29 22:27 UTC (permalink / raw)
To: Ryan Roberts, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, Dev Jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 8/29/25 4:52 AM, Ryan Roberts wrote:
> When splitting kernel leaf mappings, either via
> split_kernel_leaf_mapping_locked() or linear_map_split_to_ptes(),
> previously a leaf mapping was always split to the next size down, e.g.
> pud -> contpmd -> pmd -> contpte -> pte. But for
> linear_map_split_to_ptes() we can avoid the contpmd and contpte states
> because we know we want to split all the way down to ptes.
>
> This avoids visiting all the ptes in a table if it was created by
> splitting a pmd, which is noticeable on systems with a lot of memory.
Similar to patch #4, this patch should be squashed into patch #5 IMHO.
Thanks,
Yang
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> arch/arm64/mm/mmu.c | 26 ++++++++++++++++++--------
> 1 file changed, 18 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 6bd0b065bd97..8e45cd08bf3a 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -550,7 +550,7 @@ static void split_contpte(pte_t *ptep)
> __set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
> }
>
> -static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
> +static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp, bool to_cont)
> {
> pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
> unsigned long pfn = pmd_pfn(pmd);
> @@ -568,7 +568,9 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
> tableprot |= PMD_TABLE_PXN;
>
> prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
> - prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> + prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
> + if (to_cont)
> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>
> for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
> __set_pte(ptep, pfn_pte(pfn, prot));
> @@ -592,7 +594,7 @@ static void split_contpmd(pmd_t *pmdp)
> set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
> }
>
> -static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
> +static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp, bool to_cont)
> {
> pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
> unsigned int step = PMD_SIZE >> PAGE_SHIFT;
> @@ -611,7 +613,9 @@ static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
> tableprot |= PUD_TABLE_PXN;
>
> prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
> - prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> + prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
> + if (to_cont)
> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>
> for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
> set_pmd(pmdp, pfn_pmd(pfn, prot));
> @@ -669,7 +673,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
> if (!pud_present(pud))
> goto out;
> if (pud_leaf(pud)) {
> - ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL);
> + ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL, true);
> if (ret)
> goto out;
> }
> @@ -694,7 +698,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
> */
> if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
> goto out;
> - ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL);
> + ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL, true);
> if (ret)
> goto out;
> }
> @@ -771,7 +775,7 @@ static int __init split_to_ptes_pud_entry(pud_t *pudp, unsigned long addr,
> int ret = 0;
>
> if (pud_leaf(pud))
> - ret = split_pud(pudp, pud, GFP_ATOMIC);
> + ret = split_pud(pudp, pud, GFP_ATOMIC, false);
>
> return ret;
> }
> @@ -786,7 +790,13 @@ static int __init split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
> if (pmd_leaf(pmd)) {
> if (pmd_cont(pmd))
> split_contpmd(pmdp);
> - ret = split_pmd(pmdp, pmd, GFP_ATOMIC);
> + ret = split_pmd(pmdp, pmd, GFP_ATOMIC, false);
> +
> + /*
> + * We have split the pmd directly to ptes so there is no need to
> + * visit each pte to check if they are contpte.
> + */
> + walk->action = ACTION_CONTINUE;
> }
>
> return ret;
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-08-29 11:52 [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
` (5 preceding siblings ...)
2025-08-29 11:52 ` [PATCH v7 6/6] arm64: mm: Optimize linear_map_split_to_ptes() Ryan Roberts
@ 2025-09-01 5:04 ` Dev Jain
2025-09-01 8:03 ` Ryan Roberts
6 siblings, 1 reply; 17+ messages in thread
From: Dev Jain @ 2025-09-01 5:04 UTC (permalink / raw)
To: Ryan Roberts, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Yang Shi, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 29/08/25 5:22 pm, Ryan Roberts wrote:
> [...]
Hi Yang and Ryan,
I observe that there are various callsites which will ultimately use
update_range_prot() (from patch 1) but do not check the return value. I am
listing the ones I could find:
set_memory_ro() in bpf_jit_comp.c
set_memory_valid() in kernel_map_pages() in pageattr.c
set_direct_map_invalid_noflush() in vm_reset_perms() in vmalloc.c
set_direct_map_default_noflush() in vm_reset_perms() in vmalloc.c, and in secretmem.c
(the secretmem.c ones should be safe as explained in the comments therein)
The first one I think can be handled easily by returning -EFAULT.
For the second, we already return early in the !can_set_direct_map case, which
renders DEBUG_PAGEALLOC useless there anyway. So maybe it is safe to ignore the
ret from set_memory_valid()?
For the third, the call chain is a sequence of must-succeed void functions. Notably, when using vfree(), we may have to allocate a single
pagetable page for splitting.
I am wondering whether we can just have a warn_on_once or something for the case
when we fail to allocate a pagetable page. Or, Ryan had suggested in an
off-the-list conversation that we could maintain a cache of PTE tables for every
PMD block mapping, which would give us the same memory consumption as we have
today, but I am not sure it is worth it. x86 can already handle splitting, but
due to the callchains I have described above it has the same problem, and that
code has been working for years :)
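Concretely, for the first callsite the fix could be as small as this sketch
(the function name and arguments are illustrative, not the actual
bpf_jit_comp.c identifiers):

/*
 * Check the set_memory_ro() return instead of dropping it, mirroring
 * the error handling of the set_memory_rw() call above it.
 */
static int jit_image_set_ro(unsigned long addr, int nr_pages)
{
	int ret;

	ret = set_memory_ro(addr, nr_pages);
	if (ret)
		return -EFAULT;	/* or propagate ret directly */

	return 0;
}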
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-01 5:04 ` [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Dev Jain
@ 2025-09-01 8:03 ` Ryan Roberts
2025-09-03 0:21 ` Yang Shi
0 siblings, 1 reply; 17+ messages in thread
From: Ryan Roberts @ 2025-09-01 8:03 UTC (permalink / raw)
To: Dev Jain, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Yang Shi, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 01/09/2025 06:04, Dev Jain wrote:
>
> On 29/08/25 5:22 pm, Ryan Roberts wrote:
>> [...]
>
> Hi Yang and Ryan,
>
> I observe that there are various callsites which will ultimately use
> update_range_prot() (from patch 1) but do not check the return value. I am
> listing the ones I could find:
So your concern is that prior to patch #3 in this series, any error returned by
__change_memory_common() would be due to a programming error only. But patch #3
introduces the possibility of a dynamic error (-ENOMEM) due to the need to
allocate pgtable memory to split a mapping?
There is a WARN_ON_ONCE(ret) for the return code of split_kernel_leaf_mapping()
which will at least make the error visible, but I agree it's not a great solution.
>
> set_memory_ro() in bpf_jit_comp.c
There is a set_memory_rw() for the same region of memory directly above this,
which will return -EFAULT on failure. If that one succeeded, then the pgtable
must already be appropriately split for set_memory_ro(), so that should never
fail in practice. I agree with improving the robustness of the code by returning
-EFAULT (or just propagating the error?) as you suggest, though.
> set_memory_valid() in kernel_map_pages() in pageattr.c
This is used by CONFIG_DEBUG_PAGEALLOC to make pages in the linear map invalid
while they are not in use to catch programming errors. So if making a page
invalid during freeing fails would not technically lead to a huge issue, it just
reduces our capability of catching an errant access to that free memory.
In principle, if we were able to make the memory invalid, we should therefore be
able to make it valid again, because the mappings should be sufficiently split
already. But that doesn't actually work, because we might be allocating a
smaller order than was freed, so we might not have split at free-time to the
granularity required at allocation-time.
But as you say, for CONFIG_DEBUG_PAGEALLOC we disable this whole path anyway, so
no issue here.
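For reference, the arm64 side of this path is roughly the following (a sketch
from memory, not a verbatim copy of arch/arm64/mm/pageattr.c):

/*
 * DEBUG_PAGEALLOC hook: flip the valid bit on the linear map alias of
 * pages as they are allocated/freed. If the direct map cannot be
 * modified at page granularity, bail out early; we only lose debug
 * coverage, not correctness.
 */
void __kernel_map_pages(struct page *page, int numpages, int enable)
{
	if (!can_set_direct_map())
		return;

	set_memory_valid((unsigned long)page_address(page), numpages, enable);
}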
> set_direct_map_invalid_noflush() in vm_reset_perms() in vmalloc.c
> set_direct_map_default_noflush() in vm_reset_perms() in vmalloc.c, and in
> secretmem.c
> (the secretmem.c ones should be safe as explained in the comments therein)
Agreed for secretmem. vmalloc looks like a problem though...
If vmalloc was only setting the linear map back to default permissions, I guess
this wouldn't be an issue because we must have split the linear map successfully
when changing away from default permissions in the first place. But the fact
that it is unconditionally setting the linear map pages to invalid then back to
default causes issues; I guess even without the risk of -ENOMEM, this will cause
the linear map to be split to PTEs over time as vmalloc allocs and frees?
We probably need to think through how we can solve this. It's not clear to me
why vm_reset_perms() wants to unconditionally, transiently, set the mappings to
invalid?
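For reference, vm_reset_perms() does roughly the following (a sketch from
memory of mm/vmalloc.c, not verbatim, with the range computation elided):

static void vm_reset_perms(struct vm_struct *area)
{
	unsigned long start = ULONG_MAX, end = 0;
	int flush_dmap = 0;

	/* (computation of start/end/flush_dmap over area->pages elided) */

	/*
	 * Transiently invalidate the direct map aliases, flush the TLB,
	 * then restore default permissions. The invalid->default dance
	 * happens even for areas whose pages never left the default
	 * permissions, which is what forces the splits on free.
	 */
	set_area_direct_map(area, set_direct_map_invalid_noflush);
	_vm_unmap_aliases(start, end, flush_dmap);
	set_area_direct_map(area, set_direct_map_default_noflush);
}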
>
> The first one I think can be handled easily by returning -EFAULT.
>
> For the second, we are already returning in case of !can_set_direct_map, which
> renders DEBUG_PAGEALLOC useless. So maybe it is
> safe to ignore the ret from set_memory_valid?
>
> For the third, the call chain is a sequence of must-succeed void functions.
> Notably, when using vfree(), we may have to allocate a single
> pagetable page for splitting.
>
> I am wondering whether we can just have a warn_on_once or something for the case
> when we fail to allocate a pagetable page. Or, Ryan had
> suggested in an off-the-list conversation that we can maintain a cache of PTE
> tables for every PMD block mapping, which will give us
> the same memory consumption as we do today, but not sure if this is worth it.
> x86 can already handle splitting but due to the callchains
> I have described above, it has the same problem, and the code has been working
> for years :)
I think it's preferable to avoid having to keep a cache of pgtable memory if we
can...
Thanks,
Ryan
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-01 8:03 ` Ryan Roberts
@ 2025-09-03 0:21 ` Yang Shi
2025-09-03 0:50 ` Yang Shi
0 siblings, 1 reply; 17+ messages in thread
From: Yang Shi @ 2025-09-03 0:21 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, Catalin Marinas, Will Deacon,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 9/1/25 1:03 AM, Ryan Roberts wrote:
> On 01/09/2025 06:04, Dev Jain wrote:
>> On 29/08/25 5:22 pm, Ryan Roberts wrote:
>>> [...]
>> Hi Yang and Ryan,
>>
>> I observe that there are various callsites which will ultimately use
>> update_range_prot() (from patch 1) but do not check the return value. I am
>> listing the ones I could find:
> So your concern is that prior to patch #3 in this series, any error returned by
> __change_memory_common() would be due to programming error only. But patch #3
> introduces the possibility of dynamic error (-ENOMEM) due to the need to
> allocate pgtable memory to split a mapping?
>
> There is a WARN_ON_ONCE(ret) for the return code of split_kernel_leaf_mapping()
> which will at least make the error visible, but I agree it's not a great solution.
>
>> set_memory_ro() in bpf_jit_comp.c
Do you mean arch/arm64/net/bpf_jit_comp.c? If so, I think you can just
check the return value and return -EFAULT, just like the set_memory_rw()
above it does.
> There is a set_memory_rw() for the same region of memory directly above this,
> which will return -EFAULT on failure. If that one succeeded, then the pgtable
> must already be appropriately split for set_memory_ro() so that should never
> fail in practice. I agree with improving the robustness of the code by returning
> -EFAULT (or just propagate the error?) as you suggest though.
Yeah, I agree. This one should be easy to resolve.
>
>> set_memory_valid() in kernel_map_pages() in pageattr.c
> This is used by CONFIG_DEBUG_PAGEALLOC to make pages in the linear map invalid
> while they are not in use to catch programming errors. So if making a page
> invalid during freeing fails would not technically lead to a huge issue, it just
> reduces our capability of catching an errant access to that free memory.
>
> In principle, if we were able to make the memory invalid, we should therefore be
> able to make it valid again, because the mappings should be sufficiently split
> already. But that doesn't actually work, because we might be allocating a
> smaller order than was freed, so we might not have split at free-time to the
> granularity required at allocation-time.
>
> But as you say, for CONFIG_DEBUG_PAGEALLOC we disable this whole path anyway, so
> no issue here.
Yes, agreed.
>
>> set_direct_map_invalid_noflush() in vm_reset_perms() in vmalloc.c
>> set_direct_map_default_noflush() in vm_reset_perms() in vmalloc.c, and in
>> secretmem.c
>> (the secretmem.c ones should be safe as explained in the comments therein)
> Agreed for secretmem. vmalloc looks like a problem though...
>
> If vmalloc was only setting the linear map back to default permissions, I guess
> this wouldn't be an issue because we must have split the linear map successfully
> when changing away from default permissions in the first place. But the fact
Yes, agreed.
> that it is unconditionally setting the linear map pages to invalid then back to
> default causes issues; I guess even without the risk of -ENOMEM, this will cause
> the linear map to be split to PTEs over time as vmalloc allocs and frees?
It is possible. However, vm_reset_perms() is not called that often.
Theoretically there are plenty of other operations, for example
loading/unloading modules, that can cause the linear mapping to be split
over time. So this one is not that special IMHO.
>
> We probably need to think through how we can solve this. It's not clear to me
> why vm_reset_perms() wants to unconditionally, transiently, set the mappings to
> invalid?
It seems like vm_reset_perms() is only called when the VM_FLUSH_RESET_PERMS
flag is passed, which is just done for secretmem and hyperv. It sounds
like a preventive security measure to me.
>
>> The first one I think can be handled easily by returning -EFAULT.
It may not be that simple. set_direct_map_invalid_noflush() is called on
a per-page basis, and so is update_range_prot(). If the split requires
allocating multiple page table pages, we may end up with some pages'
permissions changed (where the page table page allocation succeeded)
while the remainder are skipped due to an allocation failure. vfree()
would need to handle such a case by restoring the pages' permissions
before returning any errno.
Anyway, it sounds like a general problem rather than an ARM-specific one.
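To make the partial-failure window concrete, the loop in question looks
roughly like this (a sketch from memory of set_area_direct_map() in
mm/vmalloc.c, not verbatim):

/*
 * The helper is applied one page at a time and its return value is
 * dropped, so an -ENOMEM from a pgtable split partway through leaves
 * the earlier pages already changed while the later ones are untouched.
 */
static void set_area_direct_map(const struct vm_struct *area,
				int (*set_direct_map)(struct page *page))
{
	int i;

	for (i = 0; i < area->nr_pages; i++)
		if (page_address(area->pages[i]))
			set_direct_map(area->pages[i]);
}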
>>
>> For the second, we are already returning in case of !can_set_direct_map, which
>> renders DEBUG_PAGEALLOC useless. So maybe it is
>> safe to ignore the ret from set_memory_valid?
>>
>> For the third, the call chain is a sequence of must-succeed void functions.
>> Notably, when using vfree(), we may have to allocate a single
>> pagetable page for splitting.
>>
>> I am wondering whether we can just have a warn_on_once or something for the case
>> when we fail to allocate a pagetable page. Or, Ryan had
>> suggested in an off-the-list conversation that we can maintain a cache of PTE
>> tables for every PMD block mapping, which will give us
>> the same memory consumption as we do today, but not sure if this is worth it.
>> x86 can already handle splitting but due to the callchains
>> I have described above, it has the same problem, and the code has been working
>> for years :)
> I think it's preferable to avoid having to keep a cache of pgtable memory if we
> can...
Yes, I agree. We simply don't know how many pages we need to cache, and
it still can't guarantee 100% allocation success.
Thanks,
Yang
>
> Thanks,
> Ryan
>
>
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-03 0:21 ` Yang Shi
@ 2025-09-03 0:50 ` Yang Shi
0 siblings, 0 replies; 17+ messages in thread
From: Yang Shi @ 2025-09-03 0:50 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, Catalin Marinas, Will Deacon,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
>>> I am wondering whether we can just have a warn_on_once or something for
>>> the case when we fail to allocate a pagetable page. Or, Ryan had suggested
>>> in an off-the-list conversation that we can maintain a cache of PTE tables
>>> for every PMD block mapping, which will give us the same memory consumption
>>> as we do today, but not sure if this is worth it. x86 can already handle
>>> splitting but due to the callchains I have described above, it has the same
>>> problem, and the code has been working for years :)
>>
>> I think it's preferable to avoid having to keep a cache of pgtable memory
>> if we can...
>
> Yes, I agree. We simply don't know how many pages we need to cache, and it
> still can't guarantee 100% allocation success.
This is wrong... We can know how many pages will be needed for splitting
the linear mapping to PTEs in the worst case once the linear mapping is
finalized. But it may require a few hundred megabytes of memory to
guarantee allocation success. I don't think it is worth it for such a rare
corner case.
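(Rough arithmetic, assuming 4K base pages: fully splitting the linear map
needs one 4K PTE table per 2M mapped, i.e. about 0.2% of RAM, so roughly
512M of PTE tables for a 256G machine, plus a negligible extra 4K PMD
table per 1G.)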
Thanks,
Yang
>
> Thanks,
> Yang
>
>>
>> Thanks,
>> Ryan
>>
>>
>
* Re: [PATCH v7 2/6] arm64: cpufeature: add AmpereOne to BBML2 allow list
2025-08-29 11:52 ` [PATCH v7 2/6] arm64: cpufeature: add AmpereOne to BBML2 allow list Ryan Roberts
2025-08-29 22:08 ` Yang Shi
@ 2025-09-03 17:24 ` Catalin Marinas
1 sibling, 0 replies; 17+ messages in thread
From: Catalin Marinas @ 2025-09-03 17:24 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl, linux-arm-kernel,
linux-kernel, linux-mm
On Fri, Aug 29, 2025 at 12:52:43PM +0100, Ryan Roberts wrote:
> From: Yang Shi <yang@os.amperecomputing.com>
>
> AmpereOne supports BBML2 without conflict abort, add to the allow list.
>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Here it is again:
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
* Re: [PATCH v7 3/6] arm64: mm: support large block mapping when rodata=full
2025-08-29 11:52 ` [PATCH v7 3/6] arm64: mm: support large block mapping when rodata=full Ryan Roberts
@ 2025-09-03 19:15 ` Catalin Marinas
0 siblings, 0 replies; 17+ messages in thread
From: Catalin Marinas @ 2025-09-03 19:15 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl, linux-arm-kernel,
linux-kernel, linux-mm
On Fri, Aug 29, 2025 at 12:52:44PM +0100, Ryan Roberts wrote:
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 34e5d78af076..114b88216b0c 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -481,6 +481,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
> int flags);
> #endif
>
> +#define INVALID_PHYS_ADDR -1
Nitpick: (-1UL) (or (-1ULL), KVM_PHYS_INVALID is defined as the latter).
Otherwise the patch looks fine.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
* Re: [PATCH v7 4/6] arm64: mm: Optimize split_kernel_leaf_mapping()
2025-08-29 11:52 ` [PATCH v7 4/6] arm64: mm: Optimize split_kernel_leaf_mapping() Ryan Roberts
2025-08-29 22:11 ` Yang Shi
@ 2025-09-03 19:20 ` Catalin Marinas
1 sibling, 0 replies; 17+ messages in thread
From: Catalin Marinas @ 2025-09-03 19:20 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl, linux-arm-kernel,
linux-kernel, linux-mm
On Fri, Aug 29, 2025 at 12:52:45PM +0100, Ryan Roberts wrote:
> The common case for split_kernel_leaf_mapping() is for a single page.
> Let's optimize this by only calling split_kernel_leaf_mapping_locked()
> once.
>
> Since the start and end address are PAGE_SIZE apart, they must be
> contained within the same contpte block. Further, if start is at the
> beginning of the block or end is at the end of the block, then the other
> address must be in the _middle_ of the block. So if we split on this
> middle-of-the-contpte-block address, it is guaranteed that the
> containing contpte block is split to ptes and both start and end are
> therefore mapped by pte.
>
> This avoids the second call to split_kernel_leaf_mapping_locked()
> meaning we only have to walk the pgtable once.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
And I agree with Yang, you can just fold this into the previous patch.
End of thread [~2025-09-03 19:20 UTC]
Thread overview: 17+ messages
2025-08-29 11:52 [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
2025-08-29 11:52 ` [PATCH v7 1/6] arm64: Enable permission change on arm64 kernel block mappings Ryan Roberts
2025-08-29 11:52 ` [PATCH v7 2/6] arm64: cpufeature: add AmpereOne to BBML2 allow list Ryan Roberts
2025-08-29 22:08 ` Yang Shi
2025-09-03 17:24 ` Catalin Marinas
2025-08-29 11:52 ` [PATCH v7 3/6] arm64: mm: support large block mapping when rodata=full Ryan Roberts
2025-09-03 19:15 ` Catalin Marinas
2025-08-29 11:52 ` [PATCH v7 4/6] arm64: mm: Optimize split_kernel_leaf_mapping() Ryan Roberts
2025-08-29 22:11 ` Yang Shi
2025-09-03 19:20 ` Catalin Marinas
2025-08-29 11:52 ` [PATCH v7 5/6] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs Ryan Roberts
2025-08-29 11:52 ` [PATCH v7 6/6] arm64: mm: Optimize linear_map_split_to_ptes() Ryan Roberts
2025-08-29 22:27 ` Yang Shi
2025-09-01 5:04 ` [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Dev Jain
2025-09-01 8:03 ` Ryan Roberts
2025-09-03 0:21 ` Yang Shi
2025-09-03 0:50 ` Yang Shi