* [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
@ 2025-08-05 8:13 Ryan Roberts
2025-08-05 8:13 ` [RFC PATCH v6 1/4] arm64: Enable permission change on arm64 kernel block mappings Ryan Roberts
` (5 more replies)
0 siblings, 6 replies; 22+ messages in thread
From: Ryan Roberts @ 2025-08-05 8:13 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
Hi All,
This is a new version built on top of Yang Shi's work at [1]. Yang and I have
been discussing (disagreeing?) about the best way to implement the last 2
patches. So I've reworked them and am posting as RFC to illustrate how I think
this feature should be implemented, but I've retained Yang as primary author
since it is all based on his work. I'd appreciate feedback from Catalin and/or
Will on whether this is the right approach, so that hopefully we can get this
into shape for 6.18.
The first 2 patches are unchanged from Yang's v5; the first patch comes from Dev
and the rest of the series depends upon it.
I've tested this on an AmpereOne system (a VM with 12G RAM) in all 3 possible
modes by hacking the BBML2 feature detection code (a sketch of one possible
hack follows the mode list):
- mode 1: All CPUs support BBML2 so the linear map uses large mappings
- mode 2: Boot CPU does not support BBML2 so linear map uses pte mappings
- mode 3: Boot CPU supports BBML2 but secondaries do not so linear map
initially uses large mappings but is then repainted to use pte mappings
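For illustration, one hypothetical way to force mode 3 (not necessarily the
exact hack used for this testing) is to make the detection helper lie on
secondary CPUs; has_bbml2_noabort() and bbml2_noabort_available() are the
helpers as they exist after this series:
static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps,
			      int scope)
{
	/*
	 * Hypothetical test hack: pretend only the boot CPU supports BBML2
	 * so the linear map is initially painted with large mappings and is
	 * then repainted to ptes once the secondaries come online.
	 */
	if (smp_processor_id() != 0)
		return false;
	return bbml2_noabort_available();
}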
In all cases, the mm selftests run and no regressions are observed, and
ptdump of the linear map is as expected:
Mode 1:
=======
---[ Linear Mapping start ]---
0xffff000000000000-0xffff000000200000 2M PMD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
0xffff000000200000-0xffff000000210000 64K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000000210000-0xffff000000400000 1984K PTE ro NX SHD AF UXN MEM/NORMAL
0xffff000000400000-0xffff000002400000 32M PMD ro NX SHD AF BLK UXN MEM/NORMAL
0xffff000002400000-0xffff000002550000 1344K PTE ro NX SHD AF UXN MEM/NORMAL
0xffff000002550000-0xffff000002600000 704K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000002600000-0xffff000004000000 26M PMD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
0xffff000004000000-0xffff000040000000 960M PMD RW NX SHD AF CON BLK UXN MEM/NORMAL-TAGGED
0xffff000040000000-0xffff000140000000 4G PUD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
0xffff000140000000-0xffff000142000000 32M PMD RW NX SHD AF CON BLK UXN MEM/NORMAL-TAGGED
0xffff000142000000-0xffff000142120000 1152K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000142120000-0xffff000142128000 32K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142128000-0xffff000142159000 196K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142159000-0xffff000142160000 28K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142160000-0xffff000142240000 896K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000142240000-0xffff00014224e000 56K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff00014224e000-0xffff000142250000 8K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142250000-0xffff000142260000 64K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142260000-0xffff000142280000 128K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000142280000-0xffff000142288000 32K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142288000-0xffff000142290000 32K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142290000-0xffff0001422a0000 64K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff0001422a0000-0xffff000142465000 1812K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142465000-0xffff000142470000 44K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000142470000-0xffff000142600000 1600K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000142600000-0xffff000144000000 26M PMD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
0xffff000144000000-0xffff000180000000 960M PMD RW NX SHD AF CON BLK UXN MEM/NORMAL-TAGGED
0xffff000180000000-0xffff000181a00000 26M PMD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
0xffff000181a00000-0xffff000181b90000 1600K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000181b90000-0xffff000181b9d000 52K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000181b9d000-0xffff000181c80000 908K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000181c80000-0xffff000181c90000 64K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000181c90000-0xffff000181ca0000 64K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000181ca0000-0xffff000181dbd000 1140K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000181dbd000-0xffff000181dc0000 12K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000181dc0000-0xffff000181e00000 256K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
0xffff000181e00000-0xffff000182000000 2M PMD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
0xffff000182000000-0xffff0001c0000000 992M PMD RW NX SHD AF CON BLK UXN MEM/NORMAL-TAGGED
0xffff0001c0000000-0xffff000300000000 5G PUD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
0xffff000300000000-0xffff008000000000 500G PUD
0xffff008000000000-0xffff800000000000 130560G PGD
---[ Linear Mapping end ]---
Mode 3:
=======
---[ Linear Mapping start ]---
0xffff000000000000-0xffff000000210000 2112K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000000210000-0xffff000000400000 1984K PTE ro NX SHD AF UXN MEM/NORMAL
0xffff000000400000-0xffff000002400000 32M PMD ro NX SHD AF BLK UXN MEM/NORMAL
0xffff000002400000-0xffff000002550000 1344K PTE ro NX SHD AF UXN MEM/NORMAL
0xffff000002550000-0xffff000143a61000 5264452K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000143a61000-0xffff000143c61000 2M PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000143c61000-0xffff000181b9a000 1015012K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000181b9a000-0xffff000181d9a000 2M PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000181d9a000-0xffff000300000000 6261144K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
0xffff000300000000-0xffff008000000000 500G PUD
0xffff008000000000-0xffff800000000000 130560G PGD
---[ Linear Mapping end ]---
[1] https://lore.kernel.org/linux-arm-kernel/20250724221216.1998696-1-yang@os.amperecomputing.com/
Thanks,
Ryan
Dev Jain (1):
arm64: Enable permission change on arm64 kernel block mappings
Yang Shi (3):
arm64: cpufeature: add AmpereOne to BBML2 allow list
arm64: mm: support large block mapping when rodata=full
arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs
arch/arm64/include/asm/cpufeature.h | 2 +
arch/arm64/include/asm/mmu.h | 4 +
arch/arm64/include/asm/pgtable.h | 5 +
arch/arm64/kernel/cpufeature.c | 17 +-
arch/arm64/mm/mmu.c | 368 +++++++++++++++++++++++++++-
arch/arm64/mm/pageattr.c | 161 +++++++++---
arch/arm64/mm/proc.S | 25 +-
include/linux/pagewalk.h | 3 +
mm/pagewalk.c | 24 ++
9 files changed, 566 insertions(+), 43 deletions(-)
--
2.43.0
* [RFC PATCH v6 1/4] arm64: Enable permission change on arm64 kernel block mappings
2025-08-05 8:13 [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
@ 2025-08-05 8:13 ` Ryan Roberts
2025-08-28 16:26 ` Catalin Marinas
2025-08-05 8:13 ` [RFC PATCH v6 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Ryan Roberts
` (4 subsequent siblings)
5 siblings, 1 reply; 22+ messages in thread
From: Ryan Roberts @ 2025-08-05 8:13 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
From: Dev Jain <dev.jain@arm.com>
This patch paves the way to enabling huge mappings in the vmalloc space and
the linear map by default on arm64. For this we must ensure that we
can handle any permission games on the kernel (init_mm) pagetable.
Currently, __change_memory_common() uses apply_to_page_range(), which
does not support changing permissions for block mappings. We move away
from this by using the pagewalk API, similar to what riscv does today;
however, it is the responsibility of the caller to ensure that we do not
pass a range overlapping a partial block mapping or cont mapping; in
such a case, the system must be able to support range splitting.
This patch is tied to Yang Shi's attempt [1] at using huge mappings in
the linear map when the system supports BBML2, in which case we
will be able to split the linear mapping if needed without
break-before-make. Thus, Yang's series, IIUC, will be one such user of
my patch; suppose we are changing permissions on a range of the linear
map backed by PMD-hugepages, then the sequence of operations should look
like the following:
split_range(start);
split_range(end);
__change_memory_common(start, end);
However, this patch can be used independently of Yang's; since
permission games are currently only played on pte mappings (because
apply_to_page_range() does not support anything else), this patch provides
the mechanism for enabling huge mappings for various kernel mappings such as
the linear map and vmalloc.
---------------------
Implementation
---------------------
arm64 currently changes permissions on vmalloc objects locklessly, via
apply_to_page_range(), whose limitation is that it cannot change permissions
for block mappings. Therefore, we move over to the generic pagewalk
API, paving the way for enabling huge mappings by default on
kernel space mappings, which leads to more efficient TLB usage.
However, the API currently requires init_mm.mmap_lock to be held. To
avoid the unnecessary bottleneck of the mmap_lock for our use case, this
patch extends the generic API to be usable locklessly, so as to retain
the existing behaviour for changing permissions. Apart from this reason,
it is noted at [2] that KFENCE can manipulate kernel pgtable entries
during softirqs. It does this by calling set_memory_valid() ->
__change_memory_common(). This being a non-sleepable context, we cannot
take the init_mm mmap lock.
Add comments to highlight the conditions under which we can use the
lockless variant - no underlying VMA, and the user having exclusive
control over the range, thus guaranteeing no concurrent access.
We require that the start and end of a given range do not partially
overlap block mappings, or cont mappings. Return -EINVAL in case a
partial block mapping is detected in any of the PGD/P4D/PUD/PMD levels;
add a corresponding comment in update_range_prot() to warn that
eliminating such a condition is the responsibility of the caller.
Note that the pte level callback may change permissions for a whole
contpte block, and that will be done one pte at a time, as opposed to an
atomic operation for the block mappings. This is fine as any access will
decode either the old or the new permission until the TLBI.
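As an illustrative sketch (not a literal extract from the patch; the pagewalk
drives the iteration in practice and the mask names are from struct
page_change_data), the per-entry update for a contpte block amounts to:
	/*
	 * Each of the CONT_PTES entries is updated with an individual store;
	 * a concurrent access sees either the old or the new permission for
	 * that entry until the final TLBI.
	 */
	for (i = 0; i < CONT_PTES; i++, ptep++) {
		pte_t pte = __ptep_get(ptep);
		pte = clear_pte_bit(pte, clear_mask);
		pte = set_pte_bit(pte, set_mask);
		__set_pte(ptep, pte);
	}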
apply_to_page_range() currently performs all pte level callbacks while
in lazy mmu mode. Since arm64 can optimize performance by batching
barriers when modifying kernel pgtables in lazy mmu mode, we would like
to continue to benefit from this optimisation. Unfortunately
walk_kernel_page_table_range() does not use lazy mmu mode. However,
since the pagewalk framework is not allocating any memory, we can safely
bracket the whole operation inside lazy mmu mode ourselves. Therefore,
wrap the call to walk_kernel_page_table_range() with the lazy MMU
helpers.
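The resulting bracketing (same helpers as in the hunk below; reproduced here
only to summarise the pattern) is:
	arch_enter_lazy_mmu_mode();
	ret = walk_kernel_page_table_range_lockless(start, start + size,
						    &pageattr_ops, &data);
	arch_leave_lazy_mmu_mode();
Any barriers needed for the individual pgtable stores can then be batched and
issued on leaving lazy MMU mode rather than per update.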
[1] https://lore.kernel.org/all/20250304222018.615808-1-yang@os.amperecomputing.com/
[2] https://lore.kernel.org/linux-arm-kernel/89d0ad18-4772-4d8f-ae8a-7c48d26a927e@arm.com/
Signed-off-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/mm/pageattr.c | 155 +++++++++++++++++++++++++++++++--------
include/linux/pagewalk.h | 3 +
mm/pagewalk.c | 24 ++++++
3 files changed, 150 insertions(+), 32 deletions(-)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 04d4a8f676db..c6a85000fa0e 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -8,6 +8,7 @@
#include <linux/mem_encrypt.h>
#include <linux/sched.h>
#include <linux/vmalloc.h>
+#include <linux/pagewalk.h>
#include <asm/cacheflush.h>
#include <asm/pgtable-prot.h>
@@ -20,6 +21,99 @@ struct page_change_data {
pgprot_t clear_mask;
};
+static ptdesc_t set_pageattr_masks(ptdesc_t val, struct mm_walk *walk)
+{
+ struct page_change_data *masks = walk->private;
+
+ val &= ~(pgprot_val(masks->clear_mask));
+ val |= (pgprot_val(masks->set_mask));
+
+ return val;
+}
+
+static int pageattr_pgd_entry(pgd_t *pgd, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pgd_t val = pgdp_get(pgd);
+
+ if (pgd_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != PGDIR_SIZE))
+ return -EINVAL;
+ val = __pgd(set_pageattr_masks(pgd_val(val), walk));
+ set_pgd(pgd, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_p4d_entry(p4d_t *p4d, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ p4d_t val = p4dp_get(p4d);
+
+ if (p4d_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != P4D_SIZE))
+ return -EINVAL;
+ val = __p4d(set_pageattr_masks(p4d_val(val), walk));
+ set_p4d(p4d, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_pud_entry(pud_t *pud, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pud_t val = pudp_get(pud);
+
+ if (pud_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != PUD_SIZE))
+ return -EINVAL;
+ val = __pud(set_pageattr_masks(pud_val(val), walk));
+ set_pud(pud, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_pmd_entry(pmd_t *pmd, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pmd_t val = pmdp_get(pmd);
+
+ if (pmd_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != PMD_SIZE))
+ return -EINVAL;
+ val = __pmd(set_pageattr_masks(pmd_val(val), walk));
+ set_pmd(pmd, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_pte_entry(pte_t *pte, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pte_t val = __ptep_get(pte);
+
+ val = __pte(set_pageattr_masks(pte_val(val), walk));
+ __set_pte(pte, val);
+
+ return 0;
+}
+
+static const struct mm_walk_ops pageattr_ops = {
+ .pgd_entry = pageattr_pgd_entry,
+ .p4d_entry = pageattr_p4d_entry,
+ .pud_entry = pageattr_pud_entry,
+ .pmd_entry = pageattr_pmd_entry,
+ .pte_entry = pageattr_pte_entry,
+};
+
bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED);
bool can_set_direct_map(void)
@@ -37,33 +131,35 @@ bool can_set_direct_map(void)
arm64_kfence_can_set_direct_map() || is_realm_world();
}
-static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
+static int update_range_prot(unsigned long start, unsigned long size,
+ pgprot_t set_mask, pgprot_t clear_mask)
{
- struct page_change_data *cdata = data;
- pte_t pte = __ptep_get(ptep);
+ struct page_change_data data;
+ int ret;
- pte = clear_pte_bit(pte, cdata->clear_mask);
- pte = set_pte_bit(pte, cdata->set_mask);
+ data.set_mask = set_mask;
+ data.clear_mask = clear_mask;
- __set_pte(ptep, pte);
- return 0;
+ arch_enter_lazy_mmu_mode();
+
+ /*
+ * The caller must ensure that the range we are operating on does not
+ * partially overlap a block mapping, or a cont mapping. Any such case
+ * must be eliminated by splitting the mapping.
+ */
+ ret = walk_kernel_page_table_range_lockless(start, start + size,
+ &pageattr_ops, &data);
+ arch_leave_lazy_mmu_mode();
+
+ return ret;
}
-/*
- * This function assumes that the range is mapped with PAGE_SIZE pages.
- */
static int __change_memory_common(unsigned long start, unsigned long size,
- pgprot_t set_mask, pgprot_t clear_mask)
+ pgprot_t set_mask, pgprot_t clear_mask)
{
- struct page_change_data data;
int ret;
- data.set_mask = set_mask;
- data.clear_mask = clear_mask;
-
- ret = apply_to_page_range(&init_mm, start, size, change_page_range,
- &data);
-
+ ret = update_range_prot(start, size, set_mask, clear_mask);
/*
* If the memory is being made valid without changing any other bits
* then a TLBI isn't required as a non-valid entry cannot be cached in
@@ -71,6 +167,7 @@ static int __change_memory_common(unsigned long start, unsigned long size,
*/
if (pgprot_val(set_mask) != PTE_VALID || pgprot_val(clear_mask))
flush_tlb_kernel_range(start, start + size);
+
return ret;
}
@@ -174,32 +271,26 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
int set_direct_map_invalid_noflush(struct page *page)
{
- struct page_change_data data = {
- .set_mask = __pgprot(0),
- .clear_mask = __pgprot(PTE_VALID),
- };
+ pgprot_t clear_mask = __pgprot(PTE_VALID);
+ pgprot_t set_mask = __pgprot(0);
if (!can_set_direct_map())
return 0;
- return apply_to_page_range(&init_mm,
- (unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ return update_range_prot((unsigned long)page_address(page),
+ PAGE_SIZE, set_mask, clear_mask);
}
int set_direct_map_default_noflush(struct page *page)
{
- struct page_change_data data = {
- .set_mask = __pgprot(PTE_VALID | PTE_WRITE),
- .clear_mask = __pgprot(PTE_RDONLY),
- };
+ pgprot_t set_mask = __pgprot(PTE_VALID | PTE_WRITE);
+ pgprot_t clear_mask = __pgprot(PTE_RDONLY);
if (!can_set_direct_map())
return 0;
- return apply_to_page_range(&init_mm,
- (unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ return update_range_prot((unsigned long)page_address(page),
+ PAGE_SIZE, set_mask, clear_mask);
}
static int __set_memory_enc_dec(unsigned long addr,
diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index 682472c15495..8212e8f2d2d5 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -134,6 +134,9 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
int walk_kernel_page_table_range(unsigned long start,
unsigned long end, const struct mm_walk_ops *ops,
pgd_t *pgd, void *private);
+int walk_kernel_page_table_range_lockless(unsigned long start,
+ unsigned long end, const struct mm_walk_ops *ops,
+ void *private);
int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
unsigned long end, const struct mm_walk_ops *ops,
void *private);
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 648038247a8d..18a675ab87cf 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -633,6 +633,30 @@ int walk_kernel_page_table_range(unsigned long start, unsigned long end,
return walk_pgd_range(start, end, &walk);
}
+/*
+ * Use this function to walk the kernel page tables locklessly. It should be
+ * guaranteed that the caller has exclusive access over the range they are
+ * operating on - that there should be no concurrent access, for example,
+ * changing permissions for vmalloc objects.
+ */
+int walk_kernel_page_table_range_lockless(unsigned long start, unsigned long end,
+ const struct mm_walk_ops *ops, void *private)
+{
+ struct mm_walk walk = {
+ .ops = ops,
+ .mm = &init_mm,
+ .private = private,
+ .no_vma = true
+ };
+
+ if (start >= end)
+ return -EINVAL;
+ if (!check_ops_valid(ops))
+ return -EINVAL;
+
+ return walk_pgd_range(start, end, &walk);
+}
+
/**
* walk_page_range_debug - walk a range of pagetables not backed by a vma
* @mm: mm_struct representing the target process of page table walk
--
2.43.0
* [RFC PATCH v6 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list
2025-08-05 8:13 [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
2025-08-05 8:13 ` [RFC PATCH v6 1/4] arm64: Enable permission change on arm64 kernel block mappings Ryan Roberts
@ 2025-08-05 8:13 ` Ryan Roberts
2025-08-28 16:29 ` Catalin Marinas
2025-08-05 8:13 ` [RFC PATCH v6 3/4] arm64: mm: support large block mapping when rodata=full Ryan Roberts
` (3 subsequent siblings)
5 siblings, 1 reply; 22+ messages in thread
From: Ryan Roberts @ 2025-08-05 8:13 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
From: Yang Shi <yang@os.amperecomputing.com>
AmpereOne supports BBML2 without conflict abort; add it to the allow list.
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/kernel/cpufeature.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9ad065f15f1d..b93f4ee57176 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2234,6 +2234,8 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
static const struct midr_range supports_bbml2_noabort_list[] = {
MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
+ MIDR_ALL_VERSIONS(MIDR_AMPERE1),
+ MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
{}
};
--
2.43.0
* [RFC PATCH v6 3/4] arm64: mm: support large block mapping when rodata=full
2025-08-05 8:13 [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
2025-08-05 8:13 ` [RFC PATCH v6 1/4] arm64: Enable permission change on arm64 kernel block mappings Ryan Roberts
2025-08-05 8:13 ` [RFC PATCH v6 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Ryan Roberts
@ 2025-08-05 8:13 ` Ryan Roberts
2025-08-05 17:59 ` Yang Shi
2025-08-28 17:09 ` Catalin Marinas
2025-08-05 8:13 ` [RFC PATCH v6 4/4] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs Ryan Roberts
` (2 subsequent siblings)
5 siblings, 2 replies; 22+ messages in thread
From: Ryan Roberts @ 2025-08-05 8:13 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
From: Yang Shi <yang@os.amperecomputing.com>
When rodata=full is specified, the kernel linear mapping has to be mapped at
PTE level since large page table mappings can't be split, due to the
break-before-make rule on arm64.
This results in a few problems:
- performance degradation
- more TLB pressure
- memory waste for kernel page table
With FEAT_BBM level 2 support, splitting a large block mapping into
smaller ones no longer requires making the page table entry invalid.
This allows the kernel to split large block mappings on the fly.
Add kernel page table split support and use large block mappings by
default for rodata=full when FEAT_BBM level 2 is supported. When
changing permissions on the kernel linear mapping, the page table will be
split to a smaller granularity as needed.
Machines without FEAT_BBM level 2 will fall back to a PTE-mapped kernel
linear mapping when rodata=full.
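A minimal sketch of the resulting call pattern when a permission change hits a
block-mapped part of the linear map (this mirrors the update_range_prot()
hunk below; variable names as used there):
	/* Split the leaf entries containing both boundaries of the range. */
	ret = split_kernel_leaf_mapping(start);
	if (!ret)
		ret = split_kernel_leaf_mapping(start + size);
	if (WARN_ON_ONCE(ret))
		return ret;
	/*
	 * No leaf now partially overlaps [start, start + size), so the
	 * lockless pagewalk can update permissions entry by entry.
	 */
	ret = walk_kernel_page_table_range_lockless(start, start + size,
						    &pageattr_ops, &data);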
With this we saw a significant performance boost in some benchmarks and
much lower page table memory consumption on my AmpereOne machine (192
cores, 1P) with 256GB memory.
* Memory use after boot
Before:
MemTotal: 258988984 kB
MemFree: 254821700 kB
After:
MemTotal: 259505132 kB
MemFree: 255410264 kB
Around 500MB more memory is free to use. The larger the machine, the
more memory is saved.
* Memcached
We saw performance degradation when running the Memcached benchmark with
rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
With this patchset, ops/sec increases by around 3.5% and P99 latency is
reduced by around 9.6%.
The gain mainly comes from reduced kernel TLB misses: kernel TLB MPKI is
reduced by 28.5%.
The benchmark data is now on par with rodata=on too.
* Disk encryption (dm-crypt) benchmark
Ran the fio benchmark with the command below on a 128G ramdisk (ext4) with
disk encryption (dm-crypt).
fio --directory=/data --random_generator=lfsr --norandommap \
--randrepeat 1 --status-interval=999 --rw=write --bs=4k --loops=1 \
--ioengine=sync --iodepth=1 --numjobs=1 --fsync_on_close=1 \
--group_reporting --thread --name=iops-test-job --eta-newline=1 \
--size 100G
IOPS increases by 90% - 150% (the variance is high, but the worst result
of the good case is around 90% higher than the best result of the bad
case). Bandwidth increases and the average completion latency (clat) is
reduced proportionally.
* Sequential file read
Read a 100G file sequentially on XFS (xfs_io read with the page cache
populated). Bandwidth increases by 150%.
Co-developed-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
arch/arm64/include/asm/cpufeature.h | 2 +
arch/arm64/include/asm/mmu.h | 1 +
arch/arm64/include/asm/pgtable.h | 5 +
arch/arm64/kernel/cpufeature.c | 7 +-
arch/arm64/mm/mmu.c | 237 +++++++++++++++++++++++++++-
arch/arm64/mm/pageattr.c | 6 +
6 files changed, 252 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index bf13d676aae2..3f11e095a37d 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -871,6 +871,8 @@ static inline bool system_supports_pmuv3(void)
return cpus_have_final_cap(ARM64_HAS_PMUV3);
}
+bool bbml2_noabort_available(void);
+
static inline bool system_supports_bbml2_noabort(void)
{
return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 6e8aa8e72601..98565b1b93e8 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
pgprot_t prot, bool page_mappings_only);
extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
extern void mark_linear_text_alias_ro(void);
+extern int split_kernel_leaf_mapping(unsigned long addr);
/*
* This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index abd2dee416b3..aa89c2e67ebc 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -371,6 +371,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
}
+static inline pmd_t pmd_mknoncont(pmd_t pmd)
+{
+ return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
+}
+
#ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
static inline int pte_uffd_wp(pte_t pte)
{
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b93f4ee57176..f28f056087f3 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2217,7 +2217,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
}
-static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
+bool bbml2_noabort_available(void)
{
/*
* We want to allow usage of BBML2 in as wide a range of kernel contexts
@@ -2251,6 +2251,11 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
return true;
}
+static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
+{
+ return bbml2_noabort_available();
+}
+
#ifdef CONFIG_ARM64_PAN
static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
{
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index abd9725796e9..f6cd79287024 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -481,6 +481,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
int flags);
#endif
+#define INVALID_PHYS_ADDR -1
+
static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
enum pgtable_type pgtable_type)
{
@@ -488,7 +490,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
phys_addr_t pa;
- BUG_ON(!ptdesc);
+ if (!ptdesc)
+ return INVALID_PHYS_ADDR;
+
pa = page_to_phys(ptdesc_page(ptdesc));
switch (pgtable_type) {
@@ -509,16 +513,229 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
return pa;
}
+static phys_addr_t
+try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+{
+ return __pgd_pgtable_alloc(&init_mm, pgtable_type);
+}
+
static phys_addr_t __maybe_unused
pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
{
- return __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ phys_addr_t pa;
+
+ pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ BUG_ON(pa == INVALID_PHYS_ADDR);
+ return pa;
}
static phys_addr_t
pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
{
- return __pgd_pgtable_alloc(NULL, pgtable_type);
+ phys_addr_t pa;
+
+ pa = __pgd_pgtable_alloc(NULL, pgtable_type);
+ BUG_ON(pa == INVALID_PHYS_ADDR);
+ return pa;
+}
+
+static void split_contpte(pte_t *ptep)
+{
+ int i;
+
+ ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
+ for (i = 0; i < CONT_PTES; i++, ptep++)
+ __set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
+}
+
+static int split_pmd(pmd_t *pmdp, pmd_t pmd)
+{
+ pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
+ unsigned long pfn = pmd_pfn(pmd);
+ pgprot_t prot = pmd_pgprot(pmd);
+ phys_addr_t pte_phys;
+ pte_t *ptep;
+ int i;
+
+ pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
+ if (pte_phys == INVALID_PHYS_ADDR)
+ return -ENOMEM;
+ ptep = (pte_t *)phys_to_virt(pte_phys);
+
+ if (pgprot_val(prot) & PMD_SECT_PXN)
+ tableprot |= PMD_TABLE_PXN;
+
+ prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
+ __set_pte(ptep, pfn_pte(pfn, prot));
+
+ /*
+ * Ensure the pte entries are visible to the table walker by the time
+ * the pmd entry that points to the ptes is visible.
+ */
+ dsb(ishst);
+ __pmd_populate(pmdp, pte_phys, tableprot);
+
+ return 0;
+}
+
+static void split_contpmd(pmd_t *pmdp)
+{
+ int i;
+
+ pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
+ for (i = 0; i < CONT_PMDS; i++, pmdp++)
+ set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
+}
+
+static int split_pud(pud_t *pudp, pud_t pud)
+{
+ pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
+ unsigned int step = PMD_SIZE >> PAGE_SHIFT;
+ unsigned long pfn = pud_pfn(pud);
+ pgprot_t prot = pud_pgprot(pud);
+ phys_addr_t pmd_phys;
+ pmd_t *pmdp;
+ int i;
+
+ pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
+ if (pmd_phys == INVALID_PHYS_ADDR)
+ return -ENOMEM;
+ pmdp = (pmd_t *)phys_to_virt(pmd_phys);
+
+ if (pgprot_val(prot) & PMD_SECT_PXN)
+ tableprot |= PUD_TABLE_PXN;
+
+ prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
+ set_pmd(pmdp, pfn_pmd(pfn, prot));
+
+ /*
+ * Ensure the pmd entries are visible to the table walker by the time
+ * the pud entry that points to the pmds is visible.
+ */
+ dsb(ishst);
+ __pud_populate(pudp, pmd_phys, tableprot);
+
+ return 0;
+}
+
+static DEFINE_MUTEX(pgtable_split_lock);
+
+int split_kernel_leaf_mapping(unsigned long addr)
+{
+ pgd_t *pgdp, pgd;
+ p4d_t *p4dp, p4d;
+ pud_t *pudp, pud;
+ pmd_t *pmdp, pmd;
+ pte_t *ptep, pte;
+ int ret = 0;
+
+ /*
+ * !BBML2_NOABORT systems should not be trying to change permissions on
+ * anything that is not pte-mapped in the first place. Just return early
+ * and let the permission change code raise a warning if not already
+ * pte-mapped.
+ */
+ if (!system_supports_bbml2_noabort())
+ return 0;
+
+ /*
+ * Ensure addr is at least page-aligned since this is the finest
+ * granularity we can split to.
+ */
+ if (addr != PAGE_ALIGN(addr))
+ return -EINVAL;
+
+ mutex_lock(&pgtable_split_lock);
+ arch_enter_lazy_mmu_mode();
+
+ /*
+ * PGD: If addr is PGD aligned then addr already describes a leaf
+ * boundary. If not present then there is nothing to split.
+ */
+ if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
+ goto out;
+ pgdp = pgd_offset_k(addr);
+ pgd = pgdp_get(pgdp);
+ if (!pgd_present(pgd))
+ goto out;
+
+ /*
+ * P4D: If addr is P4D aligned then addr already describes a leaf
+ * boundary. If not present then there is nothing to split.
+ */
+ if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
+ goto out;
+ p4dp = p4d_offset(pgdp, addr);
+ p4d = p4dp_get(p4dp);
+ if (!p4d_present(p4d))
+ goto out;
+
+ /*
+ * PUD: If addr is PUD aligned then addr already describes a leaf
+ * boundary. If not present then there is nothing to split. Otherwise,
+ * if we have a pud leaf, split to contpmd.
+ */
+ if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
+ goto out;
+ pudp = pud_offset(p4dp, addr);
+ pud = pudp_get(pudp);
+ if (!pud_present(pud))
+ goto out;
+ if (pud_leaf(pud)) {
+ ret = split_pud(pudp, pud);
+ if (ret)
+ goto out;
+ }
+
+ /*
+ * CONTPMD: If addr is CONTPMD aligned then addr already describes a
+ * leaf boundary. If not present then there is nothing to split.
+ * Otherwise, if we have a contpmd leaf, split to pmd.
+ */
+ if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
+ goto out;
+ pmdp = pmd_offset(pudp, addr);
+ pmd = pmdp_get(pmdp);
+ if (!pmd_present(pmd))
+ goto out;
+ if (pmd_leaf(pmd)) {
+ if (pmd_cont(pmd))
+ split_contpmd(pmdp);
+ /*
+ * PMD: If addr is PMD aligned then addr already describes a
+ * leaf boundary. Otherwise, split to contpte.
+ */
+ if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
+ goto out;
+ ret = split_pmd(pmdp, pmd);
+ if (ret)
+ goto out;
+ }
+
+ /*
+ * CONTPTE: If addr is CONTPTE aligned then addr already describes a
+ * leaf boundary. If not present then there is nothing to split.
+ * Otherwise, if we have a contpte leaf, split to pte.
+ */
+ if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
+ goto out;
+ ptep = pte_offset_kernel(pmdp, addr);
+ pte = __ptep_get(ptep);
+ if (!pte_present(pte))
+ goto out;
+ if (pte_cont(pte))
+ split_contpte(ptep);
+
+out:
+ arch_leave_lazy_mmu_mode();
+ mutex_unlock(&pgtable_split_lock);
+ return ret;
}
/*
@@ -640,6 +857,16 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
#endif /* CONFIG_KFENCE */
+static inline bool force_pte_mapping(void)
+{
+ bool bbml2 = system_capabilities_finalized() ?
+ system_supports_bbml2_noabort() : bbml2_noabort_available();
+
+ return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
+ is_realm_world())) ||
+ debug_pagealloc_enabled();
+}
+
static void __init map_mem(pgd_t *pgdp)
{
static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
@@ -665,7 +892,7 @@ static void __init map_mem(pgd_t *pgdp)
early_kfence_pool = arm64_kfence_alloc_pool();
- if (can_set_direct_map())
+ if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
/*
@@ -1367,7 +1594,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
VM_BUG_ON(!mhp_range_allowed(start, size, true));
- if (can_set_direct_map())
+ if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index c6a85000fa0e..6a8eefc16dbc 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -140,6 +140,12 @@ static int update_range_prot(unsigned long start, unsigned long size,
data.set_mask = set_mask;
data.clear_mask = clear_mask;
+ ret = split_kernel_leaf_mapping(start);
+ if (!ret)
+ ret = split_kernel_leaf_mapping(start + size);
+ if (WARN_ON_ONCE(ret))
+ return ret;
+
arch_enter_lazy_mmu_mode();
/*
--
2.43.0
* [RFC PATCH v6 4/4] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs
2025-08-05 8:13 [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
` (2 preceding siblings ...)
2025-08-05 8:13 ` [RFC PATCH v6 3/4] arm64: mm: support large block mapping when rodata=full Ryan Roberts
@ 2025-08-05 8:13 ` Ryan Roberts
2025-08-05 18:14 ` Yang Shi
2025-08-05 8:16 ` [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
2025-08-05 18:37 ` Yang Shi
5 siblings, 1 reply; 22+ messages in thread
From: Ryan Roberts @ 2025-08-05 8:13 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
From: Yang Shi <yang@os.amperecomputing.com>
The kernel linear mapping is painted at a very early stage of system boot,
before the cpufeatures have been finalized, so the linear mapping is
determined by the capability of the boot CPU only. If the boot CPU
supports BBML2, large block mappings will be used for the linear
mapping.
But the secondary CPUs may not support BBML2, so once cpufeatures are
finalized on all CPUs, repaint the linear mapping if large block
mappings are in use and the secondary CPUs don't support BBML2.
If the boot CPU doesn't support BBML2, or the secondary CPUs have the
same BBML2 capability as the boot CPU, repainting the linear mapping
is not needed.
Repainting is implemented by the boot CPU, which we know supports BBML2,
so it is safe for the live mapping size to change for this CPU. The
linear map region is walked using the pagewalk API and any discovered
large leaf mappings are split to pte mappings using the existing helper
functions. Since the repainting is performed inside of a stop_machine(),
we must use GFP_ATOMIC to allocate the extra intermediate pgtables. But
since we are still early in boot, it is expected that there is plenty of
memory available so we will never need to sleep for reclaim, and so
GFP_ATOMIC is acceptable here.
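For reference, the trigger added to cpufeature.c in this patch reduces to:
	/*
	 * Only repaint when the linear map was created with large mappings
	 * (the boot CPU advertised BBML2) but the finalized system-wide
	 * capability ended up without BBML2_NOABORT.
	 */
	if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort())
		stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);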
The secondary CPUs are all put into a waiting area with the idmap in
TTBR0 and reserved map in TTBR1 while this is performed since they
cannot be allowed to observe any size changes on the live mappings.
Co-developed-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
arch/arm64/include/asm/mmu.h | 3 +
arch/arm64/kernel/cpufeature.c | 8 ++
arch/arm64/mm/mmu.c | 151 ++++++++++++++++++++++++++++++---
arch/arm64/mm/proc.S | 25 +++++-
4 files changed, 172 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 98565b1b93e8..966c08fd8126 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -56,6 +56,8 @@ typedef struct {
*/
#define ASID(mm) (atomic64_read(&(mm)->context.id) & 0xffff)
+extern bool linear_map_requires_bbml2;
+
static inline bool arm64_kernel_unmapped_at_el0(void)
{
return alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0);
@@ -72,6 +74,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
extern void mark_linear_text_alias_ro(void);
extern int split_kernel_leaf_mapping(unsigned long addr);
+extern int linear_map_split_to_ptes(void *__unused);
/*
* This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index f28f056087f3..11392c741e48 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -85,6 +85,7 @@
#include <asm/insn.h>
#include <asm/kvm_host.h>
#include <asm/mmu_context.h>
+#include <asm/mmu.h>
#include <asm/mte.h>
#include <asm/hypervisor.h>
#include <asm/processor.h>
@@ -2013,6 +2014,12 @@ static int __init __kpti_install_ng_mappings(void *__unused)
return 0;
}
+static void __init linear_map_maybe_split_to_ptes(void)
+{
+ if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort())
+ stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);
+}
+
static void __init kpti_install_ng_mappings(void)
{
/* Check whether KPTI is going to be used */
@@ -3930,6 +3937,7 @@ void __init setup_system_features(void)
{
setup_system_capabilities();
+ linear_map_maybe_split_to_ptes();
kpti_install_ng_mappings();
sve_setup();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f6cd79287024..5b5a84b34024 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -27,6 +27,7 @@
#include <linux/kfence.h>
#include <linux/pkeys.h>
#include <linux/mm_inline.h>
+#include <linux/pagewalk.h>
#include <asm/barrier.h>
#include <asm/cputype.h>
@@ -483,11 +484,11 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
#define INVALID_PHYS_ADDR -1
-static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
+static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm, gfp_t gfp,
enum pgtable_type pgtable_type)
{
/* Page is zeroed by init_clear_pgtable() so don't duplicate effort. */
- struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
+ struct ptdesc *ptdesc = pagetable_alloc(gfp & ~__GFP_ZERO, 0);
phys_addr_t pa;
if (!ptdesc)
@@ -514,9 +515,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
}
static phys_addr_t
-try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type, gfp_t gfp)
{
- return __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ return __pgd_pgtable_alloc(&init_mm, gfp, pgtable_type);
}
static phys_addr_t __maybe_unused
@@ -524,7 +525,7 @@ pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
{
phys_addr_t pa;
- pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ pa = __pgd_pgtable_alloc(&init_mm, GFP_PGTABLE_KERNEL, pgtable_type);
BUG_ON(pa == INVALID_PHYS_ADDR);
return pa;
}
@@ -534,7 +535,7 @@ pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
{
phys_addr_t pa;
- pa = __pgd_pgtable_alloc(NULL, pgtable_type);
+ pa = __pgd_pgtable_alloc(NULL, GFP_PGTABLE_KERNEL, pgtable_type);
BUG_ON(pa == INVALID_PHYS_ADDR);
return pa;
}
@@ -548,7 +549,7 @@ static void split_contpte(pte_t *ptep)
__set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
}
-static int split_pmd(pmd_t *pmdp, pmd_t pmd)
+static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
{
pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
unsigned long pfn = pmd_pfn(pmd);
@@ -557,7 +558,7 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd)
pte_t *ptep;
int i;
- pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
+ pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE, gfp);
if (pte_phys == INVALID_PHYS_ADDR)
return -ENOMEM;
ptep = (pte_t *)phys_to_virt(pte_phys);
@@ -590,7 +591,7 @@ static void split_contpmd(pmd_t *pmdp)
set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
}
-static int split_pud(pud_t *pudp, pud_t pud)
+static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
{
pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
unsigned int step = PMD_SIZE >> PAGE_SHIFT;
@@ -600,7 +601,7 @@ static int split_pud(pud_t *pudp, pud_t pud)
pmd_t *pmdp;
int i;
- pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
+ pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD, gfp);
if (pmd_phys == INVALID_PHYS_ADDR)
return -ENOMEM;
pmdp = (pmd_t *)phys_to_virt(pmd_phys);
@@ -688,7 +689,7 @@ int split_kernel_leaf_mapping(unsigned long addr)
if (!pud_present(pud))
goto out;
if (pud_leaf(pud)) {
- ret = split_pud(pudp, pud);
+ ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL);
if (ret)
goto out;
}
@@ -713,7 +714,7 @@ int split_kernel_leaf_mapping(unsigned long addr)
*/
if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
goto out;
- ret = split_pmd(pmdp, pmd);
+ ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL);
if (ret)
goto out;
}
@@ -738,6 +739,112 @@ int split_kernel_leaf_mapping(unsigned long addr)
return ret;
}
+static int split_to_ptes_pud_entry(pud_t *pudp, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pud_t pud = pudp_get(pudp);
+ int ret = 0;
+
+ if (pud_leaf(pud))
+ ret = split_pud(pudp, pud, GFP_ATOMIC);
+
+ return ret;
+}
+
+static int split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pmd_t pmd = pmdp_get(pmdp);
+ int ret = 0;
+
+ if (pmd_leaf(pmd)) {
+ if (pmd_cont(pmd))
+ split_contpmd(pmdp);
+ ret = split_pmd(pmdp, pmd, GFP_ATOMIC);
+ }
+
+ return ret;
+}
+
+static int split_to_ptes_pte_entry(pte_t *ptep, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pte_t pte = __ptep_get(ptep);
+
+ if (pte_cont(pte))
+ split_contpte(ptep);
+
+ return 0;
+}
+
+static const struct mm_walk_ops split_to_ptes_ops = {
+ .pud_entry = split_to_ptes_pud_entry,
+ .pmd_entry = split_to_ptes_pmd_entry,
+ .pte_entry = split_to_ptes_pte_entry,
+};
+
+extern u32 repaint_done;
+
+int __init linear_map_split_to_ptes(void *__unused)
+{
+ /*
+ * Repainting the linear map must be done by CPU0 (the boot CPU) because
+ * that's the only CPU that we know supports BBML2. The other CPUs will
+ * be held in a waiting area with the idmap active.
+ */
+ if (!smp_processor_id()) {
+ unsigned long lstart = _PAGE_OFFSET(vabits_actual);
+ unsigned long lend = PAGE_END;
+ unsigned long kstart = (unsigned long)lm_alias(_stext);
+ unsigned long kend = (unsigned long)lm_alias(__init_begin);
+ int ret;
+
+ /*
+ * Wait for all secondary CPUs to be put into the waiting area.
+ */
+ smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());
+
+ /*
+ * Walk all of the linear map [lstart, lend), except the kernel
+ * linear map alias [kstart, kend), and split all mappings to
+ * PTE. The kernel alias remains static throughout runtime so
+ * can continue to be safely mapped with large mappings.
+ */
+ ret = walk_kernel_page_table_range_lockless(lstart, kstart,
+ &split_to_ptes_ops, NULL);
+ if (!ret)
+ ret = walk_kernel_page_table_range_lockless(kend, lend,
+ &split_to_ptes_ops, NULL);
+ if (ret)
+ panic("Failed to split linear map\n");
+ flush_tlb_kernel_range(lstart, lend);
+
+ /*
+ * Relies on dsb in flush_tlb_kernel_range() to avoid reordering
+ * before any page table split operations.
+ */
+ WRITE_ONCE(repaint_done, 0);
+ } else {
+ typedef void (repaint_wait_fn)(void);
+ extern repaint_wait_fn bbml2_wait_for_repainting;
+ repaint_wait_fn *wait_fn;
+
+ wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
+
+ /*
+ * At least one secondary CPU doesn't support BBML2 so cannot
+ * tolerate the size of the live mappings changing. So have the
+ * secondary CPUs wait for the boot CPU to make the changes
+ * with the idmap active and init_mm inactive.
+ */
+ cpu_install_idmap();
+ wait_fn();
+ cpu_uninstall_idmap();
+ }
+
+ return 0;
+}
+
/*
* This function can only be used to modify existing table entries,
* without allocating new levels of table. Note that this permits the
@@ -857,6 +964,8 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
#endif /* CONFIG_KFENCE */
+bool linear_map_requires_bbml2;
+
static inline bool force_pte_mapping(void)
{
bool bbml2 = system_capabilities_finalized() ?
@@ -892,6 +1001,8 @@ static void __init map_mem(pgd_t *pgdp)
early_kfence_pool = arm64_kfence_alloc_pool();
+ linear_map_requires_bbml2 = !force_pte_mapping() && can_set_direct_map();
+
if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
@@ -1025,7 +1136,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
- kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
+ kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
+ bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
static void __init create_idmap(void)
{
@@ -1050,6 +1162,19 @@ static void __init create_idmap(void)
IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
__phys_to_virt(ptep) - ptep);
}
+
+ /*
+ * Setup idmap mapping for repaint_done flag. It will be used if
+ * repainting the linear mapping is needed later.
+ */
+ if (linear_map_requires_bbml2) {
+ u64 pa = __pa_symbol(&repaint_done);
+
+ ptep = __pa_symbol(bbml2_ptes);
+ __pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
+ IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
+ __phys_to_virt(ptep) - ptep);
+ }
}
void __init paging_init(void)
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 8c75965afc9e..dbaac2e824d7 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -416,7 +416,29 @@ alternative_else_nop_endif
__idmap_kpti_secondary:
/* Uninstall swapper before surgery begins */
__idmap_cpu_set_reserved_ttbr1 x16, x17
+ b secondary_cpu_wait
+ .unreq swapper_ttb
+ .unreq flag_ptr
+SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+ .popsection
+#endif
+
+ .pushsection ".data", "aw", %progbits
+SYM_DATA(repaint_done, .long 1)
+ .popsection
+
+ .pushsection ".idmap.text", "a"
+SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
+ /* Must be same registers as in idmap_kpti_install_ng_mappings */
+ swapper_ttb .req x3
+ flag_ptr .req x4
+
+ mrs swapper_ttb, ttbr1_el1
+ adr_l flag_ptr, repaint_done
+ __idmap_cpu_set_reserved_ttbr1 x16, x17
+
+secondary_cpu_wait:
/* Increment the flag to let the boot CPU we're ready */
1: ldxr w16, [flag_ptr]
add w16, w16, #1
@@ -436,9 +458,8 @@ __idmap_kpti_secondary:
.unreq swapper_ttb
.unreq flag_ptr
-SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+SYM_FUNC_END(bbml2_wait_for_repainting)
.popsection
-#endif
/*
* __cpu_setup
--
2.43.0
* Re: [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-08-05 8:13 [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
` (3 preceding siblings ...)
2025-08-05 8:13 ` [RFC PATCH v6 4/4] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs Ryan Roberts
@ 2025-08-05 8:16 ` Ryan Roberts
2025-08-05 14:39 ` Catalin Marinas
2025-08-05 18:37 ` Yang Shi
5 siblings, 1 reply; 22+ messages in thread
From: Ryan Roberts @ 2025-08-05 8:16 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 05/08/2025 09:13, Ryan Roberts wrote:
> Hi All,
>
> This is a new version built on top of Yang Shi's work at [1]. Yang and I have
> been discussing (disagreeing?) about the best way to implement the last 2
> patches. So I've reworked them and am posting as RFC to illustrate how I think
> this feature should be implemented, but I've retained Yang as primary author
> since it is all based on his work. I'd appreciate feedback from Catalin and/or
> Will on whether this is the right approach, so that hopefully we can get this
> into shape for 6.18.
I forgot to mention that it applies on Linus's current master (it depends upon
mm and arm64 changes that will first appear in v6.17-rc1 and are already merged
in master). I'm using 89748acdf226 as the base.
> [...]
* Re: [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-08-05 8:16 ` [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
@ 2025-08-05 14:39 ` Catalin Marinas
2025-08-05 14:52 ` Ryan Roberts
0 siblings, 1 reply; 22+ messages in thread
From: Catalin Marinas @ 2025-08-05 14:39 UTC (permalink / raw)
To: Ryan Roberts
Cc: Yang Shi, will, akpm, Miko.Lenczewski, dev.jain, scott, cl,
linux-arm-kernel, linux-kernel
On Tue, Aug 05, 2025 at 09:16:31AM +0100, Ryan Roberts wrote:
> On 05/08/2025 09:13, Ryan Roberts wrote:
> > This is a new version built on top of Yang Shi's work at [1]. Yang and I have
> > been discussing (disagreeing?) about the best way to implement the last 2
> > patches. So I've reworked them and am posting as RFC to illustrate how I think
> > this feature should be implemented, but I've retained Yang as primary author
> > since it is all based on his work. I'd appreciate feedback from Catalin and/or
> > Will on whether this is the right approach, so that hopefully we can get this
> > into shape for 6.18.
>
> I forgot to mention that it applies on Linus's current master (it depends upon
> mm and arm64 changes that will first appear in v6.17-rc1 and are already merged
> in master). I'm using 89748acdf226 as the base.
It's fine as an RFC but, for upstream, please rebase on top of -rc1
rather than a random commit in the middle of the merge window. Also
note that many maintainers ignore new series posted during the merge
window.
--
Catalin
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-08-05 14:39 ` Catalin Marinas
@ 2025-08-05 14:52 ` Ryan Roberts
0 siblings, 0 replies; 22+ messages in thread
From: Ryan Roberts @ 2025-08-05 14:52 UTC (permalink / raw)
To: Catalin Marinas
Cc: Yang Shi, will, akpm, Miko.Lenczewski, dev.jain, scott, cl,
linux-arm-kernel, linux-kernel
On 05/08/2025 15:39, Catalin Marinas wrote:
> On Tue, Aug 05, 2025 at 09:16:31AM +0100, Ryan Roberts wrote:
>> On 05/08/2025 09:13, Ryan Roberts wrote:
>>> This is a new version built on top of Yang Shi's work at [1]. Yang and I have
>>> been discussing (disagreeing?) about the best way to implement the last 2
>>> patches. So I've reworked them and am posting as RFC to illustrate how I think
>>> this feature should be implemented, but I've retained Yang as primary author
>>> since it is all based on his work. I'd appreciate feedback from Catalin and/or
>>> Will on whether this is the right approach, so that hopefully we can get this
>>> into shape for 6.18.
>>
>> I forgot to mention that it applies on Linus's current master (it depends upon
>> mm and arm64 changes that will first appear in v6.17-rc1 and are already merged
>> in master). I'm using 89748acdf226 as the base.
>
> It's fine as an RFC but, for upstream, please rebase on top of -rc1
> rather than a random commit in the middle of the merging window. Also
> note that many maintainers ignore new series posted during the merging
> window.
Yeah, understood - I'm going to be out from Saturday for 2 weeks, so I thought
it was better to post an RFC now in the hope of getting some feedback, so that
I can repost against -rc3 when I'm back and still have a chance of getting it
into v6.18.
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [RFC PATCH v6 3/4] arm64: mm: support large block mapping when rodata=full
2025-08-05 8:13 ` [RFC PATCH v6 3/4] arm64: mm: support large block mapping when rodata=full Ryan Roberts
@ 2025-08-05 17:59 ` Yang Shi
2025-08-06 7:57 ` Ryan Roberts
2025-08-28 17:09 ` Catalin Marinas
1 sibling, 1 reply; 22+ messages in thread
From: Yang Shi @ 2025-08-05 17:59 UTC (permalink / raw)
To: Ryan Roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
dev.jain, scott, cl
Cc: linux-arm-kernel, linux-kernel
On 8/5/25 1:13 AM, Ryan Roberts wrote:
> From: Yang Shi <yang@os.amperecomputing.com>
>
> When rodata=full is specified, the kernel linear mapping has to be mapped
> at PTE level since large block mappings can't be split, due to the
> break-before-make rule on ARM64.
>
> This resulted in a number of problems:
> - performance degradation
> - more TLB pressure
> - memory waste for kernel page tables
>
> With FEAT_BBM level 2 support, splitting a large block mapping into
> smaller ones no longer requires making the page table entry invalid.
> This allows the kernel to split large block mappings on the fly.
>
> Add kernel page table split support and use large block mappings by
> default for rodata=full when FEAT_BBM level 2 is supported. When
> changing permissions on the kernel linear mapping, the page table will
> be split to a smaller size as required.
>
> Machines without FEAT_BBM level 2 will fall back to having the kernel
> linear mapping PTE-mapped when rodata=full.
>
> With this we saw a significant performance boost in some benchmarks and
> much lower memory consumption on my AmpereOne machine (192 cores, 1P)
> with 256GB of memory.
>
> * Memory use after boot
> Before:
> MemTotal: 258988984 kB
> MemFree: 254821700 kB
>
> After:
> MemTotal: 259505132 kB
> MemFree: 255410264 kB
>
> Around 500MB more memory is free to use. The larger the machine, the
> more memory is saved.
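>
> (For reference: pte-mapping a 256GB linear map with 4K pages needs roughly
> 256GB / 4KB * 8 bytes ~= 512MB of last-level page tables, which is broadly
> in line with the delta above.)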
>
> * Memcached
> We saw performance degradation when running the Memcached benchmark with
> rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
> With this patchset, ops/sec is increased by around 3.5% and P99
> latency is reduced by around 9.6%.
> The gain mainly came from reduced kernel TLB misses: the kernel TLB
> MPKI is reduced by 28.5%.
>
> The benchmark data is now on par with rodata=on too.
>
> * Disk encryption (dm-crypt) benchmark
> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with
> disk encryption (by dm-crypt).
> fio --directory=/data --random_generator=lfsr --norandommap \
> --randrepeat 1 --status-interval=999 --rw=write --bs=4k --loops=1 \
> --ioengine=sync --iodepth=1 --numjobs=1 --fsync_on_close=1 \
> --group_reporting --thread --name=iops-test-job --eta-newline=1 \
> --size 100G
>
> IOPS is increased by 90% - 150% (the variance is high, but the worst
> result of the good case is around 90% more than the best result of the
> bad case). The bandwidth is increased and the avg clat is reduced
> proportionally.
>
> * Sequential file read
> Read 100G file sequentially on XFS (xfs_io read with page cache
> populated). The bandwidth is increased by 150%.
>
> Co-developed-by: Ryan Roberts <ryan.roberts@arm.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
> arch/arm64/include/asm/cpufeature.h | 2 +
> arch/arm64/include/asm/mmu.h | 1 +
> arch/arm64/include/asm/pgtable.h | 5 +
> arch/arm64/kernel/cpufeature.c | 7 +-
> arch/arm64/mm/mmu.c | 237 +++++++++++++++++++++++++++-
> arch/arm64/mm/pageattr.c | 6 +
> 6 files changed, 252 insertions(+), 6 deletions(-)
>
[...]
> +
> +static DEFINE_MUTEX(pgtable_split_lock);
> +
> +int split_kernel_leaf_mapping(unsigned long addr)
> +{
> + pgd_t *pgdp, pgd;
> + p4d_t *p4dp, p4d;
> + pud_t *pudp, pud;
> + pmd_t *pmdp, pmd;
> + pte_t *ptep, pte;
> + int ret = 0;
> +
> + /*
> + * !BBML2_NOABORT systems should not be trying to change permissions on
> + * anything that is not pte-mapped in the first place. Just return early
> + * and let the permission change code raise a warning if not already
> + * pte-mapped.
> + */
> + if (!system_supports_bbml2_noabort())
> + return 0;
> +
> + /*
> + * Ensure addr is at least page-aligned since this is the finest
> + * granularity we can split to.
> + */
> + if (addr != PAGE_ALIGN(addr))
> + return -EINVAL;
> +
> + mutex_lock(&pgtable_split_lock);
> + arch_enter_lazy_mmu_mode();
> +
> + /*
> + * PGD: If addr is PGD aligned then addr already describes a leaf
> + * boundary. If not present then there is nothing to split.
> + */
> + if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
> + goto out;
> + pgdp = pgd_offset_k(addr);
> + pgd = pgdp_get(pgdp);
> + if (!pgd_present(pgd))
> + goto out;
> +
> + /*
> + * P4D: If addr is P4D aligned then addr already describes a leaf
> + * boundary. If not present then there is nothing to split.
> + */
> + if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
> + goto out;
> + p4dp = p4d_offset(pgdp, addr);
> + p4d = p4dp_get(p4dp);
> + if (!p4d_present(p4d))
> + goto out;
> +
> + /*
> + * PUD: If addr is PUD aligned then addr already describes a leaf
> + * boundary. If not present then there is nothing to split. Otherwise,
> + * if we have a pud leaf, split to contpmd.
> + */
> + if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
> + goto out;
> + pudp = pud_offset(p4dp, addr);
> + pud = pudp_get(pudp);
> + if (!pud_present(pud))
> + goto out;
> + if (pud_leaf(pud)) {
> + ret = split_pud(pudp, pud);
> + if (ret)
> + goto out;
> + }
> +
> + /*
> + * CONTPMD: If addr is CONTPMD aligned then addr already describes a
> + * leaf boundary. If not present then there is nothing to split.
> + * Otherwise, if we have a contpmd leaf, split to pmd.
> + */
> + if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
> + goto out;
> + pmdp = pmd_offset(pudp, addr);
> + pmd = pmdp_get(pmdp);
> + if (!pmd_present(pmd))
> + goto out;
> + if (pmd_leaf(pmd)) {
> + if (pmd_cont(pmd))
> + split_contpmd(pmdp);
> + /*
> + * PMD: If addr is PMD aligned then addr already describes a
> + * leaf boundary. Otherwise, split to contpte.
> + */
> + if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
> + goto out;
> + ret = split_pmd(pmdp, pmd);
> + if (ret)
> + goto out;
> + }
> +
> + /*
> + * CONTPTE: If addr is CONTPTE aligned then addr already describes a
> + * leaf boundary. If not present then there is nothing to split.
> + * Otherwise, if we have a contpte leaf, split to pte.
> + */
> + if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
> + goto out;
> + ptep = pte_offset_kernel(pmdp, addr);
> + pte = __ptep_get(ptep);
> + if (!pte_present(pte))
> + goto out;
> + if (pte_cont(pte))
> + split_contpte(ptep);
> +
> +out:
> + arch_leave_lazy_mmu_mode();
> + mutex_unlock(&pgtable_split_lock);
> + return ret;
> }
>
> /*
> @@ -640,6 +857,16 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>
> #endif /* CONFIG_KFENCE */
>
> +static inline bool force_pte_mapping(void)
> +{
> + bool bbml2 = system_capabilities_finalized() ?
> + system_supports_bbml2_noabort() : bbml2_noabort_available();
> +
> + return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
> + is_realm_world())) ||
> + debug_pagealloc_enabled();
> +}
> +
> static void __init map_mem(pgd_t *pgdp)
> {
> static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
> @@ -665,7 +892,7 @@ static void __init map_mem(pgd_t *pgdp)
>
> early_kfence_pool = arm64_kfence_alloc_pool();
>
> - if (can_set_direct_map())
> + if (force_pte_mapping())
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
> /*
> @@ -1367,7 +1594,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>
> VM_BUG_ON(!mhp_range_allowed(start, size, true));
>
> - if (can_set_direct_map())
> + if (force_pte_mapping())
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
> __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index c6a85000fa0e..6a8eefc16dbc 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -140,6 +140,12 @@ static int update_range_prot(unsigned long start, unsigned long size,
> data.set_mask = set_mask;
> data.clear_mask = clear_mask;
>
> + ret = split_kernel_leaf_mapping(start);
> + if (!ret)
> + ret = split_kernel_leaf_mapping(start + size);
> + if (WARN_ON_ONCE(ret))
> + return ret;
This means we take the mutex lock twice and enter/leave lazy mmu twice too. So
how about:
mutex_lock()
enter lazy mmu
split_mapping(start)
split_mapping(end)
leave lazy mmu
mutex_unlock()
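i.e. something like this in update_range_prot() (untested sketch, assuming
split_kernel_leaf_mapping() itself no longer takes the lock or enters lazy
mmu mode):

	mutex_lock(&pgtable_split_lock);
	arch_enter_lazy_mmu_mode();

	ret = split_kernel_leaf_mapping(start);
	if (!ret)
		ret = split_kernel_leaf_mapping(start + size);

	arch_leave_lazy_mmu_mode();
	mutex_unlock(&pgtable_split_lock);

	if (WARN_ON_ONCE(ret))
		return ret;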
Thanks,
Yang
> +
> arch_enter_lazy_mmu_mode();
>
> /*
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [RFC PATCH v6 4/4] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs
2025-08-05 8:13 ` [RFC PATCH v6 4/4] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs Ryan Roberts
@ 2025-08-05 18:14 ` Yang Shi
2025-08-06 8:15 ` Ryan Roberts
0 siblings, 1 reply; 22+ messages in thread
From: Yang Shi @ 2025-08-05 18:14 UTC (permalink / raw)
To: Ryan Roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
dev.jain, scott, cl
Cc: linux-arm-kernel, linux-kernel
On 8/5/25 1:13 AM, Ryan Roberts wrote:
> From: Yang Shi <yang@os.amperecomputing.com>
>
> The kernel linear mapping is painted at a very early stage of system boot,
> before the cpufeatures have been finalized. So the linear mapping is
> determined by the capability of the boot CPU only. If the boot CPU
> supports BBML2, large block mappings will be used for the linear
> mapping.
>
> But the secondary CPUs may not support BBML2, so repaint the linear
> mapping if large block mappings are used and the secondary CPUs don't
> support BBML2, once cpufeatures have been finalized on all CPUs.
>
> If the boot CPU doesn't support BBML2, or the secondary CPUs have the
> same BBML2 capability as the boot CPU, repainting the linear mapping
> is not needed.
>
> Repainting is implemented by the boot CPU, which we know supports BBML2,
> so it is safe for the live mapping size to change for this CPU. The
> linear map region is walked using the pagewalk API and any discovered
> large leaf mappings are split to pte mappings using the existing helper
> functions. Since the repainting is performed inside of a stop_machine(),
> we must use GFP_ATOMIC to allocate the extra intermediate pgtables. But
> since we are still early in boot, it is expected that there is plenty of
> memory available so we will never need to sleep for reclaim, and so
> GFP_ATOMIC is acceptable here.
>
> The secondary CPUs are all put into a waiting area with the idmap in
> TTBR0 and reserved map in TTBR1 while this is performed since they
> cannot be allowed to observe any size changes on the live mappings.
>
> Co-developed-by: Ryan Roberts <ryan.roberts@arm.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
> arch/arm64/include/asm/mmu.h | 3 +
> arch/arm64/kernel/cpufeature.c | 8 ++
> arch/arm64/mm/mmu.c | 151 ++++++++++++++++++++++++++++++---
> arch/arm64/mm/proc.S | 25 +++++-
> 4 files changed, 172 insertions(+), 15 deletions(-)
>
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 98565b1b93e8..966c08fd8126 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -56,6 +56,8 @@ typedef struct {
> */
> #define ASID(mm) (atomic64_read(&(mm)->context.id) & 0xffff)
>
> +extern bool linear_map_requires_bbml2;
> +
> static inline bool arm64_kernel_unmapped_at_el0(void)
> {
> return alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0);
> @@ -72,6 +74,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
> extern void mark_linear_text_alias_ro(void);
> extern int split_kernel_leaf_mapping(unsigned long addr);
> +extern int linear_map_split_to_ptes(void *__unused);
>
> /*
> * This check is triggered during the early boot before the cpufeature
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index f28f056087f3..11392c741e48 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -85,6 +85,7 @@
> #include <asm/insn.h>
> #include <asm/kvm_host.h>
> #include <asm/mmu_context.h>
> +#include <asm/mmu.h>
> #include <asm/mte.h>
> #include <asm/hypervisor.h>
> #include <asm/processor.h>
> @@ -2013,6 +2014,12 @@ static int __init __kpti_install_ng_mappings(void *__unused)
> return 0;
> }
>
> +static void __init linear_map_maybe_split_to_ptes(void)
> +{
> + if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort())
> + stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);
> +}
> +
> static void __init kpti_install_ng_mappings(void)
> {
> /* Check whether KPTI is going to be used */
> @@ -3930,6 +3937,7 @@ void __init setup_system_features(void)
> {
> setup_system_capabilities();
>
> + linear_map_maybe_split_to_ptes();
> kpti_install_ng_mappings();
>
> sve_setup();
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index f6cd79287024..5b5a84b34024 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -27,6 +27,7 @@
> #include <linux/kfence.h>
> #include <linux/pkeys.h>
> #include <linux/mm_inline.h>
> +#include <linux/pagewalk.h>
>
> #include <asm/barrier.h>
> #include <asm/cputype.h>
> @@ -483,11 +484,11 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>
> #define INVALID_PHYS_ADDR -1
>
> -static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
> +static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm, gfp_t gfp,
> enum pgtable_type pgtable_type)
> {
> /* Page is zeroed by init_clear_pgtable() so don't duplicate effort. */
> - struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
> + struct ptdesc *ptdesc = pagetable_alloc(gfp & ~__GFP_ZERO, 0);
> phys_addr_t pa;
>
> if (!ptdesc)
> @@ -514,9 +515,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
> }
>
> static phys_addr_t
> -try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type, gfp_t gfp)
> {
> - return __pgd_pgtable_alloc(&init_mm, pgtable_type);
> + return __pgd_pgtable_alloc(&init_mm, gfp, pgtable_type);
> }
>
> static phys_addr_t __maybe_unused
> @@ -524,7 +525,7 @@ pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> {
> phys_addr_t pa;
>
> - pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
> + pa = __pgd_pgtable_alloc(&init_mm, GFP_PGTABLE_KERNEL, pgtable_type);
> BUG_ON(pa == INVALID_PHYS_ADDR);
> return pa;
> }
> @@ -534,7 +535,7 @@ pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
> {
> phys_addr_t pa;
>
> - pa = __pgd_pgtable_alloc(NULL, pgtable_type);
> + pa = __pgd_pgtable_alloc(NULL, GFP_PGTABLE_KERNEL, pgtable_type);
> BUG_ON(pa == INVALID_PHYS_ADDR);
> return pa;
> }
> @@ -548,7 +549,7 @@ static void split_contpte(pte_t *ptep)
> __set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
> }
>
> -static int split_pmd(pmd_t *pmdp, pmd_t pmd)
> +static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
> {
> pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
> unsigned long pfn = pmd_pfn(pmd);
> @@ -557,7 +558,7 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd)
> pte_t *ptep;
> int i;
>
> - pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
> + pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE, gfp);
> if (pte_phys == INVALID_PHYS_ADDR)
> return -ENOMEM;
> ptep = (pte_t *)phys_to_virt(pte_phys);
> @@ -590,7 +591,7 @@ static void split_contpmd(pmd_t *pmdp)
> set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
> }
>
> -static int split_pud(pud_t *pudp, pud_t pud)
> +static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
> {
> pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
> unsigned int step = PMD_SIZE >> PAGE_SHIFT;
> @@ -600,7 +601,7 @@ static int split_pud(pud_t *pudp, pud_t pud)
> pmd_t *pmdp;
> int i;
>
> - pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
> + pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD, gfp);
> if (pmd_phys == INVALID_PHYS_ADDR)
> return -ENOMEM;
> pmdp = (pmd_t *)phys_to_virt(pmd_phys);
> @@ -688,7 +689,7 @@ int split_kernel_leaf_mapping(unsigned long addr)
> if (!pud_present(pud))
> goto out;
> if (pud_leaf(pud)) {
> - ret = split_pud(pudp, pud);
> + ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL);
> if (ret)
> goto out;
> }
> @@ -713,7 +714,7 @@ int split_kernel_leaf_mapping(unsigned long addr)
> */
> if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
> goto out;
> - ret = split_pmd(pmdp, pmd);
> + ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL);
> if (ret)
> goto out;
> }
> @@ -738,6 +739,112 @@ int split_kernel_leaf_mapping(unsigned long addr)
> return ret;
> }
>
> +static int split_to_ptes_pud_entry(pud_t *pudp, unsigned long addr,
> + unsigned long next, struct mm_walk *walk)
> +{
> + pud_t pud = pudp_get(pudp);
> + int ret = 0;
> +
> + if (pud_leaf(pud))
> + ret = split_pud(pudp, pud, GFP_ATOMIC);
> +
> + return ret;
> +}
> +
> +static int split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
> + unsigned long next, struct mm_walk *walk)
> +{
> + pmd_t pmd = pmdp_get(pmdp);
> + int ret = 0;
> +
> + if (pmd_leaf(pmd)) {
> + if (pmd_cont(pmd))
> + split_contpmd(pmdp);
> + ret = split_pmd(pmdp, pmd, GFP_ATOMIC);
> + }
> +
> + return ret;
> +}
> +
> +static int split_to_ptes_pte_entry(pte_t *ptep, unsigned long addr,
> + unsigned long next, struct mm_walk *walk)
> +{
> + pte_t pte = __ptep_get(ptep);
> +
> + if (pte_cont(pte))
> + split_contpte(ptep);
> +
> + return 0;
> +}
IIUC the pgtable walker API walks the page table PTE by PTE, so the split
handler is called on every PTE even though the mapping has already been split.
This is not very efficient, but it may be OK since repainting is only called
once at boot time.
Thanks,
Yang
> +
> +static const struct mm_walk_ops split_to_ptes_ops = {
> + .pud_entry = split_to_ptes_pud_entry,
> + .pmd_entry = split_to_ptes_pmd_entry,
> + .pte_entry = split_to_ptes_pte_entry,
> +};
> +
> +extern u32 repaint_done;
> +
> +int __init linear_map_split_to_ptes(void *__unused)
> +{
> + /*
> + * Repainting the linear map must be done by CPU0 (the boot CPU) because
> + * that's the only CPU that we know supports BBML2. The other CPUs will
> + * be held in a waiting area with the idmap active.
> + */
> + if (!smp_processor_id()) {
> + unsigned long lstart = _PAGE_OFFSET(vabits_actual);
> + unsigned long lend = PAGE_END;
> + unsigned long kstart = (unsigned long)lm_alias(_stext);
> + unsigned long kend = (unsigned long)lm_alias(__init_begin);
> + int ret;
> +
> + /*
> + * Wait for all secondary CPUs to be put into the waiting area.
> + */
> + smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());
> +
> + /*
> + * Walk all of the linear map [lstart, lend), except the kernel
> + * linear map alias [kstart, kend), and split all mappings to
> + * PTE. The kernel alias remains static throughout runtime so
> + * can continue to be safely mapped with large mappings.
> + */
> + ret = walk_kernel_page_table_range_lockless(lstart, kstart,
> + &split_to_ptes_ops, NULL);
> + if (!ret)
> + ret = walk_kernel_page_table_range_lockless(kend, lend,
> + &split_to_ptes_ops, NULL);
> + if (ret)
> + panic("Failed to split linear map\n");
> + flush_tlb_kernel_range(lstart, lend);
> +
> + /*
> + * Relies on dsb in flush_tlb_kernel_range() to avoid reordering
> + * before any page table split operations.
> + */
> + WRITE_ONCE(repaint_done, 0);
> + } else {
> + typedef void (repaint_wait_fn)(void);
> + extern repaint_wait_fn bbml2_wait_for_repainting;
> + repaint_wait_fn *wait_fn;
> +
> + wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
> +
> + /*
> + * At least one secondary CPU doesn't support BBML2 so cannot
> + * tolerate the size of the live mappings changing. So have the
> + * secondary CPUs wait for the boot CPU to make the changes
> + * with the idmap active and init_mm inactive.
> + */
> + cpu_install_idmap();
> + wait_fn();
> + cpu_uninstall_idmap();
> + }
> +
> + return 0;
> +}
> +
> /*
> * This function can only be used to modify existing table entries,
> * without allocating new levels of table. Note that this permits the
> @@ -857,6 +964,8 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>
> #endif /* CONFIG_KFENCE */
>
> +bool linear_map_requires_bbml2;
> +
> static inline bool force_pte_mapping(void)
> {
> bool bbml2 = system_capabilities_finalized() ?
> @@ -892,6 +1001,8 @@ static void __init map_mem(pgd_t *pgdp)
>
> early_kfence_pool = arm64_kfence_alloc_pool();
>
> + linear_map_requires_bbml2 = !force_pte_mapping() && can_set_direct_map();
> +
> if (force_pte_mapping())
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
> @@ -1025,7 +1136,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
> int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
>
> static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
> - kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
> + kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
> + bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
>
> static void __init create_idmap(void)
> {
> @@ -1050,6 +1162,19 @@ static void __init create_idmap(void)
> IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
> __phys_to_virt(ptep) - ptep);
> }
> +
> + /*
> + * Setup idmap mapping for repaint_done flag. It will be used if
> + * repainting the linear mapping is needed later.
> + */
> + if (linear_map_requires_bbml2) {
> + u64 pa = __pa_symbol(&repaint_done);
> +
> + ptep = __pa_symbol(bbml2_ptes);
> + __pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
> + IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
> + __phys_to_virt(ptep) - ptep);
> + }
> }
>
> void __init paging_init(void)
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 8c75965afc9e..dbaac2e824d7 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -416,7 +416,29 @@ alternative_else_nop_endif
> __idmap_kpti_secondary:
> /* Uninstall swapper before surgery begins */
> __idmap_cpu_set_reserved_ttbr1 x16, x17
> + b secondary_cpu_wait
>
> + .unreq swapper_ttb
> + .unreq flag_ptr
> +SYM_FUNC_END(idmap_kpti_install_ng_mappings)
> + .popsection
> +#endif
> +
> + .pushsection ".data", "aw", %progbits
> +SYM_DATA(repaint_done, .long 1)
> + .popsection
> +
> + .pushsection ".idmap.text", "a"
> +SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
> + /* Must be same registers as in idmap_kpti_install_ng_mappings */
> + swapper_ttb .req x3
> + flag_ptr .req x4
> +
> + mrs swapper_ttb, ttbr1_el1
> + adr_l flag_ptr, repaint_done
> + __idmap_cpu_set_reserved_ttbr1 x16, x17
> +
> +secondary_cpu_wait:
> /* Increment the flag to let the boot CPU we're ready */
> 1: ldxr w16, [flag_ptr]
> add w16, w16, #1
> @@ -436,9 +458,8 @@ __idmap_kpti_secondary:
>
> .unreq swapper_ttb
> .unreq flag_ptr
> -SYM_FUNC_END(idmap_kpti_install_ng_mappings)
> +SYM_FUNC_END(bbml2_wait_for_repainting)
> .popsection
> -#endif
>
> /*
> * __cpu_setup
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-08-05 8:13 [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
` (4 preceding siblings ...)
2025-08-05 8:16 ` [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
@ 2025-08-05 18:37 ` Yang Shi
2025-08-27 15:00 ` Ryan Roberts
5 siblings, 1 reply; 22+ messages in thread
From: Yang Shi @ 2025-08-05 18:37 UTC (permalink / raw)
To: Ryan Roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
dev.jain, scott, cl
Cc: linux-arm-kernel, linux-kernel
Hi Ryan
On 8/5/25 1:13 AM, Ryan Roberts wrote:
> Hi All,
>
> This is a new version built on top of Yang Shi's work at [1]. Yang and I have
> been discussing (disagreeing?) about the best way to implement the last 2
> patches. So I've reworked them and am posting as RFC to illustrate how I think
> this feature should be implemented, but I've retained Yang as primary author
> since it is all based on his work. I'd appreciate feedback from Catalin and/or
> Will on whether this is the right approach, so that hopefully we can get this
> into shape for 6.18.
>
> The first 2 patches are unchanged from Yang's v5; the first patch comes from Dev
> and the rest of the series depends upon it.
Thank you for making the prototype and retaining me as primary author.
The approach is basically fine by me, but there are some minor concerns.
Some of them were raised in the comments on patch #3 and patch #4; I put
them together here.
1. Walking the page table twice. This has been discussed before. It is not
very efficient for small splits, for example 4K. Unfortunately, most
splits in the current kernel are still 4K AFAICT.
Hopefully this can be mitigated by some new development, for
example, the ROX cache.
2. Taking the mutex lock twice and doing lazy mmu twice. I think this is
easy to resolve, as I suggested in patch #3.
3. Walking every PTE and calling the split handler on every PTE when
repainting. It is not very efficient, but may be OK for repainting since
it is only called once at boot time.
I don't think these concerns are major blockers IMHO. Anyway let's see
what Catalin and/or Will think about this.
Regards,
Yang
>
> [...]
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [RFC PATCH v6 3/4] arm64: mm: support large block mapping when rodata=full
2025-08-05 17:59 ` Yang Shi
@ 2025-08-06 7:57 ` Ryan Roberts
2025-08-07 0:19 ` Yang Shi
0 siblings, 1 reply; 22+ messages in thread
From: Ryan Roberts @ 2025-08-06 7:57 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 05/08/2025 18:59, Yang Shi wrote:
>
>
> On 8/5/25 1:13 AM, Ryan Roberts wrote:
>> From: Yang Shi <yang@os.amperecomputing.com>
>>
>> [...]
>>
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index c6a85000fa0e..6a8eefc16dbc 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -140,6 +140,12 @@ static int update_range_prot(unsigned long start, unsigned long size,
>> data.set_mask = set_mask;
>> data.clear_mask = clear_mask;
>> + ret = split_kernel_leaf_mapping(start);
>> + if (!ret)
>> + ret = split_kernel_leaf_mapping(start + size);
>> + if (WARN_ON_ONCE(ret))
>> + return ret;
>
> This means we take the mutex lock twice and enter/leave lazy mmu twice too. So how about:
>
> mutex_lock()
> enter lazy mmu
> split_mapping(start)
> split_mapping(end)
> leave lazy mmu
> mutex_unlock()
Good point. In fact it would be even better to share the same lazy mmu
invocation with the permission change code below. How about something like this:
---8<---
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5b5a84b34024..90ab0ab5b06a 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -625,8 +625,6 @@ static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
return 0;
}
-static DEFINE_MUTEX(pgtable_split_lock);
-
int split_kernel_leaf_mapping(unsigned long addr)
{
pgd_t *pgdp, pgd;
@@ -636,14 +634,7 @@ int split_kernel_leaf_mapping(unsigned long addr)
pte_t *ptep, pte;
int ret = 0;
- /*
- * !BBML2_NOABORT systems should not be trying to change permissions on
- * anything that is not pte-mapped in the first place. Just return early
- * and let the permission change code raise a warning if not already
- * pte-mapped.
- */
- if (!system_supports_bbml2_noabort())
- return 0;
+ VM_WARN_ON(!system_supports_bbml2_noabort());
/*
* Ensure addr is at least page-aligned since this is the finest
@@ -652,9 +643,6 @@ int split_kernel_leaf_mapping(unsigned long addr)
if (addr != PAGE_ALIGN(addr))
return -EINVAL;
- mutex_lock(&pgtable_split_lock);
- arch_enter_lazy_mmu_mode();
-
/*
* PGD: If addr is PGD aligned then addr already describes a leaf
* boundary. If not present then there is nothing to split.
@@ -734,8 +722,6 @@ int split_kernel_leaf_mapping(unsigned long addr)
split_contpte(ptep);
out:
- arch_leave_lazy_mmu_mode();
- mutex_unlock(&pgtable_split_lock);
return ret;
}
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 6a8eefc16dbc..73f80db2e5ba 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -131,6 +131,8 @@ bool can_set_direct_map(void)
arm64_kfence_can_set_direct_map() || is_realm_world();
}
+static DEFINE_MUTEX(pgtable_split_lock);
+
static int update_range_prot(unsigned long start, unsigned long size,
pgprot_t set_mask, pgprot_t clear_mask)
{
@@ -140,14 +142,23 @@ static int update_range_prot(unsigned long start, unsigned long size,
data.set_mask = set_mask;
data.clear_mask = clear_mask;
- ret = split_kernel_leaf_mapping(start);
- if (!ret)
- ret = split_kernel_leaf_mapping(start + size);
- if (WARN_ON_ONCE(ret))
- return ret;
-
arch_enter_lazy_mmu_mode();
+ /*
+ * split_kernel_leaf_mapping() is only allowed for BBML2_NOABORT
+ * systems. !BBML2_NOABORT systems should not be trying to change
+ * permissions on anything that is not pte-mapped in the first place.
+ */
+ if (system_supports_bbml2_noabort()) {
+ mutex_lock(&pgtable_split_lock);
+ ret = split_kernel_leaf_mapping(start);
+ if (!ret)
+ ret = split_kernel_leaf_mapping(start + size);
+ mutex_unlock(&pgtable_split_lock);
+ if (ret)
+ goto out;
+ }
+
/*
* The caller must ensure that the range we are operating on does not
* partially overlap a block mapping, or a cont mapping. Any such case
@@ -155,8 +166,8 @@ static int update_range_prot(unsigned long start, unsigned long size,
*/
ret = walk_kernel_page_table_range_lockless(start, start + size,
&pageattr_ops, &data);
+out:
arch_leave_lazy_mmu_mode();
-
return ret;
}
---8<---
Of course this means we take the mutex while inside the lazy mmu section.
Technically sleeping is not allowed while in lazy mmu mode, but the arm64
implementation can handle it. If this is a step too far, I guess we can keep 2
separate lazy mmu sections; one for split as you suggest, and another for
permission change.
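i.e. roughly something like this (sketch only, with the split kept under its
own lock and lazy mmu section, followed by a second lazy mmu section for the
permission change walk):

	mutex_lock(&pgtable_split_lock);
	arch_enter_lazy_mmu_mode();
	ret = split_kernel_leaf_mapping(start);
	if (!ret)
		ret = split_kernel_leaf_mapping(start + size);
	arch_leave_lazy_mmu_mode();
	mutex_unlock(&pgtable_split_lock);
	if (WARN_ON_ONCE(ret))
		return ret;

	arch_enter_lazy_mmu_mode();
	ret = walk_kernel_page_table_range_lockless(start, start + size,
						    &pageattr_ops, &data);
	arch_leave_lazy_mmu_mode();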
Thanks,
Ryan
>
> Thanks,
> Yang
>
>> +
>> arch_enter_lazy_mmu_mode();
>> /*
>
^ permalink raw reply related [flat|nested] 22+ messages in thread
* Re: [RFC PATCH v6 4/4] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs
2025-08-05 18:14 ` Yang Shi
@ 2025-08-06 8:15 ` Ryan Roberts
2025-08-07 0:29 ` Yang Shi
0 siblings, 1 reply; 22+ messages in thread
From: Ryan Roberts @ 2025-08-06 8:15 UTC (permalink / raw)
To: Yang Shi, will, catalin.marinas, akpm, Miko.Lenczewski, dev.jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel
On 05/08/2025 19:14, Yang Shi wrote:
>
>
> On 8/5/25 1:13 AM, Ryan Roberts wrote:
>> From: Yang Shi <yang@os.amperecomputing.com>
>>
>> [...]
>>
>> +static int split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
>> + unsigned long next, struct mm_walk *walk)
>> +{
>> + pmd_t pmd = pmdp_get(pmdp);
>> + int ret = 0;
>> +
>> + if (pmd_leaf(pmd)) {
>> + if (pmd_cont(pmd))
>> + split_contpmd(pmdp);
>> + ret = split_pmd(pmdp, pmd, GFP_ATOMIC);
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +static int split_to_ptes_pte_entry(pte_t *ptep, unsigned long addr,
>> + unsigned long next, struct mm_walk *walk)
>> +{
>> + pte_t pte = __ptep_get(ptep);
>> +
>> + if (pte_cont(pte))
>> + split_contpte(ptep);
>> +
>> + return 0;
>> +}
>
> IIUC the pgtable walker API walks the page table PTE by PTE, so the split
> handler is called on every PTE even though the mapping has already been split.
> This is not very efficient, but it may be OK since repainting is only called
> once at boot time.
Good point. I think this could be improved, while continuing to use the walker API.
Currently I'm splitting leaf puds to cont-pmds, then cont-pmds to pmds, then
pmds to cont-ptes, then cont-ptes to ptes. We therefore need to visit each
pte (or technically 1 in 16 ptes) to check whether it is cont-mapped. I did it
this way to reuse the existing split logic without modification.
But we could provide a flag to the split logic to tell it to "bypass split to
cont-pte" so that we then have puds to cont-pmds, cont-pmds to pmds and pmds to
ptes. And in that final case we can avoid walking any ptes that we already split
from pmds because we know they can't be cont-mapped. We can do that with
ACTION_CONTINUE when returning from the pmd handler. We would still visit every
pte that was already mapped at pte level because we would still need to check
for cont-pte. The API doesn't provide a way for us to skip forward by 16 ptes at
a time.
Something like this, perhaps:
---8<---
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5b5a84b34024..f0066ecbe6b2 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -549,7 +549,7 @@ static void split_contpte(pte_t *ptep)
__set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
}
-static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
+static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp, bool to_cont_pte)
{
pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
unsigned long pfn = pmd_pfn(pmd);
@@ -567,7 +567,8 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
tableprot |= PMD_TABLE_PXN;
prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
- prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+ if (to_cont_pte)
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
__set_pte(ptep, pfn_pte(pfn, prot));
@@ -714,7 +715,7 @@ int split_kernel_leaf_mapping(unsigned long addr)
*/
if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
goto out;
- ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL);
+ ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL, true);
if (ret)
goto out;
}
@@ -760,7 +761,8 @@ static int split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
if (pmd_leaf(pmd)) {
if (pmd_cont(pmd))
split_contpmd(pmdp);
- ret = split_pmd(pmdp, pmd, GFP_ATOMIC);
+ ret = split_pmd(pmdp, pmd, GFP_ATOMIC, false);
+ walk->action = ACTION_CONTINUE;
}
return ret;
---8<---
Thanks,
Ryan
>
> Thanks,
> Yang
>
>> +
>> +static const struct mm_walk_ops split_to_ptes_ops = {
>> + .pud_entry = split_to_ptes_pud_entry,
>> + .pmd_entry = split_to_ptes_pmd_entry,
>> + .pte_entry = split_to_ptes_pte_entry,
>> +};
>> +
>> +extern u32 repaint_done;
>> +
>> +int __init linear_map_split_to_ptes(void *__unused)
>> +{
>> + /*
>> + * Repainting the linear map must be done by CPU0 (the boot CPU) because
>> + * that's the only CPU that we know supports BBML2. The other CPUs will
>> + * be held in a waiting area with the idmap active.
>> + */
>> + if (!smp_processor_id()) {
>> + unsigned long lstart = _PAGE_OFFSET(vabits_actual);
>> + unsigned long lend = PAGE_END;
>> + unsigned long kstart = (unsigned long)lm_alias(_stext);
>> + unsigned long kend = (unsigned long)lm_alias(__init_begin);
>> + int ret;
>> +
>> + /*
>> + * Wait for all secondary CPUs to be put into the waiting area.
>> + */
>> + smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());
>> +
>> + /*
>> + * Walk all of the linear map [lstart, lend), except the kernel
>> + * linear map alias [kstart, kend), and split all mappings to
>> + * PTE. The kernel alias remains static throughout runtime so
>> + * can continue to be safely mapped with large mappings.
>> + */
>> + ret = walk_kernel_page_table_range_lockless(lstart, kstart,
>> + &split_to_ptes_ops, NULL);
>> + if (!ret)
>> + ret = walk_kernel_page_table_range_lockless(kend, lend,
>> + &split_to_ptes_ops, NULL);
>> + if (ret)
>> + panic("Failed to split linear map\n");
>> + flush_tlb_kernel_range(lstart, lend);
>> +
>> + /*
>> + * Relies on dsb in flush_tlb_kernel_range() to avoid reordering
>> + * before any page table split operations.
>> + */
>> + WRITE_ONCE(repaint_done, 0);
>> + } else {
>> + typedef void (repaint_wait_fn)(void);
>> + extern repaint_wait_fn bbml2_wait_for_repainting;
>> + repaint_wait_fn *wait_fn;
>> +
>> + wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
>> +
>> + /*
>> + * At least one secondary CPU doesn't support BBML2 so cannot
>> + * tolerate the size of the live mappings changing. So have the
>> + * secondary CPUs wait for the boot CPU to make the changes
>> + * with the idmap active and init_mm inactive.
>> + */
>> + cpu_install_idmap();
>> + wait_fn();
>> + cpu_uninstall_idmap();
>> + }
>> +
>> + return 0;
>> +}
>> +
>> /*
>> * This function can only be used to modify existing table entries,
>> * without allocating new levels of table. Note that this permits the
>> @@ -857,6 +964,8 @@ static inline void arm64_kfence_map_pool(phys_addr_t
>> kfence_pool, pgd_t *pgdp) {
>> #endif /* CONFIG_KFENCE */
>> +bool linear_map_requires_bbml2;
>> +
>> static inline bool force_pte_mapping(void)
>> {
>> bool bbml2 = system_capabilities_finalized() ?
>> @@ -892,6 +1001,8 @@ static void __init map_mem(pgd_t *pgdp)
>> early_kfence_pool = arm64_kfence_alloc_pool();
>> + linear_map_requires_bbml2 = !force_pte_mapping() && can_set_direct_map();
>> +
>> if (force_pte_mapping())
>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>> @@ -1025,7 +1136,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64
>> pa, pgprot_t prot,
>> int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
>> static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE)
>> __ro_after_init,
>> - kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE)
>> __ro_after_init;
>> + kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE)
>> __ro_after_init,
>> + bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE)
>> __ro_after_init;
>> static void __init create_idmap(void)
>> {
>> @@ -1050,6 +1162,19 @@ static void __init create_idmap(void)
>> IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>> __phys_to_virt(ptep) - ptep);
>> }
>> +
>> + /*
>> + * Setup idmap mapping for repaint_done flag. It will be used if
>> + * repainting the linear mapping is needed later.
>> + */
>> + if (linear_map_requires_bbml2) {
>> + u64 pa = __pa_symbol(&repaint_done);
>> +
>> + ptep = __pa_symbol(bbml2_ptes);
>> + __pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
>> + IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>> + __phys_to_virt(ptep) - ptep);
>> + }
>> }
>> void __init paging_init(void)
>> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
>> index 8c75965afc9e..dbaac2e824d7 100644
>> --- a/arch/arm64/mm/proc.S
>> +++ b/arch/arm64/mm/proc.S
>> @@ -416,7 +416,29 @@ alternative_else_nop_endif
>> __idmap_kpti_secondary:
>> /* Uninstall swapper before surgery begins */
>> __idmap_cpu_set_reserved_ttbr1 x16, x17
>> + b scondary_cpu_wait
>> + .unreq swapper_ttb
>> + .unreq flag_ptr
>> +SYM_FUNC_END(idmap_kpti_install_ng_mappings)
>> + .popsection
>> +#endif
>> +
>> + .pushsection ".data", "aw", %progbits
>> +SYM_DATA(repaint_done, .long 1)
>> + .popsection
>> +
>> + .pushsection ".idmap.text", "a"
>> +SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
>> + /* Must be same registers as in idmap_kpti_install_ng_mappings */
>> + swapper_ttb .req x3
>> + flag_ptr .req x4
>> +
>> + mrs swapper_ttb, ttbr1_el1
>> + adr_l flag_ptr, repaint_done
>> + __idmap_cpu_set_reserved_ttbr1 x16, x17
>> +
>> +scondary_cpu_wait:
>> /* Increment the flag to let the boot CPU we're ready */
>> 1: ldxr w16, [flag_ptr]
>> add w16, w16, #1
>> @@ -436,9 +458,8 @@ __idmap_kpti_secondary:
>> .unreq swapper_ttb
>> .unreq flag_ptr
>> -SYM_FUNC_END(idmap_kpti_install_ng_mappings)
>> +SYM_FUNC_END(bbml2_wait_for_repainting)
>> .popsection
>> -#endif
>> /*
>> * __cpu_setup
>
^ permalink raw reply related [flat|nested] 22+ messages in thread
* Re: [RFC PATCH v6 3/4] arm64: mm: support large block mapping when rodata=full
2025-08-06 7:57 ` Ryan Roberts
@ 2025-08-07 0:19 ` Yang Shi
0 siblings, 0 replies; 22+ messages in thread
From: Yang Shi @ 2025-08-07 0:19 UTC (permalink / raw)
To: Ryan Roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
dev.jain, scott, cl
Cc: linux-arm-kernel, linux-kernel
On 8/6/25 12:57 AM, Ryan Roberts wrote:
> On 05/08/2025 18:59, Yang Shi wrote:
>>
>> On 8/5/25 1:13 AM, Ryan Roberts wrote:
>>> From: Yang Shi <yang@os.amperecomputing.com>
>>>
>>> When rodata=full is specified, kernel linear mapping has to be mapped at
>>> PTE level since large page table can't be split due to break-before-make
>>> rule on ARM64.
>>>
>>> This resulted in a couple of problems:
>>> - performance degradation
>>> - more TLB pressure
>>> - memory waste for kernel page table
>>>
>>> With FEAT_BBM level 2 support, splitting large block page table to
>>> smaller ones doesn't need to make the page table entry invalid anymore.
>>> This allows kernel split large block mapping on the fly.
>>>
>>> Add kernel page table split support and use large block mapping by
>>> default when FEAT_BBM level 2 is supported for rodata=full. When
>>> changing permissions for kernel linear mapping, the page table will be
>>> split to smaller size.
>>>
>>> The machine without FEAT_BBM level 2 will fallback to have kernel linear
>>> mapping PTE-mapped when rodata=full.
>>>
>>> With this we saw significant performance boost with some benchmarks and
>>> much less memory consumption on my AmpereOne machine (192 cores, 1P)
>>> with 256GB memory.
>>>
>>> * Memory use after boot
>>> Before:
>>> MemTotal: 258988984 kB
>>> MemFree: 254821700 kB
>>>
>>> After:
>>> MemTotal: 259505132 kB
>>> MemFree: 255410264 kB
>>>
>>> Around 500MB more memory are free to use. The larger the machine, the
>>> more memory saved.
>>>
>>> * Memcached
>>> We saw performance degradation when running Memcached benchmark with
>>> rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
>>> With this patchset we saw ops/sec is increased by around 3.5%, P99
>>> latency is reduced by around 9.6%.
>>> The gain mainly came from reduced kernel TLB misses. The kernel TLB
>>> MPKI is reduced by 28.5%.
>>>
>>> The benchmark data is now on par with rodata=on too.
>>>
>>> * Disk encryption (dm-crypt) benchmark
>>> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with
>>> disk encryption (by dm-crypt).
>>> fio --directory=/data --random_generator=lfsr --norandommap \
>>> --randrepeat 1 --status-interval=999 --rw=write --bs=4k --loops=1 \
>>> --ioengine=sync --iodepth=1 --numjobs=1 --fsync_on_close=1 \
>>> --group_reporting --thread --name=iops-test-job --eta-newline=1 \
>>> --size 100G
>>>
>>> The IOPS is increased by 90% - 150% (the variance is high, but the worst
>>> number of good case is around 90% more than the best number of bad
>>> case). The bandwidth is increased and the avg clat is reduced
>>> proportionally.
>>>
>>> * Sequential file read
>>> Read 100G file sequentially on XFS (xfs_io read with page cache
>>> populated). The bandwidth is increased by 150%.
>>>
>>> Co-developed-by: Ryan Roberts <ryan.roberts@arm.com>
>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>>> ---
>>> arch/arm64/include/asm/cpufeature.h | 2 +
>>> arch/arm64/include/asm/mmu.h | 1 +
>>> arch/arm64/include/asm/pgtable.h | 5 +
>>> arch/arm64/kernel/cpufeature.c | 7 +-
>>> arch/arm64/mm/mmu.c | 237 +++++++++++++++++++++++++++-
>>> arch/arm64/mm/pageattr.c | 6 +
>>> 6 files changed, 252 insertions(+), 6 deletions(-)
>>>
>> [...]
>>
>>> +
>>> +static DEFINE_MUTEX(pgtable_split_lock);
>>> +
>>> +int split_kernel_leaf_mapping(unsigned long addr)
>>> +{
>>> + pgd_t *pgdp, pgd;
>>> + p4d_t *p4dp, p4d;
>>> + pud_t *pudp, pud;
>>> + pmd_t *pmdp, pmd;
>>> + pte_t *ptep, pte;
>>> + int ret = 0;
>>> +
>>> + /*
>>> + * !BBML2_NOABORT systems should not be trying to change permissions on
>>> + * anything that is not pte-mapped in the first place. Just return early
>>> + * and let the permission change code raise a warning if not already
>>> + * pte-mapped.
>>> + */
>>> + if (!system_supports_bbml2_noabort())
>>> + return 0;
>>> +
>>> + /*
>>> + * Ensure addr is at least page-aligned since this is the finest
>>> + * granularity we can split to.
>>> + */
>>> + if (addr != PAGE_ALIGN(addr))
>>> + return -EINVAL;
>>> +
>>> + mutex_lock(&pgtable_split_lock);
>>> + arch_enter_lazy_mmu_mode();
>>> +
>>> + /*
>>> + * PGD: If addr is PGD aligned then addr already describes a leaf
>>> + * boundary. If not present then there is nothing to split.
>>> + */
>>> + if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
>>> + goto out;
>>> + pgdp = pgd_offset_k(addr);
>>> + pgd = pgdp_get(pgdp);
>>> + if (!pgd_present(pgd))
>>> + goto out;
>>> +
>>> + /*
>>> + * P4D: If addr is P4D aligned then addr already describes a leaf
>>> + * boundary. If not present then there is nothing to split.
>>> + */
>>> + if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
>>> + goto out;
>>> + p4dp = p4d_offset(pgdp, addr);
>>> + p4d = p4dp_get(p4dp);
>>> + if (!p4d_present(p4d))
>>> + goto out;
>>> +
>>> + /*
>>> + * PUD: If addr is PUD aligned then addr already describes a leaf
>>> + * boundary. If not present then there is nothing to split. Otherwise,
>>> + * if we have a pud leaf, split to contpmd.
>>> + */
>>> + if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
>>> + goto out;
>>> + pudp = pud_offset(p4dp, addr);
>>> + pud = pudp_get(pudp);
>>> + if (!pud_present(pud))
>>> + goto out;
>>> + if (pud_leaf(pud)) {
>>> + ret = split_pud(pudp, pud);
>>> + if (ret)
>>> + goto out;
>>> + }
>>> +
>>> + /*
>>> + * CONTPMD: If addr is CONTPMD aligned then addr already describes a
>>> + * leaf boundary. If not present then there is nothing to split.
>>> + * Otherwise, if we have a contpmd leaf, split to pmd.
>>> + */
>>> + if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
>>> + goto out;
>>> + pmdp = pmd_offset(pudp, addr);
>>> + pmd = pmdp_get(pmdp);
>>> + if (!pmd_present(pmd))
>>> + goto out;
>>> + if (pmd_leaf(pmd)) {
>>> + if (pmd_cont(pmd))
>>> + split_contpmd(pmdp);
>>> + /*
>>> + * PMD: If addr is PMD aligned then addr already describes a
>>> + * leaf boundary. Otherwise, split to contpte.
>>> + */
>>> + if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
>>> + goto out;
>>> + ret = split_pmd(pmdp, pmd);
>>> + if (ret)
>>> + goto out;
>>> + }
>>> +
>>> + /*
>>> + * CONTPTE: If addr is CONTPTE aligned then addr already describes a
>>> + * leaf boundary. If not present then there is nothing to split.
>>> + * Otherwise, if we have a contpte leaf, split to pte.
>>> + */
>>> + if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
>>> + goto out;
>>> + ptep = pte_offset_kernel(pmdp, addr);
>>> + pte = __ptep_get(ptep);
>>> + if (!pte_present(pte))
>>> + goto out;
>>> + if (pte_cont(pte))
>>> + split_contpte(ptep);
>>> +
>>> +out:
>>> + arch_leave_lazy_mmu_mode();
>>> + mutex_unlock(&pgtable_split_lock);
>>> + return ret;
>>> }
>>> /*
>>> @@ -640,6 +857,16 @@ static inline void arm64_kfence_map_pool(phys_addr_t
>>> kfence_pool, pgd_t *pgdp) {
>>> #endif /* CONFIG_KFENCE */
>>> +static inline bool force_pte_mapping(void)
>>> +{
>>> + bool bbml2 = system_capabilities_finalized() ?
>>> + system_supports_bbml2_noabort() : bbml2_noabort_available();
>>> +
>>> + return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
>>> + is_realm_world())) ||
>>> + debug_pagealloc_enabled();
>>> +}
>>> +
>>> static void __init map_mem(pgd_t *pgdp)
>>> {
>>> static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
>>> @@ -665,7 +892,7 @@ static void __init map_mem(pgd_t *pgdp)
>>> early_kfence_pool = arm64_kfence_alloc_pool();
>>> - if (can_set_direct_map())
>>> + if (force_pte_mapping())
>>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>> /*
>>> @@ -1367,7 +1594,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>> VM_BUG_ON(!mhp_range_allowed(start, size, true));
>>> - if (can_set_direct_map())
>>> + if (force_pte_mapping())
>>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>> __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
>>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>>> index c6a85000fa0e..6a8eefc16dbc 100644
>>> --- a/arch/arm64/mm/pageattr.c
>>> +++ b/arch/arm64/mm/pageattr.c
>>> @@ -140,6 +140,12 @@ static int update_range_prot(unsigned long start,
>>> unsigned long size,
>>> data.set_mask = set_mask;
>>> data.clear_mask = clear_mask;
>>> + ret = split_kernel_leaf_mapping(start);
>>> + if (!ret)
>>> + ret = split_kernel_leaf_mapping(start + size);
>>> + if (WARN_ON_ONCE(ret))
>>> + return ret;
>> This means we take the mutex lock twice and do lazy mmu twice too. So how's about:
>>
>> mutex_lock()
>> enter lazy mmu
>> split_mapping(start)
>> split_mapping(end)
>> leave lazy mmu
>> mutex_unlock()
> Good point. In fact it would be even better to share the same lazy mmu
> invocation with the permission change code below. How about something like this:
>
> ---8<---
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 5b5a84b34024..90ab0ab5b06a 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -625,8 +625,6 @@ static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
> return 0;
> }
>
> -static DEFINE_MUTEX(pgtable_split_lock);
> -
> int split_kernel_leaf_mapping(unsigned long addr)
> {
> pgd_t *pgdp, pgd;
> @@ -636,14 +634,7 @@ int split_kernel_leaf_mapping(unsigned long addr)
> pte_t *ptep, pte;
> int ret = 0;
>
> - /*
> - * !BBML2_NOABORT systems should not be trying to change permissions on
> - * anything that is not pte-mapped in the first place. Just return early
> - * and let the permission change code raise a warning if not already
> - * pte-mapped.
> - */
> - if (!system_supports_bbml2_noabort())
> - return 0;
> + VM_WARN_ON(!system_supports_bbml2_noabort());
>
> /*
> * Ensure addr is at least page-aligned since this is the finest
> @@ -652,9 +643,6 @@ int split_kernel_leaf_mapping(unsigned long addr)
> if (addr != PAGE_ALIGN(addr))
> return -EINVAL;
>
> - mutex_lock(&pgtable_split_lock);
> - arch_enter_lazy_mmu_mode();
> -
> /*
> * PGD: If addr is PGD aligned then addr already describes a leaf
> * boundary. If not present then there is nothing to split.
> @@ -734,8 +722,6 @@ int split_kernel_leaf_mapping(unsigned long addr)
> split_contpte(ptep);
>
> out:
> - arch_leave_lazy_mmu_mode();
> - mutex_unlock(&pgtable_split_lock);
> return ret;
> }
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 6a8eefc16dbc..73f80db2e5ba 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -131,6 +131,8 @@ bool can_set_direct_map(void)
> arm64_kfence_can_set_direct_map() || is_realm_world();
> }
>
> +static DEFINE_MUTEX(pgtable_split_lock);
> +
> static int update_range_prot(unsigned long start, unsigned long size,
> pgprot_t set_mask, pgprot_t clear_mask)
> {
> @@ -140,14 +142,23 @@ static int update_range_prot(unsigned long start, unsigned long size,
> data.set_mask = set_mask;
> data.clear_mask = clear_mask;
>
> - ret = split_kernel_leaf_mapping(start);
> - if (!ret)
> - ret = split_kernel_leaf_mapping(start + size);
> - if (WARN_ON_ONCE(ret))
> - return ret;
> -
> arch_enter_lazy_mmu_mode();
>
> + /*
> + * split_kernel_leaf_mapping() is only allowed for BBML2_NOABORT
> + * systems. !BBML2_NOABORT systems should not be trying to change
> + * permissions on anything that is not pte-mapped in the first place.
> + */
> + if (system_supports_bbml2_noabort()) {
> + mutex_lock(&pgtable_split_lock);
> + ret = split_kernel_leaf_mapping(start);
> + if (!ret)
> + ret = split_kernel_leaf_mapping(start + size);
> + mutex_unlock(&pgtable_split_lock);
> + if (ret)
> + goto out;
> + }
> +
> /*
> * The caller must ensure that the range we are operating on does not
> * partially overlap a block mapping, or a cont mapping. Any such case
> @@ -155,8 +166,8 @@ static int update_range_prot(unsigned long start, unsigned long size,
> */
> ret = walk_kernel_page_table_range_lockless(start, start + size,
> &pageattr_ops, &data);
> +out:
> arch_leave_lazy_mmu_mode();
> -
> return ret;
> }
> ---8<---
>
> Of course this means we take the mutex while inside the lazy mmu section.
> Technically sleeping is not allowed while in lazy mmu mode, but the arm64
> implementation can handle it. If this is a step too far, I guess we can keep 2
> separate lazy mmu sections; one for split as you suggest, and another for
> permission change.
IMHO, we'd better keep them separate because the split primitive may be
used by others, and I don't think it is a good idea to have the callers
handle lazy mmu.
Thanks,
Yang
>
> Thanks,
> Ryan
>
>> Thanks,
>> Yang
>>
>>> +
>>> arch_enter_lazy_mmu_mode();
>>> /*
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [RFC PATCH v6 4/4] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs
2025-08-06 8:15 ` Ryan Roberts
@ 2025-08-07 0:29 ` Yang Shi
0 siblings, 0 replies; 22+ messages in thread
From: Yang Shi @ 2025-08-07 0:29 UTC (permalink / raw)
To: Ryan Roberts, will, catalin.marinas, akpm, Miko.Lenczewski,
dev.jain, scott, cl
Cc: linux-arm-kernel, linux-kernel
On 8/6/25 1:15 AM, Ryan Roberts wrote:
> On 05/08/2025 19:14, Yang Shi wrote:
>>
>> On 8/5/25 1:13 AM, Ryan Roberts wrote:
>>> From: Yang Shi <yang@os.amperecomputing.com>
>>>
>>> The kernel linear mapping is painted in very early stage of system boot.
>>> The cpufeature has not been finalized yet at this point. So the linear
>>> mapping is determined by the capability of boot CPU only. If the boot
>>> CPU supports BBML2, large block mappings will be used for linear
>>> mapping.
>>>
>>> But the secondary CPUs may not support BBML2, so repaint the linear
>>> mapping if large block mapping is used and the secondary CPUs don't
>>> support BBML2 once cpufeature is finalized on all CPUs.
>>>
>>> If the boot CPU doesn't support BBML2 or the secondary CPUs have the
>>> same BBML2 capability with the boot CPU, repainting the linear mapping
>>> is not needed.
>>>
>>> Repainting is implemented by the boot CPU, which we know supports BBML2,
>>> so it is safe for the live mapping size to change for this CPU. The
>>> linear map region is walked using the pagewalk API and any discovered
>>> large leaf mappings are split to pte mappings using the existing helper
>>> functions. Since the repainting is performed inside of a stop_machine(),
>>> we must use GFP_ATOMIC to allocate the extra intermediate pgtables. But
>>> since we are still early in boot, it is expected that there is plenty of
>>> memory available so we will never need to sleep for reclaim, and so
>>> GFP_ATOMIC is acceptable here.
>>>
>>> The secondary CPUs are all put into a waiting area with the idmap in
>>> TTBR0 and reserved map in TTBR1 while this is performed since they
>>> cannot be allowed to observe any size changes on the live mappings.
>>>
>>> Co-developed-by: Ryan Roberts <ryan.roberts@arm.com>
>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>>> ---
>>> arch/arm64/include/asm/mmu.h | 3 +
>>> arch/arm64/kernel/cpufeature.c | 8 ++
>>> arch/arm64/mm/mmu.c | 151 ++++++++++++++++++++++++++++++---
>>> arch/arm64/mm/proc.S | 25 +++++-
>>> 4 files changed, 172 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
>>> index 98565b1b93e8..966c08fd8126 100644
>>> --- a/arch/arm64/include/asm/mmu.h
>>> +++ b/arch/arm64/include/asm/mmu.h
>>> @@ -56,6 +56,8 @@ typedef struct {
>>> */
>>> #define ASID(mm) (atomic64_read(&(mm)->context.id) & 0xffff)
>>> +extern bool linear_map_requires_bbml2;
>>> +
>>> static inline bool arm64_kernel_unmapped_at_el0(void)
>>> {
>>> return alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0);
>>> @@ -72,6 +74,7 @@ extern void create_pgd_mapping(struct mm_struct *mm,
>>> phys_addr_t phys,
>>> extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>>> extern void mark_linear_text_alias_ro(void);
>>> extern int split_kernel_leaf_mapping(unsigned long addr);
>>> +extern int linear_map_split_to_ptes(void *__unused);
>>> /*
>>> * This check is triggered during the early boot before the cpufeature
>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>>> index f28f056087f3..11392c741e48 100644
>>> --- a/arch/arm64/kernel/cpufeature.c
>>> +++ b/arch/arm64/kernel/cpufeature.c
>>> @@ -85,6 +85,7 @@
>>> #include <asm/insn.h>
>>> #include <asm/kvm_host.h>
>>> #include <asm/mmu_context.h>
>>> +#include <asm/mmu.h>
>>> #include <asm/mte.h>
>>> #include <asm/hypervisor.h>
>>> #include <asm/processor.h>
>>> @@ -2013,6 +2014,12 @@ static int __init __kpti_install_ng_mappings(void
>>> *__unused)
>>> return 0;
>>> }
>>> +static void __init linear_map_maybe_split_to_ptes(void)
>>> +{
>>> + if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort())
>>> + stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);
>>> +}
>>> +
>>> static void __init kpti_install_ng_mappings(void)
>>> {
>>> /* Check whether KPTI is going to be used */
>>> @@ -3930,6 +3937,7 @@ void __init setup_system_features(void)
>>> {
>>> setup_system_capabilities();
>>> + linear_map_maybe_split_to_ptes();
>>> kpti_install_ng_mappings();
>>> sve_setup();
>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>> index f6cd79287024..5b5a84b34024 100644
>>> --- a/arch/arm64/mm/mmu.c
>>> +++ b/arch/arm64/mm/mmu.c
>>> @@ -27,6 +27,7 @@
>>> #include <linux/kfence.h>
>>> #include <linux/pkeys.h>
>>> #include <linux/mm_inline.h>
>>> +#include <linux/pagewalk.h>
>>> #include <asm/barrier.h>
>>> #include <asm/cputype.h>
>>> @@ -483,11 +484,11 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t
>>> phys, unsigned long virt,
>>> #define INVALID_PHYS_ADDR -1
>>> -static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>>> +static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm, gfp_t gfp,
>>> enum pgtable_type pgtable_type)
>>> {
>>> /* Page is zeroed by init_clear_pgtable() so don't duplicate effort. */
>>> - struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO,
>>> 0);
>>> + struct ptdesc *ptdesc = pagetable_alloc(gfp & ~__GFP_ZERO, 0);
>>> phys_addr_t pa;
>>> if (!ptdesc)
>>> @@ -514,9 +515,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>>> }
>>> static phys_addr_t
>>> -try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>>> +try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type, gfp_t gfp)
>>> {
>>> - return __pgd_pgtable_alloc(&init_mm, pgtable_type);
>>> + return __pgd_pgtable_alloc(&init_mm, gfp, pgtable_type);
>>> }
>>> static phys_addr_t __maybe_unused
>>> @@ -524,7 +525,7 @@ pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>>> {
>>> phys_addr_t pa;
>>> - pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
>>> + pa = __pgd_pgtable_alloc(&init_mm, GFP_PGTABLE_KERNEL, pgtable_type);
>>> BUG_ON(pa == INVALID_PHYS_ADDR);
>>> return pa;
>>> }
>>> @@ -534,7 +535,7 @@ pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
>>> {
>>> phys_addr_t pa;
>>> - pa = __pgd_pgtable_alloc(NULL, pgtable_type);
>>> + pa = __pgd_pgtable_alloc(NULL, GFP_PGTABLE_KERNEL, pgtable_type);
>>> BUG_ON(pa == INVALID_PHYS_ADDR);
>>> return pa;
>>> }
>>> @@ -548,7 +549,7 @@ static void split_contpte(pte_t *ptep)
>>> __set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
>>> }
>>> -static int split_pmd(pmd_t *pmdp, pmd_t pmd)
>>> +static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
>>> {
>>> pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
>>> unsigned long pfn = pmd_pfn(pmd);
>>> @@ -557,7 +558,7 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd)
>>> pte_t *ptep;
>>> int i;
>>> - pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
>>> + pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE, gfp);
>>> if (pte_phys == INVALID_PHYS_ADDR)
>>> return -ENOMEM;
>>> ptep = (pte_t *)phys_to_virt(pte_phys);
>>> @@ -590,7 +591,7 @@ static void split_contpmd(pmd_t *pmdp)
>>> set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
>>> }
>>> -static int split_pud(pud_t *pudp, pud_t pud)
>>> +static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
>>> {
>>> pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
>>> unsigned int step = PMD_SIZE >> PAGE_SHIFT;
>>> @@ -600,7 +601,7 @@ static int split_pud(pud_t *pudp, pud_t pud)
>>> pmd_t *pmdp;
>>> int i;
>>> - pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
>>> + pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD, gfp);
>>> if (pmd_phys == INVALID_PHYS_ADDR)
>>> return -ENOMEM;
>>> pmdp = (pmd_t *)phys_to_virt(pmd_phys);
>>> @@ -688,7 +689,7 @@ int split_kernel_leaf_mapping(unsigned long addr)
>>> if (!pud_present(pud))
>>> goto out;
>>> if (pud_leaf(pud)) {
>>> - ret = split_pud(pudp, pud);
>>> + ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL);
>>> if (ret)
>>> goto out;
>>> }
>>> @@ -713,7 +714,7 @@ int split_kernel_leaf_mapping(unsigned long addr)
>>> */
>>> if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
>>> goto out;
>>> - ret = split_pmd(pmdp, pmd);
>>> + ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL);
>>> if (ret)
>>> goto out;
>>> }
>>> @@ -738,6 +739,112 @@ int split_kernel_leaf_mapping(unsigned long addr)
>>> return ret;
>>> }
>>> +static int split_to_ptes_pud_entry(pud_t *pudp, unsigned long addr,
>>> + unsigned long next, struct mm_walk *walk)
>>> +{
>>> + pud_t pud = pudp_get(pudp);
>>> + int ret = 0;
>>> +
>>> + if (pud_leaf(pud))
>>> + ret = split_pud(pudp, pud, GFP_ATOMIC);
>>> +
>>> + return ret;
>>> +}
>>> +
>>> +static int split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
>>> + unsigned long next, struct mm_walk *walk)
>>> +{
>>> + pmd_t pmd = pmdp_get(pmdp);
>>> + int ret = 0;
>>> +
>>> + if (pmd_leaf(pmd)) {
>>> + if (pmd_cont(pmd))
>>> + split_contpmd(pmdp);
>>> + ret = split_pmd(pmdp, pmd, GFP_ATOMIC);
>>> + }
>>> +
>>> + return ret;
>>> +}
>>> +
>>> +static int split_to_ptes_pte_entry(pte_t *ptep, unsigned long addr,
>>> + unsigned long next, struct mm_walk *walk)
>>> +{
>>> + pte_t pte = __ptep_get(ptep);
>>> +
>>> + if (pte_cont(pte))
>>> + split_contpte(ptep);
>>> +
>>> + return 0;
>>> +}
>> IIUC pgtable walker API walks the page table PTE by PTE, so it means the split
>> function is called on every PTE even though it has been split. This is not very
>> efficient. But it may be ok since repainting is just called once at boot time.
> Good point. I think this could be improved, while continuing to use the walker API.
>
> Currently I'm splitting leaf puds to cont-pmds, then cont-pmds to pmds, then
> pmds to cont-ptes then cont-ptes to ptes. And we therefore need to visit each
> pte (or technically 1 in 16 ptes) to check if they are cont-mapped. I did it
> this way to reuse the existing split logic without modification.
>
> But we could provide a flag to the split logic to tell it to "bypass split to
> cont-pte" so that we then have puds to cont-pmds, cont-pmds to pmds and pmds to
> ptes. And in that final case we can avoid walking any ptes that we already split
Why do you need to split pud to cont-pmds, then cont-pmds to pmds? Can't
we just split pud to pmds for this case?
> from pmds because we know they can't be cont-mapped. We can do that with
> ACTION_CONTINUE when returning from the pmd handler. We would still visit every
> pte that was already mapped at pte level because we would still need to check
> for cont-pte. The API doesn't provide a way for us to skip forward by 16 ptes at
> a time.
Yes, page table walk API advances by one PTE. My implementation can skip
forward by 16 ptes. :-)
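(Not the actual v5 code - just a minimal sketch of the idea, assuming the
split helper clears PTE_CONT across the whole 16-entry block and that
addr/ptep start out CONT_PTE_SIZE-aligned:)

static void split_contpte_range(pte_t *ptep, unsigned long addr,
				unsigned long end)
{
	while (addr < end) {
		if (pte_cont(__ptep_get(ptep)))
			split_contpte(ptep);
		/* Step a whole contpte block at a time. */
		ptep += CONT_PTES;
		addr += CONT_PTE_SIZE;
	}
}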
Thanks,
Yang
>
> Something like this, perhaps:
>
> ---8<---
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 5b5a84b34024..f0066ecbe6b2 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -549,7 +549,7 @@ static void split_contpte(pte_t *ptep)
> __set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
> }
>
> -static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
> +static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp, bool to_cont_pte)
> {
> pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
> unsigned long pfn = pmd_pfn(pmd);
> @@ -567,7 +567,8 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
> tableprot |= PMD_TABLE_PXN;
>
> prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
> - prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> + if (to_cont_pte)
> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>
> for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
> __set_pte(ptep, pfn_pte(pfn, prot));
> @@ -714,7 +715,7 @@ int split_kernel_leaf_mapping(unsigned long addr)
> */
> if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
> goto out;
> - ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL);
> + ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL, true);
> if (ret)
> goto out;
> }
> @@ -760,7 +761,8 @@ static int split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
> if (pmd_leaf(pmd)) {
> if (pmd_cont(pmd))
> split_contpmd(pmdp);
> - ret = split_pmd(pmdp, pmd, GFP_ATOMIC);
> + ret = split_pmd(pmdp, pmd, GFP_ATOMIC, false);
> + walk->action = ACTION_CONTINUE;
> }
>
> return ret;
> ---8<---
>
> Thanks,
> Ryan
>
>
>> Thanks,
>> Yang
>>
>>> +
>>> +static const struct mm_walk_ops split_to_ptes_ops = {
>>> + .pud_entry = split_to_ptes_pud_entry,
>>> + .pmd_entry = split_to_ptes_pmd_entry,
>>> + .pte_entry = split_to_ptes_pte_entry,
>>> +};
>>> +
>>> +extern u32 repaint_done;
>>> +
>>> +int __init linear_map_split_to_ptes(void *__unused)
>>> +{
>>> + /*
>>> + * Repainting the linear map must be done by CPU0 (the boot CPU) because
>>> + * that's the only CPU that we know supports BBML2. The other CPUs will
>>> + * be held in a waiting area with the idmap active.
>>> + */
>>> + if (!smp_processor_id()) {
>>> + unsigned long lstart = _PAGE_OFFSET(vabits_actual);
>>> + unsigned long lend = PAGE_END;
>>> + unsigned long kstart = (unsigned long)lm_alias(_stext);
>>> + unsigned long kend = (unsigned long)lm_alias(__init_begin);
>>> + int ret;
>>> +
>>> + /*
>>> + * Wait for all secondary CPUs to be put into the waiting area.
>>> + */
>>> + smp_cond_load_acquire(&repaint_done, VAL == num_online_cpus());
>>> +
>>> + /*
>>> + * Walk all of the linear map [lstart, lend), except the kernel
>>> + * linear map alias [kstart, kend), and split all mappings to
>>> + * PTE. The kernel alias remains static throughout runtime so
>>> + * can continue to be safely mapped with large mappings.
>>> + */
>>> + ret = walk_kernel_page_table_range_lockless(lstart, kstart,
>>> + &split_to_ptes_ops, NULL);
>>> + if (!ret)
>>> + ret = walk_kernel_page_table_range_lockless(kend, lend,
>>> + &split_to_ptes_ops, NULL);
>>> + if (ret)
>>> + panic("Failed to split linear map\n");
>>> + flush_tlb_kernel_range(lstart, lend);
>>> +
>>> + /*
>>> + * Relies on dsb in flush_tlb_kernel_range() to avoid reordering
>>> + * before any page table split operations.
>>> + */
>>> + WRITE_ONCE(repaint_done, 0);
>>> + } else {
>>> + typedef void (repaint_wait_fn)(void);
>>> + extern repaint_wait_fn bbml2_wait_for_repainting;
>>> + repaint_wait_fn *wait_fn;
>>> +
>>> + wait_fn = (void *)__pa_symbol(bbml2_wait_for_repainting);
>>> +
>>> + /*
>>> + * At least one secondary CPU doesn't support BBML2 so cannot
>>> + * tolerate the size of the live mappings changing. So have the
>>> + * secondary CPUs wait for the boot CPU to make the changes
>>> + * with the idmap active and init_mm inactive.
>>> + */
>>> + cpu_install_idmap();
>>> + wait_fn();
>>> + cpu_uninstall_idmap();
>>> + }
>>> +
>>> + return 0;
>>> +}
>>> +
>>> /*
>>> * This function can only be used to modify existing table entries,
>>> * without allocating new levels of table. Note that this permits the
>>> @@ -857,6 +964,8 @@ static inline void arm64_kfence_map_pool(phys_addr_t
>>> kfence_pool, pgd_t *pgdp) {
>>> #endif /* CONFIG_KFENCE */
>>> +bool linear_map_requires_bbml2;
>>> +
>>> static inline bool force_pte_mapping(void)
>>> {
>>> bool bbml2 = system_capabilities_finalized() ?
>>> @@ -892,6 +1001,8 @@ static void __init map_mem(pgd_t *pgdp)
>>> early_kfence_pool = arm64_kfence_alloc_pool();
>>> + linear_map_requires_bbml2 = !force_pte_mapping() && can_set_direct_map();
>>> +
>>> if (force_pte_mapping())
>>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>> @@ -1025,7 +1136,8 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64
>>> pa, pgprot_t prot,
>>> int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
>>> static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE)
>>> __ro_after_init,
>>> - kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE)
>>> __ro_after_init;
>>> + kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE)
>>> __ro_after_init,
>>> + bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE)
>>> __ro_after_init;
>>> static void __init create_idmap(void)
>>> {
>>> @@ -1050,6 +1162,19 @@ static void __init create_idmap(void)
>>> IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>>> __phys_to_virt(ptep) - ptep);
>>> }
>>> +
>>> + /*
>>> + * Setup idmap mapping for repaint_done flag. It will be used if
>>> + * repainting the linear mapping is needed later.
>>> + */
>>> + if (linear_map_requires_bbml2) {
>>> + u64 pa = __pa_symbol(&repaint_done);
>>> +
>>> + ptep = __pa_symbol(bbml2_ptes);
>>> + __pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
>>> + IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
>>> + __phys_to_virt(ptep) - ptep);
>>> + }
>>> }
>>> void __init paging_init(void)
>>> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
>>> index 8c75965afc9e..dbaac2e824d7 100644
>>> --- a/arch/arm64/mm/proc.S
>>> +++ b/arch/arm64/mm/proc.S
>>> @@ -416,7 +416,29 @@ alternative_else_nop_endif
>>> __idmap_kpti_secondary:
>>> /* Uninstall swapper before surgery begins */
>>> __idmap_cpu_set_reserved_ttbr1 x16, x17
>>> + b scondary_cpu_wait
>>> + .unreq swapper_ttb
>>> + .unreq flag_ptr
>>> +SYM_FUNC_END(idmap_kpti_install_ng_mappings)
>>> + .popsection
>>> +#endif
>>> +
>>> + .pushsection ".data", "aw", %progbits
>>> +SYM_DATA(repaint_done, .long 1)
>>> + .popsection
>>> +
>>> + .pushsection ".idmap.text", "a"
>>> +SYM_TYPED_FUNC_START(bbml2_wait_for_repainting)
>>> + /* Must be same registers as in idmap_kpti_install_ng_mappings */
>>> + swapper_ttb .req x3
>>> + flag_ptr .req x4
>>> +
>>> + mrs swapper_ttb, ttbr1_el1
>>> + adr_l flag_ptr, repaint_done
>>> + __idmap_cpu_set_reserved_ttbr1 x16, x17
>>> +
>>> +scondary_cpu_wait:
>>> /* Increment the flag to let the boot CPU we're ready */
>>> 1: ldxr w16, [flag_ptr]
>>> add w16, w16, #1
>>> @@ -436,9 +458,8 @@ __idmap_kpti_secondary:
>>> .unreq swapper_ttb
>>> .unreq flag_ptr
>>> -SYM_FUNC_END(idmap_kpti_install_ng_mappings)
>>> +SYM_FUNC_END(bbml2_wait_for_repainting)
>>> .popsection
>>> -#endif
>>> /*
>>> * __cpu_setup
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-08-05 18:37 ` Yang Shi
@ 2025-08-27 15:00 ` Ryan Roberts
0 siblings, 0 replies; 22+ messages in thread
From: Ryan Roberts @ 2025-08-27 15:00 UTC (permalink / raw)
To: linux-arm-kernel
Hi Yang,
Sorry for the slow reply; for some reason this didn't make it to my mailbox, and
then I've been out on holiday for 2 weeks.
On 05/08/2025 19:37, Yang Shi wrote:
> Hi Ryan
>
> On 8/5/25 1:13 AM, Ryan Roberts wrote:
>> Hi All,
>>
>> This is a new version built on top of Yang Shi's work at [1]. Yang and I have
>> been discussing (disagreeing?) about the best way to implement the last 2
>> patches. So I've reworked them and am posting as RFC to illustrate how I think
>> this feature should be implemented, but I've retained Yang as primary author
>> since it is all based on his work. I'd appreciate feedback from Catalin and/or
>> Will on whether this is the right approach, so that hopefully we can get this
>> into shape for 6.18.
>>
>> The first 2 patches are unchanged from Yang's v5; the first patch comes from Dev
>> and the rest of the series depends upon it.
>
> Thank you for making the prototype and retaining me as primary author. The
> approach is basically fine to me. But there are some minor concerns. Some of
> them were raised in the comment for patch #3 and patch #4. I put them together
> here.
> 1. Walk page table twice. This has been discussed before. It is not very
> efficient for small split, for example, 4K. Unfortunately, the most split is
> still 4K in the current kernel AFAICT.
I have added a solution to this in my current version; to solve the mutex
problem, I've ended up passing both start and end to the main split() function,
then have a helper function which the parent calls twice. With that I can avoid
the second call if (end - start) is PAGE_SIZE - I just call split() once
with the "lesser aligned" of the 2 addresses and that guarantees that the page
is mapped by PTE. Added as its own optimization patch.
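Roughly this shape (a sketch only; split_kernel_leaf_mapping_locked() is a
placeholder name for the existing per-address split logic, and the posted
version may differ):

int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
{
	int ret;

	/* (bbml2 and page-alignment checks from the existing code omitted) */
	mutex_lock(&pgtable_split_lock);
	arch_enter_lazy_mmu_mode();

	/*
	 * A single page only ever needs one split: splitting on the less
	 * aligned of its two boundaries is sufficient, since the more
	 * aligned boundary then already falls on a leaf boundary of at
	 * least that size, leaving the page pte-mapped.
	 */
	if (end - start == PAGE_SIZE) {
		start = __ffs(start) < __ffs(end) ? start : end;
		end = start;
	}

	ret = split_kernel_leaf_mapping_locked(start);
	if (!ret && end != start)
		ret = split_kernel_leaf_mapping_locked(end);

	arch_leave_lazy_mmu_mode();
	mutex_unlock(&pgtable_split_lock);

	return ret;
}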
> Hopefully this can be mitigated by some new development, for example,
> ROX cache.
> 2. Take mutex lock twice and do lazy mmu twice. I think it is easy to
> resolve as I suggested in patch #3.
Yep, I've fixed this, as per above.
> 3. Walk every PTE and call split on every PTE for repainting. It is not very
> efficient, but may be ok for repainting since it is just called once at boot time.
I've added an optimization patch to avoid splitting to contpmd/contpte in this
case and we can additionally avoid visiting the pte table at all if it was
created by splitting a pmd.
>
> I don't think these concerns are major blockers IMHO. Anyway let's see what
> Catalin and/or Will think about this.
I've additionally done some more cleanup; most notably I've removed the
bbml2_ptes[] array to save those 5 pages. Instead, I'm reusing the same flag
that kpti uses.
Functional tests all look good. I'll run some benchmarks overnight and if all
looks good, I'll post by the end of the week.
Hopefully Catalin or Will can provide some comments for this version.
Thanks,
Ryan
>
> Regards,
> Yang
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [RFC PATCH v6 1/4] arm64: Enable permission change on arm64 kernel block mappings
2025-08-05 8:13 ` [RFC PATCH v6 1/4] arm64: Enable permission change on arm64 kernel block mappings Ryan Roberts
@ 2025-08-28 16:26 ` Catalin Marinas
2025-08-29 9:23 ` Ryan Roberts
0 siblings, 1 reply; 22+ messages in thread
From: Catalin Marinas @ 2025-08-28 16:26 UTC (permalink / raw)
To: Ryan Roberts
Cc: Yang Shi, will, akpm, Miko.Lenczewski, dev.jain, scott, cl,
linux-arm-kernel, linux-kernel
On Tue, Aug 05, 2025 at 09:13:46AM +0100, Ryan Roberts wrote:
> From: Dev Jain <dev.jain@arm.com>
>
> This patch paves the path to enable huge mappings in vmalloc space and
> linear map space by default on arm64. For this we must ensure that we
> can handle any permission games on the kernel (init_mm) pagetable.
> Currently, __change_memory_common() uses apply_to_page_range() which
> does not support changing permissions for block mappings. We attempt to
> move away from this by using the pagewalk API, similar to what riscv
> does right now; however, it is the responsibility of the caller to
> ensure that we do not pass a range overlapping a partial block mapping
> or cont mapping; in such a case, the system must be able to support
> range splitting.
>
> This patch is tied with Yang Shi's attempt [1] at using huge mappings in
> the linear mapping in case the system supports BBML2, in which case we
> will be able to split the linear mapping if needed without
> break-before-make. Thus, Yang's series, IIUC, will be one such user of
> my patch; suppose we are changing permissions on a range of the linear
> map backed by PMD-hugepages, then the sequence of operations should look
> like the following:
>
> split_range(start)
> split_range(end);
> __change_memory_common(start, end);
>
> However, this patch can be used independently of Yang's; since currently
> permission games are being played only on pte mappings (due to
> apply_to_page_range not supporting otherwise), this patch provides the
> mechanism for enabling huge mappings for various kernel mappings like
> linear map and vmalloc.
[...]
I think some of this text needs to be trimmed down, avoiding references to
other series if they are merged at the same time.
> diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
> index 682472c15495..8212e8f2d2d5 100644
> --- a/include/linux/pagewalk.h
> +++ b/include/linux/pagewalk.h
> @@ -134,6 +134,9 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
> int walk_kernel_page_table_range(unsigned long start,
> unsigned long end, const struct mm_walk_ops *ops,
> pgd_t *pgd, void *private);
> +int walk_kernel_page_table_range_lockless(unsigned long start,
> + unsigned long end, const struct mm_walk_ops *ops,
> + void *private);
> int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
> unsigned long end, const struct mm_walk_ops *ops,
> void *private);
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index 648038247a8d..18a675ab87cf 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -633,6 +633,30 @@ int walk_kernel_page_table_range(unsigned long start, unsigned long end,
> return walk_pgd_range(start, end, &walk);
> }
>
> +/*
> + * Use this function to walk the kernel page tables locklessly. It should be
> + * guaranteed that the caller has exclusive access over the range they are
> + * operating on - that there should be no concurrent access, for example,
> + * changing permissions for vmalloc objects.
> + */
> +int walk_kernel_page_table_range_lockless(unsigned long start, unsigned long end,
> + const struct mm_walk_ops *ops, void *private)
> +{
> + struct mm_walk walk = {
> + .ops = ops,
> + .mm = &init_mm,
> + .private = private,
> + .no_vma = true
> + };
> +
> + if (start >= end)
> + return -EINVAL;
> + if (!check_ops_valid(ops))
> + return -EINVAL;
> +
> + return walk_pgd_range(start, end, &walk);
> +}
More of a nit: we could change walk_kernel_page_table_range() to call
this function after checking the mm lock as they look nearly identical.
The existing function has a pgd argument but it doesn't seem to be used
anywhere and could be removed (or add it here for consistency).
Either way, the patch looks fine.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [RFC PATCH v6 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list
2025-08-05 8:13 ` [RFC PATCH v6 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Ryan Roberts
@ 2025-08-28 16:29 ` Catalin Marinas
0 siblings, 0 replies; 22+ messages in thread
From: Catalin Marinas @ 2025-08-28 16:29 UTC (permalink / raw)
To: Ryan Roberts
Cc: Yang Shi, will, akpm, Miko.Lenczewski, dev.jain, scott, cl,
linux-arm-kernel, linux-kernel
On Tue, Aug 05, 2025 at 09:13:47AM +0100, Ryan Roberts wrote:
> From: Yang Shi <yang@os.amperecomputing.com>
>
> AmpereOne supports BBML2 without conflict abort, add to the allow list.
>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [RFC PATCH v6 3/4] arm64: mm: support large block mapping when rodata=full
2025-08-05 8:13 ` [RFC PATCH v6 3/4] arm64: mm: support large block mapping when rodata=full Ryan Roberts
2025-08-05 17:59 ` Yang Shi
@ 2025-08-28 17:09 ` Catalin Marinas
2025-08-28 17:45 ` Ryan Roberts
1 sibling, 1 reply; 22+ messages in thread
From: Catalin Marinas @ 2025-08-28 17:09 UTC (permalink / raw)
To: Ryan Roberts
Cc: Yang Shi, will, akpm, Miko.Lenczewski, dev.jain, scott, cl,
linux-arm-kernel, linux-kernel
On Tue, Aug 05, 2025 at 09:13:48AM +0100, Ryan Roberts wrote:
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index abd9725796e9..f6cd79287024 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
[...]
> @@ -640,6 +857,16 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>
> #endif /* CONFIG_KFENCE */
>
> +static inline bool force_pte_mapping(void)
> +{
> + bool bbml2 = system_capabilities_finalized() ?
> + system_supports_bbml2_noabort() : bbml2_noabort_available();
> +
> + return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
> + is_realm_world())) ||
> + debug_pagealloc_enabled();
> +}
> +
> static void __init map_mem(pgd_t *pgdp)
> {
> static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
> @@ -665,7 +892,7 @@ static void __init map_mem(pgd_t *pgdp)
>
> early_kfence_pool = arm64_kfence_alloc_pool();
>
> - if (can_set_direct_map())
> + if (force_pte_mapping())
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
> /*
> @@ -1367,7 +1594,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>
> VM_BUG_ON(!mhp_range_allowed(start, size, true));
>
> - if (can_set_direct_map())
> + if (force_pte_mapping())
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
> __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
Not sure this works in a heterogeneous configuration.
bbml2_noabort_available() only checks the current/boot CPU which may
return true but if secondary CPUs don't have the feature, it results in
system_supports_bbml2_noabort() being false with force_pte_mapping()
also false in the early map_mem() calls.
I don't see a nice solution other than making BBML2 no-abort a boot CPU
feature.
--
Catalin
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [RFC PATCH v6 3/4] arm64: mm: support large block mapping when rodata=full
2025-08-28 17:09 ` Catalin Marinas
@ 2025-08-28 17:45 ` Ryan Roberts
2025-08-28 18:48 ` Catalin Marinas
0 siblings, 1 reply; 22+ messages in thread
From: Ryan Roberts @ 2025-08-28 17:45 UTC (permalink / raw)
To: Catalin Marinas
Cc: Yang Shi, will, akpm, Miko.Lenczewski, dev.jain, scott, cl,
linux-arm-kernel, linux-kernel
On 28/08/2025 18:09, Catalin Marinas wrote:
> On Tue, Aug 05, 2025 at 09:13:48AM +0100, Ryan Roberts wrote:
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index abd9725796e9..f6cd79287024 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
> [...]
>> @@ -640,6 +857,16 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>>
>> #endif /* CONFIG_KFENCE */
>>
>> +static inline bool force_pte_mapping(void)
>> +{
>> + bool bbml2 = system_capabilities_finalized() ?
>> + system_supports_bbml2_noabort() : bbml2_noabort_available();
>> +
>> + return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
>> + is_realm_world())) ||
>> + debug_pagealloc_enabled();
>> +}
>> +
>> static void __init map_mem(pgd_t *pgdp)
>> {
>> static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
>> @@ -665,7 +892,7 @@ static void __init map_mem(pgd_t *pgdp)
>>
>> early_kfence_pool = arm64_kfence_alloc_pool();
>>
>> - if (can_set_direct_map())
>> + if (force_pte_mapping())
>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>
>> /*
>> @@ -1367,7 +1594,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>
>> VM_BUG_ON(!mhp_range_allowed(start, size, true));
>>
>> - if (can_set_direct_map())
>> + if (force_pte_mapping())
>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>
>> __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
>
> Not sure this works in a heterogeneous configuration.
> bbml2_noabort_available() only checks the current/boot CPU which may
> return true but if secondary CPUs don't have the feature, it results in
> system_supports_bbml2_noabort() being false with force_pte_mapping()
> also false in the early map_mem() calls.
The intent is that we eagerly create a block-mapped linear map at boot if the
boot CPU supports BBML2. Once we have determined that a secondary CPU doesn't
support BBML2 (and therefore the system doesn't support it), we then repaint
the linear map using page mappings.
The repainting mechanism is added in the next patch.
I've tested this with heterogeneous configs and I'm confident it does work.
FYI, I actually have a new version of this ready to go - I was hoping to post
tomorrow, subject to performance results. I thought you were implying in a
previous mail that you weren't interested in reviewing until it was based on top
of an -rc. Perhaps I misunderstood. Let me know if you want me to hold off on
posting that given you are now reviewing this version.
Thanks,
Ryan
>
> I don't see a nice solution other than making BBML2 no-abort a boot CPU
> feature.
>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [RFC PATCH v6 3/4] arm64: mm: support large block mapping when rodata=full
2025-08-28 17:45 ` Ryan Roberts
@ 2025-08-28 18:48 ` Catalin Marinas
0 siblings, 0 replies; 22+ messages in thread
From: Catalin Marinas @ 2025-08-28 18:48 UTC (permalink / raw)
To: Ryan Roberts
Cc: Yang Shi, will, akpm, Miko.Lenczewski, dev.jain, scott, cl,
linux-arm-kernel, linux-kernel
On Thu, Aug 28, 2025 at 06:45:32PM +0100, Ryan Roberts wrote:
> On 28/08/2025 18:09, Catalin Marinas wrote:
> > On Tue, Aug 05, 2025 at 09:13:48AM +0100, Ryan Roberts wrote:
> >> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> >> index abd9725796e9..f6cd79287024 100644
> >> --- a/arch/arm64/mm/mmu.c
> >> +++ b/arch/arm64/mm/mmu.c
> > [...]
> >> @@ -640,6 +857,16 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
> >>
> >> #endif /* CONFIG_KFENCE */
> >>
> >> +static inline bool force_pte_mapping(void)
> >> +{
> >> + bool bbml2 = system_capabilities_finalized() ?
> >> + system_supports_bbml2_noabort() : bbml2_noabort_available();
> >> +
> >> + return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
> >> + is_realm_world())) ||
> >> + debug_pagealloc_enabled();
> >> +}
> >> +
> >> static void __init map_mem(pgd_t *pgdp)
> >> {
> >> static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
> >> @@ -665,7 +892,7 @@ static void __init map_mem(pgd_t *pgdp)
> >>
> >> early_kfence_pool = arm64_kfence_alloc_pool();
> >>
> >> - if (can_set_direct_map())
> >> + if (force_pte_mapping())
> >> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> >>
> >> /*
> >> @@ -1367,7 +1594,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
> >>
> >> VM_BUG_ON(!mhp_range_allowed(start, size, true));
> >>
> >> - if (can_set_direct_map())
> >> + if (force_pte_mapping())
> >> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> >>
> >> __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
> >
> > Not sure this works in a heterogeneous configuration.
> > bbml2_noabort_available() only checks the current/boot CPU which may
> > return true but if secondary CPUs don't have the feature, it results in
> > system_supports_bbml2_noabort() being false with force_pte_mapping()
> > also false in the early map_mem() calls.
>
> The intent is that we eagerly create a block-mapped linear map at boot if the
> boot CPU supports BBML2. Once we have determined that a secondary CPU doesn't
> support BBML2 (and therefore the system doesn't support it), we then repaint
> the linear map using page mappings.
>
> The repainting mechanism is added in the next patch.
Ah, I haven't reached that patch yet ;).
> I've tested this with heterogeneous configs and I'm confident it does work.
Great. The downside is that such a configuration is rare, the logic is
fairly complex, and it won't get tested much. Hardware with such a
configuration will take a slight hit on boot time.
I don't remember the discussions around Miko's patches adding the BBML2
feature - do we have such heterogeneous configurations or are they just
theoretical at this stage?
> FYI, I actually have a new version of this ready to go - I was hoping to post
> tomorrow, subject to performance results. I thought you were implying in a
> previous mail that you weren't interested in reviewing until it was based on top
> of an -rc. Perhaps I misunderstood. Let me know if you want me to hold off on
> posting that given you are now reviewing this version.
In general I prefer patches on top of a fixed -rc, especially if I need
to apply them locally. But I was wondering if you are waiting for review
feedback before rebasing, so I had a quick look ;).
Please post a new version. I'll have a look at that since you were
planning to update a few bits anyway.
--
Catalin
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [RFC PATCH v6 1/4] arm64: Enable permission change on arm64 kernel block mappings
2025-08-28 16:26 ` Catalin Marinas
@ 2025-08-29 9:23 ` Ryan Roberts
0 siblings, 0 replies; 22+ messages in thread
From: Ryan Roberts @ 2025-08-29 9:23 UTC (permalink / raw)
To: Catalin Marinas
Cc: Yang Shi, will, akpm, Miko.Lenczewski, dev.jain, scott, cl,
linux-arm-kernel, linux-kernel
On 28/08/2025 17:26, Catalin Marinas wrote:
> On Tue, Aug 05, 2025 at 09:13:46AM +0100, Ryan Roberts wrote:
>> From: Dev Jain <dev.jain@arm.com>
>>
>> This patch paves the path to enable huge mappings in vmalloc space and
>> linear map space by default on arm64. For this we must ensure that we
>> can handle any permission games on the kernel (init_mm) pagetable.
>> Currently, __change_memory_common() uses apply_to_page_range() which
>> does not support changing permissions for block mappings. We attempt to
>> move away from this by using the pagewalk API, similar to what riscv
>> does right now; however, it is the responsibility of the caller to
>> ensure that we do not pass a range overlapping a partial block mapping
>> or cont mapping; in such a case, the system must be able to support
>> range splitting.
>>
>> This patch is tied with Yang Shi's attempt [1] at using huge mappings in
>> the linear mapping in case the system supports BBML2, in which case we
>> will be able to split the linear mapping if needed without
>> break-before-make. Thus, Yang's series, IIUC, will be one such user of
>> my patch; suppose we are changing permissions on a range of the linear
>> map backed by PMD-hugepages, then the sequence of operations should look
>> like the following:
>>
>> split_range(start)
>> split_range(end);
>> __change_memory_common(start, end);
>>
>> However, this patch can be used independently of Yang's: permission changes
>> are currently performed only on pte mappings (because apply_to_page_range()
>> supports nothing else), and this patch provides the mechanism needed to
>> enable huge mappings for various kernel mappings such as the linear map and
>> vmalloc.
> [...]
>
> I think some of this text needs to be trimmed down, avoid references to
> other series if they are merged at the same time.
>
>> diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
>> index 682472c15495..8212e8f2d2d5 100644
>> --- a/include/linux/pagewalk.h
>> +++ b/include/linux/pagewalk.h
>> @@ -134,6 +134,9 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
>> int walk_kernel_page_table_range(unsigned long start,
>> unsigned long end, const struct mm_walk_ops *ops,
>> pgd_t *pgd, void *private);
>> +int walk_kernel_page_table_range_lockless(unsigned long start,
>> + unsigned long end, const struct mm_walk_ops *ops,
>> + void *private);
>> int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
>> unsigned long end, const struct mm_walk_ops *ops,
>> void *private);
>> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
>> index 648038247a8d..18a675ab87cf 100644
>> --- a/mm/pagewalk.c
>> +++ b/mm/pagewalk.c
>> @@ -633,6 +633,30 @@ int walk_kernel_page_table_range(unsigned long start, unsigned long end,
>> return walk_pgd_range(start, end, &walk);
>> }
>>
>> +/*
>> + * Use this function to walk the kernel page tables locklessly. It should be
>> + * guaranteed that the caller has exclusive access over the range they are
>> + * operating on - that there should be no concurrent access, for example,
>> + * changing permissions for vmalloc objects.
>> + */
>> +int walk_kernel_page_table_range_lockless(unsigned long start, unsigned long end,
>> + const struct mm_walk_ops *ops, void *private)
>> +{
>> + struct mm_walk walk = {
>> + .ops = ops,
>> + .mm = &init_mm,
>> + .private = private,
>> + .no_vma = true
>> + };
>> +
>> + if (start >= end)
>> + return -EINVAL;
>> + if (!check_ops_valid(ops))
>> + return -EINVAL;
>> +
>> + return walk_pgd_range(start, end, &walk);
>> +}
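
As a usage illustration (not part of the patch): with the signature as posted
here, a caller that has exclusive access to a kernel VA range could walk it
like this, e.g. to count present ptes:

	static int count_pte(pte_t *pte, unsigned long addr, unsigned long next,
			     struct mm_walk *walk)
	{
		unsigned long *count = walk->private;

		if (pte_present(ptep_get(pte)))
			(*count)++;
		return 0;
	}

	static const struct mm_walk_ops count_ops = {
		.pte_entry	= count_pte,
	};

	/* Caller must guarantee nothing else touches [start, end) concurrently. */
	static unsigned long count_mapped_ptes(unsigned long start, unsigned long end)
	{
		unsigned long count = 0;

		walk_kernel_page_table_range_lockless(start, end, &count_ops, &count);
		return count;
	}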
>
> More of a nit: we could change walk_kernel_page_table_range() to call
> this function after checking the mm lock as they look nearly identical.
> The existing function has a pgd argument but it doesn't seem to be used
> anywhere and could be removed (or add it here for consistency).
Good point. I've done this refactoring in my new version, adding pgd to the
_lockless() variant, since it's used by x86. Let's see what Lorenzo and co.
think in the context of the next version (incoming shortly).
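
To make that concrete, the refactor could end up looking roughly like the below
(a sketch only; the argument order and other details in the next version may
differ):

	int walk_kernel_page_table_range_lockless(unsigned long start,
			unsigned long end, const struct mm_walk_ops *ops,
			pgd_t *pgd, void *private)
	{
		struct mm_walk walk = {
			.ops		= ops,
			.mm		= &init_mm,
			.pgd		= pgd,
			.private	= private,
			.no_vma		= true
		};

		if (start >= end)
			return -EINVAL;
		if (!check_ops_valid(ops))
			return -EINVAL;

		return walk_pgd_range(start, end, &walk);
	}

	int walk_kernel_page_table_range(unsigned long start, unsigned long end,
			const struct mm_walk_ops *ops, pgd_t *pgd, void *private)
	{
		/* The only difference from the lockless variant: assert the lock. */
		mmap_assert_locked(&init_mm);

		return walk_kernel_page_table_range_lockless(start, end, ops, pgd,
							     private);
	}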
>
> Either way, the patch looks fine.
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
^ permalink raw reply [flat|nested] 22+ messages in thread
end of thread, other threads: [~2025-08-29 11:57 UTC | newest]
Thread overview: 22+ messages
2025-08-05 8:13 [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
2025-08-05 8:13 ` [RFC PATCH v6 1/4] arm64: Enable permission change on arm64 kernel block mappings Ryan Roberts
2025-08-28 16:26 ` Catalin Marinas
2025-08-29 9:23 ` Ryan Roberts
2025-08-05 8:13 ` [RFC PATCH v6 2/4] arm64: cpufeature: add AmpereOne to BBML2 allow list Ryan Roberts
2025-08-28 16:29 ` Catalin Marinas
2025-08-05 8:13 ` [RFC PATCH v6 3/4] arm64: mm: support large block mapping when rodata=full Ryan Roberts
2025-08-05 17:59 ` Yang Shi
2025-08-06 7:57 ` Ryan Roberts
2025-08-07 0:19 ` Yang Shi
2025-08-28 17:09 ` Catalin Marinas
2025-08-28 17:45 ` Ryan Roberts
2025-08-28 18:48 ` Catalin Marinas
2025-08-05 8:13 ` [RFC PATCH v6 4/4] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs Ryan Roberts
2025-08-05 18:14 ` Yang Shi
2025-08-06 8:15 ` Ryan Roberts
2025-08-07 0:29 ` Yang Shi
2025-08-05 8:16 ` [RFC PATCH v6 0/4] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
2025-08-05 14:39 ` Catalin Marinas
2025-08-05 14:52 ` Ryan Roberts
2025-08-05 18:37 ` Yang Shi
2025-08-27 15:00 ` Ryan Roberts