* [PATCH v7 1/6] arm64: Enable permission change on arm64 kernel block mappings
2025-08-29 11:52 [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
@ 2025-08-29 11:52 ` Ryan Roberts
[not found] ` <7705c29b-4f08-4b56-aab3-024795ee9124@huawei.com>
2025-08-29 11:52 ` [PATCH v7 2/6] arm64: cpufeature: add AmpereOne to BBML2 allow list Ryan Roberts
` (5 subsequent siblings)
6 siblings, 1 reply; 51+ messages in thread
From: Ryan Roberts @ 2025-08-29 11:52 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel, linux-mm
From: Dev Jain <dev.jain@arm.com>
This patch paves the way to enabling huge mappings in vmalloc space and
linear map space by default on arm64. For this we must ensure that we
can handle any permission games on the kernel (init_mm) pagetable.
Previously, __change_memory_common() used apply_to_page_range() which
does not support changing permissions for block mappings. We move away
from this by using the pagewalk API, similar to what riscv does right
now. It is the responsibility of the caller to ensure that the range
over which permissions are being changed falls on leaf mapping
boundaries. For systems with BBML2, this will be handled in future
patches by dynamically splitting the mappings when required.
Unlike apply_to_page_range(), the pagewalk API currently enforces the
init_mm.mmap_lock to be held. To avoid the unnecessary bottleneck of the
mmap_lock for our usecase, this patch extends this generic API to be
used locklessly, so as to retain the existing behaviour for changing
permissions. Apart from this reason, it is noted at [1] that KFENCE can
manipulate kernel pgtable entries during softirqs. It does this by
calling set_memory_valid() -> __change_memory_common(). This being a
non-sleepable context, we cannot take the init_mm mmap lock.
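For context, the arm64 KFENCE hook in question is roughly the following
(from arch/arm64/include/asm/kfence.h, reproduced here for illustration;
it may be called from softirq context):

  static inline bool kfence_protect_page(unsigned long addr, bool protect)
  {
  	/* Toggle PTE_VALID on the single guarded page. */
  	set_memory_valid(addr, 1, !protect);

  	return true;
  }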
Add comments to highlight the conditions under which we can use the
lockless variant - no underlying VMA, and the user having exclusive
control over the range, thus guaranteeing no concurrent access.
We require that the start and end of a given range do not partially
overlap block mappings, or cont mappings. Return -EINVAL in case a
partial block mapping is detected in any of the PGD/P4D/PUD/PMD levels;
add a corresponding comment in update_range_prot() to warn that
eliminating such a condition is the responsibility of the caller.
Note that the pte level callback may change permissions for a whole
contpte block, and that will be done one pte at a time, as opposed to an
atomic operation for the block mappings. This is fine as any access will
decode either the old or the new permission until the TLBI.
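(Condensed from the pte_entry callback added in the diff below, each pte in
the contpte block is simply updated in place:

  pte_t val = __ptep_get(pte);
  val = __pte(set_pageattr_masks(pte_val(val), walk));
  __set_pte(pte, val);

with the TLBI issued later by the caller.)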
apply_to_page_range() currently performs all pte level callbacks while
in lazy mmu mode. Since arm64 can optimize performance by batching
barriers when modifying kernel pgtables in lazy mmu mode, we would like
to continue to benefit from this optimisation. Unfortunately
walk_kernel_page_table_range() does not use lazy mmu mode. However,
since the pagewalk framework is not allocating any memory, we can safely
bracket the whole operation inside lazy mmu mode ourselves. Therefore,
wrap the call to walk_kernel_page_table_range() with the lazy MMU
helpers.
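Condensed from the diff below, the resulting pattern in update_range_prot()
is:

  arch_enter_lazy_mmu_mode();
  ret = walk_kernel_page_table_range_lockless(start, start + size,
  					      &pageattr_ops, NULL, &data);
  arch_leave_lazy_mmu_mode();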
Link: https://lore.kernel.org/linux-arm-kernel/89d0ad18-4772-4d8f-ae8a-7c48d26a927e@arm.com/ [1]
Signed-off-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
arch/arm64/mm/pageattr.c | 153 +++++++++++++++++++++++++++++++--------
include/linux/pagewalk.h | 3 +
mm/pagewalk.c | 36 ++++++---
3 files changed, 149 insertions(+), 43 deletions(-)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 04d4a8f676db..6da8cbc32f46 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -8,6 +8,7 @@
#include <linux/mem_encrypt.h>
#include <linux/sched.h>
#include <linux/vmalloc.h>
+#include <linux/pagewalk.h>
#include <asm/cacheflush.h>
#include <asm/pgtable-prot.h>
@@ -20,6 +21,99 @@ struct page_change_data {
pgprot_t clear_mask;
};
+static ptdesc_t set_pageattr_masks(ptdesc_t val, struct mm_walk *walk)
+{
+ struct page_change_data *masks = walk->private;
+
+ val &= ~(pgprot_val(masks->clear_mask));
+ val |= (pgprot_val(masks->set_mask));
+
+ return val;
+}
+
+static int pageattr_pgd_entry(pgd_t *pgd, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pgd_t val = pgdp_get(pgd);
+
+ if (pgd_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != PGDIR_SIZE))
+ return -EINVAL;
+ val = __pgd(set_pageattr_masks(pgd_val(val), walk));
+ set_pgd(pgd, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_p4d_entry(p4d_t *p4d, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ p4d_t val = p4dp_get(p4d);
+
+ if (p4d_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != P4D_SIZE))
+ return -EINVAL;
+ val = __p4d(set_pageattr_masks(p4d_val(val), walk));
+ set_p4d(p4d, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_pud_entry(pud_t *pud, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pud_t val = pudp_get(pud);
+
+ if (pud_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != PUD_SIZE))
+ return -EINVAL;
+ val = __pud(set_pageattr_masks(pud_val(val), walk));
+ set_pud(pud, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_pmd_entry(pmd_t *pmd, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pmd_t val = pmdp_get(pmd);
+
+ if (pmd_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != PMD_SIZE))
+ return -EINVAL;
+ val = __pmd(set_pageattr_masks(pmd_val(val), walk));
+ set_pmd(pmd, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_pte_entry(pte_t *pte, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pte_t val = __ptep_get(pte);
+
+ val = __pte(set_pageattr_masks(pte_val(val), walk));
+ __set_pte(pte, val);
+
+ return 0;
+}
+
+static const struct mm_walk_ops pageattr_ops = {
+ .pgd_entry = pageattr_pgd_entry,
+ .p4d_entry = pageattr_p4d_entry,
+ .pud_entry = pageattr_pud_entry,
+ .pmd_entry = pageattr_pmd_entry,
+ .pte_entry = pageattr_pte_entry,
+};
+
bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED);
bool can_set_direct_map(void)
@@ -37,32 +131,35 @@ bool can_set_direct_map(void)
arm64_kfence_can_set_direct_map() || is_realm_world();
}
-static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
+static int update_range_prot(unsigned long start, unsigned long size,
+ pgprot_t set_mask, pgprot_t clear_mask)
{
- struct page_change_data *cdata = data;
- pte_t pte = __ptep_get(ptep);
+ struct page_change_data data;
+ int ret;
- pte = clear_pte_bit(pte, cdata->clear_mask);
- pte = set_pte_bit(pte, cdata->set_mask);
+ data.set_mask = set_mask;
+ data.clear_mask = clear_mask;
- __set_pte(ptep, pte);
- return 0;
+ arch_enter_lazy_mmu_mode();
+
+ /*
+ * The caller must ensure that the range we are operating on does not
+ * partially overlap a block mapping, or a cont mapping. Any such case
+ * must be eliminated by splitting the mapping.
+ */
+ ret = walk_kernel_page_table_range_lockless(start, start + size,
+ &pageattr_ops, NULL, &data);
+ arch_leave_lazy_mmu_mode();
+
+ return ret;
}
-/*
- * This function assumes that the range is mapped with PAGE_SIZE pages.
- */
static int __change_memory_common(unsigned long start, unsigned long size,
- pgprot_t set_mask, pgprot_t clear_mask)
+ pgprot_t set_mask, pgprot_t clear_mask)
{
- struct page_change_data data;
int ret;
- data.set_mask = set_mask;
- data.clear_mask = clear_mask;
-
- ret = apply_to_page_range(&init_mm, start, size, change_page_range,
- &data);
+ ret = update_range_prot(start, size, set_mask, clear_mask);
/*
* If the memory is being made valid without changing any other bits
@@ -174,32 +271,26 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
int set_direct_map_invalid_noflush(struct page *page)
{
- struct page_change_data data = {
- .set_mask = __pgprot(0),
- .clear_mask = __pgprot(PTE_VALID),
- };
+ pgprot_t clear_mask = __pgprot(PTE_VALID);
+ pgprot_t set_mask = __pgprot(0);
if (!can_set_direct_map())
return 0;
- return apply_to_page_range(&init_mm,
- (unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ return update_range_prot((unsigned long)page_address(page),
+ PAGE_SIZE, set_mask, clear_mask);
}
int set_direct_map_default_noflush(struct page *page)
{
- struct page_change_data data = {
- .set_mask = __pgprot(PTE_VALID | PTE_WRITE),
- .clear_mask = __pgprot(PTE_RDONLY),
- };
+ pgprot_t set_mask = __pgprot(PTE_VALID | PTE_WRITE);
+ pgprot_t clear_mask = __pgprot(PTE_RDONLY);
if (!can_set_direct_map())
return 0;
- return apply_to_page_range(&init_mm,
- (unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ return update_range_prot((unsigned long)page_address(page),
+ PAGE_SIZE, set_mask, clear_mask);
}
static int __set_memory_enc_dec(unsigned long addr,
diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index 682472c15495..88e18615dd72 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -134,6 +134,9 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
int walk_kernel_page_table_range(unsigned long start,
unsigned long end, const struct mm_walk_ops *ops,
pgd_t *pgd, void *private);
+int walk_kernel_page_table_range_lockless(unsigned long start,
+ unsigned long end, const struct mm_walk_ops *ops,
+ pgd_t *pgd, void *private);
int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
unsigned long end, const struct mm_walk_ops *ops,
void *private);
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 648038247a8d..936689d8bcac 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -606,10 +606,32 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
int walk_kernel_page_table_range(unsigned long start, unsigned long end,
const struct mm_walk_ops *ops, pgd_t *pgd, void *private)
{
- struct mm_struct *mm = &init_mm;
+ /*
+ * Kernel intermediate page tables are usually not freed, so the mmap
+ * read lock is sufficient. But there are some exceptions.
+ * E.g. memory hot-remove. In which case, the mmap lock is insufficient
+ * to prevent the intermediate kernel pages tables belonging to the
+ * specified address range from being freed. The caller should take
+ * other actions to prevent this race.
+ */
+ mmap_assert_locked(&init_mm);
+
+ return walk_kernel_page_table_range_lockless(start, end, ops, pgd,
+ private);
+}
+
+/*
+ * Use this function to walk the kernel page tables locklessly. It should be
+ * guaranteed that the caller has exclusive access over the range they are
+ * operating on - that there should be no concurrent access, for example,
+ * changing permissions for vmalloc objects.
+ */
+int walk_kernel_page_table_range_lockless(unsigned long start, unsigned long end,
+ const struct mm_walk_ops *ops, pgd_t *pgd, void *private)
+{
struct mm_walk walk = {
.ops = ops,
- .mm = mm,
+ .mm = &init_mm,
.pgd = pgd,
.private = private,
.no_vma = true
@@ -620,16 +642,6 @@ int walk_kernel_page_table_range(unsigned long start, unsigned long end,
if (!check_ops_valid(ops))
return -EINVAL;
- /*
- * Kernel intermediate page tables are usually not freed, so the mmap
- * read lock is sufficient. But there are some exceptions.
- * E.g. memory hot-remove. In which case, the mmap lock is insufficient
- * to prevent the intermediate kernel pages tables belonging to the
- * specified address range from being freed. The caller should take
- * other actions to prevent this race.
- */
- mmap_assert_locked(mm);
-
return walk_pgd_range(start, end, &walk);
}
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v7 2/6] arm64: cpufeature: add AmpereOne to BBML2 allow list
2025-08-29 11:52 [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
2025-08-29 11:52 ` [PATCH v7 1/6] arm64: Enable permission change on arm64 kernel block mappings Ryan Roberts
@ 2025-08-29 11:52 ` Ryan Roberts
2025-08-29 22:08 ` Yang Shi
2025-09-03 17:24 ` Catalin Marinas
2025-08-29 11:52 ` [PATCH v7 3/6] arm64: mm: support large block mapping when rodata=full Ryan Roberts
` (4 subsequent siblings)
6 siblings, 2 replies; 51+ messages in thread
From: Ryan Roberts @ 2025-08-29 11:52 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel, linux-mm
From: Yang Shi <yang@os.amperecomputing.com>
AmpereOne supports BBML2 without conflict abort; add it to the allow list.
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/kernel/cpufeature.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9ad065f15f1d..b93f4ee57176 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2234,6 +2234,8 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
static const struct midr_range supports_bbml2_noabort_list[] = {
MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
+ MIDR_ALL_VERSIONS(MIDR_AMPERE1),
+ MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
{}
};
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread
* Re: [PATCH v7 2/6] arm64: cpufeature: add AmpereOne to BBML2 allow list
2025-08-29 11:52 ` [PATCH v7 2/6] arm64: cpufeature: add AmpereOne to BBML2 allow list Ryan Roberts
@ 2025-08-29 22:08 ` Yang Shi
2025-09-04 11:07 ` Ryan Roberts
2025-09-03 17:24 ` Catalin Marinas
1 sibling, 1 reply; 51+ messages in thread
From: Yang Shi @ 2025-08-29 22:08 UTC (permalink / raw)
To: Ryan Roberts, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, Dev Jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 8/29/25 4:52 AM, Ryan Roberts wrote:
> From: Yang Shi <yang@os.amperecomputing.com>
>
> AmpereOne supports BBML2 without conflict abort, add to the allow list.
>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
I saw Catalin gave his Reviewed-by to v6 of this patch; I think we can keep it.
Yang
> ---
> arch/arm64/kernel/cpufeature.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 9ad065f15f1d..b93f4ee57176 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2234,6 +2234,8 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
> static const struct midr_range supports_bbml2_noabort_list[] = {
> MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
> MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
> + MIDR_ALL_VERSIONS(MIDR_AMPERE1),
> + MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
> {}
> };
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 2/6] arm64: cpufeature: add AmpereOne to BBML2 allow list
2025-08-29 22:08 ` Yang Shi
@ 2025-09-04 11:07 ` Ryan Roberts
0 siblings, 0 replies; 51+ messages in thread
From: Ryan Roberts @ 2025-09-04 11:07 UTC (permalink / raw)
To: Yang Shi, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, Dev Jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 29/08/2025 23:08, Yang Shi wrote:
>
>
> On 8/29/25 4:52 AM, Ryan Roberts wrote:
>> From: Yang Shi <yang@os.amperecomputing.com>
>>
>> AmpereOne supports BBML2 without conflict abort, add to the allow list.
>>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org>
>> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
>
> I saw Catalin gave Reviewed-by to v6 of this patch, I think we can keep it.
Sorry, I missed that. I see Catalin has Rb'ed it again.
>
> Yang
>
>> ---
>> arch/arm64/kernel/cpufeature.c | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 9ad065f15f1d..b93f4ee57176 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -2234,6 +2234,8 @@ static bool has_bbml2_noabort(const struct
>> arm64_cpu_capabilities *caps, int sco
>> static const struct midr_range supports_bbml2_noabort_list[] = {
>> MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
>> MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
>> + MIDR_ALL_VERSIONS(MIDR_AMPERE1),
>> + MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
>> {}
>> };
>>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 2/6] arm64: cpufeature: add AmpereOne to BBML2 allow list
2025-08-29 11:52 ` [PATCH v7 2/6] arm64: cpufeature: add AmpereOne to BBML2 allow list Ryan Roberts
2025-08-29 22:08 ` Yang Shi
@ 2025-09-03 17:24 ` Catalin Marinas
2025-09-04 0:49 ` Yang Shi
1 sibling, 1 reply; 51+ messages in thread
From: Catalin Marinas @ 2025-09-03 17:24 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl, linux-arm-kernel,
linux-kernel, linux-mm
On Fri, Aug 29, 2025 at 12:52:43PM +0100, Ryan Roberts wrote:
> From: Yang Shi <yang@os.amperecomputing.com>
>
> AmpereOne supports BBML2 without conflict abort, add to the allow list.
>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Here it is again:
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 2/6] arm64: cpufeature: add AmpereOne to BBML2 allow list
2025-09-03 17:24 ` Catalin Marinas
@ 2025-09-04 0:49 ` Yang Shi
0 siblings, 0 replies; 51+ messages in thread
From: Yang Shi @ 2025-09-04 0:49 UTC (permalink / raw)
To: Catalin Marinas, Ryan Roberts
Cc: Will Deacon, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Ard Biesheuvel, Dev Jain, scott, cl, linux-arm-kernel,
linux-kernel, linux-mm
On 9/3/25 10:24 AM, Catalin Marinas wrote:
> On Fri, Aug 29, 2025 at 12:52:43PM +0100, Ryan Roberts wrote:
>> From: Yang Shi <yang@os.amperecomputing.com>
>>
>> AmpereOne supports BBML2 without conflict abort, add to the allow list.
>>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org>
>> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
> Here it is again:
>
> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Thank you!
Yang
^ permalink raw reply [flat|nested] 51+ messages in thread
* [PATCH v7 3/6] arm64: mm: support large block mapping when rodata=full
2025-08-29 11:52 [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
2025-08-29 11:52 ` [PATCH v7 1/6] arm64: Enable permission change on arm64 kernel block mappings Ryan Roberts
2025-08-29 11:52 ` [PATCH v7 2/6] arm64: cpufeature: add AmpereOne to BBML2 allow list Ryan Roberts
@ 2025-08-29 11:52 ` Ryan Roberts
2025-09-03 19:15 ` Catalin Marinas
2025-09-04 11:15 ` Ryan Roberts
2025-08-29 11:52 ` [PATCH v7 4/6] arm64: mm: Optimize split_kernel_leaf_mapping() Ryan Roberts
` (3 subsequent siblings)
6 siblings, 2 replies; 51+ messages in thread
From: Ryan Roberts @ 2025-08-29 11:52 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel, linux-mm
From: Yang Shi <yang@os.amperecomputing.com>
When rodata=full is specified, the kernel linear mapping has to be mapped
at PTE level since large block mappings can't be split due to the
break-before-make rule on arm64.
This resulted in a couple of problems:
- performance degradation
- more TLB pressure
- memory waste for kernel page table
With FEAT_BBM level 2 support, splitting a large block mapping into
smaller ones no longer requires making the page table entry invalid.
This allows the kernel to split large block mappings on the fly.
Add kernel page table split support and use large block mappings by
default when FEAT_BBM level 2 is supported for rodata=full. When
changing permissions for the kernel linear mapping, the page table will
be split to a smaller size.
Machines without FEAT_BBM level 2 will fall back to a PTE-mapped kernel
linear mapping when rodata=full.
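As a sketch of the splitting step (see split_kernel_leaf_mapping() in the
diff below): only the two boundary addresses need splitting, since everything
strictly inside [start, end) keeps its large mappings:

  mutex_lock(&pgtable_split_lock);
  arch_enter_lazy_mmu_mode();

  ret = split_kernel_leaf_mapping_locked(start);
  if (!ret)
  	ret = split_kernel_leaf_mapping_locked(end);

  arch_leave_lazy_mmu_mode();
  mutex_unlock(&pgtable_split_lock);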
With this we saw significant performance boost with some benchmarks and
much less memory consumption on my AmpereOne machine (192 cores, 1P)
with 256GB memory.
* Memory use after boot
Before:
MemTotal: 258988984 kB
MemFree: 254821700 kB
After:
MemTotal: 259505132 kB
MemFree: 255410264 kB
Around 500MB more memory is free to use. The larger the machine, the
more memory is saved.
* Memcached
We saw performance degradation when running the Memcached benchmark with
rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
With this patchset, ops/sec is increased by around 3.5% and P99
latency is reduced by around 9.6%.
The gain mainly came from reduced kernel TLB misses. The kernel TLB
MPKI is reduced by 28.5%.
The benchmark data is now on par with rodata=on too.
* Disk encryption (dm-crypt) benchmark
Ran fio benchmark with the below command on a 128G ramdisk (ext4) with
disk encryption (by dm-crypt).
fio --directory=/data --random_generator=lfsr --norandommap \
--randrepeat 1 --status-interval=999 --rw=write --bs=4k --loops=1 \
--ioengine=sync --iodepth=1 --numjobs=1 --fsync_on_close=1 \
--group_reporting --thread --name=iops-test-job --eta-newline=1 \
--size 100G
The IOPS is increased by 90% - 150% (the variance is high, but the worst
number for the good case is around 90% more than the best number for the
bad case). The bandwidth is increased and the avg clat is reduced
proportionally.
* Sequential file read
Read a 100G file sequentially on XFS (xfs_io read with page cache
populated). The bandwidth is increased by 150%.
Co-developed-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
arch/arm64/include/asm/cpufeature.h | 2 +
arch/arm64/include/asm/mmu.h | 1 +
arch/arm64/include/asm/pgtable.h | 5 +
arch/arm64/kernel/cpufeature.c | 7 +-
arch/arm64/mm/mmu.c | 248 +++++++++++++++++++++++++++-
arch/arm64/mm/pageattr.c | 4 +
6 files changed, 261 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index bf13d676aae2..e223cbf350e4 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -871,6 +871,8 @@ static inline bool system_supports_pmuv3(void)
return cpus_have_final_cap(ARM64_HAS_PMUV3);
}
+bool cpu_supports_bbml2_noabort(void);
+
static inline bool system_supports_bbml2_noabort(void)
{
return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 6e8aa8e72601..56fca81f60ad 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
pgprot_t prot, bool page_mappings_only);
extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
extern void mark_linear_text_alias_ro(void);
+extern int split_kernel_leaf_mapping(unsigned long start, unsigned long end);
/*
* This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index abd2dee416b3..aa89c2e67ebc 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -371,6 +371,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
}
+static inline pmd_t pmd_mknoncont(pmd_t pmd)
+{
+ return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
+}
+
#ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
static inline int pte_uffd_wp(pte_t pte)
{
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b93f4ee57176..a8936c1023ea 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2217,7 +2217,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
}
-static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
+bool cpu_supports_bbml2_noabort(void)
{
/*
* We want to allow usage of BBML2 in as wide a range of kernel contexts
@@ -2251,6 +2251,11 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
return true;
}
+static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
+{
+ return cpu_supports_bbml2_noabort();
+}
+
#ifdef CONFIG_ARM64_PAN
static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
{
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 34e5d78af076..114b88216b0c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -481,6 +481,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
int flags);
#endif
+#define INVALID_PHYS_ADDR -1
+
static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
enum pgtable_type pgtable_type)
{
@@ -488,7 +490,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
phys_addr_t pa;
- BUG_ON(!ptdesc);
+ if (!ptdesc)
+ return INVALID_PHYS_ADDR;
+
pa = page_to_phys(ptdesc_page(ptdesc));
switch (pgtable_type) {
@@ -509,16 +513,240 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
return pa;
}
+static phys_addr_t
+try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+{
+ return __pgd_pgtable_alloc(&init_mm, pgtable_type);
+}
+
static phys_addr_t __maybe_unused
pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
{
- return __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ phys_addr_t pa;
+
+ pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ BUG_ON(pa == INVALID_PHYS_ADDR);
+ return pa;
}
static phys_addr_t
pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
{
- return __pgd_pgtable_alloc(NULL, pgtable_type);
+ phys_addr_t pa;
+
+ pa = __pgd_pgtable_alloc(NULL, pgtable_type);
+ BUG_ON(pa == INVALID_PHYS_ADDR);
+ return pa;
+}
+
+static void split_contpte(pte_t *ptep)
+{
+ int i;
+
+ ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
+ for (i = 0; i < CONT_PTES; i++, ptep++)
+ __set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
+}
+
+static int split_pmd(pmd_t *pmdp, pmd_t pmd)
+{
+ pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
+ unsigned long pfn = pmd_pfn(pmd);
+ pgprot_t prot = pmd_pgprot(pmd);
+ phys_addr_t pte_phys;
+ pte_t *ptep;
+ int i;
+
+ pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
+ if (pte_phys == INVALID_PHYS_ADDR)
+ return -ENOMEM;
+ ptep = (pte_t *)phys_to_virt(pte_phys);
+
+ if (pgprot_val(prot) & PMD_SECT_PXN)
+ tableprot |= PMD_TABLE_PXN;
+
+ prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
+ __set_pte(ptep, pfn_pte(pfn, prot));
+
+ /*
+ * Ensure the pte entries are visible to the table walker by the time
+ * the pmd entry that points to the ptes is visible.
+ */
+ dsb(ishst);
+ __pmd_populate(pmdp, pte_phys, tableprot);
+
+ return 0;
+}
+
+static void split_contpmd(pmd_t *pmdp)
+{
+ int i;
+
+ pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
+ for (i = 0; i < CONT_PMDS; i++, pmdp++)
+ set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
+}
+
+static int split_pud(pud_t *pudp, pud_t pud)
+{
+ pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
+ unsigned int step = PMD_SIZE >> PAGE_SHIFT;
+ unsigned long pfn = pud_pfn(pud);
+ pgprot_t prot = pud_pgprot(pud);
+ phys_addr_t pmd_phys;
+ pmd_t *pmdp;
+ int i;
+
+ pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
+ if (pmd_phys == INVALID_PHYS_ADDR)
+ return -ENOMEM;
+ pmdp = (pmd_t *)phys_to_virt(pmd_phys);
+
+ if (pgprot_val(prot) & PMD_SECT_PXN)
+ tableprot |= PUD_TABLE_PXN;
+
+ prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
+ set_pmd(pmdp, pfn_pmd(pfn, prot));
+
+ /*
+ * Ensure the pmd entries are visible to the table walker by the time
+ * the pud entry that points to the pmds is visible.
+ */
+ dsb(ishst);
+ __pud_populate(pudp, pmd_phys, tableprot);
+
+ return 0;
+}
+
+static int split_kernel_leaf_mapping_locked(unsigned long addr)
+{
+ pgd_t *pgdp, pgd;
+ p4d_t *p4dp, p4d;
+ pud_t *pudp, pud;
+ pmd_t *pmdp, pmd;
+ pte_t *ptep, pte;
+ int ret = 0;
+
+ /*
+ * PGD: If addr is PGD aligned then addr already describes a leaf
+ * boundary. If not present then there is nothing to split.
+ */
+ if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
+ goto out;
+ pgdp = pgd_offset_k(addr);
+ pgd = pgdp_get(pgdp);
+ if (!pgd_present(pgd))
+ goto out;
+
+ /*
+ * P4D: If addr is P4D aligned then addr already describes a leaf
+ * boundary. If not present then there is nothing to split.
+ */
+ if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
+ goto out;
+ p4dp = p4d_offset(pgdp, addr);
+ p4d = p4dp_get(p4dp);
+ if (!p4d_present(p4d))
+ goto out;
+
+ /*
+ * PUD: If addr is PUD aligned then addr already describes a leaf
+ * boundary. If not present then there is nothing to split. Otherwise,
+ * if we have a pud leaf, split to contpmd.
+ */
+ if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
+ goto out;
+ pudp = pud_offset(p4dp, addr);
+ pud = pudp_get(pudp);
+ if (!pud_present(pud))
+ goto out;
+ if (pud_leaf(pud)) {
+ ret = split_pud(pudp, pud);
+ if (ret)
+ goto out;
+ }
+
+ /*
+ * CONTPMD: If addr is CONTPMD aligned then addr already describes a
+ * leaf boundary. If not present then there is nothing to split.
+ * Otherwise, if we have a contpmd leaf, split to pmd.
+ */
+ if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
+ goto out;
+ pmdp = pmd_offset(pudp, addr);
+ pmd = pmdp_get(pmdp);
+ if (!pmd_present(pmd))
+ goto out;
+ if (pmd_leaf(pmd)) {
+ if (pmd_cont(pmd))
+ split_contpmd(pmdp);
+ /*
+ * PMD: If addr is PMD aligned then addr already describes a
+ * leaf boundary. Otherwise, split to contpte.
+ */
+ if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
+ goto out;
+ ret = split_pmd(pmdp, pmd);
+ if (ret)
+ goto out;
+ }
+
+ /*
+ * CONTPTE: If addr is CONTPTE aligned then addr already describes a
+ * leaf boundary. If not present then there is nothing to split.
+ * Otherwise, if we have a contpte leaf, split to pte.
+ */
+ if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
+ goto out;
+ ptep = pte_offset_kernel(pmdp, addr);
+ pte = __ptep_get(ptep);
+ if (!pte_present(pte))
+ goto out;
+ if (pte_cont(pte))
+ split_contpte(ptep);
+
+out:
+ return ret;
+}
+
+static DEFINE_MUTEX(pgtable_split_lock);
+
+int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
+{
+ int ret;
+
+ /*
+ * !BBML2_NOABORT systems should not be trying to change permissions on
+ * anything that is not pte-mapped in the first place. Just return early
+ * and let the permission change code raise a warning if not already
+ * pte-mapped.
+ */
+ if (!system_supports_bbml2_noabort())
+ return 0;
+
+ /*
+ * Ensure start and end are at least page-aligned since this is the
+ * finest granularity we can split to.
+ */
+ if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
+ return -EINVAL;
+
+ mutex_lock(&pgtable_split_lock);
+ arch_enter_lazy_mmu_mode();
+
+ ret = split_kernel_leaf_mapping_locked(start);
+ if (!ret)
+ ret = split_kernel_leaf_mapping_locked(end);
+
+ arch_leave_lazy_mmu_mode();
+ mutex_unlock(&pgtable_split_lock);
+ return ret;
}
/*
@@ -640,6 +868,16 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
#endif /* CONFIG_KFENCE */
+static inline bool force_pte_mapping(void)
+{
+ bool bbml2 = system_capabilities_finalized() ?
+ system_supports_bbml2_noabort() : cpu_supports_bbml2_noabort();
+
+ return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
+ is_realm_world())) ||
+ debug_pagealloc_enabled();
+}
+
static void __init map_mem(pgd_t *pgdp)
{
static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
@@ -665,7 +903,7 @@ static void __init map_mem(pgd_t *pgdp)
early_kfence_pool = arm64_kfence_alloc_pool();
- if (can_set_direct_map())
+ if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
/*
@@ -1367,7 +1605,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
VM_BUG_ON(!mhp_range_allowed(start, size, true));
- if (can_set_direct_map())
+ if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 6da8cbc32f46..0aba80a38cef 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -140,6 +140,10 @@ static int update_range_prot(unsigned long start, unsigned long size,
data.set_mask = set_mask;
data.clear_mask = clear_mask;
+ ret = split_kernel_leaf_mapping(start, start + size);
+ if (WARN_ON_ONCE(ret))
+ return ret;
+
arch_enter_lazy_mmu_mode();
/*
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread
* Re: [PATCH v7 3/6] arm64: mm: support large block mapping when rodata=full
2025-08-29 11:52 ` [PATCH v7 3/6] arm64: mm: support large block mapping when rodata=full Ryan Roberts
@ 2025-09-03 19:15 ` Catalin Marinas
2025-09-04 0:52 ` Yang Shi
2025-09-04 11:09 ` Ryan Roberts
2025-09-04 11:15 ` Ryan Roberts
1 sibling, 2 replies; 51+ messages in thread
From: Catalin Marinas @ 2025-09-03 19:15 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl, linux-arm-kernel,
linux-kernel, linux-mm
On Fri, Aug 29, 2025 at 12:52:44PM +0100, Ryan Roberts wrote:
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 34e5d78af076..114b88216b0c 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -481,6 +481,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
> int flags);
> #endif
>
> +#define INVALID_PHYS_ADDR -1
Nitpick: (-1UL) (or (-1ULL), KVM_PHYS_INVALID is defined as the latter).
Otherwise the patch looks fine.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 3/6] arm64: mm: support large block mapping when rodata=full
2025-09-03 19:15 ` Catalin Marinas
@ 2025-09-04 0:52 ` Yang Shi
2025-09-04 11:09 ` Ryan Roberts
1 sibling, 0 replies; 51+ messages in thread
From: Yang Shi @ 2025-09-04 0:52 UTC (permalink / raw)
To: Catalin Marinas, Ryan Roberts
Cc: Will Deacon, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Ard Biesheuvel, Dev Jain, scott, cl, linux-arm-kernel,
linux-kernel, linux-mm
On 9/3/25 12:15 PM, Catalin Marinas wrote:
> On Fri, Aug 29, 2025 at 12:52:44PM +0100, Ryan Roberts wrote:
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 34e5d78af076..114b88216b0c 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -481,6 +481,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>> int flags);
>> #endif
>>
>> +#define INVALID_PHYS_ADDR -1
> Nitpick: (-1UL) (or (-1ULL), KVM_PHYS_INVALID is defined as the latter).
>
> Otherwise the patch looks fine.
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Thank you!
Yang
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 3/6] arm64: mm: support large block mapping when rodata=full
2025-09-03 19:15 ` Catalin Marinas
2025-09-04 0:52 ` Yang Shi
@ 2025-09-04 11:09 ` Ryan Roberts
1 sibling, 0 replies; 51+ messages in thread
From: Ryan Roberts @ 2025-09-04 11:09 UTC (permalink / raw)
To: Catalin Marinas
Cc: Will Deacon, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl, linux-arm-kernel,
linux-kernel, linux-mm
On 03/09/2025 20:15, Catalin Marinas wrote:
> On Fri, Aug 29, 2025 at 12:52:44PM +0100, Ryan Roberts wrote:
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 34e5d78af076..114b88216b0c 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -481,6 +481,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>> int flags);
>> #endif
>>
>> +#define INVALID_PHYS_ADDR -1
>
> Nitpick: (-1UL) (or (-1ULL), KVM_PHYS_INVALID is defined as the latter).
Fair. Will fix.
>
> Otherwise the patch looks fine.
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cheers! But I think we need to solve the issue where code is ignoring the error
code problem that Dev raised before merging this.
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 3/6] arm64: mm: support large block mapping when rodata=full
2025-08-29 11:52 ` [PATCH v7 3/6] arm64: mm: support large block mapping when rodata=full Ryan Roberts
2025-09-03 19:15 ` Catalin Marinas
@ 2025-09-04 11:15 ` Ryan Roberts
2025-09-04 14:57 ` Yang Shi
1 sibling, 1 reply; 51+ messages in thread
From: Ryan Roberts @ 2025-09-04 11:15 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 29/08/2025 12:52, Ryan Roberts wrote:
> From: Yang Shi <yang@os.amperecomputing.com>
>
> When rodata=full is specified, kernel linear mapping has to be mapped at
> PTE level since large page table can't be split due to break-before-make
> rule on ARM64.
>
> This resulted in a couple of problems:
> - performance degradation
> - more TLB pressure
> - memory waste for kernel page table
>
> With FEAT_BBM level 2 support, splitting large block page table to
> smaller ones doesn't need to make the page table entry invalid anymore.
> This allows kernel split large block mapping on the fly.
>
> Add kernel page table split support and use large block mapping by
> default when FEAT_BBM level 2 is supported for rodata=full. When
> changing permissions for kernel linear mapping, the page table will be
> split to smaller size.
>
> The machine without FEAT_BBM level 2 will fallback to have kernel linear
> mapping PTE-mapped when rodata=full.
>
> With this we saw significant performance boost with some benchmarks and
> much less memory consumption on my AmpereOne machine (192 cores, 1P)
> with 256GB memory.
>
> * Memory use after boot
> Before:
> MemTotal: 258988984 kB
> MemFree: 254821700 kB
>
> After:
> MemTotal: 259505132 kB
> MemFree: 255410264 kB
>
> Around 500MB more memory are free to use. The larger the machine, the
> more memory saved.
>
> * Memcached
> We saw performance degradation when running Memcached benchmark with
> rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
> With this patchset we saw ops/sec is increased by around 3.5%, P99
> latency is reduced by around 9.6%.
> The gain mainly came from reduced kernel TLB misses. The kernel TLB
> MPKI is reduced by 28.5%.
>
> The benchmark data is now on par with rodata=on too.
>
> * Disk encryption (dm-crypt) benchmark
> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with
> disk encryption (by dm-crypt).
> fio --directory=/data --random_generator=lfsr --norandommap \
> --randrepeat 1 --status-interval=999 --rw=write --bs=4k --loops=1 \
> --ioengine=sync --iodepth=1 --numjobs=1 --fsync_on_close=1 \
> --group_reporting --thread --name=iops-test-job --eta-newline=1 \
> --size 100G
>
> The IOPS is increased by 90% - 150% (the variance is high, but the worst
> number of good case is around 90% more than the best number of bad
> case). The bandwidth is increased and the avg clat is reduced
> proportionally.
>
> * Sequential file read
> Read 100G file sequentially on XFS (xfs_io read with page cache
> populated). The bandwidth is increased by 150%.
>
> Co-developed-by: Ryan Roberts <ryan.roberts@arm.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
> arch/arm64/include/asm/cpufeature.h | 2 +
> arch/arm64/include/asm/mmu.h | 1 +
> arch/arm64/include/asm/pgtable.h | 5 +
> arch/arm64/kernel/cpufeature.c | 7 +-
> arch/arm64/mm/mmu.c | 248 +++++++++++++++++++++++++++-
> arch/arm64/mm/pageattr.c | 4 +
> 6 files changed, 261 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index bf13d676aae2..e223cbf350e4 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -871,6 +871,8 @@ static inline bool system_supports_pmuv3(void)
> return cpus_have_final_cap(ARM64_HAS_PMUV3);
> }
>
> +bool cpu_supports_bbml2_noabort(void);
> +
> static inline bool system_supports_bbml2_noabort(void)
> {
> return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 6e8aa8e72601..56fca81f60ad 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> pgprot_t prot, bool page_mappings_only);
> extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
> extern void mark_linear_text_alias_ro(void);
> +extern int split_kernel_leaf_mapping(unsigned long start, unsigned long end);
>
> /*
> * This check is triggered during the early boot before the cpufeature
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index abd2dee416b3..aa89c2e67ebc 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -371,6 +371,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
> return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
> }
>
> +static inline pmd_t pmd_mknoncont(pmd_t pmd)
> +{
> + return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
> +}
> +
> #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
> static inline int pte_uffd_wp(pte_t pte)
> {
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index b93f4ee57176..a8936c1023ea 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2217,7 +2217,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
> return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
> }
>
> -static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
> +bool cpu_supports_bbml2_noabort(void)
> {
> /*
> * We want to allow usage of BBML2 in as wide a range of kernel contexts
> @@ -2251,6 +2251,11 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
> return true;
> }
>
> +static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
> +{
> + return cpu_supports_bbml2_noabort();
> +}
> +
> #ifdef CONFIG_ARM64_PAN
> static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
> {
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 34e5d78af076..114b88216b0c 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -481,6 +481,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
> int flags);
> #endif
>
> +#define INVALID_PHYS_ADDR -1
> +
> static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
> enum pgtable_type pgtable_type)
> {
> @@ -488,7 +490,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
> struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
> phys_addr_t pa;
>
> - BUG_ON(!ptdesc);
> + if (!ptdesc)
> + return INVALID_PHYS_ADDR;
> +
> pa = page_to_phys(ptdesc_page(ptdesc));
>
> switch (pgtable_type) {
> @@ -509,16 +513,240 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
> return pa;
> }
>
> +static phys_addr_t
> +try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> +{
> + return __pgd_pgtable_alloc(&init_mm, pgtable_type);
> +}
> +
> static phys_addr_t __maybe_unused
> pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
> {
> - return __pgd_pgtable_alloc(&init_mm, pgtable_type);
> + phys_addr_t pa;
> +
> + pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
> + BUG_ON(pa == INVALID_PHYS_ADDR);
> + return pa;
> }
>
> static phys_addr_t
> pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
> {
> - return __pgd_pgtable_alloc(NULL, pgtable_type);
> + phys_addr_t pa;
> +
> + pa = __pgd_pgtable_alloc(NULL, pgtable_type);
> + BUG_ON(pa == INVALID_PHYS_ADDR);
> + return pa;
> +}
> +
> +static void split_contpte(pte_t *ptep)
> +{
> + int i;
> +
> + ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
> + for (i = 0; i < CONT_PTES; i++, ptep++)
> + __set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
> +}
> +
> +static int split_pmd(pmd_t *pmdp, pmd_t pmd)
> +{
> + pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
> + unsigned long pfn = pmd_pfn(pmd);
> + pgprot_t prot = pmd_pgprot(pmd);
> + phys_addr_t pte_phys;
> + pte_t *ptep;
> + int i;
> +
> + pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
> + if (pte_phys == INVALID_PHYS_ADDR)
> + return -ENOMEM;
> + ptep = (pte_t *)phys_to_virt(pte_phys);
> +
> + if (pgprot_val(prot) & PMD_SECT_PXN)
> + tableprot |= PMD_TABLE_PXN;
> +
> + prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +
> + for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
> + __set_pte(ptep, pfn_pte(pfn, prot));
> +
> + /*
> + * Ensure the pte entries are visible to the table walker by the time
> + * the pmd entry that points to the ptes is visible.
> + */
> + dsb(ishst);
> + __pmd_populate(pmdp, pte_phys, tableprot);
> +
> + return 0;
> +}
> +
> +static void split_contpmd(pmd_t *pmdp)
> +{
> + int i;
> +
> + pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
> + for (i = 0; i < CONT_PMDS; i++, pmdp++)
> + set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
> +}
> +
> +static int split_pud(pud_t *pudp, pud_t pud)
> +{
> + pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
> + unsigned int step = PMD_SIZE >> PAGE_SHIFT;
> + unsigned long pfn = pud_pfn(pud);
> + pgprot_t prot = pud_pgprot(pud);
> + phys_addr_t pmd_phys;
> + pmd_t *pmdp;
> + int i;
> +
> + pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
> + if (pmd_phys == INVALID_PHYS_ADDR)
> + return -ENOMEM;
> + pmdp = (pmd_t *)phys_to_virt(pmd_phys);
> +
> + if (pgprot_val(prot) & PMD_SECT_PXN)
> + tableprot |= PUD_TABLE_PXN;
> +
> + prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> +
> + for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
> + set_pmd(pmdp, pfn_pmd(pfn, prot));
> +
> + /*
> + * Ensure the pmd entries are visible to the table walker by the time
> + * the pud entry that points to the pmds is visible.
> + */
> + dsb(ishst);
> + __pud_populate(pudp, pmd_phys, tableprot);
> +
> + return 0;
> +}
> +
> +static int split_kernel_leaf_mapping_locked(unsigned long addr)
> +{
> + pgd_t *pgdp, pgd;
> + p4d_t *p4dp, p4d;
> + pud_t *pudp, pud;
> + pmd_t *pmdp, pmd;
> + pte_t *ptep, pte;
> + int ret = 0;
> +
> + /*
> + * PGD: If addr is PGD aligned then addr already describes a leaf
> + * boundary. If not present then there is nothing to split.
> + */
> + if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
> + goto out;
> + pgdp = pgd_offset_k(addr);
> + pgd = pgdp_get(pgdp);
> + if (!pgd_present(pgd))
> + goto out;
> +
> + /*
> + * P4D: If addr is P4D aligned then addr already describes a leaf
> + * boundary. If not present then there is nothing to split.
> + */
> + if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
> + goto out;
> + p4dp = p4d_offset(pgdp, addr);
> + p4d = p4dp_get(p4dp);
> + if (!p4d_present(p4d))
> + goto out;
> +
> + /*
> + * PUD: If addr is PUD aligned then addr already describes a leaf
> + * boundary. If not present then there is nothing to split. Otherwise,
> + * if we have a pud leaf, split to contpmd.
> + */
> + if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
> + goto out;
> + pudp = pud_offset(p4dp, addr);
> + pud = pudp_get(pudp);
> + if (!pud_present(pud))
> + goto out;
> + if (pud_leaf(pud)) {
> + ret = split_pud(pudp, pud);
> + if (ret)
> + goto out;
> + }
> +
> + /*
> + * CONTPMD: If addr is CONTPMD aligned then addr already describes a
> + * leaf boundary. If not present then there is nothing to split.
> + * Otherwise, if we have a contpmd leaf, split to pmd.
> + */
> + if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
> + goto out;
> + pmdp = pmd_offset(pudp, addr);
> + pmd = pmdp_get(pmdp);
> + if (!pmd_present(pmd))
> + goto out;
> + if (pmd_leaf(pmd)) {
> + if (pmd_cont(pmd))
> + split_contpmd(pmdp);
> + /*
> + * PMD: If addr is PMD aligned then addr already describes a
> + * leaf boundary. Otherwise, split to contpte.
> + */
> + if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
> + goto out;
> + ret = split_pmd(pmdp, pmd);
> + if (ret)
> + goto out;
> + }
> +
> + /*
> + * CONTPTE: If addr is CONTPTE aligned then addr already describes a
> + * leaf boundary. If not present then there is nothing to split.
> + * Otherwise, if we have a contpte leaf, split to pte.
> + */
> + if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
> + goto out;
> + ptep = pte_offset_kernel(pmdp, addr);
> + pte = __ptep_get(ptep);
> + if (!pte_present(pte))
> + goto out;
> + if (pte_cont(pte))
> + split_contpte(ptep);
> +
> +out:
> + return ret;
> +}
> +
> +static DEFINE_MUTEX(pgtable_split_lock);
> +
> +int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
> +{
> + int ret;
> +
> + /*
> + * !BBML2_NOABORT systems should not be trying to change permissions on
> + * anything that is not pte-mapped in the first place. Just return early
> + * and let the permission change code raise a warning if not already
> + * pte-mapped.
> + */
> + if (!system_supports_bbml2_noabort())
> + return 0;
> +
> + /*
> + * Ensure start and end are at least page-aligned since this is the
> + * finest granularity we can split to.
> + */
> + if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
> + return -EINVAL;
> +
> + mutex_lock(&pgtable_split_lock);
> + arch_enter_lazy_mmu_mode();
There is a spec issue here: we are inside a lazy mmu section, which the
documentation says is an atomic context, so we can't sleep. But
split_kernel_leaf_mapping_locked() will allocate pgtable memory if needed, in a
manner that might sleep.
This isn't a problem in practice for arm64 since its lazy mmu implementation
allows sleeping. I propose just adding a comment here to explain this and leave
the logic as is. Are people happy with this approach?
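For illustration, such a comment might read roughly as follows (wording
illustrative, not part of the posted patch):

  mutex_lock(&pgtable_split_lock);
  /*
   * split_kernel_leaf_mapping_locked() may allocate pgtable memory and
   * so may sleep. Lazy mmu mode is documented as non-sleepable, but the
   * arm64 implementation tolerates sleeping, so entering lazy mmu mode
   * before the split is safe here.
   */
  arch_enter_lazy_mmu_mode();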
> +
> + ret = split_kernel_leaf_mapping_locked(start);
> + if (!ret)
> + ret = split_kernel_leaf_mapping_locked(end);
> +
> + arch_leave_lazy_mmu_mode();
> + mutex_unlock(&pgtable_split_lock);
> + return ret;
> }
>
> /*
> @@ -640,6 +868,16 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>
> #endif /* CONFIG_KFENCE */
>
> +static inline bool force_pte_mapping(void)
> +{
> + bool bbml2 = system_capabilities_finalized() ?
> + system_supports_bbml2_noabort() : cpu_supports_bbml2_noabort();
> +
> + return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
> + is_realm_world())) ||
> + debug_pagealloc_enabled();
> +}
> +
> static void __init map_mem(pgd_t *pgdp)
> {
> static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
> @@ -665,7 +903,7 @@ static void __init map_mem(pgd_t *pgdp)
>
> early_kfence_pool = arm64_kfence_alloc_pool();
>
> - if (can_set_direct_map())
> + if (force_pte_mapping())
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
> /*
> @@ -1367,7 +1605,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>
> VM_BUG_ON(!mhp_range_allowed(start, size, true));
>
> - if (can_set_direct_map())
> + if (force_pte_mapping())
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
> __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 6da8cbc32f46..0aba80a38cef 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -140,6 +140,10 @@ static int update_range_prot(unsigned long start, unsigned long size,
> data.set_mask = set_mask;
> data.clear_mask = clear_mask;
>
> + ret = split_kernel_leaf_mapping(start, start + size);
> + if (WARN_ON_ONCE(ret))
> + return ret;
> +
> arch_enter_lazy_mmu_mode();
>
> /*
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 3/6] arm64: mm: support large block mapping when rodata=full
2025-09-04 11:15 ` Ryan Roberts
@ 2025-09-04 14:57 ` Yang Shi
0 siblings, 0 replies; 51+ messages in thread
From: Yang Shi @ 2025-09-04 14:57 UTC (permalink / raw)
To: Ryan Roberts, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, Dev Jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 9/4/25 4:15 AM, Ryan Roberts wrote:
> On 29/08/2025 12:52, Ryan Roberts wrote:
>> From: Yang Shi <yang@os.amperecomputing.com>
>>
>> When rodata=full is specified, kernel linear mapping has to be mapped at
>> PTE level since large page table can't be split due to break-before-make
>> rule on ARM64.
>>
>> This resulted in a couple of problems:
>> - performance degradation
>> - more TLB pressure
>> - memory waste for kernel page table
>>
>> With FEAT_BBM level 2 support, splitting large block page table to
>> smaller ones doesn't need to make the page table entry invalid anymore.
>> This allows kernel split large block mapping on the fly.
>>
>> Add kernel page table split support and use large block mapping by
>> default when FEAT_BBM level 2 is supported for rodata=full. When
>> changing permissions for kernel linear mapping, the page table will be
>> split to smaller size.
>>
>> The machine without FEAT_BBM level 2 will fallback to have kernel linear
>> mapping PTE-mapped when rodata=full.
>>
>> With this we saw significant performance boost with some benchmarks and
>> much less memory consumption on my AmpereOne machine (192 cores, 1P)
>> with 256GB memory.
>>
>> * Memory use after boot
>> Before:
>> MemTotal: 258988984 kB
>> MemFree: 254821700 kB
>>
>> After:
>> MemTotal: 259505132 kB
>> MemFree: 255410264 kB
>>
>> Around 500MB more memory are free to use. The larger the machine, the
>> more memory saved.
>>
>> * Memcached
>> We saw performance degradation when running Memcached benchmark with
>> rodata=full vs rodata=on. Our profiling pointed to kernel TLB pressure.
>> With this patchset we saw ops/sec is increased by around 3.5%, P99
>> latency is reduced by around 9.6%.
>> The gain mainly came from reduced kernel TLB misses. The kernel TLB
>> MPKI is reduced by 28.5%.
>>
>> The benchmark data is now on par with rodata=on too.
>>
>> * Disk encryption (dm-crypt) benchmark
>> Ran fio benchmark with the below command on a 128G ramdisk (ext4) with
>> disk encryption (by dm-crypt).
>> fio --directory=/data --random_generator=lfsr --norandommap \
>> --randrepeat 1 --status-interval=999 --rw=write --bs=4k --loops=1 \
>> --ioengine=sync --iodepth=1 --numjobs=1 --fsync_on_close=1 \
>> --group_reporting --thread --name=iops-test-job --eta-newline=1 \
>> --size 100G
>>
>> The IOPS is increased by 90% - 150% (the variance is high, but the worst
>> number of good case is around 90% more than the best number of bad
>> case). The bandwidth is increased and the avg clat is reduced
>> proportionally.
>>
>> * Sequential file read
>> Read 100G file sequentially on XFS (xfs_io read with page cache
>> populated). The bandwidth is increased by 150%.
>>
>> Co-developed-by: Ryan Roberts <ryan.roberts@arm.com>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>> arch/arm64/include/asm/cpufeature.h | 2 +
>> arch/arm64/include/asm/mmu.h | 1 +
>> arch/arm64/include/asm/pgtable.h | 5 +
>> arch/arm64/kernel/cpufeature.c | 7 +-
>> arch/arm64/mm/mmu.c | 248 +++++++++++++++++++++++++++-
>> arch/arm64/mm/pageattr.c | 4 +
>> 6 files changed, 261 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index bf13d676aae2..e223cbf350e4 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -871,6 +871,8 @@ static inline bool system_supports_pmuv3(void)
>> return cpus_have_final_cap(ARM64_HAS_PMUV3);
>> }
>>
>> +bool cpu_supports_bbml2_noabort(void);
>> +
>> static inline bool system_supports_bbml2_noabort(void)
>> {
>> return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
>> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
>> index 6e8aa8e72601..56fca81f60ad 100644
>> --- a/arch/arm64/include/asm/mmu.h
>> +++ b/arch/arm64/include/asm/mmu.h
>> @@ -71,6 +71,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>> pgprot_t prot, bool page_mappings_only);
>> extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>> extern void mark_linear_text_alias_ro(void);
>> +extern int split_kernel_leaf_mapping(unsigned long start, unsigned long end);
>>
>> /*
>> * This check is triggered during the early boot before the cpufeature
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index abd2dee416b3..aa89c2e67ebc 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -371,6 +371,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
>> return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
>> }
>>
>> +static inline pmd_t pmd_mknoncont(pmd_t pmd)
>> +{
>> + return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
>> +}
>> +
>> #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
>> static inline int pte_uffd_wp(pte_t pte)
>> {
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index b93f4ee57176..a8936c1023ea 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -2217,7 +2217,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
>> return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
>> }
>>
>> -static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
>> +bool cpu_supports_bbml2_noabort(void)
>> {
>> /*
>> * We want to allow usage of BBML2 in as wide a range of kernel contexts
>> @@ -2251,6 +2251,11 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
>> return true;
>> }
>>
>> +static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
>> +{
>> + return cpu_supports_bbml2_noabort();
>> +}
>> +
>> #ifdef CONFIG_ARM64_PAN
>> static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
>> {
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 34e5d78af076..114b88216b0c 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -481,6 +481,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
>> int flags);
>> #endif
>>
>> +#define INVALID_PHYS_ADDR -1
>> +
>> static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>> enum pgtable_type pgtable_type)
>> {
>> @@ -488,7 +490,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>> struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
>> phys_addr_t pa;
>>
>> - BUG_ON(!ptdesc);
>> + if (!ptdesc)
>> + return INVALID_PHYS_ADDR;
>> +
>> pa = page_to_phys(ptdesc_page(ptdesc));
>>
>> switch (pgtable_type) {
>> @@ -509,16 +513,240 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
>> return pa;
>> }
>>
>> +static phys_addr_t
>> +try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>> +{
>> + return __pgd_pgtable_alloc(&init_mm, pgtable_type);
>> +}
>> +
>> static phys_addr_t __maybe_unused
>> pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
>> {
>> - return __pgd_pgtable_alloc(&init_mm, pgtable_type);
>> + phys_addr_t pa;
>> +
>> + pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
>> + BUG_ON(pa == INVALID_PHYS_ADDR);
>> + return pa;
>> }
>>
>> static phys_addr_t
>> pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
>> {
>> - return __pgd_pgtable_alloc(NULL, pgtable_type);
>> + phys_addr_t pa;
>> +
>> + pa = __pgd_pgtable_alloc(NULL, pgtable_type);
>> + BUG_ON(pa == INVALID_PHYS_ADDR);
>> + return pa;
>> +}
>> +
>> +static void split_contpte(pte_t *ptep)
>> +{
>> + int i;
>> +
>> + ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
>> + for (i = 0; i < CONT_PTES; i++, ptep++)
>> + __set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
>> +}
>> +
>> +static int split_pmd(pmd_t *pmdp, pmd_t pmd)
>> +{
>> + pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
>> + unsigned long pfn = pmd_pfn(pmd);
>> + pgprot_t prot = pmd_pgprot(pmd);
>> + phys_addr_t pte_phys;
>> + pte_t *ptep;
>> + int i;
>> +
>> + pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
>> + if (pte_phys == INVALID_PHYS_ADDR)
>> + return -ENOMEM;
>> + ptep = (pte_t *)phys_to_virt(pte_phys);
>> +
>> + if (pgprot_val(prot) & PMD_SECT_PXN)
>> + tableprot |= PMD_TABLE_PXN;
>> +
>> + prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
>> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +
>> + for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
>> + __set_pte(ptep, pfn_pte(pfn, prot));
>> +
>> + /*
>> + * Ensure the pte entries are visible to the table walker by the time
>> + * the pmd entry that points to the ptes is visible.
>> + */
>> + dsb(ishst);
>> + __pmd_populate(pmdp, pte_phys, tableprot);
>> +
>> + return 0;
>> +}
>> +
>> +static void split_contpmd(pmd_t *pmdp)
>> +{
>> + int i;
>> +
>> + pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
>> + for (i = 0; i < CONT_PMDS; i++, pmdp++)
>> + set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
>> +}
>> +
>> +static int split_pud(pud_t *pudp, pud_t pud)
>> +{
>> + pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
>> + unsigned int step = PMD_SIZE >> PAGE_SHIFT;
>> + unsigned long pfn = pud_pfn(pud);
>> + pgprot_t prot = pud_pgprot(pud);
>> + phys_addr_t pmd_phys;
>> + pmd_t *pmdp;
>> + int i;
>> +
>> + pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
>> + if (pmd_phys == INVALID_PHYS_ADDR)
>> + return -ENOMEM;
>> + pmdp = (pmd_t *)phys_to_virt(pmd_phys);
>> +
>> + if (pgprot_val(prot) & PMD_SECT_PXN)
>> + tableprot |= PUD_TABLE_PXN;
>> +
>> + prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
>> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> +
>> + for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
>> + set_pmd(pmdp, pfn_pmd(pfn, prot));
>> +
>> + /*
>> + * Ensure the pmd entries are visible to the table walker by the time
>> + * the pud entry that points to the pmds is visible.
>> + */
>> + dsb(ishst);
>> + __pud_populate(pudp, pmd_phys, tableprot);
>> +
>> + return 0;
>> +}
>> +
>> +static int split_kernel_leaf_mapping_locked(unsigned long addr)
>> +{
>> + pgd_t *pgdp, pgd;
>> + p4d_t *p4dp, p4d;
>> + pud_t *pudp, pud;
>> + pmd_t *pmdp, pmd;
>> + pte_t *ptep, pte;
>> + int ret = 0;
>> +
>> + /*
>> + * PGD: If addr is PGD aligned then addr already describes a leaf
>> + * boundary. If not present then there is nothing to split.
>> + */
>> + if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
>> + goto out;
>> + pgdp = pgd_offset_k(addr);
>> + pgd = pgdp_get(pgdp);
>> + if (!pgd_present(pgd))
>> + goto out;
>> +
>> + /*
>> + * P4D: If addr is P4D aligned then addr already describes a leaf
>> + * boundary. If not present then there is nothing to split.
>> + */
>> + if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
>> + goto out;
>> + p4dp = p4d_offset(pgdp, addr);
>> + p4d = p4dp_get(p4dp);
>> + if (!p4d_present(p4d))
>> + goto out;
>> +
>> + /*
>> + * PUD: If addr is PUD aligned then addr already describes a leaf
>> + * boundary. If not present then there is nothing to split. Otherwise,
>> + * if we have a pud leaf, split to contpmd.
>> + */
>> + if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
>> + goto out;
>> + pudp = pud_offset(p4dp, addr);
>> + pud = pudp_get(pudp);
>> + if (!pud_present(pud))
>> + goto out;
>> + if (pud_leaf(pud)) {
>> + ret = split_pud(pudp, pud);
>> + if (ret)
>> + goto out;
>> + }
>> +
>> + /*
>> + * CONTPMD: If addr is CONTPMD aligned then addr already describes a
>> + * leaf boundary. If not present then there is nothing to split.
>> + * Otherwise, if we have a contpmd leaf, split to pmd.
>> + */
>> + if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
>> + goto out;
>> + pmdp = pmd_offset(pudp, addr);
>> + pmd = pmdp_get(pmdp);
>> + if (!pmd_present(pmd))
>> + goto out;
>> + if (pmd_leaf(pmd)) {
>> + if (pmd_cont(pmd))
>> + split_contpmd(pmdp);
>> + /*
>> + * PMD: If addr is PMD aligned then addr already describes a
>> + * leaf boundary. Otherwise, split to contpte.
>> + */
>> + if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
>> + goto out;
>> + ret = split_pmd(pmdp, pmd);
>> + if (ret)
>> + goto out;
>> + }
>> +
>> + /*
>> + * CONTPTE: If addr is CONTPTE aligned then addr already describes a
>> + * leaf boundary. If not present then there is nothing to split.
>> + * Otherwise, if we have a contpte leaf, split to pte.
>> + */
>> + if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
>> + goto out;
>> + ptep = pte_offset_kernel(pmdp, addr);
>> + pte = __ptep_get(ptep);
>> + if (!pte_present(pte))
>> + goto out;
>> + if (pte_cont(pte))
>> + split_contpte(ptep);
>> +
>> +out:
>> + return ret;
>> +}
>> +
>> +static DEFINE_MUTEX(pgtable_split_lock);
>> +
>> +int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
>> +{
>> + int ret;
>> +
>> + /*
>> + * !BBML2_NOABORT systems should not be trying to change permissions on
>> + * anything that is not pte-mapped in the first place. Just return early
>> + * and let the permission change code raise a warning if not already
>> + * pte-mapped.
>> + */
>> + if (!system_supports_bbml2_noabort())
>> + return 0;
>> +
>> + /*
>> + * Ensure start and end are at least page-aligned since this is the
>> + * finest granularity we can split to.
>> + */
>> + if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
>> + return -EINVAL;
>> +
>> + mutex_lock(&pgtable_split_lock);
>> + arch_enter_lazy_mmu_mode();
> There is a spec issue here: we are inside a lazy mmu section, which the
> documentation says is an atomic context, so we can't sleep. But
> split_kernel_leaf_mapping_locked() will allocate pgtable memory if needed, in a
> manner that might sleep.
>
> This isn't a problem in practice for arm64 since its lazy mmu implementation
> allows sleeping. I propose just adding a comment here to explain this and leave
> the logic as is. Are people happy with this approach?
Yes, sounds good to me.
Thanks,
Yang
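A possible shape for such a comment, purely as an illustrative sketch of what
might sit above the arch_enter_lazy_mmu_mode() call (not text taken from any
posted version of the patch):

	/*
	 * Lazy MMU mode is documented as an atomic context, but
	 * split_kernel_leaf_mapping_locked() may allocate pgtable memory
	 * and therefore sleep. This is fine in practice because arm64's
	 * lazy MMU implementation tolerates sleeping.
	 */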
>
>> +
>> + ret = split_kernel_leaf_mapping_locked(start);
>> + if (!ret)
>> + ret = split_kernel_leaf_mapping_locked(end);
>> +
>> + arch_leave_lazy_mmu_mode();
>> + mutex_unlock(&pgtable_split_lock);
>> + return ret;
>> }
>>
>> /*
>> @@ -640,6 +868,16 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>>
>> #endif /* CONFIG_KFENCE */
>>
>> +static inline bool force_pte_mapping(void)
>> +{
>> + bool bbml2 = system_capabilities_finalized() ?
>> + system_supports_bbml2_noabort() : cpu_supports_bbml2_noabort();
>> +
>> + return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
>> + is_realm_world())) ||
>> + debug_pagealloc_enabled();
>> +}
>> +
>> static void __init map_mem(pgd_t *pgdp)
>> {
>> static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
>> @@ -665,7 +903,7 @@ static void __init map_mem(pgd_t *pgdp)
>>
>> early_kfence_pool = arm64_kfence_alloc_pool();
>>
>> - if (can_set_direct_map())
>> + if (force_pte_mapping())
>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>
>> /*
>> @@ -1367,7 +1605,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>
>> VM_BUG_ON(!mhp_range_allowed(start, size, true));
>>
>> - if (can_set_direct_map())
>> + if (force_pte_mapping())
>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>
>> __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 6da8cbc32f46..0aba80a38cef 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -140,6 +140,10 @@ static int update_range_prot(unsigned long start, unsigned long size,
>> data.set_mask = set_mask;
>> data.clear_mask = clear_mask;
>>
>> + ret = split_kernel_leaf_mapping(start, start + size);
>> + if (WARN_ON_ONCE(ret))
>> + return ret;
>> +
>> arch_enter_lazy_mmu_mode();
>>
>> /*
^ permalink raw reply [flat|nested] 51+ messages in thread
* [PATCH v7 4/6] arm64: mm: Optimize split_kernel_leaf_mapping()
2025-08-29 11:52 [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
` (2 preceding siblings ...)
2025-08-29 11:52 ` [PATCH v7 3/6] arm64: mm: support large block mapping when rodata=full Ryan Roberts
@ 2025-08-29 11:52 ` Ryan Roberts
2025-08-29 22:11 ` Yang Shi
2025-09-03 19:20 ` Catalin Marinas
2025-08-29 11:52 ` [PATCH v7 5/6] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs Ryan Roberts
` (2 subsequent siblings)
6 siblings, 2 replies; 51+ messages in thread
From: Ryan Roberts @ 2025-08-29 11:52 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel, linux-mm
The common case for split_kernel_leaf_mapping() is for a single page.
Let's optimize this by only calling split_kernel_leaf_mapping_locked()
once.
Since the start and end address are PAGE_SIZE apart, they must be
contained within the same contpte block. Further, if start is at the
beginning of the block or end is at the end of the block, then the other
address must be in the _middle_ of the block. So if we split on this
middle-of-the-contpte-block address, it is guaranteed that the
containing contpte block is split to ptes and both start and end are
therefore mapped by pte.
This avoids the second call to split_kernel_leaf_mapping_locked()
meaning we only have to walk the pgtable once.
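As a quick illustration of the selection rule, here is a minimal userspace
sketch (not kernel code): __builtin_ctzll() stands in for the kernel's
__ffs(), and the addresses and the 64K contpte block size for 4K pages are
assumptions chosen for the example.

#include <stdio.h>

#define PAGE_SIZE	0x1000UL
#define CONT_PTE_SIZE	(16 * PAGE_SIZE)	/* contpte block: 16 ptes on 4K pages */

/* Pick the "least aligned" address: fewer trailing zero bits. */
static unsigned long least_aligned(unsigned long start, unsigned long end)
{
	return __builtin_ctzll(start) < __builtin_ctzll(end) ? start : end;
}

int main(void)
{
	/* start is block aligned; end = start + PAGE_SIZE lands mid-block */
	unsigned long start = 0xffff000040000000UL;
	unsigned long end = start + PAGE_SIZE;
	unsigned long split = least_aligned(start, end);

	/*
	 * Splitting on 'split' forces its whole contpte block down to ptes,
	 * so the other (more aligned) address ends up pte-mapped for free.
	 */
	printf("split on %#lx (offset %#lx within its contpte block)\n",
	       split, split & (CONT_PTE_SIZE - 1));
	return 0;
}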
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/mm/mmu.c | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 114b88216b0c..8b5b19e1154b 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -740,9 +740,21 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
mutex_lock(&pgtable_split_lock);
arch_enter_lazy_mmu_mode();
- ret = split_kernel_leaf_mapping_locked(start);
- if (!ret)
- ret = split_kernel_leaf_mapping_locked(end);
+ /*
+ * Optimize for the common case of splitting out a single page from a
+ * larger mapping. Here we can just split on the "least aligned" of
+ * start and end and this will guarantee that there must also be a split
+ * on the more aligned address since both addresses must be in the
+ * same contpte block and it must have been split to ptes.
+ */
+ if (end - start == PAGE_SIZE) {
+ start = __ffs(start) < __ffs(end) ? start : end;
+ ret = split_kernel_leaf_mapping_locked(start);
+ } else {
+ ret = split_kernel_leaf_mapping_locked(start);
+ if (!ret)
+ ret = split_kernel_leaf_mapping_locked(end);
+ }
arch_leave_lazy_mmu_mode();
mutex_unlock(&pgtable_split_lock);
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread
* Re: [PATCH v7 4/6] arm64: mm: Optimize split_kernel_leaf_mapping()
2025-08-29 11:52 ` [PATCH v7 4/6] arm64: mm: Optimize split_kernel_leaf_mapping() Ryan Roberts
@ 2025-08-29 22:11 ` Yang Shi
2025-09-03 19:20 ` Catalin Marinas
1 sibling, 0 replies; 51+ messages in thread
From: Yang Shi @ 2025-08-29 22:11 UTC (permalink / raw)
To: Ryan Roberts, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, Dev Jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 8/29/25 4:52 AM, Ryan Roberts wrote:
> The common case for split_kernel_leaf_mapping() is for a single page.
> Let's optimize this by only calling split_kernel_leaf_mapping_locked()
> once.
>
> Since the start and end address are PAGE_SIZE apart, they must be
> contained within the same contpte block. Further, if start is at the
> beginning of the block or end is at the end of the block, then the other
> address must be in the _middle_ of the block. So if we split on this
> middle-of-the-contpte-block address, it is guaranteed that the
> containing contpte block is split to ptes and both start and end are
> therefore mapped by pte.
>
> This avoids the second call to split_kernel_leaf_mapping_locked()
> meaning we only have to walk the pgtable once.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> arch/arm64/mm/mmu.c | 18 +++++++++++++++---
> 1 file changed, 15 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 114b88216b0c..8b5b19e1154b 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -740,9 +740,21 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
> mutex_lock(&pgtable_split_lock);
> arch_enter_lazy_mmu_mode();
>
> - ret = split_kernel_leaf_mapping_locked(start);
> - if (!ret)
> - ret = split_kernel_leaf_mapping_locked(end);
> + /*
> + * Optimize for the common case of splitting out a single page from a
> + * larger mapping. Here we can just split on the "least aligned" of
> + * start and end and this will guarantee that there must also be a split
> + * on the more aligned address since both addresses must be in the
> + * same contpte block and it must have been split to ptes.
> + */
> + if (end - start == PAGE_SIZE) {
> + start = __ffs(start) < __ffs(end) ? start : end;
> + ret = split_kernel_leaf_mapping_locked(start);
This makes sense to me. I suggested the same thing in the discussion
with Dev for v5. I'd like to have this patch squashed into patch #3.
Thanks,
Yang
> + } else {
> + ret = split_kernel_leaf_mapping_locked(start);
> + if (!ret)
> + ret = split_kernel_leaf_mapping_locked(end);
> + }
>
> arch_leave_lazy_mmu_mode();
> mutex_unlock(&pgtable_split_lock);
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 4/6] arm64: mm: Optimize split_kernel_leaf_mapping()
2025-08-29 11:52 ` [PATCH v7 4/6] arm64: mm: Optimize split_kernel_leaf_mapping() Ryan Roberts
2025-08-29 22:11 ` Yang Shi
@ 2025-09-03 19:20 ` Catalin Marinas
2025-09-04 11:09 ` Ryan Roberts
1 sibling, 1 reply; 51+ messages in thread
From: Catalin Marinas @ 2025-09-03 19:20 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl, linux-arm-kernel,
linux-kernel, linux-mm
On Fri, Aug 29, 2025 at 12:52:45PM +0100, Ryan Roberts wrote:
> The common case for split_kernel_leaf_mapping() is for a single page.
> Let's optimize this by only calling split_kernel_leaf_mapping_locked()
> once.
>
> Since the start and end address are PAGE_SIZE apart, they must be
> contained within the same contpte block. Further, if start is at the
> beginning of the block or end is at the end of the block, then the other
> address must be in the _middle_ of the block. So if we split on this
> middle-of-the-contpte-block address, it is guaranteed that the
> containing contpte block is split to ptes and both start and end are
> therefore mapped by pte.
>
> This avoids the second call to split_kernel_leaf_mapping_locked()
> meaning we only have to walk the pgtable once.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
And I agree with Yang, you can just fold this into the previous patch.
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 4/6] arm64: mm: Optimize split_kernel_leaf_mapping()
2025-09-03 19:20 ` Catalin Marinas
@ 2025-09-04 11:09 ` Ryan Roberts
0 siblings, 0 replies; 51+ messages in thread
From: Ryan Roberts @ 2025-09-04 11:09 UTC (permalink / raw)
To: Catalin Marinas
Cc: Will Deacon, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl, linux-arm-kernel,
linux-kernel, linux-mm
On 03/09/2025 20:20, Catalin Marinas wrote:
> On Fri, Aug 29, 2025 at 12:52:45PM +0100, Ryan Roberts wrote:
>> The common case for split_kernel_leaf_mapping() is for a single page.
>> Let's optimize this by only calling split_kernel_leaf_mapping_locked()
>> once.
>>
>> Since the start and end address are PAGE_SIZE apart, they must be
>> contained within the same contpte block. Further, if start is at the
>> beginning of the block or end is at the end of the block, then the other
>> address must be in the _middle_ of the block. So if we split on this
>> middle-of-the-contpte-block address, it is guaranteed that the
>> containing contpte block is split to ptes and both start and end are
>> therefore mapped by pte.
>>
>> This avoids the second call to split_kernel_leaf_mapping_locked()
>> meaning we only have to walk the pgtable once.
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
>
> And I agree with Yang, you can just fold this into the previous patch.
Yep, will do. Thanks for the review.
^ permalink raw reply [flat|nested] 51+ messages in thread
* [PATCH v7 5/6] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs
2025-08-29 11:52 [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
` (3 preceding siblings ...)
2025-08-29 11:52 ` [PATCH v7 4/6] arm64: mm: Optimize split_kernel_leaf_mapping() Ryan Roberts
@ 2025-08-29 11:52 ` Ryan Roberts
2025-09-04 16:59 ` Catalin Marinas
2025-08-29 11:52 ` [PATCH v7 6/6] arm64: mm: Optimize linear_map_split_to_ptes() Ryan Roberts
2025-09-01 5:04 ` [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Dev Jain
6 siblings, 1 reply; 51+ messages in thread
From: Ryan Roberts @ 2025-08-29 11:52 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel, linux-mm
The kernel linear mapping is painted at a very early stage of system boot.
The cpufeature has not been finalized yet at this point, so the linear
mapping is determined by the capability of the boot CPU only. If the boot
CPU supports BBML2, large block mappings will be used for the linear
mapping.
But the secondary CPUs may not support BBML2, so once cpufeatures are
finalized on all CPUs, repaint the linear mapping if large block mappings
are in use and any secondary CPU doesn't support BBML2.
If the boot CPU doesn't support BBML2, or the secondary CPUs have the
same BBML2 capability as the boot CPU, repainting the linear mapping
is not needed.
Repainting is implemented by the boot CPU, which we know supports BBML2,
so it is safe for the live mapping size to change for this CPU. The
linear map region is walked using the pagewalk API and any discovered
large leaf mappings are split to pte mappings using the existing helper
functions. Since the repainting is performed inside of a stop_machine(),
we must use GFP_ATOMIC to allocate the extra intermediate pgtables. But
since we are still early in boot, it is expected that there is plenty of
memory available so we will never need to sleep for reclaim, and so
GFP_ATOMIC is acceptable here.
The secondary CPUs are all put into a waiting area with the idmap in
TTBR0 and reserved map in TTBR1 while this is performed since they
cannot be allowed to observe any size changes on the live mappings. Some
of this infrastructure is reused from the kpti case. Specifically we
share the same flag (was __idmap_kpti_flag, now idmap_kpti_bbml2_flag)
since it means we don't have to reserve any extra pgtable memory to
idmap the extra flag.
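To make the flow easier to follow before reading the diff, the rendezvous
between the boot CPU and the secondaries can be modelled in userspace roughly
as below. This is a hedged sketch only: threads stand in for CPUs, the thread
count and function names are invented, and only the flag handling mirrors the
real code.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS 4

/* Mirrors idmap_kpti_bbml2_flag: starts at 1, each secondary adds 1. */
static atomic_uint flag = 1;

static void *cpu_fn(void *arg)
{
	long cpu = (long)arg;

	if (cpu == 0) {
		/* Boot CPU: wait until every secondary has parked. */
		while (atomic_load_explicit(&flag, memory_order_acquire) != NR_CPUS)
			;
		printf("cpu0: secondaries parked, repainting linear map\n");
		/* ... split the linear map to ptes, flush TLBs ... */
		atomic_store_explicit(&flag, 0, memory_order_release);
	} else {
		/* Secondary: announce arrival, then spin until released. */
		atomic_fetch_add_explicit(&flag, 1, memory_order_acq_rel);
		while (atomic_load_explicit(&flag, memory_order_acquire) != 0)
			;
		printf("cpu%ld: released\n", cpu);
	}
	return NULL;
}

int main(void)
{
	pthread_t threads[NR_CPUS];

	for (long i = 0; i < NR_CPUS; i++)
		pthread_create(&threads[i], NULL, cpu_fn, (void *)i);
	for (int i = 0; i < NR_CPUS; i++)
		pthread_join(threads[i], NULL);
	return 0;
}

In the kernel, the secondaries additionally run with the idmap in TTBR0 and
the reserved map in TTBR1 so they cannot observe the live mappings changing;
that part has no userspace analogue here.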
Co-developed-by: Yang Shi <yang@os.amperecomputing.com>
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/mmu.h | 2 +
arch/arm64/kernel/cpufeature.c | 3 +
arch/arm64/mm/mmu.c | 168 +++++++++++++++++++++++++++++----
arch/arm64/mm/proc.S | 27 ++++--
4 files changed, 175 insertions(+), 25 deletions(-)
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 56fca81f60ad..2acfa7801d02 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -72,6 +72,8 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
extern void mark_linear_text_alias_ro(void);
extern int split_kernel_leaf_mapping(unsigned long start, unsigned long end);
+extern void init_idmap_kpti_bbml2_flag(void);
+extern void linear_map_maybe_split_to_ptes(void);
/*
* This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a8936c1023ea..461d286f40b1 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -85,6 +85,7 @@
#include <asm/insn.h>
#include <asm/kvm_host.h>
#include <asm/mmu_context.h>
+#include <asm/mmu.h>
#include <asm/mte.h>
#include <asm/hypervisor.h>
#include <asm/processor.h>
@@ -2027,6 +2028,7 @@ static void __init kpti_install_ng_mappings(void)
if (arm64_use_ng_mappings)
return;
+ init_idmap_kpti_bbml2_flag();
stop_machine(__kpti_install_ng_mappings, NULL, cpu_online_mask);
}
@@ -3930,6 +3932,7 @@ void __init setup_system_features(void)
{
setup_system_capabilities();
+ linear_map_maybe_split_to_ptes();
kpti_install_ng_mappings();
sve_setup();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 8b5b19e1154b..6bd0b065bd97 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -27,6 +27,8 @@
#include <linux/kfence.h>
#include <linux/pkeys.h>
#include <linux/mm_inline.h>
+#include <linux/pagewalk.h>
+#include <linux/stop_machine.h>
#include <asm/barrier.h>
#include <asm/cputype.h>
@@ -483,11 +485,11 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
#define INVALID_PHYS_ADDR -1
-static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
+static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm, gfp_t gfp,
enum pgtable_type pgtable_type)
{
/* Page is zeroed by init_clear_pgtable() so don't duplicate effort. */
- struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
+ struct ptdesc *ptdesc = pagetable_alloc(gfp & ~__GFP_ZERO, 0);
phys_addr_t pa;
if (!ptdesc)
@@ -514,9 +516,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
}
static phys_addr_t
-try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type, gfp_t gfp)
{
- return __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ return __pgd_pgtable_alloc(&init_mm, gfp, pgtable_type);
}
static phys_addr_t __maybe_unused
@@ -524,7 +526,7 @@ pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
{
phys_addr_t pa;
- pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ pa = __pgd_pgtable_alloc(&init_mm, GFP_PGTABLE_KERNEL, pgtable_type);
BUG_ON(pa == INVALID_PHYS_ADDR);
return pa;
}
@@ -534,7 +536,7 @@ pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
{
phys_addr_t pa;
- pa = __pgd_pgtable_alloc(NULL, pgtable_type);
+ pa = __pgd_pgtable_alloc(NULL, GFP_PGTABLE_KERNEL, pgtable_type);
BUG_ON(pa == INVALID_PHYS_ADDR);
return pa;
}
@@ -548,7 +550,7 @@ static void split_contpte(pte_t *ptep)
__set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
}
-static int split_pmd(pmd_t *pmdp, pmd_t pmd)
+static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
{
pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
unsigned long pfn = pmd_pfn(pmd);
@@ -557,7 +559,7 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd)
pte_t *ptep;
int i;
- pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
+ pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE, gfp);
if (pte_phys == INVALID_PHYS_ADDR)
return -ENOMEM;
ptep = (pte_t *)phys_to_virt(pte_phys);
@@ -590,7 +592,7 @@ static void split_contpmd(pmd_t *pmdp)
set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
}
-static int split_pud(pud_t *pudp, pud_t pud)
+static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
{
pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
unsigned int step = PMD_SIZE >> PAGE_SHIFT;
@@ -600,7 +602,7 @@ static int split_pud(pud_t *pudp, pud_t pud)
pmd_t *pmdp;
int i;
- pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
+ pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD, gfp);
if (pmd_phys == INVALID_PHYS_ADDR)
return -ENOMEM;
pmdp = (pmd_t *)phys_to_virt(pmd_phys);
@@ -667,7 +669,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
if (!pud_present(pud))
goto out;
if (pud_leaf(pud)) {
- ret = split_pud(pudp, pud);
+ ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL);
if (ret)
goto out;
}
@@ -692,7 +694,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
*/
if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
goto out;
- ret = split_pmd(pmdp, pmd);
+ ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL);
if (ret)
goto out;
}
@@ -761,6 +763,132 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
return ret;
}
+static int __init split_to_ptes_pud_entry(pud_t *pudp, unsigned long addr,
+ unsigned long next,
+ struct mm_walk *walk)
+{
+ pud_t pud = pudp_get(pudp);
+ int ret = 0;
+
+ if (pud_leaf(pud))
+ ret = split_pud(pudp, pud, GFP_ATOMIC);
+
+ return ret;
+}
+
+static int __init split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
+ unsigned long next,
+ struct mm_walk *walk)
+{
+ pmd_t pmd = pmdp_get(pmdp);
+ int ret = 0;
+
+ if (pmd_leaf(pmd)) {
+ if (pmd_cont(pmd))
+ split_contpmd(pmdp);
+ ret = split_pmd(pmdp, pmd, GFP_ATOMIC);
+ }
+
+ return ret;
+}
+
+static int __init split_to_ptes_pte_entry(pte_t *ptep, unsigned long addr,
+ unsigned long next,
+ struct mm_walk *walk)
+{
+ pte_t pte = __ptep_get(ptep);
+
+ if (pte_cont(pte))
+ split_contpte(ptep);
+
+ return 0;
+}
+
+static const struct mm_walk_ops split_to_ptes_ops __initconst = {
+ .pud_entry = split_to_ptes_pud_entry,
+ .pmd_entry = split_to_ptes_pmd_entry,
+ .pte_entry = split_to_ptes_pte_entry,
+};
+
+static bool linear_map_requires_bbml2 __initdata;
+
+u32 idmap_kpti_bbml2_flag;
+
+void __init init_idmap_kpti_bbml2_flag(void)
+{
+ WRITE_ONCE(idmap_kpti_bbml2_flag, 1);
+ /* Must be visible to other CPUs before stop_machine() is called. */
+ smp_mb();
+}
+
+static int __init linear_map_split_to_ptes(void *__unused)
+{
+ /*
+ * Repainting the linear map must be done by CPU0 (the boot CPU) because
+ * that's the only CPU that we know supports BBML2. The other CPUs will
+ * be held in a waiting area with the idmap active.
+ */
+ if (!smp_processor_id()) {
+ unsigned long lstart = _PAGE_OFFSET(vabits_actual);
+ unsigned long lend = PAGE_END;
+ unsigned long kstart = (unsigned long)lm_alias(_stext);
+ unsigned long kend = (unsigned long)lm_alias(__init_begin);
+ int ret;
+
+ /*
+ * Wait for all secondary CPUs to be put into the waiting area.
+ */
+ smp_cond_load_acquire(&idmap_kpti_bbml2_flag, VAL == num_online_cpus());
+
+ /*
+ * Walk all of the linear map [lstart, lend), except the kernel
+ * linear map alias [kstart, kend), and split all mappings to
+ * PTE. The kernel alias remains static throughout runtime so
+ * can continue to be safely mapped with large mappings.
+ */
+ ret = walk_kernel_page_table_range_lockless(lstart, kstart,
+ &split_to_ptes_ops, NULL, NULL);
+ if (!ret)
+ ret = walk_kernel_page_table_range_lockless(kend, lend,
+ &split_to_ptes_ops, NULL, NULL);
+ if (ret)
+ panic("Failed to split linear map\n");
+ flush_tlb_kernel_range(lstart, lend);
+
+ /*
+ * Relies on dsb in flush_tlb_kernel_range() to avoid reordering
+ * before any page table split operations.
+ */
+ WRITE_ONCE(idmap_kpti_bbml2_flag, 0);
+ } else {
+ typedef void (wait_split_fn)(void);
+ extern wait_split_fn wait_linear_map_split_to_ptes;
+ wait_split_fn *wait_fn;
+
+ wait_fn = (void *)__pa_symbol(wait_linear_map_split_to_ptes);
+
+ /*
+ * At least one secondary CPU doesn't support BBML2 so cannot
+ * tolerate the size of the live mappings changing. So have the
+ * secondary CPUs wait for the boot CPU to make the changes
+ * with the idmap active and init_mm inactive.
+ */
+ cpu_install_idmap();
+ wait_fn();
+ cpu_uninstall_idmap();
+ }
+
+ return 0;
+}
+
+void __init linear_map_maybe_split_to_ptes(void)
+{
+ if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort()) {
+ init_idmap_kpti_bbml2_flag();
+ stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);
+ }
+}
+
/*
* This function can only be used to modify existing table entries,
* without allocating new levels of table. Note that this permits the
@@ -915,6 +1043,8 @@ static void __init map_mem(pgd_t *pgdp)
early_kfence_pool = arm64_kfence_alloc_pool();
+ linear_map_requires_bbml2 = !force_pte_mapping() && can_set_direct_map();
+
if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
@@ -1048,7 +1178,7 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
- kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
+ kpti_bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
static void __init create_idmap(void)
{
@@ -1060,15 +1190,17 @@ static void __init create_idmap(void)
IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
__phys_to_virt(ptep) - ptep);
- if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) && !arm64_use_ng_mappings) {
- extern u32 __idmap_kpti_flag;
- u64 pa = __pa_symbol(&__idmap_kpti_flag);
+ if (linear_map_requires_bbml2 ||
+ (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) && !arm64_use_ng_mappings)) {
+ u64 pa = __pa_symbol(&idmap_kpti_bbml2_flag);
/*
* The KPTI G-to-nG conversion code needs a read-write mapping
- * of its synchronization flag in the ID map.
+ * of its synchronization flag in the ID map. This is also used
+ * when splitting the linear map to ptes if a secondary CPU
+ * doesn't support bbml2.
*/
- ptep = __pa_symbol(kpti_ptes);
+ ptep = __pa_symbol(kpti_bbml2_ptes);
__pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
__phys_to_virt(ptep) - ptep);
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 8c75965afc9e..86818511962b 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -245,10 +245,6 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
*
* Called exactly once from stop_machine context by each CPU found during boot.
*/
- .pushsection ".data", "aw", %progbits
-SYM_DATA(__idmap_kpti_flag, .long 1)
- .popsection
-
SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings)
cpu .req w0
temp_pte .req x0
@@ -273,7 +269,7 @@ SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings)
mov x5, x3 // preserve temp_pte arg
mrs swapper_ttb, ttbr1_el1
- adr_l flag_ptr, __idmap_kpti_flag
+ adr_l flag_ptr, idmap_kpti_bbml2_flag
cbnz cpu, __idmap_kpti_secondary
@@ -416,7 +412,25 @@ alternative_else_nop_endif
__idmap_kpti_secondary:
/* Uninstall swapper before surgery begins */
__idmap_cpu_set_reserved_ttbr1 x16, x17
+ b scondary_cpu_wait
+
+ .unreq swapper_ttb
+ .unreq flag_ptr
+SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+ .popsection
+#endif
+
+ .pushsection ".idmap.text", "a"
+SYM_TYPED_FUNC_START(wait_linear_map_split_to_ptes)
+ /* Must be same registers as in idmap_kpti_install_ng_mappings */
+ swapper_ttb .req x3
+ flag_ptr .req x4
+
+ mrs swapper_ttb, ttbr1_el1
+ adr_l flag_ptr, idmap_kpti_bbml2_flag
+ __idmap_cpu_set_reserved_ttbr1 x16, x17
+scondary_cpu_wait:
/* Increment the flag to let the boot CPU we're ready */
1: ldxr w16, [flag_ptr]
add w16, w16, #1
@@ -436,9 +450,8 @@ __idmap_kpti_secondary:
.unreq swapper_ttb
.unreq flag_ptr
-SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+SYM_FUNC_END(wait_linear_map_split_to_ptes)
.popsection
-#endif
/*
* __cpu_setup
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread
* Re: [PATCH v7 5/6] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs
2025-08-29 11:52 ` [PATCH v7 5/6] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs Ryan Roberts
@ 2025-09-04 16:59 ` Catalin Marinas
2025-09-04 17:54 ` Yang Shi
2025-09-08 15:25 ` Ryan Roberts
0 siblings, 2 replies; 51+ messages in thread
From: Catalin Marinas @ 2025-09-04 16:59 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl, linux-arm-kernel,
linux-kernel, linux-mm
On Fri, Aug 29, 2025 at 12:52:46PM +0100, Ryan Roberts wrote:
> The kernel linear mapping is painted in very early stage of system boot.
> The cpufeature has not been finalized yet at this point. So the linear
> mapping is determined by the capability of boot CPU only. If the boot
> CPU supports BBML2, large block mappings will be used for linear
> mapping.
>
> But the secondary CPUs may not support BBML2, so repaint the linear
> mapping if large block mapping is used and the secondary CPUs don't
> support BBML2 once cpufeature is finalized on all CPUs.
>
> If the boot CPU doesn't support BBML2 or the secondary CPUs have the
> same BBML2 capability with the boot CPU, repainting the linear mapping
> is not needed.
>
> Repainting is implemented by the boot CPU, which we know supports BBML2,
> so it is safe for the live mapping size to change for this CPU. The
> linear map region is walked using the pagewalk API and any discovered
> large leaf mappings are split to pte mappings using the existing helper
> functions. Since the repainting is performed inside of a stop_machine(),
> we must use GFP_ATOMIC to allocate the extra intermediate pgtables. But
> since we are still early in boot, it is expected that there is plenty of
> memory available so we will never need to sleep for reclaim, and so
> GFP_ATOMIC is acceptable here.
>
> The secondary CPUs are all put into a waiting area with the idmap in
> TTBR0 and reserved map in TTBR1 while this is performed since they
> cannot be allowed to observe any size changes on the live mappings. Some
> of this infrastructure is reused from the kpti case. Specifically we
> share the same flag (was __idmap_kpti_flag, now idmap_kpti_bbml2_flag)
> since it means we don't have to reserve any extra pgtable memory to
> idmap the extra flag.
>
> Co-developed-by: Yang Shi <yang@os.amperecomputing.com>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
I think this works, so:
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
However, I wonder how likely we are to find this combination in the
field to be worth carrying this code upstream. With kpti, we were aware
of platforms requiring it but is this also the case for BBM? If not, I'd
keep the patch out until we get a concrete example.
--
Catalin
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 5/6] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs
2025-09-04 16:59 ` Catalin Marinas
@ 2025-09-04 17:54 ` Yang Shi
2025-09-08 15:25 ` Ryan Roberts
1 sibling, 0 replies; 51+ messages in thread
From: Yang Shi @ 2025-09-04 17:54 UTC (permalink / raw)
To: Catalin Marinas, Ryan Roberts
Cc: Will Deacon, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Ard Biesheuvel, Dev Jain, scott, cl, linux-arm-kernel,
linux-kernel, linux-mm
On 9/4/25 9:59 AM, Catalin Marinas wrote:
> On Fri, Aug 29, 2025 at 12:52:46PM +0100, Ryan Roberts wrote:
>> The kernel linear mapping is painted in very early stage of system boot.
>> The cpufeature has not been finalized yet at this point. So the linear
>> mapping is determined by the capability of boot CPU only. If the boot
>> CPU supports BBML2, large block mappings will be used for linear
>> mapping.
>>
>> But the secondary CPUs may not support BBML2, so repaint the linear
>> mapping if large block mapping is used and the secondary CPUs don't
>> support BBML2 once cpufeature is finalized on all CPUs.
>>
>> If the boot CPU doesn't support BBML2 or the secondary CPUs have the
>> same BBML2 capability with the boot CPU, repainting the linear mapping
>> is not needed.
>>
>> Repainting is implemented by the boot CPU, which we know supports BBML2,
>> so it is safe for the live mapping size to change for this CPU. The
>> linear map region is walked using the pagewalk API and any discovered
>> large leaf mappings are split to pte mappings using the existing helper
>> functions. Since the repainting is performed inside of a stop_machine(),
>> we must use GFP_ATOMIC to allocate the extra intermediate pgtables. But
>> since we are still early in boot, it is expected that there is plenty of
>> memory available so we will never need to sleep for reclaim, and so
>> GFP_ATOMIC is acceptable here.
>>
>> The secondary CPUs are all put into a waiting area with the idmap in
>> TTBR0 and reserved map in TTBR1 while this is performed since they
>> cannot be allowed to observe any size changes on the live mappings. Some
>> of this infrastructure is reused from the kpti case. Specifically we
>> share the same flag (was __idmap_kpti_flag, now idmap_kpti_bbml2_flag)
>> since it means we don't have to reserve any extra pgtable memory to
>> idmap the extra flag.
>>
>> Co-developed-by: Yang Shi <yang@os.amperecomputing.com>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> I think this works, so:
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Thank you!
>
> However, I wonder how likely we are to find this combination in the
> field to be worth carrying this code upstream. With kpti, we were aware
> of platforms requiring it but is this also the case for BBM? If not, I'd
> keep the patch out until we get a concrete example.
At least we (Ampere) are very very unlikely to ship asymmetric systems
AFAICT.
Thanks,
Yang
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 5/6] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs
2025-09-04 16:59 ` Catalin Marinas
2025-09-04 17:54 ` Yang Shi
@ 2025-09-08 15:25 ` Ryan Roberts
1 sibling, 0 replies; 51+ messages in thread
From: Ryan Roberts @ 2025-09-08 15:25 UTC (permalink / raw)
To: Catalin Marinas
Cc: Will Deacon, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl, linux-arm-kernel,
linux-kernel, linux-mm
On 04/09/2025 17:59, Catalin Marinas wrote:
> On Fri, Aug 29, 2025 at 12:52:46PM +0100, Ryan Roberts wrote:
>> The kernel linear mapping is painted in very early stage of system boot.
>> The cpufeature has not been finalized yet at this point. So the linear
>> mapping is determined by the capability of boot CPU only. If the boot
>> CPU supports BBML2, large block mappings will be used for linear
>> mapping.
>>
>> But the secondary CPUs may not support BBML2, so repaint the linear
>> mapping if large block mapping is used and the secondary CPUs don't
>> support BBML2 once cpufeature is finalized on all CPUs.
>>
>> If the boot CPU doesn't support BBML2 or the secondary CPUs have the
>> same BBML2 capability with the boot CPU, repainting the linear mapping
>> is not needed.
>>
>> Repainting is implemented by the boot CPU, which we know supports BBML2,
>> so it is safe for the live mapping size to change for this CPU. The
>> linear map region is walked using the pagewalk API and any discovered
>> large leaf mappings are split to pte mappings using the existing helper
>> functions. Since the repainting is performed inside of a stop_machine(),
>> we must use GFP_ATOMIC to allocate the extra intermediate pgtables. But
>> since we are still early in boot, it is expected that there is plenty of
>> memory available so we will never need to sleep for reclaim, and so
>> GFP_ATOMIC is acceptable here.
>>
>> The secondary CPUs are all put into a waiting area with the idmap in
>> TTBR0 and reserved map in TTBR1 while this is performed since they
>> cannot be allowed to observe any size changes on the live mappings. Some
>> of this infrastructure is reused from the kpti case. Specifically we
>> share the same flag (was __idmap_kpti_flag, now idmap_kpti_bbml2_flag)
>> since it means we don't have to reserve any extra pgtable memory to
>> idmap the extra flag.
>>
>> Co-developed-by: Yang Shi <yang@os.amperecomputing.com>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>
> I think this works, so:
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Thanks!
>
> However, I wonder how likely we are to find this combination in the
> field to be worth carrying this code upstream. With kpti, we were aware
> of platforms requiring it but is this also the case for BBM? If not, I'd
> keep the patch out until we get a concrete example.
Cortex-X4 supports BBML2_NOABORT (and is in the allow list). According to
Wikipedia [1], X4 is in:
- Google Tensor G4 [2]
- MediaTek Dimensity 9300/9300+ [3]
- Qualcomm Snapdragon 8 Gen 3 [4]
And in each of those SoCs, the X4s are paired with A720 and A520 cores.
To my knowledge, neither A720 nor A520 supports BBML2_NOABORT. Certainly they are
not currently in the allow list. So on that basis, I think they require the
fallback path, assuming these platforms use one of the X4 cores as the boot CPU.
[1] https://en.wikipedia.org/wiki/ARM_Cortex-X4
[2] https://en.wikipedia.org/wiki/Google_Tensor
[3] https://en.wikipedia.org/wiki/List_of_MediaTek_systems_on_chips
[4] https://en.wikipedia.org/wiki/List_of_Qualcomm_Snapdragon_systems_on_chips
Thanks,
Ryan
^ permalink raw reply [flat|nested] 51+ messages in thread
* [PATCH v7 6/6] arm64: mm: Optimize linear_map_split_to_ptes()
2025-08-29 11:52 [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
` (4 preceding siblings ...)
2025-08-29 11:52 ` [PATCH v7 5/6] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs Ryan Roberts
@ 2025-08-29 11:52 ` Ryan Roberts
2025-08-29 22:27 ` Yang Shi
2025-09-04 17:00 ` Catalin Marinas
2025-09-01 5:04 ` [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Dev Jain
6 siblings, 2 replies; 51+ messages in thread
From: Ryan Roberts @ 2025-08-29 11:52 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel, linux-mm
When splitting kernel leaf mappings, either via
split_kernel_leaf_mapping_locked() or linear_map_split_to_ptes(),
previously a leaf mapping was always split to the next size down. e.g.
pud -> contpmd -> pmd -> contpte -> pte. But for
linear_map_split_to_ptes() we can avoid the contpmd and contpte states
because we know we want to split all the way down to ptes.
This avoids visiting all the ptes in a table if it was created by
splitting a pmd, which is noticeable on systems with a lot of memory.
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/mm/mmu.c | 26 ++++++++++++++++++--------
1 file changed, 18 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6bd0b065bd97..8e45cd08bf3a 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -550,7 +550,7 @@ static void split_contpte(pte_t *ptep)
__set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
}
-static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
+static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp, bool to_cont)
{
pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
unsigned long pfn = pmd_pfn(pmd);
@@ -568,7 +568,9 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
tableprot |= PMD_TABLE_PXN;
prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
- prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+ prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
+ if (to_cont)
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
__set_pte(ptep, pfn_pte(pfn, prot));
@@ -592,7 +594,7 @@ static void split_contpmd(pmd_t *pmdp)
set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
}
-static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
+static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp, bool to_cont)
{
pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
unsigned int step = PMD_SIZE >> PAGE_SHIFT;
@@ -611,7 +613,9 @@ static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
tableprot |= PUD_TABLE_PXN;
prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
- prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+ prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
+ if (to_cont)
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
set_pmd(pmdp, pfn_pmd(pfn, prot));
@@ -669,7 +673,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
if (!pud_present(pud))
goto out;
if (pud_leaf(pud)) {
- ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL);
+ ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL, true);
if (ret)
goto out;
}
@@ -694,7 +698,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
*/
if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
goto out;
- ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL);
+ ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL, true);
if (ret)
goto out;
}
@@ -771,7 +775,7 @@ static int __init split_to_ptes_pud_entry(pud_t *pudp, unsigned long addr,
int ret = 0;
if (pud_leaf(pud))
- ret = split_pud(pudp, pud, GFP_ATOMIC);
+ ret = split_pud(pudp, pud, GFP_ATOMIC, false);
return ret;
}
@@ -786,7 +790,13 @@ static int __init split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
if (pmd_leaf(pmd)) {
if (pmd_cont(pmd))
split_contpmd(pmdp);
- ret = split_pmd(pmdp, pmd, GFP_ATOMIC);
+ ret = split_pmd(pmdp, pmd, GFP_ATOMIC, false);
+
+ /*
+ * We have split the pmd directly to ptes so there is no need to
+ * visit each pte to check if they are contpte.
+ */
+ walk->action = ACTION_CONTINUE;
}
return ret;
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread
* Re: [PATCH v7 6/6] arm64: mm: Optimize linear_map_split_to_ptes()
2025-08-29 11:52 ` [PATCH v7 6/6] arm64: mm: Optimize linear_map_split_to_ptes() Ryan Roberts
@ 2025-08-29 22:27 ` Yang Shi
2025-09-04 11:10 ` Ryan Roberts
2025-09-04 17:00 ` Catalin Marinas
1 sibling, 1 reply; 51+ messages in thread
From: Yang Shi @ 2025-08-29 22:27 UTC (permalink / raw)
To: Ryan Roberts, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, Dev Jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 8/29/25 4:52 AM, Ryan Roberts wrote:
> When splitting kernel leaf mappings, either via
> split_kernel_leaf_mapping_locked() or linear_map_split_to_ptes(),
> previously a leaf mapping was always split to the next size down. e.g.
> pud -> contpmd -> pmd -> contpte -> pte. But for
> linear_map_split_to_ptes() we can avoid the contpmd and contpte states
> because we know we want to split all the way down to ptes.
>
> This avoids visiting all the ptes in a table if it was created by
> splitting a pmd, which is noticeable on systems with a lot of memory.
Similar to patch #4, this patch should be squashed into patch #5 IMHO.
Thanks,
Yang
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> arch/arm64/mm/mmu.c | 26 ++++++++++++++++++--------
> 1 file changed, 18 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 6bd0b065bd97..8e45cd08bf3a 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -550,7 +550,7 @@ static void split_contpte(pte_t *ptep)
> __set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
> }
>
> -static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
> +static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp, bool to_cont)
> {
> pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
> unsigned long pfn = pmd_pfn(pmd);
> @@ -568,7 +568,9 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
> tableprot |= PMD_TABLE_PXN;
>
> prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
> - prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> + prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
> + if (to_cont)
> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>
> for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
> __set_pte(ptep, pfn_pte(pfn, prot));
> @@ -592,7 +594,7 @@ static void split_contpmd(pmd_t *pmdp)
> set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
> }
>
> -static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
> +static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp, bool to_cont)
> {
> pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
> unsigned int step = PMD_SIZE >> PAGE_SHIFT;
> @@ -611,7 +613,9 @@ static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
> tableprot |= PUD_TABLE_PXN;
>
> prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
> - prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> + prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
> + if (to_cont)
> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>
> for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
> set_pmd(pmdp, pfn_pmd(pfn, prot));
> @@ -669,7 +673,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
> if (!pud_present(pud))
> goto out;
> if (pud_leaf(pud)) {
> - ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL);
> + ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL, true);
> if (ret)
> goto out;
> }
> @@ -694,7 +698,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
> */
> if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
> goto out;
> - ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL);
> + ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL, true);
> if (ret)
> goto out;
> }
> @@ -771,7 +775,7 @@ static int __init split_to_ptes_pud_entry(pud_t *pudp, unsigned long addr,
> int ret = 0;
>
> if (pud_leaf(pud))
> - ret = split_pud(pudp, pud, GFP_ATOMIC);
> + ret = split_pud(pudp, pud, GFP_ATOMIC, false);
>
> return ret;
> }
> @@ -786,7 +790,13 @@ static int __init split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
> if (pmd_leaf(pmd)) {
> if (pmd_cont(pmd))
> split_contpmd(pmdp);
> - ret = split_pmd(pmdp, pmd, GFP_ATOMIC);
> + ret = split_pmd(pmdp, pmd, GFP_ATOMIC, false);
> +
> + /*
> + * We have split the pmd directly to ptes so there is no need to
> + * visit each pte to check if they are contpte.
> + */
> + walk->action = ACTION_CONTINUE;
> }
>
> return ret;
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 6/6] arm64: mm: Optimize linear_map_split_to_ptes()
2025-08-29 22:27 ` Yang Shi
@ 2025-09-04 11:10 ` Ryan Roberts
2025-09-04 14:58 ` Yang Shi
0 siblings, 1 reply; 51+ messages in thread
From: Ryan Roberts @ 2025-09-04 11:10 UTC (permalink / raw)
To: Yang Shi, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, Dev Jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 29/08/2025 23:27, Yang Shi wrote:
>
>
> On 8/29/25 4:52 AM, Ryan Roberts wrote:
>> When splitting kernel leaf mappings, either via
>> split_kernel_leaf_mapping_locked() or linear_map_split_to_ptes(),
>> previously a leaf mapping was always split to the next size down. e.g.
>> pud -> contpmd -> pmd -> contpte -> pte. But for
>> linear_map_split_to_ptes() we can avoid the contpmd and contpte states
>> because we know we want to split all the way down to ptes.
>>
>> This avoids visiting all the ptes in a table if it was created by
>> splitting a pmd, which is noticeable on systems with a lot of memory.
>
> Similar to patch #4, this patch should be squashed into patch #5 IMHO.
That's fine by me. I was just trying to make the review easier by splitting
non-essential stuff out. Let's squash for next version.
>
> Thanks,
> Yang
>
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>> arch/arm64/mm/mmu.c | 26 ++++++++++++++++++--------
>> 1 file changed, 18 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 6bd0b065bd97..8e45cd08bf3a 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -550,7 +550,7 @@ static void split_contpte(pte_t *ptep)
>> __set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
>> }
>> -static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
>> +static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp, bool to_cont)
>> {
>> pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
>> unsigned long pfn = pmd_pfn(pmd);
>> @@ -568,7 +568,9 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
>> tableprot |= PMD_TABLE_PXN;
>> prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
>> - prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> + prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
>> + if (to_cont)
>> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
>> __set_pte(ptep, pfn_pte(pfn, prot));
>> @@ -592,7 +594,7 @@ static void split_contpmd(pmd_t *pmdp)
>> set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
>> }
>> -static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
>> +static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp, bool to_cont)
>> {
>> pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
>> unsigned int step = PMD_SIZE >> PAGE_SHIFT;
>> @@ -611,7 +613,9 @@ static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
>> tableprot |= PUD_TABLE_PXN;
>> prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
>> - prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> + prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
>> + if (to_cont)
>> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>> for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
>> set_pmd(pmdp, pfn_pmd(pfn, prot));
>> @@ -669,7 +673,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long
>> addr)
>> if (!pud_present(pud))
>> goto out;
>> if (pud_leaf(pud)) {
>> - ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL);
>> + ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL, true);
>> if (ret)
>> goto out;
>> }
>> @@ -694,7 +698,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long
>> addr)
>> */
>> if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
>> goto out;
>> - ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL);
>> + ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL, true);
>> if (ret)
>> goto out;
>> }
>> @@ -771,7 +775,7 @@ static int __init split_to_ptes_pud_entry(pud_t *pudp,
>> unsigned long addr,
>> int ret = 0;
>> if (pud_leaf(pud))
>> - ret = split_pud(pudp, pud, GFP_ATOMIC);
>> + ret = split_pud(pudp, pud, GFP_ATOMIC, false);
>> return ret;
>> }
>> @@ -786,7 +790,13 @@ static int __init split_to_ptes_pmd_entry(pmd_t *pmdp,
>> unsigned long addr,
>> if (pmd_leaf(pmd)) {
>> if (pmd_cont(pmd))
>> split_contpmd(pmdp);
>> - ret = split_pmd(pmdp, pmd, GFP_ATOMIC);
>> + ret = split_pmd(pmdp, pmd, GFP_ATOMIC, false);
>> +
>> + /*
>> + * We have split the pmd directly to ptes so there is no need to
>> + * visit each pte to check if they are contpte.
>> + */
>> + walk->action = ACTION_CONTINUE;
>> }
>> return ret;
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 6/6] arm64: mm: Optimize linear_map_split_to_ptes()
2025-09-04 11:10 ` Ryan Roberts
@ 2025-09-04 14:58 ` Yang Shi
0 siblings, 0 replies; 51+ messages in thread
From: Yang Shi @ 2025-09-04 14:58 UTC (permalink / raw)
To: Ryan Roberts, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, Dev Jain,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 9/4/25 4:10 AM, Ryan Roberts wrote:
> On 29/08/2025 23:27, Yang Shi wrote:
>>
>> On 8/29/25 4:52 AM, Ryan Roberts wrote:
>>> When splitting kernel leaf mappings, either via
>>> split_kernel_leaf_mapping_locked() or linear_map_split_to_ptes(),
>>> previously a leaf mapping was always split to the next size down. e.g.
>>> pud -> contpmd -> pmd -> contpte -> pte. But for
>>> linear_map_split_to_ptes() we can avoid the contpmd and contpte states
>>> because we know we want to split all the way down to ptes.
>>>
>>> This avoids visiting all the ptes in a table if it was created by
>>> splitting a pmd, which is noticible on systems with a lot of memory.
>> Similar to patch #4, this patch should be squashed into patch #5 IMHO.
> That's fine by me. I was just trying to make the review easier by splitting
> non-essential stuff out. Let's squash for next version.
Understood.
Thanks,
Yang
>
>> Thanks,
>> Yang
>>
>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>> ---
>>> arch/arm64/mm/mmu.c | 26 ++++++++++++++++++--------
>>> 1 file changed, 18 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>> index 6bd0b065bd97..8e45cd08bf3a 100644
>>> --- a/arch/arm64/mm/mmu.c
>>> +++ b/arch/arm64/mm/mmu.c
>>> @@ -550,7 +550,7 @@ static void split_contpte(pte_t *ptep)
>>> __set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
>>> }
>>> -static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
>>> +static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp, bool to_cont)
>>> {
>>> pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
>>> unsigned long pfn = pmd_pfn(pmd);
>>> @@ -568,7 +568,9 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp)
>>> tableprot |= PMD_TABLE_PXN;
>>> prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
>>> - prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>>> + prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
>>> + if (to_cont)
>>> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>>> for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
>>> __set_pte(ptep, pfn_pte(pfn, prot));
>>> @@ -592,7 +594,7 @@ static void split_contpmd(pmd_t *pmdp)
>>> set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
>>> }
>>> -static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
>>> +static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp, bool to_cont)
>>> {
>>> pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
>>> unsigned int step = PMD_SIZE >> PAGE_SHIFT;
>>> @@ -611,7 +613,9 @@ static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp)
>>> tableprot |= PUD_TABLE_PXN;
>>> prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
>>> - prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>>> + prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
>>> + if (to_cont)
>>> + prot = __pgprot(pgprot_val(prot) | PTE_CONT);
>>> for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
>>> set_pmd(pmdp, pfn_pmd(pfn, prot));
>>> @@ -669,7 +673,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long
>>> addr)
>>> if (!pud_present(pud))
>>> goto out;
>>> if (pud_leaf(pud)) {
>>> - ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL);
>>> + ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL, true);
>>> if (ret)
>>> goto out;
>>> }
>>> @@ -694,7 +698,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long
>>> addr)
>>> */
>>> if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
>>> goto out;
>>> - ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL);
>>> + ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL, true);
>>> if (ret)
>>> goto out;
>>> }
>>> @@ -771,7 +775,7 @@ static int __init split_to_ptes_pud_entry(pud_t *pudp,
>>> unsigned long addr,
>>> int ret = 0;
>>> if (pud_leaf(pud))
>>> - ret = split_pud(pudp, pud, GFP_ATOMIC);
>>> + ret = split_pud(pudp, pud, GFP_ATOMIC, false);
>>> return ret;
>>> }
>>> @@ -786,7 +790,13 @@ static int __init split_to_ptes_pmd_entry(pmd_t *pmdp,
>>> unsigned long addr,
>>> if (pmd_leaf(pmd)) {
>>> if (pmd_cont(pmd))
>>> split_contpmd(pmdp);
>>> - ret = split_pmd(pmdp, pmd, GFP_ATOMIC);
>>> + ret = split_pmd(pmdp, pmd, GFP_ATOMIC, false);
>>> +
>>> + /*
>>> + * We have split the pmd directly to ptes so there is no need to
>>> + * visit each pte to check if they are contpte.
>>> + */
>>> + walk->action = ACTION_CONTINUE;
>>> }
>>> return ret;
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 6/6] arm64: mm: Optimize linear_map_split_to_ptes()
2025-08-29 11:52 ` [PATCH v7 6/6] arm64: mm: Optimize linear_map_split_to_ptes() Ryan Roberts
2025-08-29 22:27 ` Yang Shi
@ 2025-09-04 17:00 ` Catalin Marinas
1 sibling, 0 replies; 51+ messages in thread
From: Catalin Marinas @ 2025-09-04 17:00 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Yang Shi, Ard Biesheuvel, Dev Jain, scott, cl, linux-arm-kernel,
linux-kernel, linux-mm
On Fri, Aug 29, 2025 at 12:52:47PM +0100, Ryan Roberts wrote:
> When splitting kernel leaf mappings, either via
> split_kernel_leaf_mapping_locked() or linear_map_split_to_ptes(),
> previously a leaf mapping was always split to the next size down. e.g.
> pud -> contpmd -> pmd -> contpte -> pte. But for
> linear_map_split_to_ptes() we can avoid the contpmd and contpte states
> because we know we want to split all the way down to ptes.
>
> This avoids visiting all the ptes in a table if it was created by
> splitting a pmd, which is noticible on systems with a lot of memory.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-08-29 11:52 [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Ryan Roberts
` (5 preceding siblings ...)
2025-08-29 11:52 ` [PATCH v7 6/6] arm64: mm: Optimize linear_map_split_to_ptes() Ryan Roberts
@ 2025-09-01 5:04 ` Dev Jain
2025-09-01 8:03 ` Ryan Roberts
6 siblings, 1 reply; 51+ messages in thread
From: Dev Jain @ 2025-09-01 5:04 UTC (permalink / raw)
To: Ryan Roberts, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Yang Shi, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 29/08/25 5:22 pm, Ryan Roberts wrote:
> Hi All,
>
> This is a new version following on from the v6 RFC at [1] which itself is based
> on Yang Shi's work. On systems with BBML2_NOABORT support, it causes the linear
> map to be mapped with large blocks, even when rodata=full, and leads to some
> nice performance improvements.
>
> I've tested this on an AmpereOne system (a VM with 12G RAM) in all 3 possible
> modes by hacking the BBML2 feature detection code:
>
> - mode 1: All CPUs support BBML2 so the linear map uses large mappings
> - mode 2: Boot CPU does not support BBML2 so linear map uses pte mappings
> - mode 3: Boot CPU supports BBML2 but secondaries do not so linear map
> initially uses large mappings but is then repainted to use pte mappings
>
> In all cases, mm selftests run and no regressions are observed. In all cases,
> ptdump of linear map is as expected:
>
> Mode 1:
> =======
> ---[ Linear Mapping start ]---
> 0xffff000000000000-0xffff000000200000 2M PMD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
> 0xffff000000200000-0xffff000000210000 64K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
> 0xffff000000210000-0xffff000000400000 1984K PTE ro NX SHD AF UXN MEM/NORMAL
> 0xffff000000400000-0xffff000002400000 32M PMD ro NX SHD AF BLK UXN MEM/NORMAL
> 0xffff000002400000-0xffff000002550000 1344K PTE ro NX SHD AF UXN MEM/NORMAL
> 0xffff000002550000-0xffff000002600000 704K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
> 0xffff000002600000-0xffff000004000000 26M PMD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
> 0xffff000004000000-0xffff000040000000 960M PMD RW NX SHD AF CON BLK UXN MEM/NORMAL-TAGGED
> 0xffff000040000000-0xffff000140000000 4G PUD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
> 0xffff000140000000-0xffff000142000000 32M PMD RW NX SHD AF CON BLK UXN MEM/NORMAL-TAGGED
> 0xffff000142000000-0xffff000142120000 1152K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
> 0xffff000142120000-0xffff000142128000 32K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000142128000-0xffff000142159000 196K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000142159000-0xffff000142160000 28K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000142160000-0xffff000142240000 896K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
> 0xffff000142240000-0xffff00014224e000 56K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff00014224e000-0xffff000142250000 8K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000142250000-0xffff000142260000 64K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000142260000-0xffff000142280000 128K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
> 0xffff000142280000-0xffff000142288000 32K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000142288000-0xffff000142290000 32K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000142290000-0xffff0001422a0000 64K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff0001422a0000-0xffff000142465000 1812K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000142465000-0xffff000142470000 44K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000142470000-0xffff000142600000 1600K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
> 0xffff000142600000-0xffff000144000000 26M PMD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
> 0xffff000144000000-0xffff000180000000 960M PMD RW NX SHD AF CON BLK UXN MEM/NORMAL-TAGGED
> 0xffff000180000000-0xffff000181a00000 26M PMD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
> 0xffff000181a00000-0xffff000181b90000 1600K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
> 0xffff000181b90000-0xffff000181b9d000 52K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000181b9d000-0xffff000181c80000 908K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000181c80000-0xffff000181c90000 64K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000181c90000-0xffff000181ca0000 64K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
> 0xffff000181ca0000-0xffff000181dbd000 1140K PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000181dbd000-0xffff000181dc0000 12K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000181dc0000-0xffff000181e00000 256K PTE RW NX SHD AF CON UXN MEM/NORMAL-TAGGED
> 0xffff000181e00000-0xffff000182000000 2M PMD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
> 0xffff000182000000-0xffff0001c0000000 992M PMD RW NX SHD AF CON BLK UXN MEM/NORMAL-TAGGED
> 0xffff0001c0000000-0xffff000300000000 5G PUD RW NX SHD AF BLK UXN MEM/NORMAL-TAGGED
> 0xffff000300000000-0xffff008000000000 500G PUD
> 0xffff008000000000-0xffff800000000000 130560G PGD
> ---[ Linear Mapping end ]---
>
> Mode 3:
> =======
> ---[ Linear Mapping start ]---
> 0xffff000000000000-0xffff000000210000 2112K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000000210000-0xffff000000400000 1984K PTE ro NX SHD AF UXN MEM/NORMAL
> 0xffff000000400000-0xffff000002400000 32M PMD ro NX SHD AF BLK UXN MEM/NORMAL
> 0xffff000002400000-0xffff000002550000 1344K PTE ro NX SHD AF UXN MEM/NORMAL
> 0xffff000002550000-0xffff000143a61000 5264452K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000143a61000-0xffff000143c61000 2M PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000143c61000-0xffff000181b9a000 1015012K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000181b9a000-0xffff000181d9a000 2M PTE ro NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000181d9a000-0xffff000300000000 6261144K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED
> 0xffff000300000000-0xffff008000000000 500G PUD
> 0xffff008000000000-0xffff800000000000 130560G PGD
> ---[ Linear Mapping end ]---
>
>
> Performance Testing
> ===================
>
> Yang Shi has gathered some compelling results which are detailed in the commit
> log for patch #3. Additionally I have run this through a random selection of
> benchmarks on AmpereOne. None show any regressions, and various benchmarks show
> statistically significant improvement. I'm just showing those improvements here:
>
> +----------------------+----------------------------------------------------------+-------------------------+
> | Benchmark | Result Class | Improvement vs 6.17-rc1 |
> +======================+==========================================================+=========================+
> | micromm/vmalloc | full_fit_alloc_test: p:1, h:0, l:500000 (usec) | (I) -9.00% |
> | | kvfree_rcu_1_arg_vmalloc_test: p:1, h:0, l:500000 (usec) | (I) -6.93% |
> | | kvfree_rcu_2_arg_vmalloc_test: p:1, h:0, l:500000 (usec) | (I) -6.77% |
> | | pcpu_alloc_test: p:1, h:0, l:500000 (usec) | (I) -4.63% |
> +----------------------+----------------------------------------------------------+-------------------------+
> | mmtests/hackbench | process-sockets-30 (seconds) | (I) -2.96% |
> +----------------------+----------------------------------------------------------+-------------------------+
> | mmtests/kernbench | syst-192 (seconds) | (I) -12.77% |
> +----------------------+----------------------------------------------------------+-------------------------+
> | pts/perl-benchmark | Test: Interpreter (Seconds) | (I) -4.86% |
> +----------------------+----------------------------------------------------------+-------------------------+
> | pts/pgbench | Scale: 1 Clients: 1 Read Write (TPS) | (I) 5.07% |
> | | Scale: 1 Clients: 1 Read Write - Latency (ms) | (I) -4.72% |
> | | Scale: 100 Clients: 1000 Read Write (TPS) | (I) 2.58% |
> | | Scale: 100 Clients: 1000 Read Write - Latency (ms) | (I) -2.52% |
> +----------------------+----------------------------------------------------------+-------------------------+
> | pts/sqlite-speedtest | Timed Time - Size 1,000 (Seconds) | (I) -2.68% |
> +----------------------+----------------------------------------------------------+-------------------------+
>
>
> Changes since v6 [1]
> ====================
>
> - Patch 1: Minor refactor to implement walk_kernel_page_table_range() in terms
> of walk_kernel_page_table_range_lockless(). Also lead to adding *pmd argument
> to the lockless variant for consistency (per Catalin).
> - Misc function/variable renames to improve clarity and consistency.
> - Share same syncrhonization flag between idmap_kpti_install_ng_mappings and
> wait_linear_map_split_to_ptes, which allows removal of bbml2_ptes[] to save
> ~20K from kernel image.
> - Only take pgtable_split_lock and enter lazy mmu mode once for both splits.
> - Only walk the pgtable once for the common "split single page" case.
> - Bypass split to contpmd and contpte when spllitting linear map to ptes.
>
>
> Applies on v6.17-rc3.
>
>
> [1] https://lore.kernel.org/linux-arm-kernel/20250805081350.3854670-1-ryan.roberts@arm.com/
>
> Thanks,
> Ryan
>
> Dev Jain (1):
> arm64: Enable permission change on arm64 kernel block mappings
>
> Ryan Roberts (3):
> arm64: mm: Optimize split_kernel_leaf_mapping()
> arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs
> arm64: mm: Optimize linear_map_split_to_ptes()
>
> Yang Shi (2):
> arm64: cpufeature: add AmpereOne to BBML2 allow list
> arm64: mm: support large block mapping when rodata=full
>
> arch/arm64/include/asm/cpufeature.h | 2 +
> arch/arm64/include/asm/mmu.h | 3 +
> arch/arm64/include/asm/pgtable.h | 5 +
> arch/arm64/kernel/cpufeature.c | 12 +-
> arch/arm64/mm/mmu.c | 418 +++++++++++++++++++++++++++-
> arch/arm64/mm/pageattr.c | 157 ++++++++---
> arch/arm64/mm/proc.S | 27 +-
> include/linux/pagewalk.h | 3 +
> mm/pagewalk.c | 36 ++-
> 9 files changed, 599 insertions(+), 64 deletions(-)
>
> --
> 2.43.0
>
Hi Yang and Ryan,
I observe there are various callsites which will ultimately use update_range_prot() (from patch 1)
but do not check the return value. I am listing the ones I could find:
set_memory_ro() in bpf_jit_comp.c
set_memory_valid() in kernel_map_pages() in pageattr.c
set_direct_map_invalid_noflush() in vm_reset_perms() in vmalloc.c
set_direct_map_default_noflush() in vm_reset_perms() in vmalloc.c, and in secretmem.c
(the secretmem.c ones should be safe as explained in the comments therein)
The first one I think can be handled easily by returning -EFAULT.
For the second, we are already returning early in the !can_set_direct_map case, which renders DEBUG_PAGEALLOC useless there. So maybe it is
safe to ignore the ret from set_memory_valid()?
For the third, the call chain is a sequence of must-succeed void functions. Notably, when using vfree(), we may have to allocate a single
pagetable page for splitting.
I am wondering whether we can just have a WARN_ON_ONCE or something for the case where we fail to allocate a pagetable page. Or, Ryan had
suggested in an off-list conversation that we could maintain a cache of PTE tables for every PMD block mapping, which would give us
the same memory consumption as we have today, but I am not sure whether that is worth it. x86 can already handle splitting, but due to the callchains
I have described above it has the same problem, and the code has been working for years :)
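For the first one, the shape of the fix I have in mind is roughly the following (untested sketch with an illustrative helper name; not the actual arch/arm64/net/bpf_jit_comp.c code, just the error-propagation pattern):

/*
 * Untested sketch, illustrative helper name only: propagate a failed
 * permission change as -EFAULT instead of ignoring it.
 */
static int patch_and_reprotect(void *addr, int numpages)
{
	if (set_memory_rw((unsigned long)addr, numpages))
		return -EFAULT;

	/* ... modify the JIT image here ... */

	/* today the set_memory_ro() return value is dropped; propagate it */
	if (set_memory_ro((unsigned long)addr, numpages))
		return -EFAULT;

	return 0;
}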
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-01 5:04 ` [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Dev Jain
@ 2025-09-01 8:03 ` Ryan Roberts
2025-09-03 0:21 ` Yang Shi
0 siblings, 1 reply; 51+ messages in thread
From: Ryan Roberts @ 2025-09-01 8:03 UTC (permalink / raw)
To: Dev Jain, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Yang Shi, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 01/09/2025 06:04, Dev Jain wrote:
>
> On 29/08/25 5:22 pm, Ryan Roberts wrote:
>> Hi All,
>>
>> This is a new version following on from the v6 RFC at [1] which itself is based
>> on Yang Shi's work. On systems with BBML2_NOABORT support, it causes the linear
>> map to be mapped with large blocks, even when rodata=full, and leads to some
>> nice performance improvements.
>>
>> I've tested this on an AmpereOne system (a VM with 12G RAM) in all 3 possible
>> modes by hacking the BBML2 feature detection code:
>>
>> - mode 1: All CPUs support BBML2 so the linear map uses large mappings
>> - mode 2: Boot CPU does not support BBML2 so linear map uses pte mappings
>> - mode 3: Boot CPU supports BBML2 but secondaries do not so linear map
>> initially uses large mappings but is then repainted to use pte mappings
>>
>> In all cases, mm selftests run and no regressions are observed. In all cases,
>> ptdump of linear map is as expected:
>>
>> Mode 1:
>> =======
>> ---[ Linear Mapping start ]---
>> 0xffff000000000000-0xffff000000200000 2M PMD RW NX SHD
>> AF BLK UXN MEM/NORMAL-TAGGED
>> 0xffff000000200000-0xffff000000210000 64K PTE RW NX SHD AF
>> CON UXN MEM/NORMAL-TAGGED
>> 0xffff000000210000-0xffff000000400000 1984K PTE ro NX SHD
>> AF UXN MEM/NORMAL
>> 0xffff000000400000-0xffff000002400000 32M PMD ro NX SHD
>> AF BLK UXN MEM/NORMAL
>> 0xffff000002400000-0xffff000002550000 1344K PTE ro NX SHD
>> AF UXN MEM/NORMAL
>> 0xffff000002550000-0xffff000002600000 704K PTE RW NX SHD AF
>> CON UXN MEM/NORMAL-TAGGED
>> 0xffff000002600000-0xffff000004000000 26M PMD RW NX SHD
>> AF BLK UXN MEM/NORMAL-TAGGED
>> 0xffff000004000000-0xffff000040000000 960M PMD RW NX SHD AF
>> CON BLK UXN MEM/NORMAL-TAGGED
>> 0xffff000040000000-0xffff000140000000 4G PUD RW NX SHD
>> AF BLK UXN MEM/NORMAL-TAGGED
>> 0xffff000140000000-0xffff000142000000 32M PMD RW NX SHD AF
>> CON BLK UXN MEM/NORMAL-TAGGED
>> 0xffff000142000000-0xffff000142120000 1152K PTE RW NX SHD AF
>> CON UXN MEM/NORMAL-TAGGED
>> 0xffff000142120000-0xffff000142128000 32K PTE RW NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000142128000-0xffff000142159000 196K PTE ro NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000142159000-0xffff000142160000 28K PTE RW NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000142160000-0xffff000142240000 896K PTE RW NX SHD AF
>> CON UXN MEM/NORMAL-TAGGED
>> 0xffff000142240000-0xffff00014224e000 56K PTE RW NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff00014224e000-0xffff000142250000 8K PTE ro NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000142250000-0xffff000142260000 64K PTE RW NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000142260000-0xffff000142280000 128K PTE RW NX SHD AF
>> CON UXN MEM/NORMAL-TAGGED
>> 0xffff000142280000-0xffff000142288000 32K PTE RW NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000142288000-0xffff000142290000 32K PTE ro NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000142290000-0xffff0001422a0000 64K PTE RW NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff0001422a0000-0xffff000142465000 1812K PTE ro NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000142465000-0xffff000142470000 44K PTE RW NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000142470000-0xffff000142600000 1600K PTE RW NX SHD AF
>> CON UXN MEM/NORMAL-TAGGED
>> 0xffff000142600000-0xffff000144000000 26M PMD RW NX SHD
>> AF BLK UXN MEM/NORMAL-TAGGED
>> 0xffff000144000000-0xffff000180000000 960M PMD RW NX SHD AF
>> CON BLK UXN MEM/NORMAL-TAGGED
>> 0xffff000180000000-0xffff000181a00000 26M PMD RW NX SHD
>> AF BLK UXN MEM/NORMAL-TAGGED
>> 0xffff000181a00000-0xffff000181b90000 1600K PTE RW NX SHD AF
>> CON UXN MEM/NORMAL-TAGGED
>> 0xffff000181b90000-0xffff000181b9d000 52K PTE RW NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000181b9d000-0xffff000181c80000 908K PTE ro NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000181c80000-0xffff000181c90000 64K PTE RW NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000181c90000-0xffff000181ca0000 64K PTE RW NX SHD AF
>> CON UXN MEM/NORMAL-TAGGED
>> 0xffff000181ca0000-0xffff000181dbd000 1140K PTE ro NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000181dbd000-0xffff000181dc0000 12K PTE RW NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000181dc0000-0xffff000181e00000 256K PTE RW NX SHD AF
>> CON UXN MEM/NORMAL-TAGGED
>> 0xffff000181e00000-0xffff000182000000 2M PMD RW NX SHD
>> AF BLK UXN MEM/NORMAL-TAGGED
>> 0xffff000182000000-0xffff0001c0000000 992M PMD RW NX SHD AF
>> CON BLK UXN MEM/NORMAL-TAGGED
>> 0xffff0001c0000000-0xffff000300000000 5G PUD RW NX SHD
>> AF BLK UXN MEM/NORMAL-TAGGED
>> 0xffff000300000000-0xffff008000000000 500G PUD
>> 0xffff008000000000-0xffff800000000000 130560G PGD
>> ---[ Linear Mapping end ]---
>>
>> Mode 3:
>> =======
>> ---[ Linear Mapping start ]---
>> 0xffff000000000000-0xffff000000210000 2112K PTE RW NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000000210000-0xffff000000400000 1984K PTE ro NX SHD
>> AF UXN MEM/NORMAL
>> 0xffff000000400000-0xffff000002400000 32M PMD ro NX SHD
>> AF BLK UXN MEM/NORMAL
>> 0xffff000002400000-0xffff000002550000 1344K PTE ro NX SHD
>> AF UXN MEM/NORMAL
>> 0xffff000002550000-0xffff000143a61000 5264452K PTE RW NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000143a61000-0xffff000143c61000 2M PTE ro NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000143c61000-0xffff000181b9a000 1015012K PTE RW NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000181b9a000-0xffff000181d9a000 2M PTE ro NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000181d9a000-0xffff000300000000 6261144K PTE RW NX SHD
>> AF UXN MEM/NORMAL-TAGGED
>> 0xffff000300000000-0xffff008000000000 500G PUD
>> 0xffff008000000000-0xffff800000000000 130560G PGD
>> ---[ Linear Mapping end ]---
>>
>>
>> Performance Testing
>> ===================
>>
>> Yang Shi has gathered some compelling results which are detailed in the commit
>> log for patch #3. Additionally I have run this through a random selection of
>> benchmarks on AmpereOne. None show any regressions, and various benchmarks show
>> statistically significant improvement. I'm just showing those improvements here:
>>
>> +----------------------
>> +----------------------------------------------------------
>> +-------------------------+
>> | Benchmark | Result
>> Class | Improvement vs 6.17-rc1 |
>> +======================+==========================================================+=========================+
>> | micromm/vmalloc | full_fit_alloc_test: p:1, h:0, l:500000
>> (usec) | (I) -9.00% |
>> | | kvfree_rcu_1_arg_vmalloc_test: p:1, h:0, l:500000
>> (usec) | (I) -6.93% |
>> | | kvfree_rcu_2_arg_vmalloc_test: p:1, h:0, l:500000
>> (usec) | (I) -6.77% |
>> | | pcpu_alloc_test: p:1, h:0, l:500000
>> (usec) | (I) -4.63% |
>> +----------------------
>> +----------------------------------------------------------
>> +-------------------------+
>> | mmtests/hackbench | process-sockets-30
>> (seconds) | (I) -2.96% |
>> +----------------------
>> +----------------------------------------------------------
>> +-------------------------+
>> | mmtests/kernbench | syst-192
>> (seconds) | (I) -12.77% |
>> +----------------------
>> +----------------------------------------------------------
>> +-------------------------+
>> | pts/perl-benchmark | Test: Interpreter
>> (Seconds) | (I) -4.86% |
>> +----------------------
>> +----------------------------------------------------------
>> +-------------------------+
>> | pts/pgbench | Scale: 1 Clients: 1 Read Write
>> (TPS) | (I) 5.07% |
>> | | Scale: 1 Clients: 1 Read Write - Latency
>> (ms) | (I) -4.72% |
>> | | Scale: 100 Clients: 1000 Read Write
>> (TPS) | (I) 2.58% |
>> | | Scale: 100 Clients: 1000 Read Write - Latency
>> (ms) | (I) -2.52% |
>> +----------------------
>> +----------------------------------------------------------
>> +-------------------------+
>> | pts/sqlite-speedtest | Timed Time - Size 1,000
>> (Seconds) | (I) -2.68% |
>> +----------------------
>> +----------------------------------------------------------
>> +-------------------------+
>>
>>
>> Changes since v6 [1]
>> ====================
>>
>> - Patch 1: Minor refactor to implement walk_kernel_page_table_range() in terms
>> of walk_kernel_page_table_range_lockless(). Also lead to adding *pmd argument
>> to the lockless variant for consistency (per Catalin).
>> - Misc function/variable renames to improve clarity and consistency.
>> - Share same syncrhonization flag between idmap_kpti_install_ng_mappings and
>> wait_linear_map_split_to_ptes, which allows removal of bbml2_ptes[] to save
>> ~20K from kernel image.
>> - Only take pgtable_split_lock and enter lazy mmu mode once for both splits.
>> - Only walk the pgtable once for the common "split single page" case.
>> - Bypass split to contpmd and contpte when spllitting linear map to ptes.
>>
>>
>> Applies on v6.17-rc3.
>>
>>
>> [1] https://lore.kernel.org/linux-arm-kernel/20250805081350.3854670-1-
>> ryan.roberts@arm.com/
>>
>> Thanks,
>> Ryan
>>
>> Dev Jain (1):
>> arm64: Enable permission change on arm64 kernel block mappings
>>
>> Ryan Roberts (3):
>> arm64: mm: Optimize split_kernel_leaf_mapping()
>> arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs
>> arm64: mm: Optimize linear_map_split_to_ptes()
>>
>> Yang Shi (2):
>> arm64: cpufeature: add AmpereOne to BBML2 allow list
>> arm64: mm: support large block mapping when rodata=full
>>
>> arch/arm64/include/asm/cpufeature.h | 2 +
>> arch/arm64/include/asm/mmu.h | 3 +
>> arch/arm64/include/asm/pgtable.h | 5 +
>> arch/arm64/kernel/cpufeature.c | 12 +-
>> arch/arm64/mm/mmu.c | 418 +++++++++++++++++++++++++++-
>> arch/arm64/mm/pageattr.c | 157 ++++++++---
>> arch/arm64/mm/proc.S | 27 +-
>> include/linux/pagewalk.h | 3 +
>> mm/pagewalk.c | 36 ++-
>> 9 files changed, 599 insertions(+), 64 deletions(-)
>>
>> --
>> 2.43.0
>>
>
> Hi Yang and Ryan,
>
> I observe there are various callsites which will ultimately use
> update_range_prot() (from patch 1),
> that they do not check the return value. I am listing the ones I could find:
So your concern is that prior to patch #3 in this series, any error returned by
__change_memory_common() would be due to programming error only. But patch #3
introduces the possibility of dynamic error (-ENOMEM) due to the need to
allocate pgtable memory to split a mapping?
There is a WARN_ON_ONCE(ret) for the return code of split_kernel_leaf_mapping()
which will at least make the error visible, but I agree it's not a great solution.
>
> set_memory_ro() in bpf_jit_comp.c
There is a set_memory_rw() for the same region of memory directly above this,
which will return -EFAULT on failure. If that one succeeded, then the pgtable
must already be appropriately split for set_memory_ro() so that should never
fail in practice. I agree with improving the robustness of the code by returning
-EFAULT (or just propagate the error?) as you suggest though.
> set_memory_valid() in kernel_map_pages() in pageattr.c
This is used by CONFIG_DEBUG_PAGEALLOC to make pages in the linear map invalid
while they are not in use, to catch programming errors. So if making a page
invalid during freeing fails, it would not technically lead to a huge issue; it just
reduces our ability to catch an errant access to that freed memory.
In principle, if we were able to make the memory invalid, we should therefore be
able to make it valid again, because the mappings should be sufficiently split
already. But that doesn't actually work, because we might be allocating a
smaller order than was freed, so we might not have split at free-time to the
granularity required at allocation-time.
But as you say, for CONFIG_DEBUG_PAGEALLOC we disable this whole path anyway, so
no issue here.
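(For context, the arm64 path here is roughly the below; I'm paraphrasing arch/arm64/mm/pageattr.c from memory, so treat it as a simplified sketch rather than the actual code:)

void __kernel_map_pages(struct page *page, int numpages, int enable)
{
	if (!can_set_direct_map())
		return;

	/* return value ignored: a failure only weakens the debug check */
	set_memory_valid((unsigned long)page_address(page), numpages, enable);
}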
> set_direct_map_invalid_noflush() in vm_reset_perms() in vmalloc.c
> set_direct_map_default_noflush() in vm_reset_perms() in vmalloc.c, and in
> secretmem.c
> (the secretmem.c ones should be safe as explained in the commments therein)
Agreed for secretmem. vmalloc looks like a problem though...
If vmalloc was only setting the linear map back to default permissions, I guess
this wouldn't be an issue because we must have split the linear map sucessfully
when changing away from default permissions in the first place. But the fact
that it is unconditionally setting the linear map pages to invalid then back to
default causes issues; I guess even without the risk of -ENOMEM, this will cause
the linear map to be split to PTEs over time as vmalloc allocs and frees?
We probably need to think through how we can solve this. It's not clear to me
why vm_reset_perms() wants to unconditionally, if only transiently, set the pages to invalid?
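For anyone following along, the flow in question is roughly the below. This is a heavily simplified sketch of the vm_reset_perms() logic in mm/vmalloc.c; the real code tracks the flush range and batches the TLB maintenance, and vm_unmap_aliases() here is just a stand-in for that, so treat the details as illustrative:

static void vm_reset_perms_sketch(struct vm_struct *area)
{
	int i;

	/* 1. transiently make the direct map pages invalid (return ignored) */
	for (i = 0; i < area->nr_pages; i++)
		set_direct_map_invalid_noflush(area->pages[i]);

	/* 2. flush the lazy vmalloc aliases (simplified stand-in) */
	vm_unmap_aliases();

	/* 3. restore default direct map permissions (return ignored again) */
	for (i = 0; i < area->nr_pages; i++)
		set_direct_map_default_noflush(area->pages[i]);
}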
>
> The first one I think can be handled easily by returning -EFAULT.
>
> For the second, we are already returning in case of !can_set_direct_map, which
> renders DEBUG_PAGEALLOC useless. So maybe it is
> safe to ignore the ret from set_memory_valid?
>
> For the third, the call chain is a sequence of must-succeed void functions.
> Notably, when using vfree(), we may have to allocate a single
> pagetable page for splitting.
>
> I am wondering whether we can just have a warn_on_once or something for the case
> when we fail to allocate a pagetable page. Or, Ryan had
> suggested in an off-the-list conversation that we can maintain a cache of PTE
> tables for every PMD block mapping, which will give us
> the same memory consumption as we do today, but not sure if this is worth it.
> x86 can already handle splitting but due to the callchains
> I have described above, it has the same problem, and the code has been working
> for years :)
I think it's preferable to avoid having to keep a cache of pgtable memory if we
can...
Thanks,
Ryan
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-01 8:03 ` Ryan Roberts
@ 2025-09-03 0:21 ` Yang Shi
2025-09-03 0:50 ` Yang Shi
0 siblings, 1 reply; 51+ messages in thread
From: Yang Shi @ 2025-09-03 0:21 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, Catalin Marinas, Will Deacon,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 9/1/25 1:03 AM, Ryan Roberts wrote:
> On 01/09/2025 06:04, Dev Jain wrote:
>> On 29/08/25 5:22 pm, Ryan Roberts wrote:
>>> Hi All,
>>>
>>> This is a new version following on from the v6 RFC at [1] which itself is based
>>> on Yang Shi's work. On systems with BBML2_NOABORT support, it causes the linear
>>> map to be mapped with large blocks, even when rodata=full, and leads to some
>>> nice performance improvements.
>>>
>>> I've tested this on an AmpereOne system (a VM with 12G RAM) in all 3 possible
>>> modes by hacking the BBML2 feature detection code:
>>>
>>> - mode 1: All CPUs support BBML2 so the linear map uses large mappings
>>> - mode 2: Boot CPU does not support BBML2 so linear map uses pte mappings
>>> - mode 3: Boot CPU supports BBML2 but secondaries do not so linear map
>>> initially uses large mappings but is then repainted to use pte mappings
>>>
>>> In all cases, mm selftests run and no regressions are observed. In all cases,
>>> ptdump of linear map is as expected:
>>>
>>> Mode 1:
>>> =======
>>> ---[ Linear Mapping start ]---
>>> 0xffff000000000000-0xffff000000200000 2M PMD RW NX SHD
>>> AF BLK UXN MEM/NORMAL-TAGGED
>>> 0xffff000000200000-0xffff000000210000 64K PTE RW NX SHD AF
>>> CON UXN MEM/NORMAL-TAGGED
>>> 0xffff000000210000-0xffff000000400000 1984K PTE ro NX SHD
>>> AF UXN MEM/NORMAL
>>> 0xffff000000400000-0xffff000002400000 32M PMD ro NX SHD
>>> AF BLK UXN MEM/NORMAL
>>> 0xffff000002400000-0xffff000002550000 1344K PTE ro NX SHD
>>> AF UXN MEM/NORMAL
>>> 0xffff000002550000-0xffff000002600000 704K PTE RW NX SHD AF
>>> CON UXN MEM/NORMAL-TAGGED
>>> 0xffff000002600000-0xffff000004000000 26M PMD RW NX SHD
>>> AF BLK UXN MEM/NORMAL-TAGGED
>>> 0xffff000004000000-0xffff000040000000 960M PMD RW NX SHD AF
>>> CON BLK UXN MEM/NORMAL-TAGGED
>>> 0xffff000040000000-0xffff000140000000 4G PUD RW NX SHD
>>> AF BLK UXN MEM/NORMAL-TAGGED
>>> 0xffff000140000000-0xffff000142000000 32M PMD RW NX SHD AF
>>> CON BLK UXN MEM/NORMAL-TAGGED
>>> 0xffff000142000000-0xffff000142120000 1152K PTE RW NX SHD AF
>>> CON UXN MEM/NORMAL-TAGGED
>>> 0xffff000142120000-0xffff000142128000 32K PTE RW NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000142128000-0xffff000142159000 196K PTE ro NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000142159000-0xffff000142160000 28K PTE RW NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000142160000-0xffff000142240000 896K PTE RW NX SHD AF
>>> CON UXN MEM/NORMAL-TAGGED
>>> 0xffff000142240000-0xffff00014224e000 56K PTE RW NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff00014224e000-0xffff000142250000 8K PTE ro NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000142250000-0xffff000142260000 64K PTE RW NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000142260000-0xffff000142280000 128K PTE RW NX SHD AF
>>> CON UXN MEM/NORMAL-TAGGED
>>> 0xffff000142280000-0xffff000142288000 32K PTE RW NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000142288000-0xffff000142290000 32K PTE ro NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000142290000-0xffff0001422a0000 64K PTE RW NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff0001422a0000-0xffff000142465000 1812K PTE ro NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000142465000-0xffff000142470000 44K PTE RW NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000142470000-0xffff000142600000 1600K PTE RW NX SHD AF
>>> CON UXN MEM/NORMAL-TAGGED
>>> 0xffff000142600000-0xffff000144000000 26M PMD RW NX SHD
>>> AF BLK UXN MEM/NORMAL-TAGGED
>>> 0xffff000144000000-0xffff000180000000 960M PMD RW NX SHD AF
>>> CON BLK UXN MEM/NORMAL-TAGGED
>>> 0xffff000180000000-0xffff000181a00000 26M PMD RW NX SHD
>>> AF BLK UXN MEM/NORMAL-TAGGED
>>> 0xffff000181a00000-0xffff000181b90000 1600K PTE RW NX SHD AF
>>> CON UXN MEM/NORMAL-TAGGED
>>> 0xffff000181b90000-0xffff000181b9d000 52K PTE RW NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000181b9d000-0xffff000181c80000 908K PTE ro NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000181c80000-0xffff000181c90000 64K PTE RW NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000181c90000-0xffff000181ca0000 64K PTE RW NX SHD AF
>>> CON UXN MEM/NORMAL-TAGGED
>>> 0xffff000181ca0000-0xffff000181dbd000 1140K PTE ro NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000181dbd000-0xffff000181dc0000 12K PTE RW NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000181dc0000-0xffff000181e00000 256K PTE RW NX SHD AF
>>> CON UXN MEM/NORMAL-TAGGED
>>> 0xffff000181e00000-0xffff000182000000 2M PMD RW NX SHD
>>> AF BLK UXN MEM/NORMAL-TAGGED
>>> 0xffff000182000000-0xffff0001c0000000 992M PMD RW NX SHD AF
>>> CON BLK UXN MEM/NORMAL-TAGGED
>>> 0xffff0001c0000000-0xffff000300000000 5G PUD RW NX SHD
>>> AF BLK UXN MEM/NORMAL-TAGGED
>>> 0xffff000300000000-0xffff008000000000 500G PUD
>>> 0xffff008000000000-0xffff800000000000 130560G PGD
>>> ---[ Linear Mapping end ]---
>>>
>>> Mode 3:
>>> =======
>>> ---[ Linear Mapping start ]---
>>> 0xffff000000000000-0xffff000000210000 2112K PTE RW NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000000210000-0xffff000000400000 1984K PTE ro NX SHD
>>> AF UXN MEM/NORMAL
>>> 0xffff000000400000-0xffff000002400000 32M PMD ro NX SHD
>>> AF BLK UXN MEM/NORMAL
>>> 0xffff000002400000-0xffff000002550000 1344K PTE ro NX SHD
>>> AF UXN MEM/NORMAL
>>> 0xffff000002550000-0xffff000143a61000 5264452K PTE RW NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000143a61000-0xffff000143c61000 2M PTE ro NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000143c61000-0xffff000181b9a000 1015012K PTE RW NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000181b9a000-0xffff000181d9a000 2M PTE ro NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000181d9a000-0xffff000300000000 6261144K PTE RW NX SHD
>>> AF UXN MEM/NORMAL-TAGGED
>>> 0xffff000300000000-0xffff008000000000 500G PUD
>>> 0xffff008000000000-0xffff800000000000 130560G PGD
>>> ---[ Linear Mapping end ]---
>>>
>>>
>>> Performance Testing
>>> ===================
>>>
>>> Yang Shi has gathered some compelling results which are detailed in the commit
>>> log for patch #3. Additionally I have run this through a random selection of
>>> benchmarks on AmpereOne. None show any regressions, and various benchmarks show
>>> statistically significant improvement. I'm just showing those improvements here:
>>>
>>> +----------------------
>>> +----------------------------------------------------------
>>> +-------------------------+
>>> | Benchmark | Result
>>> Class | Improvement vs 6.17-rc1 |
>>> +======================+==========================================================+=========================+
>>> | micromm/vmalloc | full_fit_alloc_test: p:1, h:0, l:500000
>>> (usec) | (I) -9.00% |
>>> | | kvfree_rcu_1_arg_vmalloc_test: p:1, h:0, l:500000
>>> (usec) | (I) -6.93% |
>>> | | kvfree_rcu_2_arg_vmalloc_test: p:1, h:0, l:500000
>>> (usec) | (I) -6.77% |
>>> | | pcpu_alloc_test: p:1, h:0, l:500000
>>> (usec) | (I) -4.63% |
>>> +----------------------
>>> +----------------------------------------------------------
>>> +-------------------------+
>>> | mmtests/hackbench | process-sockets-30
>>> (seconds) | (I) -2.96% |
>>> +----------------------
>>> +----------------------------------------------------------
>>> +-------------------------+
>>> | mmtests/kernbench | syst-192
>>> (seconds) | (I) -12.77% |
>>> +----------------------
>>> +----------------------------------------------------------
>>> +-------------------------+
>>> | pts/perl-benchmark | Test: Interpreter
>>> (Seconds) | (I) -4.86% |
>>> +----------------------
>>> +----------------------------------------------------------
>>> +-------------------------+
>>> | pts/pgbench | Scale: 1 Clients: 1 Read Write
>>> (TPS) | (I) 5.07% |
>>> | | Scale: 1 Clients: 1 Read Write - Latency
>>> (ms) | (I) -4.72% |
>>> | | Scale: 100 Clients: 1000 Read Write
>>> (TPS) | (I) 2.58% |
>>> | | Scale: 100 Clients: 1000 Read Write - Latency
>>> (ms) | (I) -2.52% |
>>> +----------------------
>>> +----------------------------------------------------------
>>> +-------------------------+
>>> | pts/sqlite-speedtest | Timed Time - Size 1,000
>>> (Seconds) | (I) -2.68% |
>>> +----------------------
>>> +----------------------------------------------------------
>>> +-------------------------+
>>>
>>>
>>> Changes since v6 [1]
>>> ====================
>>>
>>> - Patch 1: Minor refactor to implement walk_kernel_page_table_range() in terms
>>> of walk_kernel_page_table_range_lockless(). Also lead to adding *pmd argument
>>> to the lockless variant for consistency (per Catalin).
>>> - Misc function/variable renames to improve clarity and consistency.
>>> - Share same syncrhonization flag between idmap_kpti_install_ng_mappings and
>>> wait_linear_map_split_to_ptes, which allows removal of bbml2_ptes[] to save
>>> ~20K from kernel image.
>>> - Only take pgtable_split_lock and enter lazy mmu mode once for both splits.
>>> - Only walk the pgtable once for the common "split single page" case.
>>> - Bypass split to contpmd and contpte when spllitting linear map to ptes.
>>>
>>>
>>> Applies on v6.17-rc3.
>>>
>>>
>>> [1] https://lore.kernel.org/linux-arm-kernel/20250805081350.3854670-1-
>>> ryan.roberts@arm.com/
>>>
>>> Thanks,
>>> Ryan
>>>
>>> Dev Jain (1):
>>> arm64: Enable permission change on arm64 kernel block mappings
>>>
>>> Ryan Roberts (3):
>>> arm64: mm: Optimize split_kernel_leaf_mapping()
>>> arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs
>>> arm64: mm: Optimize linear_map_split_to_ptes()
>>>
>>> Yang Shi (2):
>>> arm64: cpufeature: add AmpereOne to BBML2 allow list
>>> arm64: mm: support large block mapping when rodata=full
>>>
>>> arch/arm64/include/asm/cpufeature.h | 2 +
>>> arch/arm64/include/asm/mmu.h | 3 +
>>> arch/arm64/include/asm/pgtable.h | 5 +
>>> arch/arm64/kernel/cpufeature.c | 12 +-
>>> arch/arm64/mm/mmu.c | 418 +++++++++++++++++++++++++++-
>>> arch/arm64/mm/pageattr.c | 157 ++++++++---
>>> arch/arm64/mm/proc.S | 27 +-
>>> include/linux/pagewalk.h | 3 +
>>> mm/pagewalk.c | 36 ++-
>>> 9 files changed, 599 insertions(+), 64 deletions(-)
>>>
>>> --
>>> 2.43.0
>>>
>> Hi Yang and Ryan,
>>
>> I observe there are various callsites which will ultimately use
>> update_range_prot() (from patch 1),
>> that they do not check the return value. I am listing the ones I could find:
> So your concern is that prior to patch #3 in this series, any error returned by
> __change_memory_common() would be due to programming error only. But patch #3
> introduces the possibility of dynamic error (-ENOMEM) due to the need to
> allocate pgtable memory to split a mapping?
>
> There is a WARN_ON_ONCE(ret) for the return code of split_kernel_leaf_mapping()
> which will at least make the error visible, but I agree it's not a great solution.
>
>> set_memory_ro() in bpf_jit_comp.c
Do you mean arch/arm64/net/bpf_jit_comp.c? If so, I think you can just
check the return value and return -EFAULT, just like the set_memory_rw()
call above does.
> There is a set_memory_rw() for the same region of memory directly above this,
> which will return -EFAULT on failure. If that one succeeded, then the pgtable
> must already be appropriately split for set_memory_ro() so that should never
> fail in practice. I agree with improving the robustness of the code by returning
> -EFAULT (or just propagate the error?) as you suggest though.
Yeah, I agree. This one should be easy to resolve.
>
>> set_memory_valid() in kernel_map_pages() in pageattr.c
> This is used by CONFIG_DEBUG_PAGEALLOC to make pages in the linear map invalid
> while they are not in use to catch programming errors. So if making a page
> invalid during freeing fails would not technically lead to a huge issue, it just
> reduces our capability of catching an errant access to that free memory.
>
> In principle, if we were able to make the memory invalid, we should therefore be
> able to make it valid again, because the mappings should be sufficiently split
> already. But that doesn't actually work, because we might be allocating a
> smaller order than was freed so we might not have split at free-time to the
> granularity is required at allocation-time.
>
> But as you say, for CONFIG_DEBUG_PAGEALLOC we disable this whole path anyway, so
> no issue here.
Yes, agreed.
>
>> set_direct_map_invalid_noflush() in vm_reset_perms() in vmalloc.c
>> set_direct_map_default_noflush() in vm_reset_perms() in vmalloc.c, and in
>> secretmem.c
>> (the secretmem.c ones should be safe as explained in the commments therein)
> Agreed for secretmem. vmalloc looks like a problem though...
>
> If vmalloc was only setting the linear map back to default permissions, I guess
> this wouldn't be an issue because we must have split the linear map sucessfully
> when changing away from default permissions in the first place. But the fact
Yes, agreed.
> that it is unconditionally setting the linear map pages to invalid then back to
> default causes issues; I guess even without the risk of -ENOMEM, this will cause
> the linear map to be split to PTEs over time as vmalloc allocs and frees?
It is possible. However, vm_reset_perms() is not called that often.
Theoretically there are plenty of other operations, for example
loading/unloading modules, that can cause the linear mapping to be split over
time. So this one is not that special IMHO.
>
> We probably need to think through how we can solve this. It's not clear to me
> why vm_reset_perms wants to unconditionally transiently set to invalid?
It seems like vm_reset_perms() is only called when the VM_FLUSH_RESET_PERMS
flag is passed. It is only passed in for secretmem and hyperv. It sounds
like some preventive security measure to me.
>
>> The first one I think can be handled easily by returning -EFAULT.
It may not be that simple. set_direct_map_invalid_noflush() is called on
a per-page basis, and so is update_range_prot(). If the split requires
allocating multiple page table pages, we may end up with some pages whose
permissions were changed (where the page table page allocation succeeded)
while the remaining pages are skipped due to a page table page allocation
failure. vfree() would need to handle such a case by setting the pages'
permissions back before returning any errno.
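Something like the unwind below would be needed at the callsite (illustrative sketch only, with a hypothetical helper name):

static int direct_map_invalidate(struct page **pages, int nr)
{
	int i, ret;

	for (i = 0; i < nr; i++) {
		ret = set_direct_map_invalid_noflush(pages[i]);
		if (ret)
			goto undo;
	}
	return 0;

undo:
	/* put the already-changed pages back before reporting the error */
	while (--i >= 0)
		set_direct_map_default_noflush(pages[i]);
	return ret;
}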
Anyway, it sounds like a general problem rather than an arm64-specific one.
>>
>> For the second, we are already returning in case of !can_set_direct_map, which
>> renders DEBUG_PAGEALLOC useless. So maybe it is
>> safe to ignore the ret from set_memory_valid?
>>
>> For the third, the call chain is a sequence of must-succeed void functions.
>> Notably, when using vfree(), we may have to allocate a single
>> pagetable page for splitting.
>>
>> I am wondering whether we can just have a warn_on_once or something for the case
>> when we fail to allocate a pagetable page. Or, Ryan had
>> suggested in an off-the-list conversation that we can maintain a cache of PTE
>> tables for every PMD block mapping, which will give us
>> the same memory consumption as we do today, but not sure if this is worth it.
>> x86 can already handle splitting but due to the callchains
>> I have described above, it has the same problem, and the code has been working
>> for years :)
> I think it's preferable to avoid having to keep a cache of pgtable memory if we
> can...
Yes, I agree. We simply don't know how many pages we need to cache, and
it still can't guarantee 100% allocation success.
Thanks,
Yang
>
> Thanks,
> Ryan
>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-03 0:21 ` Yang Shi
@ 2025-09-03 0:50 ` Yang Shi
2025-09-04 13:14 ` Ryan Roberts
0 siblings, 1 reply; 51+ messages in thread
From: Yang Shi @ 2025-09-03 0:50 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, Catalin Marinas, Will Deacon,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
>>>
>>>
>>> I am wondering whether we can just have a warn_on_once or something
>>> for the case
>>> when we fail to allocate a pagetable page. Or, Ryan had
>>> suggested in an off-the-list conversation that we can maintain a
>>> cache of PTE
>>> tables for every PMD block mapping, which will give us
>>> the same memory consumption as we do today, but not sure if this is
>>> worth it.
>>> x86 can already handle splitting but due to the callchains
>>> I have described above, it has the same problem, and the code has
>>> been working
>>> for years :)
>> I think it's preferable to avoid having to keep a cache of pgtable
>> memory if we
>> can...
>
> Yes, I agree. We simply don't know how many pages we need to cache,
> and it still can't guarantee 100% allocation success.
This is wrong... We can know how many pages will be needed to split the
linear mapping to PTEs in the worst case once the linear mapping is
finalized. But it may require a few hundred megabytes of memory to
guarantee allocation success. I don't think it is worth it for such a rare
corner case.
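Just to put rough numbers on "a few hundred megabytes" (assuming a 4K granule, where one 4K PTE table maps one 2M block; other granules scale differently):

  PTE-table overhead ~= RAM / 512           (4K of tables per 2M mapped)
  e.g.  128 GB RAM  ->  ~256 MB of PTE tables
        256 GB RAM  ->  ~512 MB
  (plus one 4K PMD table per 1G, which is negligible by comparison)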
Thanks,
Yang
>
> Thanks,
> Yang
>
>>
>> Thanks,
>> Ryan
>>
>>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-03 0:50 ` Yang Shi
@ 2025-09-04 13:14 ` Ryan Roberts
2025-09-04 13:16 ` Ryan Roberts
0 siblings, 1 reply; 51+ messages in thread
From: Ryan Roberts @ 2025-09-04 13:14 UTC (permalink / raw)
To: Yang Shi, Dev Jain, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 03/09/2025 01:50, Yang Shi wrote:
>>>>
>>>>
>>>> I am wondering whether we can just have a warn_on_once or something for the
>>>> case
>>>> when we fail to allocate a pagetable page. Or, Ryan had
>>>> suggested in an off-the-list conversation that we can maintain a cache of PTE
>>>> tables for every PMD block mapping, which will give us
>>>> the same memory consumption as we do today, but not sure if this is worth it.
>>>> x86 can already handle splitting but due to the callchains
>>>> I have described above, it has the same problem, and the code has been working
>>>> for years :)
>>> I think it's preferable to avoid having to keep a cache of pgtable memory if we
>>> can...
>>
>> Yes, I agree. We simply don't know how many pages we need to cache, and it
>> still can't guarantee 100% allocation success.
>
> This is wrong... We can know how many pages will be needed for splitting linear
> mapping to PTEs for the worst case once linear mapping is finalized. But it may
> require a few hundred megabytes memory to guarantee allocation success. I don't
> think it is worth for such rare corner case.
Indeed, we know exactly how much memory we need for pgtables to map the linear
map by pte - that's exactly what we are doing today. So we _could_ keep a cache.
We would still get the benefit of improved performance but we would lose the
benefit of reduced memory.
I think we need to solve the vm_reset_perms() problem somehow, before we can
enable this.
Thanks,
Ryan
>
> Thanks,
> Yang
>
>>
>> Thanks,
>> Yang
>>
>>>
>>> Thanks,
>>> Ryan
>>>
>>>
>>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-04 13:14 ` Ryan Roberts
@ 2025-09-04 13:16 ` Ryan Roberts
2025-09-04 17:47 ` Yang Shi
0 siblings, 1 reply; 51+ messages in thread
From: Ryan Roberts @ 2025-09-04 13:16 UTC (permalink / raw)
To: Yang Shi, Dev Jain, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 04/09/2025 14:14, Ryan Roberts wrote:
> On 03/09/2025 01:50, Yang Shi wrote:
>>>>>
>>>>>
>>>>> I am wondering whether we can just have a warn_on_once or something for the
>>>>> case
>>>>> when we fail to allocate a pagetable page. Or, Ryan had
>>>>> suggested in an off-the-list conversation that we can maintain a cache of PTE
>>>>> tables for every PMD block mapping, which will give us
>>>>> the same memory consumption as we do today, but not sure if this is worth it.
>>>>> x86 can already handle splitting but due to the callchains
>>>>> I have described above, it has the same problem, and the code has been working
>>>>> for years :)
>>>> I think it's preferable to avoid having to keep a cache of pgtable memory if we
>>>> can...
>>>
>>> Yes, I agree. We simply don't know how many pages we need to cache, and it
>>> still can't guarantee 100% allocation success.
>>
>> This is wrong... We can know how many pages will be needed for splitting linear
>> mapping to PTEs for the worst case once linear mapping is finalized. But it may
>> require a few hundred megabytes memory to guarantee allocation success. I don't
>> think it is worth for such rare corner case.
>
> Indeed, we know exactly how much memory we need for pgtables to map the linear
> map by pte - that's exactly what we are doing today. So we _could_ keep a cache.
> We would still get the benefit of improved performance but we would lose the
> benefit of reduced memory.
>
> I think we need to solve the vm_reset_perms() problem somehow, before we can
> enable this.
Sorry I realise this was not very clear... I am saying I think we need to fix it
somehow. A cache would likely work. But I'd prefer to avoid it if we can find a
better solution.
>
> Thanks,
> Ryan
>
>>
>> Thanks,
>> Yang
>>
>>>
>>> Thanks,
>>> Yang
>>>
>>>>
>>>> Thanks,
>>>> Ryan
>>>>
>>>>
>>>
>>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-04 13:16 ` Ryan Roberts
@ 2025-09-04 17:47 ` Yang Shi
2025-09-04 21:49 ` Yang Shi
2025-09-16 23:44 ` Yang Shi
0 siblings, 2 replies; 51+ messages in thread
From: Yang Shi @ 2025-09-04 17:47 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, Catalin Marinas, Will Deacon,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 9/4/25 6:16 AM, Ryan Roberts wrote:
> On 04/09/2025 14:14, Ryan Roberts wrote:
>> On 03/09/2025 01:50, Yang Shi wrote:
>>>>>>
>>>>>> I am wondering whether we can just have a warn_on_once or something for the
>>>>>> case
>>>>>> when we fail to allocate a pagetable page. Or, Ryan had
>>>>>> suggested in an off-the-list conversation that we can maintain a cache of PTE
>>>>>> tables for every PMD block mapping, which will give us
>>>>>> the same memory consumption as we do today, but not sure if this is worth it.
>>>>>> x86 can already handle splitting but due to the callchains
>>>>>> I have described above, it has the same problem, and the code has been working
>>>>>> for years :)
>>>>> I think it's preferable to avoid having to keep a cache of pgtable memory if we
>>>>> can...
>>>> Yes, I agree. We simply don't know how many pages we need to cache, and it
>>>> still can't guarantee 100% allocation success.
>>> This is wrong... We can know how many pages will be needed for splitting linear
>>> mapping to PTEs for the worst case once linear mapping is finalized. But it may
>>> require a few hundred megabytes memory to guarantee allocation success. I don't
>>> think it is worth for such rare corner case.
>> Indeed, we know exactly how much memory we need for pgtables to map the linear
>> map by pte - that's exactly what we are doing today. So we _could_ keep a cache.
>> We would still get the benefit of improved performance but we would lose the
>> benefit of reduced memory.
>>
>> I think we need to solve the vm_reset_perms() problem somehow, before we can
>> enable this.
> Sorry I realise this was not very clear... I am saying I think we need to fix it
> somehow. A cache would likely work. But I'd prefer to avoid it if we can find a
> better solution.
Took a deeper look at vm_reset_perms(). It was introduced by commit
868b104d7379 ("mm/vmalloc: Add flag for freeing of special
permsissions"). The VM_FLUSH_RESET_PERMS flag is supposed to be set if
the vmalloc memory is RO and/or ROX, so a set_memory_ro() or
set_memory_rox() call is supposed to follow the vmalloc(). The page
table should therefore already be split before we reach vfree(). I
think this is why vm_reset_perms() doesn't check the return value.
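In other words, the expected usage pattern is roughly the below
(illustrative sketch only, not lifted from any particular callsite):

	buf = vmalloc(size);		/* or execmem_alloc(), bpf_jit_alloc_exec(), ... */
	set_vm_flush_reset_perms(buf);	/* ask vfree() to reset the direct map */
	/* ... write code/data into buf ... */
	set_memory_rox((unsigned long)buf, size >> PAGE_SHIFT);	/* splits the linear map if needed */
	/* ... use it ... */
	vfree(buf);			/* vm_reset_perms() assumes the split already happened */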
I scrutinized all the callsites with the VM_FLUSH_RESET_PERMS flag set.
Most of them are followed by set_memory_ro() or set_memory_rox(). But
there are 3 places where I don't see set_memory_ro()/set_memory_rox()
being called.
1. BPF trampoline allocation. The BPF trampoline calls
arch_protect_bpf_trampoline(). The generic implementation does call
set_memory_rox(), but the x86 and arm64 implementations simply return
0. For x86 that is because the execmem cache is used and it does call
set_memory_rox(). ARM64 doesn't need to split the page table before
this series, so it should never fail. I think we just need to use the
generic implementation (i.e. remove the arm64 one) if this series is
merged; a sketch of the generic version is quoted below this list.
2. BPF dispatcher. It calls execmem_alloc() which has
VM_FLUSH_RESET_PERMS set. But it is used for an rw allocation, so
VM_FLUSH_RESET_PERMS should be unnecessary IIUC, and it doesn't matter
even if vm_reset_perms() fails.
3. kprobe. S390's alloc_insn_page() does call set_memory_rox(), and x86
also called set_memory_rox() before switching to the execmem cache; the
execmem cache calls set_memory_rox() itself. I don't know why ARM64
doesn't call it.
So I think we just need to fix #1 and #3 per the above analysis. If this
analysis looks correct to you guys, I will prepare two patches to fix them.
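For reference, the generic implementation that arm64 would then pick up
is roughly the below (quoting kernel/bpf/trampoline.c from memory, so
treat the exact details as approximate):

int __weak arch_protect_bpf_trampoline(void *image, unsigned int size)
{
	WARN_ON_ONCE(size > PAGE_SIZE);
	return set_memory_rox((long)image, 1);
}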
Thanks,
Yang
>
>
>> Thanks,
>> Ryan
>>
>>> Thanks,
>>> Yang
>>>
>>>> Thanks,
>>>> Yang
>>>>
>>>>> Thanks,
>>>>> Ryan
>>>>>
>>>>>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-04 17:47 ` Yang Shi
@ 2025-09-04 21:49 ` Yang Shi
2025-09-08 16:34 ` Ryan Roberts
2025-09-16 23:44 ` Yang Shi
1 sibling, 1 reply; 51+ messages in thread
From: Yang Shi @ 2025-09-04 21:49 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, Catalin Marinas, Will Deacon,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 9/4/25 10:47 AM, Yang Shi wrote:
>
>
> On 9/4/25 6:16 AM, Ryan Roberts wrote:
>> On 04/09/2025 14:14, Ryan Roberts wrote:
>>> On 03/09/2025 01:50, Yang Shi wrote:
>>>>>>>
>>>>>>> I am wondering whether we can just have a warn_on_once or
>>>>>>> something for the
>>>>>>> case
>>>>>>> when we fail to allocate a pagetable page. Or, Ryan had
>>>>>>> suggested in an off-the-list conversation that we can maintain a
>>>>>>> cache of PTE
>>>>>>> tables for every PMD block mapping, which will give us
>>>>>>> the same memory consumption as we do today, but not sure if this
>>>>>>> is worth it.
>>>>>>> x86 can already handle splitting but due to the callchains
>>>>>>> I have described above, it has the same problem, and the code
>>>>>>> has been working
>>>>>>> for years :)
>>>>>> I think it's preferable to avoid having to keep a cache of
>>>>>> pgtable memory if we
>>>>>> can...
>>>>> Yes, I agree. We simply don't know how many pages we need to
>>>>> cache, and it
>>>>> still can't guarantee 100% allocation success.
>>>> This is wrong... We can know how many pages will be needed for
>>>> splitting linear
>>>> mapping to PTEs for the worst case once linear mapping is
>>>> finalized. But it may
>>>> require a few hundred megabytes memory to guarantee allocation
>>>> success. I don't
>>>> think it is worth for such rare corner case.
>>> Indeed, we know exactly how much memory we need for pgtables to map
>>> the linear
>>> map by pte - that's exactly what we are doing today. So we _could_
>>> keep a cache.
>>> We would still get the benefit of improved performance but we would
>>> lose the
>>> benefit of reduced memory.
>>>
>>> I think we need to solve the vm_reset_perms() problem somehow,
>>> before we can
>>> enable this.
>> Sorry I realise this was not very clear... I am saying I think we
>> need to fix it
>> somehow. A cache would likely work. But I'd prefer to avoid it if we
>> can find a
>> better solution.
>
> Took a deeper look at vm_reset_perms(). It was introduced by commit
> 868b104d7379 ("mm/vmalloc: Add flag for freeing of special
> permsissions"). The VM_FLUSH_RESET_PERMS flag is supposed to be set if
> the vmalloc memory is RO and/or ROX. So set_memory_ro() or
> set_memory_rox() is supposed to follow up vmalloc(). So the page table
> should be already split before reaching vfree(). I think this why
> vm_reset_perms() doesn't not check return value.
>
> I scrutinized all the callsites with VM_FLUSH_RESET_PERMS flag set.
> The most of them has set_memory_ro() or set_memory_rox() followed. But
> there are 3 places I don't see set_memory_ro()/set_memory_rox() is
> called.
>
> 1. BPF trampoline allocation. The BPF trampoline calls
> arch_protect_bpf_trampoline(). The generic implementation does call
> set_memory_rox(). But the x86 and arm64 implementation just simply
> return 0. For x86, it is because execmem cache is used and it does
> call set_memory_rox(). ARM64 doesn't need to split page table before
> this series, so it should never fail. I think we just need to use the
> generic implementation (remove arm64 implementation) if this series is
> merged.
>
> 2. BPF dispatcher. It calls execmem_alloc which has
> VM_FLUSH_RESET_PERMS set. But it is used for rw allocation, so
> VM_FLUSH_RESET_PERMS should be unnecessary IIUC. So it doesn't matter
> even though vm_reset_perms() fails.
>
> 3. kprobe. S390's alloc_insn_page() does call set_memory_rox(), x86
> also called set_memory_rox() before switching to execmem cache. The
> execmem cache calls set_memory_rox(). I don't know why ARM64 doesn't
> call it.
>
> So I think we just need to fix #1 and #3 per the above analysis. If
> this analysis look correct to you guys, I will prepare two patches to
> fix them.
Tested the below patch with a bpftrace kfunc (which allocates a bpf
trampoline) and with kprobes. It seems to work well.
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index 0c5d408afd95..c4f8c4750f1e 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -10,6 +10,7 @@
#define pr_fmt(fmt) "kprobes: " fmt
+#include <linux/execmem.h>
#include <linux/extable.h>
#include <linux/kasan.h>
#include <linux/kernel.h>
@@ -41,6 +42,17 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
static void __kprobes
post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *);
+void *alloc_insn_page(void)
+{
+ void *page;
+
+ page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
+ if (!page)
+ return NULL;
+ set_memory_rox((unsigned long)page, 1);
+ return page;
+}
+
static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
{
kprobe_opcode_t *addr = p->ainsn.xol_insn;
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 52ffe115a8c4..3e301bc2cd66 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -2717,11 +2717,6 @@ void arch_free_bpf_trampoline(void *image, unsigned int size)
bpf_prog_pack_free(image, size);
}
-int arch_protect_bpf_trampoline(void *image, unsigned int size)
-{
- return 0;
-}
-
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
				void *ro_image_end, const struct btf_func_model *m,
u32 flags, struct bpf_tramp_links *tlinks,
>
> Thanks,
> Yang
>
>>
>>
>>> Thanks,
>>> Ryan
>>>
>>>> Thanks,
>>>> Yang
>>>>
>>>>> Thanks,
>>>>> Yang
>>>>>
>>>>>> Thanks,
>>>>>> Ryan
>>>>>>
>>>>>>
>
^ permalink raw reply related [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-04 21:49 ` Yang Shi
@ 2025-09-08 16:34 ` Ryan Roberts
2025-09-08 18:31 ` Yang Shi
0 siblings, 1 reply; 51+ messages in thread
From: Ryan Roberts @ 2025-09-08 16:34 UTC (permalink / raw)
To: Yang Shi, Dev Jain, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 04/09/2025 22:49, Yang Shi wrote:
>
>
> On 9/4/25 10:47 AM, Yang Shi wrote:
>>
>>
>> On 9/4/25 6:16 AM, Ryan Roberts wrote:
>>> On 04/09/2025 14:14, Ryan Roberts wrote:
>>>> On 03/09/2025 01:50, Yang Shi wrote:
>>>>>>>>
>>>>>>>> I am wondering whether we can just have a warn_on_once or something for the
>>>>>>>> case
>>>>>>>> when we fail to allocate a pagetable page. Or, Ryan had
>>>>>>>> suggested in an off-the-list conversation that we can maintain a cache
>>>>>>>> of PTE
>>>>>>>> tables for every PMD block mapping, which will give us
>>>>>>>> the same memory consumption as we do today, but not sure if this is
>>>>>>>> worth it.
>>>>>>>> x86 can already handle splitting but due to the callchains
>>>>>>>> I have described above, it has the same problem, and the code has been
>>>>>>>> working
>>>>>>>> for years :)
>>>>>>> I think it's preferable to avoid having to keep a cache of pgtable memory
>>>>>>> if we
>>>>>>> can...
>>>>>> Yes, I agree. We simply don't know how many pages we need to cache, and it
>>>>>> still can't guarantee 100% allocation success.
>>>>> This is wrong... We can know how many pages will be needed for splitting
>>>>> linear
>>>>> mapping to PTEs for the worst case once linear mapping is finalized. But it
>>>>> may
>>>>> require a few hundred megabytes memory to guarantee allocation success. I
>>>>> don't
>>>>> think it is worth for such rare corner case.
>>>> Indeed, we know exactly how much memory we need for pgtables to map the linear
>>>> map by pte - that's exactly what we are doing today. So we _could_ keep a
>>>> cache.
>>>> We would still get the benefit of improved performance but we would lose the
>>>> benefit of reduced memory.
>>>>
>>>> I think we need to solve the vm_reset_perms() problem somehow, before we can
>>>> enable this.
>>> Sorry I realise this was not very clear... I am saying I think we need to fix it
>>> somehow. A cache would likely work. But I'd prefer to avoid it if we can find a
>>> better solution.
>>
>> Took a deeper look at vm_reset_perms(). It was introduced by commit
>> 868b104d7379 ("mm/vmalloc: Add flag for freeing of special permsissions"). The
>> VM_FLUSH_RESET_PERMS flag is supposed to be set if the vmalloc memory is RO
>> and/or ROX. So set_memory_ro() or set_memory_rox() is supposed to follow up
>> vmalloc(). So the page table should be already split before reaching vfree().
>> I think this why vm_reset_perms() doesn't not check return value.
If vm_reset_perms() is assuming it can't/won't fail, I think it should at least
output a warning if it does?
>>
>> I scrutinized all the callsites with VM_FLUSH_RESET_PERMS flag set.
Just checking: I think you made a comment before about there only being a few
sites that set VM_FLUSH_RESET_PERMS. But one of them is the helper,
set_vm_flush_reset_perms(). So just making sure you also followed through to
the places that use that helper?
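(For reference, that helper is just roughly the below, from memory of
include/linux/set_memory.h, so every user of it ends up tagging the whole vm
area:)

static inline void set_vm_flush_reset_perms(void *addr)
{
	struct vm_struct *vm = find_vm_area(addr);

	if (vm)
		vm->flags |= VM_FLUSH_RESET_PERMS;
}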
>> The most
>> of them has set_memory_ro() or set_memory_rox() followed.
And are all callsites calling set_memory_*() for the entire cell that was
allocated by vmalloc? If there are cases where it is only called for a portion
of it, then it's not guaranteed that the memory is correctly split.
>> But there are 3
>> places I don't see set_memory_ro()/set_memory_rox() is called.
>>
>> 1. BPF trampoline allocation. The BPF trampoline calls
>> arch_protect_bpf_trampoline(). The generic implementation does call
>> set_memory_rox(). But the x86 and arm64 implementation just simply return 0.
>> For x86, it is because execmem cache is used and it does call
>> set_memory_rox(). ARM64 doesn't need to split page table before this series,
>> so it should never fail. I think we just need to use the generic
>> implementation (remove arm64 implementation) if this series is merged.
I know zero about BPF. But it looks like the allocation happens in
arch_alloc_bpf_trampoline(), which, for arm64, calls bpf_prog_pack_alloc(). And
for small sizes, it grabs some memory from a "pack". So doesn't this mean that
you are calling set_memory_rox() for a sub-region of the cell, so that it
doesn't actually help at vm_reset_perms()-time?
>>
>> 2. BPF dispatcher. It calls execmem_alloc which has VM_FLUSH_RESET_PERMS set.
>> But it is used for rw allocation, so VM_FLUSH_RESET_PERMS should be
>> unnecessary IIUC. So it doesn't matter even though vm_reset_perms() fails.
>>
>> 3. kprobe. S390's alloc_insn_page() does call set_memory_rox(), x86 also
>> called set_memory_rox() before switching to execmem cache. The execmem cache
>> calls set_memory_rox(). I don't know why ARM64 doesn't call it.
>>
>> So I think we just need to fix #1 and #3 per the above analysis. If this
>> analysis look correct to you guys, I will prepare two patches to fix them.
This all seems quite fragile. I find it interesting that vm_reset_perms() is
doing break-before-make; it sets the PTEs as invalid, then flushes the TLB,
then sets them to default. But for arm64, at least, I think break-before-make
is not required. We are only changing the permissions, so that can be done on
live mappings; essentially change the sequence to: set default, flush TLB.
If we do that, then if the memory was already default, there is no need to
do anything (so no chance of allocation failure). If the memory was not
default, then it must have already been split to make it non-default, in
which case we can also guarantee that no allocations are required.
What am I missing?
Thanks,
Ryan
>
> Tested the below patch with bpftrace kfunc (allocate bpf trampoline) and
> kprobes. It seems work well.
>
> diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/
> kprobes.c
> index 0c5d408afd95..c4f8c4750f1e 100644
> --- a/arch/arm64/kernel/probes/kprobes.c
> +++ b/arch/arm64/kernel/probes/kprobes.c
> @@ -10,6 +10,7 @@
>
> #define pr_fmt(fmt) "kprobes: " fmt
>
> +#include <linux/execmem.h>
> #include <linux/extable.h>
> #include <linux/kasan.h>
> #include <linux/kernel.h>
> @@ -41,6 +42,17 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> static void __kprobes
> post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *);
>
> +void *alloc_insn_page(void)
> +{
> + void *page;
> +
> + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
> + if (!page)
> + return NULL;
> + set_memory_rox((unsigned long)page, 1);
> + return page;
> +}
> +
> static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
> {
> kprobe_opcode_t *addr = p->ainsn.xol_insn;
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index 52ffe115a8c4..3e301bc2cd66 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -2717,11 +2717,6 @@ void arch_free_bpf_trampoline(void *image, unsigned int
> size)
> bpf_prog_pack_free(image, size);
> }
>
> -int arch_protect_bpf_trampoline(void *image, unsigned int size)
> -{
> - return 0;
> -}
> -
> int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
> void *ro_image_end, const struct btf_func_model *m,
> u32 flags, struct bpf_tramp_links *tlinks,
>
>
>>
>> Thanks,
>> Yang
>>
>>>
>>>
>>>> Thanks,
>>>> Ryan
>>>>
>>>>> Thanks,
>>>>> Yang
>>>>>
>>>>>> Thanks,
>>>>>> Yang
>>>>>>
>>>>>>> Thanks,
>>>>>>> Ryan
>>>>>>>
>>>>>>>
>>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-08 16:34 ` Ryan Roberts
@ 2025-09-08 18:31 ` Yang Shi
2025-09-09 14:36 ` Ryan Roberts
0 siblings, 1 reply; 51+ messages in thread
From: Yang Shi @ 2025-09-08 18:31 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, Catalin Marinas, Will Deacon,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 9/8/25 9:34 AM, Ryan Roberts wrote:
> On 04/09/2025 22:49, Yang Shi wrote:
>>
>> On 9/4/25 10:47 AM, Yang Shi wrote:
>>>
>>> On 9/4/25 6:16 AM, Ryan Roberts wrote:
>>>> On 04/09/2025 14:14, Ryan Roberts wrote:
>>>>> On 03/09/2025 01:50, Yang Shi wrote:
>>>>>>>>> I am wondering whether we can just have a warn_on_once or something for the
>>>>>>>>> case
>>>>>>>>> when we fail to allocate a pagetable page. Or, Ryan had
>>>>>>>>> suggested in an off-the-list conversation that we can maintain a cache
>>>>>>>>> of PTE
>>>>>>>>> tables for every PMD block mapping, which will give us
>>>>>>>>> the same memory consumption as we do today, but not sure if this is
>>>>>>>>> worth it.
>>>>>>>>> x86 can already handle splitting but due to the callchains
>>>>>>>>> I have described above, it has the same problem, and the code has been
>>>>>>>>> working
>>>>>>>>> for years :)
>>>>>>>> I think it's preferable to avoid having to keep a cache of pgtable memory
>>>>>>>> if we
>>>>>>>> can...
>>>>>>> Yes, I agree. We simply don't know how many pages we need to cache, and it
>>>>>>> still can't guarantee 100% allocation success.
>>>>>> This is wrong... We can know how many pages will be needed for splitting
>>>>>> linear
>>>>>> mapping to PTEs for the worst case once linear mapping is finalized. But it
>>>>>> may
>>>>>> require a few hundred megabytes memory to guarantee allocation success. I
>>>>>> don't
>>>>>> think it is worth for such rare corner case.
>>>>> Indeed, we know exactly how much memory we need for pgtables to map the linear
>>>>> map by pte - that's exactly what we are doing today. So we _could_ keep a
>>>>> cache.
>>>>> We would still get the benefit of improved performance but we would lose the
>>>>> benefit of reduced memory.
>>>>>
>>>>> I think we need to solve the vm_reset_perms() problem somehow, before we can
>>>>> enable this.
>>>> Sorry I realise this was not very clear... I am saying I think we need to fix it
>>>> somehow. A cache would likely work. But I'd prefer to avoid it if we can find a
>>>> better solution.
>>> Took a deeper look at vm_reset_perms(). It was introduced by commit
>>> 868b104d7379 ("mm/vmalloc: Add flag for freeing of special permsissions"). The
>>> VM_FLUSH_RESET_PERMS flag is supposed to be set if the vmalloc memory is RO
>>> and/or ROX. So set_memory_ro() or set_memory_rox() is supposed to follow up
>>> vmalloc(). So the page table should be already split before reaching vfree().
>>> I think this why vm_reset_perms() doesn't not check return value.
> If vm_reset_perms() is assuming it can't/won't fail, I think it should at least
> output a warning if it does?
It should. Anyway, a warning will be raised if the split fails, so we have
some mitigation.
>
>>> I scrutinized all the callsites with VM_FLUSH_RESET_PERMS flag set.
> Just checking; I think you made a comment before about there only being a few
> sites that set VM_FLUSH_RESET_PERMS. But one of them is the helper,
> set_vm_flush_reset_perms(). So just making sure you also followed to the places
> that use that helper?
Yes, I did.
>
>>> The most
>>> of them has set_memory_ro() or set_memory_rox() followed.
> And are all callsites calling set_memory_*() for the entire cell that was
> allocated by vmalloc? If there are cases where it only calls that for a portion
> of it, then it's not gurranteed that the memory is correctly split.
Yes, all callsites call set_memory_*() for the entire range.
>
>>> But there are 3
>>> places I don't see set_memory_ro()/set_memory_rox() is called.
>>>
>>> 1. BPF trampoline allocation. The BPF trampoline calls
>>> arch_protect_bpf_trampoline(). The generic implementation does call
>>> set_memory_rox(). But the x86 and arm64 implementation just simply return 0.
>>> For x86, it is because execmem cache is used and it does call
>>> set_memory_rox(). ARM64 doesn't need to split page table before this series,
>>> so it should never fail. I think we just need to use the generic
>>> implementation (remove arm64 implementation) if this series is merged.
> I know zero about BPF. But it looks like the allocation happens in
> arch_alloc_bpf_trampoline(), which for arm64, calls bpf_prog_pack_alloc(). And
> for small sizes, it grabs some memory from a "pack". So doesn't this mean that
> you are calling set_memory_rox() for a sub-region of the cell, so that doesn't
> actually help at vm_reset_perms()-time?
Took a deeper look at the bpf pack allocator. The "pack" is allocated by
alloc_new_pack(), which does:
bpf_jit_alloc_exec()
set_vm_flush_reset_perms()
set_memory_rox()
If the size is greater than the pack size, it calls:
bpf_jit_alloc_exec()
set_vm_flush_reset_perms()
set_memory_rox()
So it looks like the bpf trampoline is good, and we don't need to do
anything; it should be removed from the list. I didn't look deeply enough
at the bpf pack allocator in the first place.
>
>>> 2. BPF dispatcher. It calls execmem_alloc which has VM_FLUSH_RESET_PERMS set.
>>> But it is used for rw allocation, so VM_FLUSH_RESET_PERMS should be
>>> unnecessary IIUC. So it doesn't matter even though vm_reset_perms() fails.
>>>
>>> 3. kprobe. S390's alloc_insn_page() does call set_memory_rox(), x86 also
>>> called set_memory_rox() before switching to execmem cache. The execmem cache
>>> calls set_memory_rox(). I don't know why ARM64 doesn't call it.
>>>
>>> So I think we just need to fix #1 and #3 per the above analysis. If this
>>> analysis look correct to you guys, I will prepare two patches to fix them.
> This all seems quite fragile. I find it interesting that vm_reset_perms() is
> doing break-before-make; it sets the PTEs as invalid, then flushes the TLB, then
> sets them to default. But for arm64, at least, I think break-before-make is not
> required. We are only changing the permissions so that can be done on live
> mappings; essentially change the sequence to; set default, flush TLB.
Yeah, I agree it is a little bit fragile. I think this is the "contract"
for vmalloc users: if you allocate ROX memory via vmalloc, you are required
to call set_memory_*(). But there is nothing to guarantee the "contract"
is followed, and I don't think this is the only such case in the kernel.
>
> If we do that, then if the memory was already default, then there is no need to
> do anything (so no chance of allocation failure). If the memory was not default,
> then it must have already been split to make it non-default, in which case we
> can also gurrantee that no allocations are required.
>
> What am I missing?
The comment says:
Set direct map to something invalid so that it won't be cached if there
are any accesses after the TLB flush, then flush the TLB and reset the
direct map permissions to the default.
IIUC, it guarantees the direct map entries can't be cached in the TLB after
the TLB flush from _vm_unmap_aliases(), by setting them invalid first, because
the TLB never caches invalid entries. Skipping setting the direct map to
invalid seems to break this. Or can "changing permissions on live mappings" on
ARM64 achieve the same goal?
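For reference, the sequence that comment describes is roughly the below
(paraphrased from mm/vmalloc.c; the computation of the direct map start/end
over area->pages is elided):

	set_area_direct_map(area, set_direct_map_invalid_noflush);
	_vm_unmap_aliases(start, end, flush_dmap);	/* one flush covering aliases + direct map */
	set_area_direct_map(area, set_direct_map_default_noflush);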
Thanks,
Yang
> Thanks,
> Ryan
>
>
>> Tested the below patch with bpftrace kfunc (allocate bpf trampoline) and
>> kprobes. It seems work well.
>>
>> diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/
>> kprobes.c
>> index 0c5d408afd95..c4f8c4750f1e 100644
>> --- a/arch/arm64/kernel/probes/kprobes.c
>> +++ b/arch/arm64/kernel/probes/kprobes.c
>> @@ -10,6 +10,7 @@
>>
>> #define pr_fmt(fmt) "kprobes: " fmt
>>
>> +#include <linux/execmem.h>
>> #include <linux/extable.h>
>> #include <linux/kasan.h>
>> #include <linux/kernel.h>
>> @@ -41,6 +42,17 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>> static void __kprobes
>> post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *);
>>
>> +void *alloc_insn_page(void)
>> +{
>> + void *page;
>> +
>> + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
>> + if (!page)
>> + return NULL;
>> + set_memory_rox((unsigned long)page, 1);
>> + return page;
>> +}
>> +
>> static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
>> {
>> kprobe_opcode_t *addr = p->ainsn.xol_insn;
>> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
>> index 52ffe115a8c4..3e301bc2cd66 100644
>> --- a/arch/arm64/net/bpf_jit_comp.c
>> +++ b/arch/arm64/net/bpf_jit_comp.c
>> @@ -2717,11 +2717,6 @@ void arch_free_bpf_trampoline(void *image, unsigned int
>> size)
>> bpf_prog_pack_free(image, size);
>> }
>>
>> -int arch_protect_bpf_trampoline(void *image, unsigned int size)
>> -{
>> - return 0;
>> -}
>> -
>> int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
>> void *ro_image_end, const struct btf_func_model *m,
>> u32 flags, struct bpf_tramp_links *tlinks,
>>
>>
>>> Thanks,
>>> Yang
>>>
>>>>
>>>>> Thanks,
>>>>> Ryan
>>>>>
>>>>>> Thanks,
>>>>>> Yang
>>>>>>
>>>>>>> Thanks,
>>>>>>> Yang
>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Ryan
>>>>>>>>
>>>>>>>>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-08 18:31 ` Yang Shi
@ 2025-09-09 14:36 ` Ryan Roberts
2025-09-09 15:32 ` Yang Shi
0 siblings, 1 reply; 51+ messages in thread
From: Ryan Roberts @ 2025-09-09 14:36 UTC (permalink / raw)
To: Yang Shi, Dev Jain, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 08/09/2025 19:31, Yang Shi wrote:
>
>
> On 9/8/25 9:34 AM, Ryan Roberts wrote:
>> On 04/09/2025 22:49, Yang Shi wrote:
>>>
>>> On 9/4/25 10:47 AM, Yang Shi wrote:
>>>>
>>>> On 9/4/25 6:16 AM, Ryan Roberts wrote:
>>>>> On 04/09/2025 14:14, Ryan Roberts wrote:
>>>>>> On 03/09/2025 01:50, Yang Shi wrote:
>>>>>>>>>> I am wondering whether we can just have a warn_on_once or something
>>>>>>>>>> for the
>>>>>>>>>> case
>>>>>>>>>> when we fail to allocate a pagetable page. Or, Ryan had
>>>>>>>>>> suggested in an off-the-list conversation that we can maintain a cache
>>>>>>>>>> of PTE
>>>>>>>>>> tables for every PMD block mapping, which will give us
>>>>>>>>>> the same memory consumption as we do today, but not sure if this is
>>>>>>>>>> worth it.
>>>>>>>>>> x86 can already handle splitting but due to the callchains
>>>>>>>>>> I have described above, it has the same problem, and the code has been
>>>>>>>>>> working
>>>>>>>>>> for years :)
>>>>>>>>> I think it's preferable to avoid having to keep a cache of pgtable memory
>>>>>>>>> if we
>>>>>>>>> can...
>>>>>>>> Yes, I agree. We simply don't know how many pages we need to cache, and it
>>>>>>>> still can't guarantee 100% allocation success.
>>>>>>> This is wrong... We can know how many pages will be needed for splitting
>>>>>>> linear
>>>>>>> mapping to PTEs for the worst case once linear mapping is finalized. But it
>>>>>>> may
>>>>>>> require a few hundred megabytes memory to guarantee allocation success. I
>>>>>>> don't
>>>>>>> think it is worth for such rare corner case.
>>>>>> Indeed, we know exactly how much memory we need for pgtables to map the
>>>>>> linear
>>>>>> map by pte - that's exactly what we are doing today. So we _could_ keep a
>>>>>> cache.
>>>>>> We would still get the benefit of improved performance but we would lose the
>>>>>> benefit of reduced memory.
>>>>>>
>>>>>> I think we need to solve the vm_reset_perms() problem somehow, before we can
>>>>>> enable this.
>>>>> Sorry I realise this was not very clear... I am saying I think we need to
>>>>> fix it
>>>>> somehow. A cache would likely work. But I'd prefer to avoid it if we can
>>>>> find a
>>>>> better solution.
>>>> Took a deeper look at vm_reset_perms(). It was introduced by commit
>>>> 868b104d7379 ("mm/vmalloc: Add flag for freeing of special permsissions"). The
>>>> VM_FLUSH_RESET_PERMS flag is supposed to be set if the vmalloc memory is RO
>>>> and/or ROX. So set_memory_ro() or set_memory_rox() is supposed to follow up
>>>> vmalloc(). So the page table should be already split before reaching vfree().
>>>> I think this why vm_reset_perms() doesn't not check return value.
>> If vm_reset_perms() is assuming it can't/won't fail, I think it should at least
>> output a warning if it does?
>
> It should. Anyway warning will be raised if split fails. We have somehow
> mitigation.
>
>>
>>>> I scrutinized all the callsites with VM_FLUSH_RESET_PERMS flag set.
>> Just checking; I think you made a comment before about there only being a few
>> sites that set VM_FLUSH_RESET_PERMS. But one of them is the helper,
>> set_vm_flush_reset_perms(). So just making sure you also followed to the places
>> that use that helper?
>
> Yes, I did.
>
>>
>>>> The most
>>>> of them has set_memory_ro() or set_memory_rox() followed.
>> And are all callsites calling set_memory_*() for the entire cell that was
>> allocated by vmalloc? If there are cases where it only calls that for a portion
>> of it, then it's not gurranteed that the memory is correctly split.
>
> Yes, all callsites call set_memory_*() for the entire range.
>
>>
>>>> But there are 3
>>>> places I don't see set_memory_ro()/set_memory_rox() is called.
>>>>
>>>> 1. BPF trampoline allocation. The BPF trampoline calls
>>>> arch_protect_bpf_trampoline(). The generic implementation does call
>>>> set_memory_rox(). But the x86 and arm64 implementation just simply return 0.
>>>> For x86, it is because execmem cache is used and it does call
>>>> set_memory_rox(). ARM64 doesn't need to split page table before this series,
>>>> so it should never fail. I think we just need to use the generic
>>>> implementation (remove arm64 implementation) if this series is merged.
>> I know zero about BPF. But it looks like the allocation happens in
>> arch_alloc_bpf_trampoline(), which for arm64, calls bpf_prog_pack_alloc(). And
>> for small sizes, it grabs some memory from a "pack". So doesn't this mean that
>> you are calling set_memory_rox() for a sub-region of the cell, so that doesn't
>> actually help at vm_reset_perms()-time?
>
> Took a deeper look at bpf pack allocator. The "pack" is allocated by
> alloc_new_pack(), which does:
> bpf_jit_alloc_exec()
> set_vm_flush_reset_perms()
> set_memory_rox()
>
> If the size is greater than the pack size, it calls:
> bpf_jit_alloc_exec()
> set_vm_flush_reset_perms()
> set_memory_rox()
>
> So it looks like bpf trampoline is good, and we don't need do anything. It
> should be removed from the list. I didn't look deep enough for bpf pack
> allocator in the first place.
>
>>
>>>> 2. BPF dispatcher. It calls execmem_alloc which has VM_FLUSH_RESET_PERMS set.
>>>> But it is used for rw allocation, so VM_FLUSH_RESET_PERMS should be
>>>> unnecessary IIUC. So it doesn't matter even though vm_reset_perms() fails.
>>>>
>>>> 3. kprobe. S390's alloc_insn_page() does call set_memory_rox(), x86 also
>>>> called set_memory_rox() before switching to execmem cache. The execmem cache
>>>> calls set_memory_rox(). I don't know why ARM64 doesn't call it.
>>>>
>>>> So I think we just need to fix #1 and #3 per the above analysis. If this
>>>> analysis look correct to you guys, I will prepare two patches to fix them.
>> This all seems quite fragile. I find it interesting that vm_reset_perms() is
>> doing break-before-make; it sets the PTEs as invalid, then flushes the TLB, then
>> sets them to default. But for arm64, at least, I think break-before-make is not
>> required. We are only changing the permissions so that can be done on live
>> mappings; essentially change the sequence to; set default, flush TLB.
>
> Yeah, I agree it is a little bit fragile. I think this is the "contract" for
> vmalloc users. You allocate ROX memory via vmalloc, you are required to call
> set_memory_*(). But there is nothing to guarantee the "contract" is followed.
> But I don't think this is the only case in kernel.
>
>>
>> If we do that, then if the memory was already default, then there is no need to
>> do anything (so no chance of allocation failure). If the memory was not default,
>> then it must have already been split to make it non-default, in which case we
>> can also gurrantee that no allocations are required.
>>
>> What am I missing?
>
> The comment says:
> Set direct map to something invalid so that it won't be cached if there are any
> accesses after the TLB flush, then flush the TLB and reset the direct map
> permissions to the default.
>
> IIUC, it guarantees the direct map can't be cached in TLB after TLB flush from
> _vm_unmap_aliases() by setting them invalid because TLB never cache invalid
> entries. Skipping set direct map to invalid seems break this. Or "changing
> permission on live mappings" on ARM64 can achieve the same goal?
Here's my understanding of the intent of the code:
Let's say we start with some memory that has been mapped RO. Our goal is to
reset the memory back to RW and ensure that no TLB entry remains in the TLB for
the old RO mapping. There are 2 ways to do that:
Approach 1 (used in current code):
1. set PTE to invalid
2. invalidate any TLB entry for the VA
3. set the PTE to RW
Approach 2:
1. set the PTE to RW
2. invalidate any TLB entry for the VA
The benefit of approach 1 is that it is guaranteed to be impossible for
different CPUs to have different translations for the same VA in their
respective TLBs. But for approach 2, it's possible that between steps 1 and 2,
one CPU has a RO entry and another CPU has a RW entry. But that will get fixed
once the TLB is flushed - it's not really an issue.
(There is probably also an obscure way to end up with 2 TLB entries (one with RO
and one with RW) for the same CPU, but the arm64 architecture permits that as
long as it's only a permission mismatch).
Anyway, approach 2 is used when changing memory permissions on user mappings,
so I don't see why we can't take the same approach here. That would solve this
whole class of issues for us.
Thanks,
Ryan
>
> Thanks,
> Yang
>
>> Thanks,
>> Ryan
>>
>>
>>> Tested the below patch with bpftrace kfunc (allocate bpf trampoline) and
>>> kprobes. It seems work well.
>>>
>>> diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/
>>> kprobes.c
>>> index 0c5d408afd95..c4f8c4750f1e 100644
>>> --- a/arch/arm64/kernel/probes/kprobes.c
>>> +++ b/arch/arm64/kernel/probes/kprobes.c
>>> @@ -10,6 +10,7 @@
>>>
>>> #define pr_fmt(fmt) "kprobes: " fmt
>>>
>>> +#include <linux/execmem.h>
>>> #include <linux/extable.h>
>>> #include <linux/kasan.h>
>>> #include <linux/kernel.h>
>>> @@ -41,6 +42,17 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>>> static void __kprobes
>>> post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs
>>> *);
>>>
>>> +void *alloc_insn_page(void)
>>> +{
>>> + void *page;
>>> +
>>> + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
>>> + if (!page)
>>> + return NULL;
>>> + set_memory_rox((unsigned long)page, 1);
>>> + return page;
>>> +}
>>> +
>>> static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
>>> {
>>> kprobe_opcode_t *addr = p->ainsn.xol_insn;
>>> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
>>> index 52ffe115a8c4..3e301bc2cd66 100644
>>> --- a/arch/arm64/net/bpf_jit_comp.c
>>> +++ b/arch/arm64/net/bpf_jit_comp.c
>>> @@ -2717,11 +2717,6 @@ void arch_free_bpf_trampoline(void *image, unsigned int
>>> size)
>>> bpf_prog_pack_free(image, size);
>>> }
>>>
>>> -int arch_protect_bpf_trampoline(void *image, unsigned int size)
>>> -{
>>> - return 0;
>>> -}
>>> -
>>> int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
>>> void *ro_image_end, const struct
>>> btf_func_model *m,
>>> u32 flags, struct bpf_tramp_links *tlinks,
>>>
>>>
>>>> Thanks,
>>>> Yang
>>>>
>>>>>
>>>>>> Thanks,
>>>>>> Ryan
>>>>>>
>>>>>>> Thanks,
>>>>>>> Yang
>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Yang
>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Ryan
>>>>>>>>>
>>>>>>>>>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-09 14:36 ` Ryan Roberts
@ 2025-09-09 15:32 ` Yang Shi
2025-09-09 16:32 ` Ryan Roberts
0 siblings, 1 reply; 51+ messages in thread
From: Yang Shi @ 2025-09-09 15:32 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, Catalin Marinas, Will Deacon,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 9/9/25 7:36 AM, Ryan Roberts wrote:
> On 08/09/2025 19:31, Yang Shi wrote:
>>
>> On 9/8/25 9:34 AM, Ryan Roberts wrote:
>>> On 04/09/2025 22:49, Yang Shi wrote:
>>>> On 9/4/25 10:47 AM, Yang Shi wrote:
>>>>> On 9/4/25 6:16 AM, Ryan Roberts wrote:
>>>>>> On 04/09/2025 14:14, Ryan Roberts wrote:
>>>>>>> On 03/09/2025 01:50, Yang Shi wrote:
>>>>>>>>>>> I am wondering whether we can just have a warn_on_once or something
>>>>>>>>>>> for the
>>>>>>>>>>> case
>>>>>>>>>>> when we fail to allocate a pagetable page. Or, Ryan had
>>>>>>>>>>> suggested in an off-the-list conversation that we can maintain a cache
>>>>>>>>>>> of PTE
>>>>>>>>>>> tables for every PMD block mapping, which will give us
>>>>>>>>>>> the same memory consumption as we do today, but not sure if this is
>>>>>>>>>>> worth it.
>>>>>>>>>>> x86 can already handle splitting but due to the callchains
>>>>>>>>>>> I have described above, it has the same problem, and the code has been
>>>>>>>>>>> working
>>>>>>>>>>> for years :)
>>>>>>>>>> I think it's preferable to avoid having to keep a cache of pgtable memory
>>>>>>>>>> if we
>>>>>>>>>> can...
>>>>>>>>> Yes, I agree. We simply don't know how many pages we need to cache, and it
>>>>>>>>> still can't guarantee 100% allocation success.
>>>>>>>> This is wrong... We can know how many pages will be needed for splitting
>>>>>>>> linear
>>>>>>>> mapping to PTEs for the worst case once linear mapping is finalized. But it
>>>>>>>> may
>>>>>>>> require a few hundred megabytes memory to guarantee allocation success. I
>>>>>>>> don't
>>>>>>>> think it is worth for such rare corner case.
>>>>>>> Indeed, we know exactly how much memory we need for pgtables to map the
>>>>>>> linear
>>>>>>> map by pte - that's exactly what we are doing today. So we _could_ keep a
>>>>>>> cache.
>>>>>>> We would still get the benefit of improved performance but we would lose the
>>>>>>> benefit of reduced memory.
>>>>>>>
>>>>>>> I think we need to solve the vm_reset_perms() problem somehow, before we can
>>>>>>> enable this.
>>>>>> Sorry I realise this was not very clear... I am saying I think we need to
>>>>>> fix it
>>>>>> somehow. A cache would likely work. But I'd prefer to avoid it if we can
>>>>>> find a
>>>>>> better solution.
>>>>> Took a deeper look at vm_reset_perms(). It was introduced by commit
>>>>> 868b104d7379 ("mm/vmalloc: Add flag for freeing of special permsissions"). The
>>>>> VM_FLUSH_RESET_PERMS flag is supposed to be set if the vmalloc memory is RO
>>>>> and/or ROX. So set_memory_ro() or set_memory_rox() is supposed to follow up
>>>>> vmalloc(). So the page table should be already split before reaching vfree().
>>>>> I think this why vm_reset_perms() doesn't not check return value.
>>> If vm_reset_perms() is assuming it can't/won't fail, I think it should at least
>>> output a warning if it does?
>> It should. Anyway warning will be raised if split fails. We have somehow
>> mitigation.
>>
>>>>> I scrutinized all the callsites with VM_FLUSH_RESET_PERMS flag set.
>>> Just checking; I think you made a comment before about there only being a few
>>> sites that set VM_FLUSH_RESET_PERMS. But one of them is the helper,
>>> set_vm_flush_reset_perms(). So just making sure you also followed to the places
>>> that use that helper?
>> Yes, I did.
>>
>>>>> The most
>>>>> of them has set_memory_ro() or set_memory_rox() followed.
>>> And are all callsites calling set_memory_*() for the entire cell that was
>>> allocated by vmalloc? If there are cases where it only calls that for a portion
>>> of it, then it's not gurranteed that the memory is correctly split.
>> Yes, all callsites call set_memory_*() for the entire range.
>>
>>>>> But there are 3
>>>>> places I don't see set_memory_ro()/set_memory_rox() is called.
>>>>>
>>>>> 1. BPF trampoline allocation. The BPF trampoline calls
>>>>> arch_protect_bpf_trampoline(). The generic implementation does call
>>>>> set_memory_rox(). But the x86 and arm64 implementation just simply return 0.
>>>>> For x86, it is because execmem cache is used and it does call
>>>>> set_memory_rox(). ARM64 doesn't need to split page table before this series,
>>>>> so it should never fail. I think we just need to use the generic
>>>>> implementation (remove arm64 implementation) if this series is merged.
>>> I know zero about BPF. But it looks like the allocation happens in
>>> arch_alloc_bpf_trampoline(), which for arm64, calls bpf_prog_pack_alloc(). And
>>> for small sizes, it grabs some memory from a "pack". So doesn't this mean that
>>> you are calling set_memory_rox() for a sub-region of the cell, so that doesn't
>>> actually help at vm_reset_perms()-time?
>> Took a deeper look at bpf pack allocator. The "pack" is allocated by
>> alloc_new_pack(), which does:
>> bpf_jit_alloc_exec()
>> set_vm_flush_reset_perms()
>> set_memory_rox()
>>
>> If the size is greater than the pack size, it calls:
>> bpf_jit_alloc_exec()
>> set_vm_flush_reset_perms()
>> set_memory_rox()
>>
>> So it looks like bpf trampoline is good, and we don't need do anything. It
>> should be removed from the list. I didn't look deep enough for bpf pack
>> allocator in the first place.
>>
>>>>> 2. BPF dispatcher. It calls execmem_alloc which has VM_FLUSH_RESET_PERMS set.
>>>>> But it is used for rw allocation, so VM_FLUSH_RESET_PERMS should be
>>>>> unnecessary IIUC. So it doesn't matter even though vm_reset_perms() fails.
>>>>>
>>>>> 3. kprobe. S390's alloc_insn_page() does call set_memory_rox(), x86 also
>>>>> called set_memory_rox() before switching to execmem cache. The execmem cache
>>>>> calls set_memory_rox(). I don't know why ARM64 doesn't call it.
>>>>>
>>>>> So I think we just need to fix #1 and #3 per the above analysis. If this
>>>>> analysis look correct to you guys, I will prepare two patches to fix them.
>>> This all seems quite fragile. I find it interesting that vm_reset_perms() is
>>> doing break-before-make; it sets the PTEs as invalid, then flushes the TLB, then
>>> sets them to default. But for arm64, at least, I think break-before-make is not
>>> required. We are only changing the permissions so that can be done on live
>>> mappings; essentially change the sequence to; set default, flush TLB.
>> Yeah, I agree it is a little bit fragile. I think this is the "contract" for
>> vmalloc users. You allocate ROX memory via vmalloc, you are required to call
>> set_memory_*(). But there is nothing to guarantee the "contract" is followed.
>> But I don't think this is the only case in kernel.
>>
>>> If we do that, then if the memory was already default, then there is no need to
>>> do anything (so no chance of allocation failure). If the memory was not default,
>>> then it must have already been split to make it non-default, in which case we
>>> can also gurrantee that no allocations are required.
>>>
>>> What am I missing?
>> The comment says:
>> Set direct map to something invalid so that it won't be cached if there are any
>> accesses after the TLB flush, then flush the TLB and reset the direct map
>> permissions to the default.
>>
>> IIUC, it guarantees the direct map can't be cached in TLB after TLB flush from
>> _vm_unmap_aliases() by setting them invalid because TLB never cache invalid
>> entries. Skipping set direct map to invalid seems break this. Or "changing
>> permission on live mappings" on ARM64 can achieve the same goal?
> Here's my understanding of the intent of the code:
>
> Let's say we start with some memory that has been mapped RO. Our goal is to
> reset the memory back to RW and ensure that no TLB entry remains in the TLB for
> the old RO mapping. There are 2 ways to do that:
>
> Approach 1 (used in current code):
> 1. set PTE to invalid
> 2. invalidate any TLB entry for the VA
> 3. set the PTE to RW
>
> Approach 2:
> 1. set the PTE to RW
> 2. invalidate any TLB entry for the VA
IIUC, the intent of the code is "reset direct map permissions *without*
leaving a RW+X window". The TLB flush call actually flushes both the vmalloc
VA and the direct map together.
So if this is the intent, approach #2 may leave the VA with X permission while
the direct map is RW at the same time. That seems to break the intent.
Thanks,
Yang
>
> The benefit of approach 1 is that it is guarranteed that it is impossible for
> different CPUs to have different translations for the same VA in their
> respective TLB. But for approach 2, it's possible that between steps 1 and 2, 1
> CPU has a RO entry and another CPU has a RW entry. But that will get fixed once
> the TLB is flushed - it's not really an issue.
>
> (There is probably also an obscure way to end up with 2 TLB entries (one with RO
> and one with RW) for the same CPU, but the arm64 architecture permits that as
> long as it's only a permission mismatch).
>
> Anyway, approach 2 is used when changing memory permissions on user mappings, so
> I don't see why we can't take the same approach here. That would solve this
> whole class of issue for us.
>
> Thanks,
> Ryan
>
>
>> Thanks,
>> Yang
>>
>>> Thanks,
>>> Ryan
>>>
>>>
>>>> Tested the below patch with bpftrace kfunc (allocate bpf trampoline) and
>>>> kprobes. It seems work well.
>>>>
>>>> diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/
>>>> kprobes.c
>>>> index 0c5d408afd95..c4f8c4750f1e 100644
>>>> --- a/arch/arm64/kernel/probes/kprobes.c
>>>> +++ b/arch/arm64/kernel/probes/kprobes.c
>>>> @@ -10,6 +10,7 @@
>>>>
>>>> #define pr_fmt(fmt) "kprobes: " fmt
>>>>
>>>> +#include <linux/execmem.h>
>>>> #include <linux/extable.h>
>>>> #include <linux/kasan.h>
>>>> #include <linux/kernel.h>
>>>> @@ -41,6 +42,17 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>>>> static void __kprobes
>>>> post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs
>>>> *);
>>>>
>>>> +void *alloc_insn_page(void)
>>>> +{
>>>> + void *page;
>>>> +
>>>> + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
>>>> + if (!page)
>>>> + return NULL;
>>>> + set_memory_rox((unsigned long)page, 1);
>>>> + return page;
>>>> +}
>>>> +
>>>> static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
>>>> {
>>>> kprobe_opcode_t *addr = p->ainsn.xol_insn;
>>>> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
>>>> index 52ffe115a8c4..3e301bc2cd66 100644
>>>> --- a/arch/arm64/net/bpf_jit_comp.c
>>>> +++ b/arch/arm64/net/bpf_jit_comp.c
>>>> @@ -2717,11 +2717,6 @@ void arch_free_bpf_trampoline(void *image, unsigned int
>>>> size)
>>>> bpf_prog_pack_free(image, size);
>>>> }
>>>>
>>>> -int arch_protect_bpf_trampoline(void *image, unsigned int size)
>>>> -{
>>>> - return 0;
>>>> -}
>>>> -
>>>> int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
>>>> void *ro_image_end, const struct
>>>> btf_func_model *m,
>>>> u32 flags, struct bpf_tramp_links *tlinks,
>>>>
>>>>
>>>>> Thanks,
>>>>> Yang
>>>>>
>>>>>>> Thanks,
>>>>>>> Ryan
>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Yang
>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Yang
>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Ryan
>>>>>>>>>>
>>>>>>>>>>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-09 15:32 ` Yang Shi
@ 2025-09-09 16:32 ` Ryan Roberts
2025-09-09 17:32 ` Yang Shi
0 siblings, 1 reply; 51+ messages in thread
From: Ryan Roberts @ 2025-09-09 16:32 UTC (permalink / raw)
To: Yang Shi, Dev Jain, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 09/09/2025 16:32, Yang Shi wrote:
>
>
> On 9/9/25 7:36 AM, Ryan Roberts wrote:
>> On 08/09/2025 19:31, Yang Shi wrote:
>>>
>>> On 9/8/25 9:34 AM, Ryan Roberts wrote:
>>>> On 04/09/2025 22:49, Yang Shi wrote:
>>>>> On 9/4/25 10:47 AM, Yang Shi wrote:
>>>>>> On 9/4/25 6:16 AM, Ryan Roberts wrote:
>>>>>>> On 04/09/2025 14:14, Ryan Roberts wrote:
>>>>>>>> On 03/09/2025 01:50, Yang Shi wrote:
>>>>>>>>>>>> I am wondering whether we can just have a warn_on_once or something
>>>>>>>>>>>> for the
>>>>>>>>>>>> case
>>>>>>>>>>>> when we fail to allocate a pagetable page. Or, Ryan had
>>>>>>>>>>>> suggested in an off-the-list conversation that we can maintain a cache
>>>>>>>>>>>> of PTE
>>>>>>>>>>>> tables for every PMD block mapping, which will give us
>>>>>>>>>>>> the same memory consumption as we do today, but not sure if this is
>>>>>>>>>>>> worth it.
>>>>>>>>>>>> x86 can already handle splitting but due to the callchains
>>>>>>>>>>>> I have described above, it has the same problem, and the code has been
>>>>>>>>>>>> working
>>>>>>>>>>>> for years :)
>>>>>>>>>>> I think it's preferable to avoid having to keep a cache of pgtable
>>>>>>>>>>> memory
>>>>>>>>>>> if we
>>>>>>>>>>> can...
>>>>>>>>>> Yes, I agree. We simply don't know how many pages we need to cache,
>>>>>>>>>> and it
>>>>>>>>>> still can't guarantee 100% allocation success.
>>>>>>>>> This is wrong... We can know how many pages will be needed for splitting
>>>>>>>>> linear
>>>>>>>>> mapping to PTEs for the worst case once linear mapping is finalized.
>>>>>>>>> But it
>>>>>>>>> may
>>>>>>>>> require a few hundred megabytes memory to guarantee allocation success. I
>>>>>>>>> don't
>>>>>>>>> think it is worth for such rare corner case.
>>>>>>>> Indeed, we know exactly how much memory we need for pgtables to map the
>>>>>>>> linear
>>>>>>>> map by pte - that's exactly what we are doing today. So we _could_ keep a
>>>>>>>> cache.
>>>>>>>> We would still get the benefit of improved performance but we would lose
>>>>>>>> the
>>>>>>>> benefit of reduced memory.
>>>>>>>>
>>>>>>>> I think we need to solve the vm_reset_perms() problem somehow, before we
>>>>>>>> can
>>>>>>>> enable this.
>>>>>>> Sorry I realise this was not very clear... I am saying I think we need to
>>>>>>> fix it
>>>>>>> somehow. A cache would likely work. But I'd prefer to avoid it if we can
>>>>>>> find a
>>>>>>> better solution.
>>>>>> Took a deeper look at vm_reset_perms(). It was introduced by commit
>>>>>> 868b104d7379 ("mm/vmalloc: Add flag for freeing of special permsissions").
>>>>>> The
>>>>>> VM_FLUSH_RESET_PERMS flag is supposed to be set if the vmalloc memory is RO
>>>>>> and/or ROX. So set_memory_ro() or set_memory_rox() is supposed to follow up
>>>>>> vmalloc(). So the page table should be already split before reaching vfree().
>>>>>> I think this why vm_reset_perms() doesn't not check return value.
>>>> If vm_reset_perms() is assuming it can't/won't fail, I think it should at least
>>>> output a warning if it does?
>>> It should. Anyway warning will be raised if split fails. We have somehow
>>> mitigation.
>>>
>>>>>> I scrutinized all the callsites with VM_FLUSH_RESET_PERMS flag set.
>>>> Just checking; I think you made a comment before about there only being a few
>>>> sites that set VM_FLUSH_RESET_PERMS. But one of them is the helper,
>>>> set_vm_flush_reset_perms(). So just making sure you also followed to the places
>>>> that use that helper?
>>> Yes, I did.
>>>
>>>>>> The most
>>>>>> of them has set_memory_ro() or set_memory_rox() followed.
>>>> And are all callsites calling set_memory_*() for the entire cell that was
>>>> allocated by vmalloc? If there are cases where it only calls that for a portion
>>>> of it, then it's not gurranteed that the memory is correctly split.
>>> Yes, all callsites call set_memory_*() for the entire range.
>>>
>>>>>> But there are 3
>>>>>> places I don't see set_memory_ro()/set_memory_rox() is called.
>>>>>>
>>>>>> 1. BPF trampoline allocation. The BPF trampoline calls
>>>>>> arch_protect_bpf_trampoline(). The generic implementation does call
>>>>>> set_memory_rox(). But the x86 and arm64 implementation just simply return 0.
>>>>>> For x86, it is because execmem cache is used and it does call
>>>>>> set_memory_rox(). ARM64 doesn't need to split page table before this series,
>>>>>> so it should never fail. I think we just need to use the generic
>>>>>> implementation (remove arm64 implementation) if this series is merged.
>>>> I know zero about BPF. But it looks like the allocation happens in
>>>> arch_alloc_bpf_trampoline(), which for arm64, calls bpf_prog_pack_alloc(). And
>>>> for small sizes, it grabs some memory from a "pack". So doesn't this mean that
>>>> you are calling set_memory_rox() for a sub-region of the cell, so that doesn't
>>>> actually help at vm_reset_perms()-time?
>>> Took a deeper look at bpf pack allocator. The "pack" is allocated by
>>> alloc_new_pack(), which does:
>>> bpf_jit_alloc_exec()
>>> set_vm_flush_reset_perms()
>>> set_memory_rox()
>>>
>>> If the size is greater than the pack size, it calls:
>>> bpf_jit_alloc_exec()
>>> set_vm_flush_reset_perms()
>>> set_memory_rox()
>>>
>>> So it looks like bpf trampoline is good, and we don't need do anything. It
>>> should be removed from the list. I didn't look deep enough for bpf pack
>>> allocator in the first place.
>>>
>>>>>> 2. BPF dispatcher. It calls execmem_alloc which has VM_FLUSH_RESET_PERMS set.
>>>>>> But it is used for rw allocation, so VM_FLUSH_RESET_PERMS should be
>>>>>> unnecessary IIUC. So it doesn't matter even though vm_reset_perms() fails.
>>>>>>
>>>>>> 3. kprobe. S390's alloc_insn_page() does call set_memory_rox(), x86 also
>>>>>> called set_memory_rox() before switching to execmem cache. The execmem cache
>>>>>> calls set_memory_rox(). I don't know why ARM64 doesn't call it.
>>>>>>
>>>>>> So I think we just need to fix #1 and #3 per the above analysis. If this
>>>>>> analysis look correct to you guys, I will prepare two patches to fix them.
>>>> This all seems quite fragile. I find it interesting that vm_reset_perms() is
>>>> doing break-before-make; it sets the PTEs as invalid, then flushes the TLB,
>>>> then
>>>> sets them to default. But for arm64, at least, I think break-before-make is not
>>>> required. We are only changing the permissions so that can be done on live
>>>> mappings; essentially change the sequence to; set default, flush TLB.
>>> Yeah, I agree it is a little bit fragile. I think this is the "contract" for
>>> vmalloc users. You allocate ROX memory via vmalloc, you are required to call
>>> set_memory_*(). But there is nothing to guarantee the "contract" is followed.
>>> But I don't think this is the only case in kernel.
>>>
>>>> If we do that, then if the memory was already default, then there is no need to
>>>> do anything (so no chance of allocation failure). If the memory was not
>>>> default,
>>>> then it must have already been split to make it non-default, in which case we
>>>> can also gurrantee that no allocations are required.
>>>>
>>>> What am I missing?
>>> The comment says:
>>> Set direct map to something invalid so that it won't be cached if there are any
>>> accesses after the TLB flush, then flush the TLB and reset the direct map
>>> permissions to the default.
>>>
>>> IIUC, it guarantees the direct map can't be cached in TLB after TLB flush from
>>> _vm_unmap_aliases() by setting them invalid because TLB never cache invalid
>>> entries. Skipping set direct map to invalid seems break this. Or "changing
>>> permission on live mappings" on ARM64 can achieve the same goal?
>> Here's my understanding of the intent of the code:
>>
>> Let's say we start with some memory that has been mapped RO. Our goal is to
>> reset the memory back to RW and ensure that no TLB entry remains in the TLB for
>> the old RO mapping. There are 2 ways to do that:
>
>
>
>>
>> Approach 1 (used in current code):
>> 1. set PTE to invalid
>> 2. invalidate any TLB entry for the VA
>> 3. set the PTE to RW
>>
>> Approach 2:
>> 1. set the PTE to RW
>> 2. invalidate any TLB entry for the VA
>
> IIUC, the intent of the code is "reset direct map permission *without* leaving a
> RW+X window". The TLB flush call actually flushes both VA and direct map together.
> So if this is the intent, approach #2 may have VA with X permission but direct
> map may be RW in the meantime. It seems to break the intent.
Ahh! Thanks, it's starting to make more sense now.
Though on first sight it seems a bit mad to me to form a tlb flush range that
covers all the direct map pages and all the lazy vunmap regions. Is that
intended to be a perf optimization or something else? It's not clear from the
history.
Could this be split into 2 operations?
1. unmap the aliases (+ tlbi the aliases).
2. set the direct memory back to default (+ tlbi the direct map region).
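Something like this, in very rough pseudo-code (just a sketch of the ordering using
today's helpers, ignoring the lazy-vunmap purging that _vm_unmap_aliases() also does;
not a tested patch, and dmap_start/dmap_end are illustrative):

    unsigned long start = (unsigned long)area->addr;
    unsigned long end = start + get_vm_area_size(area);
    int i;

    /* 1. Unmap the vmalloc aliases for this area and TLBI just those VAs. */
    vunmap_range(start, end);
    flush_tlb_kernel_range(start, end);

    /* 2. Reset the direct map entries and TLBI just the direct map range. */
    for (i = 0; i < area->nr_pages; i++)
            set_direct_map_default_noflush(area->pages[i]);
    flush_tlb_kernel_range(dmap_start, dmap_end);   /* min/max of page_address() */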
The only 2 potential problems I can think of are;
- Performance: 2 tlbis instead of 1, but conversely we probably avoid flushing
a load of TLB entries that we didn't really need to.
- Given there is now no lock around the tlbis (currently it's under
vmap_purge_lock) is there a race where a new alias can appear between steps 1
and 2? I don't think so, because the memory is allocated to the current mapping
so how is it going to get re-mapped?
Could this solve it?
>
> Thanks,
> Yang
>
>>
>> The benefit of approach 1 is that it is guaranteed that it is impossible for
>> different CPUs to have different translations for the same VA in their
>> respective TLB. But for approach 2, it's possible that between steps 1 and 2, 1
>> CPU has a RO entry and another CPU has a RW entry. But that will get fixed once
>> the TLB is flushed - it's not really an issue.
>>
>> (There is probably also an obscure way to end up with 2 TLB entries (one with RO
>> and one with RW) for the same CPU, but the arm64 architecture permits that as
>> long as it's only a permission mismatch).
>>
>> Anyway, approach 2 is used when changing memory permissions on user mappings, so
>> I don't see why we can't take the same approach here. That would solve this
>> whole class of issue for us.
>>
>> Thanks,
>> Ryan
>>
>>
>>> Thanks,
>>> Yang
>>>
>>>> Thanks,
>>>> Ryan
>>>>
>>>>
>>>>> Tested the below patch with bpftrace kfunc (allocate bpf trampoline) and
>>>>> kprobes. It seems to work well.
>>>>>
>>>>> diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/
>>>>> kprobes.c
>>>>> index 0c5d408afd95..c4f8c4750f1e 100644
>>>>> --- a/arch/arm64/kernel/probes/kprobes.c
>>>>> +++ b/arch/arm64/kernel/probes/kprobes.c
>>>>> @@ -10,6 +10,7 @@
>>>>>
>>>>> #define pr_fmt(fmt) "kprobes: " fmt
>>>>>
>>>>> +#include <linux/execmem.h>
>>>>> #include <linux/extable.h>
>>>>> #include <linux/kasan.h>
>>>>> #include <linux/kernel.h>
>>>>> @@ -41,6 +42,17 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>>>>> static void __kprobes
>>>>> post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs
>>>>> *);
>>>>>
>>>>> +void *alloc_insn_page(void)
>>>>> +{
>>>>> + void *page;
>>>>> +
>>>>> + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
>>>>> + if (!page)
>>>>> + return NULL;
>>>>> + set_memory_rox((unsigned long)page, 1);
>>>>> + return page;
>>>>> +}
>>>>> +
>>>>> static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
>>>>> {
>>>>> kprobe_opcode_t *addr = p->ainsn.xol_insn;
>>>>> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
>>>>> index 52ffe115a8c4..3e301bc2cd66 100644
>>>>> --- a/arch/arm64/net/bpf_jit_comp.c
>>>>> +++ b/arch/arm64/net/bpf_jit_comp.c
>>>>> @@ -2717,11 +2717,6 @@ void arch_free_bpf_trampoline(void *image, unsigned int
>>>>> size)
>>>>> bpf_prog_pack_free(image, size);
>>>>> }
>>>>>
>>>>> -int arch_protect_bpf_trampoline(void *image, unsigned int size)
>>>>> -{
>>>>> - return 0;
>>>>> -}
>>>>> -
>>>>> int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
>>>>> void *ro_image_end, const struct
>>>>> btf_func_model *m,
>>>>> u32 flags, struct bpf_tramp_links *tlinks,
>>>>>
>>>>>
>>>>>> Thanks,
>>>>>> Yang
>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Ryan
>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Yang
>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Yang
>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Ryan
>>>>>>>>>>>
>>>>>>>>>>>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-09 16:32 ` Ryan Roberts
@ 2025-09-09 17:32 ` Yang Shi
2025-09-11 22:03 ` Yang Shi
0 siblings, 1 reply; 51+ messages in thread
From: Yang Shi @ 2025-09-09 17:32 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, Catalin Marinas, Will Deacon,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 9/9/25 9:32 AM, Ryan Roberts wrote:
> On 09/09/2025 16:32, Yang Shi wrote:
>>
>> On 9/9/25 7:36 AM, Ryan Roberts wrote:
>>> On 08/09/2025 19:31, Yang Shi wrote:
>>>> On 9/8/25 9:34 AM, Ryan Roberts wrote:
>>>>> On 04/09/2025 22:49, Yang Shi wrote:
>>>>>> On 9/4/25 10:47 AM, Yang Shi wrote:
>>>>>>> On 9/4/25 6:16 AM, Ryan Roberts wrote:
>>>>>>>> On 04/09/2025 14:14, Ryan Roberts wrote:
>>>>>>>>> On 03/09/2025 01:50, Yang Shi wrote:
>>>>>>>>>>>>> I am wondering whether we can just have a warn_on_once or something
>>>>>>>>>>>>> for the
>>>>>>>>>>>>> case
>>>>>>>>>>>>> when we fail to allocate a pagetable page. Or, Ryan had
>>>>>>>>>>>>> suggested in an off-the-list conversation that we can maintain a cache
>>>>>>>>>>>>> of PTE
>>>>>>>>>>>>> tables for every PMD block mapping, which will give us
>>>>>>>>>>>>> the same memory consumption as we do today, but not sure if this is
>>>>>>>>>>>>> worth it.
>>>>>>>>>>>>> x86 can already handle splitting but due to the callchains
>>>>>>>>>>>>> I have described above, it has the same problem, and the code has been
>>>>>>>>>>>>> working
>>>>>>>>>>>>> for years :)
>>>>>>>>>>>> I think it's preferable to avoid having to keep a cache of pgtable
>>>>>>>>>>>> memory
>>>>>>>>>>>> if we
>>>>>>>>>>>> can...
>>>>>>>>>>> Yes, I agree. We simply don't know how many pages we need to cache,
>>>>>>>>>>> and it
>>>>>>>>>>> still can't guarantee 100% allocation success.
>>>>>>>>>> This is wrong... We can know how many pages will be needed for splitting
>>>>>>>>>> linear
>>>>>>>>>> mapping to PTEs for the worst case once linear mapping is finalized.
>>>>>>>>>> But it
>>>>>>>>>> may
>>>>>>>>>> require a few hundred megabytes memory to guarantee allocation success. I
>>>>>>>>>> don't
>>>>>>>>>> think it is worth for such rare corner case.
>>>>>>>>> Indeed, we know exactly how much memory we need for pgtables to map the
>>>>>>>>> linear
>>>>>>>>> map by pte - that's exactly what we are doing today. So we _could_ keep a
>>>>>>>>> cache.
>>>>>>>>> We would still get the benefit of improved performance but we would lose
>>>>>>>>> the
>>>>>>>>> benefit of reduced memory.
>>>>>>>>>
>>>>>>>>> I think we need to solve the vm_reset_perms() problem somehow, before we
>>>>>>>>> can
>>>>>>>>> enable this.
>>>>>>>> Sorry I realise this was not very clear... I am saying I think we need to
>>>>>>>> fix it
>>>>>>>> somehow. A cache would likely work. But I'd prefer to avoid it if we can
>>>>>>>> find a
>>>>>>>> better solution.
>>>>>>> Took a deeper look at vm_reset_perms(). It was introduced by commit
>>>>>>> 868b104d7379 ("mm/vmalloc: Add flag for freeing of special permsissions").
>>>>>>> The
>>>>>>> VM_FLUSH_RESET_PERMS flag is supposed to be set if the vmalloc memory is RO
>>>>>>> and/or ROX. So set_memory_ro() or set_memory_rox() is supposed to follow up
>>>>>>> vmalloc(). So the page table should be already split before reaching vfree().
>>>>>>> I think this is why vm_reset_perms() doesn't check the return value.
>>>>> If vm_reset_perms() is assuming it can't/won't fail, I think it should at least
>>>>> output a warning if it does?
>>>> It should. Anyway, a warning will be raised if the split fails, so we have some
>>>> mitigation.
>>>>
>>>>>>> I scrutinized all the callsites with VM_FLUSH_RESET_PERMS flag set.
>>>>> Just checking; I think you made a comment before about there only being a few
>>>>> sites that set VM_FLUSH_RESET_PERMS. But one of them is the helper,
>>>>> set_vm_flush_reset_perms(). So just making sure you also followed to the places
>>>>> that use that helper?
>>>> Yes, I did.
>>>>
>>>>>>> Most
>>>>>>> of them have set_memory_ro() or set_memory_rox() called afterwards.
>>>>> And are all callsites calling set_memory_*() for the entire cell that was
>>>>> allocated by vmalloc? If there are cases where it only calls that for a portion
>>>>> of it, then it's not guaranteed that the memory is correctly split.
>>>> Yes, all callsites call set_memory_*() for the entire range.
>>>>
>>>>>>> But there are 3
>>>>>>> places I don't see set_memory_ro()/set_memory_rox() is called.
>>>>>>>
>>>>>>> 1. BPF trampoline allocation. The BPF trampoline calls
>>>>>>> arch_protect_bpf_trampoline(). The generic implementation does call
>>>>>>> set_memory_rox(). But the x86 and arm64 implementation just simply return 0.
>>>>>>> For x86, it is because execmem cache is used and it does call
>>>>>>> set_memory_rox(). ARM64 doesn't need to split page table before this series,
>>>>>>> so it should never fail. I think we just need to use the generic
>>>>>>> implementation (remove arm64 implementation) if this series is merged.
>>>>> I know zero about BPF. But it looks like the allocation happens in
>>>>> arch_alloc_bpf_trampoline(), which for arm64, calls bpf_prog_pack_alloc(). And
>>>>> for small sizes, it grabs some memory from a "pack". So doesn't this mean that
>>>>> you are calling set_memory_rox() for a sub-region of the cell, so that doesn't
>>>>> actually help at vm_reset_perms()-time?
>>>> Took a deeper look at bpf pack allocator. The "pack" is allocated by
>>>> alloc_new_pack(), which does:
>>>> bpf_jit_alloc_exec()
>>>> set_vm_flush_reset_perms()
>>>> set_memory_rox()
>>>>
>>>> If the size is greater than the pack size, it calls:
>>>> bpf_jit_alloc_exec()
>>>> set_vm_flush_reset_perms()
>>>> set_memory_rox()
>>>>
>>>> So it looks like bpf trampoline is good, and we don't need to do anything. It
>>>> should be removed from the list. I didn't look deeply enough at the bpf pack
>>>> allocator in the first place.
>>>>
>>>>>>> 2. BPF dispatcher. It calls execmem_alloc which has VM_FLUSH_RESET_PERMS set.
>>>>>>> But it is used for rw allocation, so VM_FLUSH_RESET_PERMS should be
>>>>>>> unnecessary IIUC. So it doesn't matter even though vm_reset_perms() fails.
>>>>>>>
>>>>>>> 3. kprobe. S390's alloc_insn_page() does call set_memory_rox(), x86 also
>>>>>>> called set_memory_rox() before switching to execmem cache. The execmem cache
>>>>>>> calls set_memory_rox(). I don't know why ARM64 doesn't call it.
>>>>>>>
>>>>>>> So I think we just need to fix #1 and #3 per the above analysis. If this
>>>>>>> analysis looks correct to you guys, I will prepare two patches to fix them.
>>>>> This all seems quite fragile. I find it interesting that vm_reset_perms() is
>>>>> doing break-before-make; it sets the PTEs as invalid, then flushes the TLB,
>>>>> then
>>>>> sets them to default. But for arm64, at least, I think break-before-make is not
>>>>> required. We are only changing the permissions so that can be done on live
>>>>> mappings; essentially change the sequence to; set default, flush TLB.
>>>> Yeah, I agree it is a little bit fragile. I think this is the "contract" for
>>>> vmalloc users. You allocate ROX memory via vmalloc, you are required to call
>>>> set_memory_*(). But there is nothing to guarantee the "contract" is followed.
>>>> But I don't think this is the only case in kernel.
>>>>
>>>>> If we do that, then if the memory was already default, then there is no need to
>>>>> do anything (so no chance of allocation failure). If the memory was not
>>>>> default,
>>>>> then it must have already been split to make it non-default, in which case we
>>>>> can also guarantee that no allocations are required.
>>>>>
>>>>> What am I missing?
>>>> The comment says:
>>>> Set direct map to something invalid so that it won't be cached if there are any
>>>> accesses after the TLB flush, then flush the TLB and reset the direct map
>>>> permissions to the default.
>>>>
>>>> IIUC, it guarantees the direct map can't be cached in TLB after TLB flush from
>>>> _vm_unmap_aliases() by setting them invalid, because the TLB never caches invalid
>>>> entries. Skipping setting the direct map to invalid seems to break this. Or can
>>>> "changing permission on live mappings" on ARM64 achieve the same goal?
>>> Here's my understanding of the intent of the code:
>>>
>>> Let's say we start with some memory that has been mapped RO. Our goal is to
>>> reset the memory back to RW and ensure that no TLB entry remains in the TLB for
>>> the old RO mapping. There are 2 ways to do that:
>>
>>
>>> Approach 1 (used in current code):
>>> 1. set PTE to invalid
>>> 2. invalidate any TLB entry for the VA
>>> 3. set the PTE to RW
>>>
>>> Approach 2:
>>> 1. set the PTE to RW
>>> 2. invalidate any TLB entry for the VA
>> IIUC, the intent of the code is "reset direct map permission *without* leaving a
>> RW+X window". The TLB flush call actually flushes both VA and direct map together.
>> So if this is the intent, approach #2 may have VA with X permission but direct
>> map may be RW in the meantime. It seems to break the intent.
> Ahh! Thanks, it's starting to make more sense now.
>
> Though on first sight it seems a bit mad to me to form a tlb flush range that
> covers all the direct map pages and all the lazy vunmap regions. Is that
> intended to be a perf optimization or something else? It's not clear from the
> history.
I think it should be mainly performance driven. I can't see why two
TLB flushes (for vmap and direct map respectively) wouldn't work,
unless I'm missing something.
>
>
> Could this be split into 2 operations?
>
> 1. unmap the aliases (+ tlbi the aliases).
> 2. set the direct memory back to default (+ tlbi the direct map region).
>
> The only 2 potential problems I can think of are;
>
> - Performance: 2 tlbis instead of 1, but conversely we probably avoid flushing
> a load of TLB entries that we didn't really need to.
The two tlbis should work. But performance is definitely a concern. It
may be hard to quantify how much performance impact is caused by the
over-flush, but multiple TLBIs are definitely not preferred, particularly
on some large-scale machines. We have experienced some scalability issues
with TLBI due to the large core count on Ampere systems.
>
> - Given there is now no lock around the tlbis (currently it's under
> vmap_purge_lock) is there a race where a new alias can appear between steps 1
> and 2? I don't think so, because the memory is allocated to the current mapping
> so how is it going to get re-mapped?
Yes, I agree. I don't think the race is real. The physical pages will
not be freed until vm_reset_perms() is done. The VA may be reallocated,
but it will be mapped to different physical pages.
>
>
> Could this solve it?
I think it could. But the potential performance impact (two TLBIs) is a
real concern.
Anyway, the vmalloc user should call set_memory_*() for any RO/ROX
mapping, and set_memory_*() should split the page table before reaching
vm_reset_perms(), so it should not fail. If set_memory_*() is not called,
it is a bug and should be fixed, like the ARM64 kprobes case.
It is definitely welcome to make this more robust, although the warning
from the split may mitigate it somewhat. But I don't think this should be
a blocker for this series IMHO.
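For reference, the expected pattern is roughly the following (a minimal sketch of
the contract as I understand it, error handling elided):

    void *p = vmalloc(size);

    if (!p)
            return NULL;
    set_vm_flush_reset_perms(p);            /* before any set_memory_*() call     */
    set_memory_rox((unsigned long)p,        /* whole range: this is the call that */
                   size >> PAGE_SHIFT);     /* splits the linear map if needed    */
    /* ... use the memory ... */
    vfree(p);                               /* vm_reset_perms() now only touches
                                             * already-split, page-sized entries  */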
Thanks,
Yang
>
>
>
>> Thanks,
>> Yang
>>
>>> The benefit of approach 1 is that it is guaranteed that it is impossible for
>>> different CPUs to have different translations for the same VA in their
>>> respective TLB. But for approach 2, it's possible that between steps 1 and 2, 1
>>> CPU has a RO entry and another CPU has a RW entry. But that will get fixed once
>>> the TLB is flushed - it's not really an issue.
>>>
>>> (There is probably also an obscure way to end up with 2 TLB entries (one with RO
>>> and one with RW) for the same CPU, but the arm64 architecture permits that as
>>> long as it's only a permission mismatch).
>>>
>>> Anyway, approach 2 is used when changing memory permissions on user mappings, so
>>> I don't see why we can't take the same approach here. That would solve this
>>> whole class of issue for us.
>>>
>>> Thanks,
>>> Ryan
>>>
>>>
>>>> Thanks,
>>>> Yang
>>>>
>>>>> Thanks,
>>>>> Ryan
>>>>>
>>>>>
>>>>>> Tested the below patch with bpftrace kfunc (allocate bpf trampoline) and
>>>>>> kprobes. It seems to work well.
>>>>>>
>>>>>> diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/
>>>>>> kprobes.c
>>>>>> index 0c5d408afd95..c4f8c4750f1e 100644
>>>>>> --- a/arch/arm64/kernel/probes/kprobes.c
>>>>>> +++ b/arch/arm64/kernel/probes/kprobes.c
>>>>>> @@ -10,6 +10,7 @@
>>>>>>
>>>>>> #define pr_fmt(fmt) "kprobes: " fmt
>>>>>>
>>>>>> +#include <linux/execmem.h>
>>>>>> #include <linux/extable.h>
>>>>>> #include <linux/kasan.h>
>>>>>> #include <linux/kernel.h>
>>>>>> @@ -41,6 +42,17 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>>>>>> static void __kprobes
>>>>>> post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs
>>>>>> *);
>>>>>>
>>>>>> +void *alloc_insn_page(void)
>>>>>> +{
>>>>>> + void *page;
>>>>>> +
>>>>>> + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
>>>>>> + if (!page)
>>>>>> + return NULL;
>>>>>> + set_memory_rox((unsigned long)page, 1);
>>>>>> + return page;
>>>>>> +}
>>>>>> +
>>>>>> static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
>>>>>> {
>>>>>> kprobe_opcode_t *addr = p->ainsn.xol_insn;
>>>>>> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
>>>>>> index 52ffe115a8c4..3e301bc2cd66 100644
>>>>>> --- a/arch/arm64/net/bpf_jit_comp.c
>>>>>> +++ b/arch/arm64/net/bpf_jit_comp.c
>>>>>> @@ -2717,11 +2717,6 @@ void arch_free_bpf_trampoline(void *image, unsigned int
>>>>>> size)
>>>>>> bpf_prog_pack_free(image, size);
>>>>>> }
>>>>>>
>>>>>> -int arch_protect_bpf_trampoline(void *image, unsigned int size)
>>>>>> -{
>>>>>> - return 0;
>>>>>> -}
>>>>>> -
>>>>>> int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
>>>>>> void *ro_image_end, const struct
>>>>>> btf_func_model *m,
>>>>>> u32 flags, struct bpf_tramp_links *tlinks,
>>>>>>
>>>>>>
>>>>>>> Thanks,
>>>>>>> Yang
>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Ryan
>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Yang
>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Yang
>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Ryan
>>>>>>>>>>>>
>>>>>>>>>>>>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-09 17:32 ` Yang Shi
@ 2025-09-11 22:03 ` Yang Shi
2025-09-17 16:28 ` Ryan Roberts
0 siblings, 1 reply; 51+ messages in thread
From: Yang Shi @ 2025-09-11 22:03 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, Catalin Marinas, Will Deacon,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
>>> IIUC, the intent of the code is "reset direct map permission
>>> *without* leaving a
>>> RW+X window". The TLB flush call actually flushes both VA and direct
>>> map together.
>>> So if this is the intent, approach #2 may have VA with X permission
>>> but direct
>>> map may be RW in the meantime. It seems to break the intent.
>> Ahh! Thanks, it's starting to make more sense now.
>>
>> Though on first sight it seems a bit mad to me to form a tlb flush
>> range that
>> covers all the direct map pages and all the lazy vunmap regions. Is that
>> intended to be a perf optimization or something else? It's not clear
>> from the
>> history.
>
> I think it should be mainly performance driven. I can't see why two
> TLB flushes (for vmap and direct map respectively) wouldn't work,
> unless I'm missing something.
>
>>
>>
>> Could this be split into 2 operations?
>>
>> 1. unmap the aliases (+ tlbi the aliases).
>> 2. set the direct memory back to default (+ tlbi the direct map region).
>>
>> The only 2 potential problems I can think of are;
>>
>> - Performance: 2 tlbis instead of 1, but conversely we probably
>> avoid flushing
>> a load of TLB entries that we didn't really need to.
>
> The two tlbis should work. But performance is definitely a concern. It
> may be hard to quantify how much performance impact is caused by the
> over-flush, but multiple TLBIs are definitely not preferred, particularly
> on some large-scale machines. We have experienced some scalability issues
> with TLBI due to the large core count on Ampere systems.
>>
>> - Given there is now no lock around the tlbis (currently it's under
>> vmap_purge_lock) is there a race where a new alias can appear between
>> steps 1
>> and 2? I don't think so, because the memory is allocated to the
>> current mapping
>> so how is it going to get re-mapped?
>
> Yes, I agree. I don't think the race is real. The physical pages will
> not be freed until vm_reset_perms() is done. The VA may be
> reallocated, but it will be mapped to different physical pages.
>
>>
>>
>> Could this solve it?
>
> I think it could. But the potential performance impact (two TLBIs) is
> a real concern.
>
> Anyway, the vmalloc user should call set_memory_*() for any RO/ROX
> mapping, and set_memory_*() should split the page table before reaching
> vm_reset_perms(), so it should not fail. If set_memory_*() is not
> called, it is a bug and should be fixed, like the ARM64 kprobes case.
>
> It is definitely welcome to make this more robust, although the warning
> from the split may mitigate it somewhat. But I don't think this should
> be a blocker for this series IMHO.
Hi Ryan & Catalin,
Any more concerns about this? Shall we move forward with v8? We can
include the fix to kprobes in v8 or I can send it separately, either is
fine to me. Hopefully we can make v6.18.
Thanks,
Yang
>
> Thanks,
> Yang
>
>>
>>
>>
>>> Thanks,
>>> Yang
>>>
>>>> The benefit of approach 1 is that it is guaranteed that it is
>>>> impossible for
>>>> different CPUs to have different translations for the same VA in their
>>>> respective TLB. But for approach 2, it's possible that between
>>>> steps 1 and 2, 1
>>>> CPU has a RO entry and another CPU has a RW entry. But that will
>>>> get fixed once
>>>> the TLB is flushed - it's not really an issue.
>>>>
>>>> (There is probably also an obscure way to end up with 2 TLB entries
>>>> (one with RO
>>>> and one with RW) for the same CPU, but the arm64 architecture
>>>> permits that as
>>>> long as it's only a permission mismatch).
>>>>
>>>> Anyway, approach 2 is used when changing memory permissions on user
>>>> mappings, so
>>>> I don't see why we can't take the same approach here. That would
>>>> solve this
>>>> whole class of issue for us.
>>>>
>>>> Thanks,
>>>> Ryan
>>>>
>>>>
>>>>> Thanks,
>>>>> Yang
>>>>>
>>>>>> Thanks,
>>>>>> Ryan
>>>>>>
>>>>>>
>>>>>>> Tested the below patch with bpftrace kfunc (allocate bpf
>>>>>>> trampoline) and
>>>>>>> kprobes. It seems to work well.
>>>>>>>
>>>>>>> diff --git a/arch/arm64/kernel/probes/kprobes.c
>>>>>>> b/arch/arm64/kernel/probes/
>>>>>>> kprobes.c
>>>>>>> index 0c5d408afd95..c4f8c4750f1e 100644
>>>>>>> --- a/arch/arm64/kernel/probes/kprobes.c
>>>>>>> +++ b/arch/arm64/kernel/probes/kprobes.c
>>>>>>> @@ -10,6 +10,7 @@
>>>>>>>
>>>>>>> #define pr_fmt(fmt) "kprobes: " fmt
>>>>>>>
>>>>>>> +#include <linux/execmem.h>
>>>>>>> #include <linux/extable.h>
>>>>>>> #include <linux/kasan.h>
>>>>>>> #include <linux/kernel.h>
>>>>>>> @@ -41,6 +42,17 @@ DEFINE_PER_CPU(struct kprobe_ctlblk,
>>>>>>> kprobe_ctlblk);
>>>>>>> static void __kprobes
>>>>>>> post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *,
>>>>>>> struct pt_regs
>>>>>>> *);
>>>>>>>
>>>>>>> +void *alloc_insn_page(void)
>>>>>>> +{
>>>>>>> + void *page;
>>>>>>> +
>>>>>>> + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
>>>>>>> + if (!page)
>>>>>>> + return NULL;
>>>>>>> + set_memory_rox((unsigned long)page, 1);
>>>>>>> + return page;
>>>>>>> +}
>>>>>>> +
>>>>>>> static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
>>>>>>> {
>>>>>>> kprobe_opcode_t *addr = p->ainsn.xol_insn;
>>>>>>> diff --git a/arch/arm64/net/bpf_jit_comp.c
>>>>>>> b/arch/arm64/net/bpf_jit_comp.c
>>>>>>> index 52ffe115a8c4..3e301bc2cd66 100644
>>>>>>> --- a/arch/arm64/net/bpf_jit_comp.c
>>>>>>> +++ b/arch/arm64/net/bpf_jit_comp.c
>>>>>>> @@ -2717,11 +2717,6 @@ void arch_free_bpf_trampoline(void
>>>>>>> *image, unsigned int
>>>>>>> size)
>>>>>>> bpf_prog_pack_free(image, size);
>>>>>>> }
>>>>>>>
>>>>>>> -int arch_protect_bpf_trampoline(void *image, unsigned int size)
>>>>>>> -{
>>>>>>> - return 0;
>>>>>>> -}
>>>>>>> -
>>>>>>> int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
>>>>>>> void *ro_image,
>>>>>>> void *ro_image_end, const struct
>>>>>>> btf_func_model *m,
>>>>>>> u32 flags, struct
>>>>>>> bpf_tramp_links *tlinks,
>>>>>>>
>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Yang
>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Ryan
>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Yang
>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Yang
>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> Ryan
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-11 22:03 ` Yang Shi
@ 2025-09-17 16:28 ` Ryan Roberts
2025-09-17 17:21 ` Yang Shi
0 siblings, 1 reply; 51+ messages in thread
From: Ryan Roberts @ 2025-09-17 16:28 UTC (permalink / raw)
To: Yang Shi, Dev Jain, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
Hi Yang,
Sorry for the slow reply; I'm just getting back to this...
On 11/09/2025 23:03, Yang Shi wrote:
> Hi Ryan & Catalin,
>
> Any more concerns about this?
I've been trying to convince myself of your assertion that all users that set
VM_FLUSH_RESET_PERMS also call set_memory_*() for the entire range that was
returned by vmalloc. I agree that if that is the contract and everyone is
following it, then there is no problem here.
But I haven't been able to convince myself...
Some examples (these might intersect with examples you previously raised):
1. bpf_dispatcher_change_prog() -> bpf_jit_alloc_exec() -> execmem_alloc() ->
sets VM_FLUSH_RESET_PERMS. But I don't see it calling set_memory_*() for rw_image.
2. module_memory_alloc() -> execmem_alloc_rw() -> execmem_alloc() -> sets
VM_FLUSH_RESET_PERMS (note that execmem_force_rw() is nop for arm64).
set_memory_*() is not called until much later on in module_set_memory(). Another
error in the meantime could cause the memory to be vfreed before that point.
3. When set_vm_flush_reset_perms() is set for the range, it is called before
set_memory_*() which might then fail to split prior to vfree.
But I guess as long as set_memory_*() is never successfully called for a
*sub-range* of the vmalloc'ed region, then for all of the above issues, the
memory must still be RW at vfree-time, so this issue should be benign... I think?
In summary this all looks horribly fragile. But I *think* it works. It would be
good to clean it all up and have some clearly documented rules regardless. But I
think that could be a follow up series.
> Shall we move forward with v8?
Yes; Do you want me to post that or would you prefer to do it? I'm happy to do
it; there are a few other tidy ups in pageattr.c I want to make which I spotted.
> We can include the
> fix to kprobes in v8 or I can send it separately, either is fine to me.
Post it on list, and I'll also incorporate into the series.
> Hopefully we can make v6.18.
It's probably getting a bit late now. Anyway, I'll aim to get v8 out tomorrow or
Friday and we will see what Will thinks.
Thanks,
Ryan
>
> Thanks,
> Yang
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-17 16:28 ` Ryan Roberts
@ 2025-09-17 17:21 ` Yang Shi
2025-09-17 18:58 ` Ryan Roberts
0 siblings, 1 reply; 51+ messages in thread
From: Yang Shi @ 2025-09-17 17:21 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, Catalin Marinas, Will Deacon,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 9/17/25 9:28 AM, Ryan Roberts wrote:
> Hi Yang,
>
> Sorry for the slow reply; I'm just getting back to this...
>
> On 11/09/2025 23:03, Yang Shi wrote:
>> Hi Ryan & Catalin,
>>
>> Any more concerns about this?
> I've been trying to convince myself of your assertion that all users that set
> VM_FLUSH_RESET_PERMS also call set_memory_*() for the entire range that was
> returned by vmalloc. I agree that if that is the contract and everyone is
> following it, then there is no problem here.
>
> But I haven't been able to convince myself...
>
> Some examples (these might intersect with examples you previously raised):
>
> 1. bpf_dispatcher_change_prog() -> bpf_jit_alloc_exec() -> execmem_alloc() ->
> sets VM_FLUSH_RESET_PERMS. But I don't see it calling set_memory_*() for rw_image.
Yes, it doesn't call set_memory_*(). I spotted this in the earlier
email. But it is actually RW, so it should be ok to miss the call. The
later set_direct_map_invalid call in vfree() may fail, but
set_direct_map_default call will set RW permission back. But I think it
doesn't have to use execmem_alloc(), the plain vmalloc() should be good
enough.
>
> 2. module_memory_alloc() -> execmem_alloc_rw() -> execmem_alloc() -> sets
> VM_FLUSH_RESET_PERMS (note that execmem_force_rw() is nop for arm64).
> set_memory_*() is not called until much later on in module_set_memory(). Another
> error in the meantime could cause the memory to be vfreed before that point.
IIUC, execmem_alloc_rw() is used to allocate memory for modules' text
section and data section. The code will set mod->mem[type].is_rox
according to the type of the section. It is true for text, false for
data. Then set_memory_rox() will be called later if it is true *after*
insns are copied to the memory. So it is still RW before that point.
>
> 3. When set_vm_flush_reset_perms() is set for the range, it is called before
> set_memory_*() which might then fail to split prior to vfree.
Yes, all call sites check the return value and bail out if
set_memory_*() failed if I don't miss anything.
>
> But I guess as long as set_memory_*() is never successfully called for a
> *sub-range* of the vmalloc'ed region, then for all of the above issues, the
> memory must still be RW at vfree-time, so this issue should be benign... I think?
Yes, it is true.
>
> In summary this all looks horribly fragile. But I *think* it works. It would be
> good to clean it all up and have some clearly documented rules regardless. But I
> think that could be a follow up series.
Yeah, absolutely agreed.
>
>> Shall we move forward with v8?
> Yes; Do you want me to post that or would you prefer to do it? I'm happy to do
> it; there are a few other tidy ups in pageattr.c I want to make which I spotted.
I actually just had v8 ready in my tree. I removed pageattr_pgd_entry
and pageattr_pud_entry in pageattr.c and fixed pmd_leaf/pud_leaf as you
suggested. Is it the cleanup you are supposed to do? And I also rebased
it on top of Shijie's series
(https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/commit/?id=bfbbb0d3215f)
which has been picked up by Will.
>
>> We can include the
>> fix to kprobes in v8 or I can send it separately, either is fine to me.
> Post it on list, and I'll also incorporate into the series.
I can include it in v8 series.
>
>> Hopefully we can make v6.18.
> It's probably getting a bit late now. Anyway, I'll aim to get v8 out tomorrow or
> Friday and we will see what Will thinks.
Thank you. I can post v8 today.
Thanks,
Yang
>
> Thanks,
> Ryan
>
>> Thanks,
>> Yang
>>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-17 17:21 ` Yang Shi
@ 2025-09-17 18:58 ` Ryan Roberts
2025-09-17 19:15 ` Yang Shi
0 siblings, 1 reply; 51+ messages in thread
From: Ryan Roberts @ 2025-09-17 18:58 UTC (permalink / raw)
To: Yang Shi, Dev Jain, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 17/09/2025 18:21, Yang Shi wrote:
>
>
> On 9/17/25 9:28 AM, Ryan Roberts wrote:
>> Hi Yang,
>>
>> Sorry for the slow reply; I'm just getting back to this...
>>
>> On 11/09/2025 23:03, Yang Shi wrote:
>>> Hi Ryan & Catalin,
>>>
>>> Any more concerns about this?
>> I've been trying to convince myself of your assertion that all users that set
>> VM_FLUSH_RESET_PERMS also call set_memory_*() for the entire range that was
>> returned by vmalloc. I agree that if that is the contract and everyone is
>> following it, then there is no problem here.
>>
>> But I haven't been able to convince myself...
>>
>> Some examples (these might intersect with examples you previously raised):
>>
>> 1. bpf_dispatcher_change_prog() -> bpf_jit_alloc_exec() -> execmem_alloc() ->
>> sets VM_FLUSH_RESET_PERMS. But I don't see it calling set_memory_*() for
>> rw_image.
>
> Yes, it doesn't call set_memory_*(). I spotted this in the earlier email. But it
> is actually RW, so it should be ok to miss the call. The later
> set_direct_map_invalid call in vfree() may fail, but set_direct_map_default call
> will set RW permission back. But I think it doesn't have to use execmem_alloc(),
> the plain vmalloc() should be good enough.
>
>>
>> 2. module_memory_alloc() -> execmem_alloc_rw() -> execmem_alloc() -> sets
>> VM_FLUSH_RESET_PERMS (note that execmem_force_rw() is nop for arm64).
>> set_memory_*() is not called until much later on in module_set_memory(). Another
>> error in the meantime could cause the memory to be vfreed before that point.
>
> IIUC, execmem_alloc_rw() is used to allocate memory for modules' text section
> and data section. The code will set mod->mem[type].is_rox according to the type
> of the section. It is true for text, false for data. Then set_memory_rox() will
> be called later if it is true *after* insns are copied to the memory. So it is
> still RW before that point.
>
>>
>> 3. When set_vm_flush_reset_perms() is set for the range, it is called before
>> set_memory_*() which might then fail to split prior to vfree.
>
> Yes, all call sites check the return value and bail out if set_memory_*() failed
> if I don't miss anything.
>
>>
>> But I guess as long as set_memory_*() is never successfully called for a
>> *sub-range* of the vmalloc'ed region, then for all of the above issues, the
>> memory must still be RW at vfree-time, so this issue should be benign... I think?
>
> Yes, it is true.
So to summarise, all freshly vmalloc'ed memory starts as RW. set_memory_*() may
only be called if VM_FLUSH_RESET_PERMS has already been set. If set_memory_*()
is called at all, the first call MUST be for the whole range.
If those requirements are all met, then if VM_FLUSH_RESET_PERMS was set but
set_memory_*() was never called, the worst that can happen is for both the
set_direct_map_invalid() and set_direct_map_default() calls to fail due to not
enough memory. But that is safe because the memory was always RW. If
set_memory_*() was called for the whole range and failed, it's the same as if it
was never called. If it was called for the whole range and succeeded, then the
split must have happened already and set_direct_map_invalid() and
set_direct_map_default() will therefore definitely succeed.
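For reference, my reading of the free-time sequence (paraphrased from mm/vmalloc.c
from memory, not quoted verbatim):

    /* vm_reset_perms(), in essence: */
    set_area_direct_map(area, set_direct_map_invalid_noflush); /* can fail: may need a split */
    _vm_unmap_aliases(start, end, flush_dmap);                 /* one TLBI: aliases + dmap   */
    set_area_direct_map(area, set_direct_map_default_noflush); /* can fail: may need a split */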
The only way this could be a problem is if someone vmallocs a range then
performs a set_memory_*() on a sub-region without having first done it for the
whole region. But we have not found any evidence that there are any users that
do that.
In fact, by that logic, I think alloc_insn_page() must also be safe; it only
allocates 1 page, so if set_memory_*() is subsequently called for it, it must by
definition be covering the whole allocation; 1 page is the smallest amount that
can be protected.
So I agree we are safe.
>
>>
>> In summary this all looks horribly fragile. But I *think* it works. It would be
>> good to clean it all up and have some clearly documented rules regardless. But I
>> think that could be a follow up series.
>
> Yeah, absolutely agreed.
>
>>
>>> Shall we move forward with v8?
>> Yes; Do you want me to post that or would you prefer to do it? I'm happy to do
>> it; there are a few other tidy ups in pageattr.c I want to make which I spotted.
>
> I actually just had v8 ready in my tree. I removed pageattr_pgd_entry and
> pageattr_pud_entry in pageattr.c and fixed pmd_leaf/pud_leaf as you suggested.
> Is it the cleanup you are supposed to do?
I was also going to fix up the comment in change_memory_common() which is now stale.
> And I also rebased it on top of
> Shijie's series (https://git.kernel.org/pub/scm/linux/kernel/git/arm64/
> linux.git/commit/?id=bfbbb0d3215f) which has been picked up by Will.
>
>>
>>> We can include the
>>> fix to kprobes in v8 or I can send it separately, either is fine to me.
>> Post it on list, and I'll also incorporate into the series.
>
> I can include it in v8 series.
>
>>
>>> Hopefully we can make v6.18.
>> It's probably getting a bit late now. Anyway, I'll aim to get v8 out tomorrow or
>> Friday and we will see what Will thinks.
>
> Thank you. I can post v8 today.
OK great - I'll leave it all to you then - thanks!
>
> Thanks,
> Yang
>
>>
>> Thanks,
>> Ryan
>>
>>> Thanks,
>>> Yang
>>>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-17 18:58 ` Ryan Roberts
@ 2025-09-17 19:15 ` Yang Shi
2025-09-17 19:40 ` Ryan Roberts
0 siblings, 1 reply; 51+ messages in thread
From: Yang Shi @ 2025-09-17 19:15 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, Catalin Marinas, Will Deacon,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 9/17/25 11:58 AM, Ryan Roberts wrote:
> On 17/09/2025 18:21, Yang Shi wrote:
>>
>> On 9/17/25 9:28 AM, Ryan Roberts wrote:
>>> Hi Yang,
>>>
>>> Sorry for the slow reply; I'm just getting back to this...
>>>
>>> On 11/09/2025 23:03, Yang Shi wrote:
>>>> Hi Ryan & Catalin,
>>>>
>>>> Any more concerns about this?
>>> I've been trying to convince myself of your assertion that all users that set
>>> VM_FLUSH_RESET_PERMS also call set_memory_*() for the entire range that was
>>> returned by vmalloc. I agree that if that is the contract and everyone is
>>> following it, then there is no problem here.
>>>
>>> But I haven't been able to convince myself...
>>>
>>> Some examples (these might intersect with examples you previously raised):
>>>
>>> 1. bpf_dispatcher_change_prog() -> bpf_jit_alloc_exec() -> execmem_alloc() ->
>>> sets VM_FLUSH_RESET_PERMS. But I don't see it calling set_memory_*() for
>>> rw_image.
>> Yes, it doesn't call set_memory_*(). I spotted this in the earlier email. But it
>> is actually RW, so it should be ok to miss the call. The later
>> set_direct_map_invalid call in vfree() may fail, but set_direct_map_default call
>> will set RW permission back. But I think it doesn't have to use execmem_alloc(),
>> the plain vmalloc() should be good enough.
>>
>>> 2. module_memory_alloc() -> execmem_alloc_rw() -> execmem_alloc() -> sets
>>> VM_FLUSH_RESET_PERMS (note that execmem_force_rw() is nop for arm64).
>>> set_memory_*() is not called until much later on in module_set_memory(). Another
>>> error in the meantime could cause the memory to be vfreed before that point.
>> IIUC, execmem_alloc_rw() is used to allocate memory for modules' text section
>> and data section. The code will set mod->mem[type].is_rox according to the type
>> of the section. It is true for text, false for data. Then set_memory_rox() will
>> be called later if it is true *after* insns are copied to the memory. So it is
>> still RW before that point.
>>
>>> 3. When set_vm_flush_reset_perms() is set for the range, it is called before
>>> set_memory_*() which might then fail to split prior to vfree.
>> Yes, all call sites check the return value and bail out if set_memory_*() failed
>> if I don't miss anything.
>>
>>> But I guess as long as set_memory_*() is never successfully called for a
>>> *sub-range* of the vmalloc'ed region, then for all of the above issues, the
>>> memory must still be RW at vfree-time, so this issue should be benign... I think?
>> Yes, it is true.
> So to summarise, all freshly vmalloc'ed memory starts as RW. set_memory_*() may
> only be called if VM_FLUSH_RESET_PERMS has already been set. If set_memory_*()
> is called at all, the first call MUST be for the whole range.
Whether the default permission is RW or not depends on the type passed
in to execmem_alloc(). It is defined by execmem_info in
arch/arm64/mm/init.c. For ARM64, module and BPF have PAGE_KERNEL
permission (RW) by default, but kprobes is PAGE_KERNEL_ROX (ROX).
> If those requirements are all met, then if VM_FLUSH_RESET_PERMS was set but
> set_memory_*() was never called, the worst that can happen is for both the
> set_direct_map_invalid() and set_direct_map_default() calls to fail due to not
> enough memory. But that is safe because the memory was always RW. If
> set_memory_*() was called for the whole range and failed, it's the same as if it
> was never called. If it was called for the whole range and succeeded, then the
> split must have happened already and set_direct_map_invalid() and
> set_direct_map_default() will therefore definitely succeed.
>
> The only way this could be a problem is if someone vmallocs a range then
> performs a set_memory_*() on a sub-region without having first done it for the
> whole region. But we have not found any evidence that there are any users that
> do that.
Yes, exactly.
>
> In fact, by that logic, I think alloc_insn_page() must also be safe; it only
> allocates 1 page, so if set_memory_*() is subsequently called for it, it must by
> definition be covering the whole allocation; 1 page is the smallest amount that
> can be protected.
Yes, but kprobes default permission is ROX.
>
> So I agree we are safe.
>
>
>>> In summary this all looks horribly fragile. But I *think* it works. It would be
>>> good to clean it all up and have some clearly documented rules regardless. But I
>>> think that could be a follow up series.
>> Yeah, absolutely agreed.
>>
>>>> Shall we move forward with v8?
>>> Yes; Do you want me to post that or would you prefer to do it? I'm happy to do
>>> it; there are a few other tidy ups in pageattr.c I want to make which I spotted.
>> I actually just had v8 ready in my tree. I removed pageattr_pgd_entry and
>> pageattr_pud_entry in pageattr.c and fixed pmd_leaf/pud_leaf as you suggested.
>> Is it the cleanup you are supposed to do?
> I was also going to fix up the comment in change_memory_common() which is now stale.
Oops, I missed that in my v8. Please just comment for v8, I can fix it
up later.
Thanks,
Yang
>
>> And I also rebased it on top of
>> Shijie's series (https://git.kernel.org/pub/scm/linux/kernel/git/arm64/
>> linux.git/commit/?id=bfbbb0d3215f) which has been picked up by Will.
>>
>>>> We can include the
>>>> fix to kprobes in v8 or I can send it separately, either is fine to me.
>>> Post it on list, and I'll also incorporate into the series.
>> I can include it in v8 series.
>>
>>>> Hopefully we can make v6.18.
>>> It's probably getting a bit late now. Anyway, I'll aim to get v8 out tomorrow or
>>> Friday and we will see what Will thinks.
>> Thank you. I can post v8 today.
> OK great - I'll leave it all to you then - thanks!
>
>> Thanks,
>> Yang
>>
>>> Thanks,
>>> Ryan
>>>
>>>> Thanks,
>>>> Yang
>>>>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-17 19:15 ` Yang Shi
@ 2025-09-17 19:40 ` Ryan Roberts
2025-09-17 19:59 ` Yang Shi
0 siblings, 1 reply; 51+ messages in thread
From: Ryan Roberts @ 2025-09-17 19:40 UTC (permalink / raw)
To: Yang Shi, Dev Jain, Catalin Marinas, Will Deacon, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel, scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 17/09/2025 20:15, Yang Shi wrote:
>
>
> On 9/17/25 11:58 AM, Ryan Roberts wrote:
>> On 17/09/2025 18:21, Yang Shi wrote:
>>>
>>> On 9/17/25 9:28 AM, Ryan Roberts wrote:
>>>> Hi Yang,
>>>>
>>>> Sorry for the slow reply; I'm just getting back to this...
>>>>
>>>> On 11/09/2025 23:03, Yang Shi wrote:
>>>>> Hi Ryan & Catalin,
>>>>>
>>>>> Any more concerns about this?
>>>> I've been trying to convince myself of your assertion that all users that set
>>>> VM_FLUSH_RESET_PERMS also call set_memory_*() for the entire range that was
>>>> returned by vmalloc. I agree that if that is the contract and everyone is
>>>> following it, then there is no problem here.
>>>>
>>>> But I haven't been able to convince myself...
>>>>
>>>> Some examples (these might intersect with examples you previously raised):
>>>>
>>>> 1. bpf_dispatcher_change_prog() -> bpf_jit_alloc_exec() -> execmem_alloc() ->
>>>> sets VM_FLUSH_RESET_PERMS. But I don't see it calling set_memory_*() for
>>>> rw_image.
>>> Yes, it doesn't call set_memory_*(). I spotted this in the earlier email. But it
>>> is actually RW, so it should be ok to miss the call. The later
>>> set_direct_map_invalid call in vfree() may fail, but set_direct_map_default call
>>> will set RW permission back. But I think it doesn't have to use execmem_alloc(),
>>> the plain vmalloc() should be good enough.
>>>
>>>> 2. module_memory_alloc() -> execmem_alloc_rw() -> execmem_alloc() -> sets
>>>> VM_FLUSH_RESET_PERMS (note that execmem_force_rw() is nop for arm64).
>>>> set_memory_*() is not called until much later on in module_set_memory().
>>>> Another
>>>> error in the meantime could cause the memory to be vfreed before that point.
>>> IIUC, execmem_alloc_rw() is used to allocate memory for modules' text section
>>> and data section. The code will set mod->mem[type].is_rox according to the type
>>> of the section. It is true for text, false for data. Then set_memory_rox() will
>>> be called later if it is true *after* insns are copied to the memory. So it is
>>> still RW before that point.
>>>
>>>> 3. When set_vm_flush_reset_perms() is set for the range, it is called before
>>>> set_memory_*() which might then fail to split prior to vfree.
>>> Yes, all call sites check the return value and bail out if set_memory_*() failed
>>> if I don't miss anything.
>>>
>>>> But I guess as long as set_memory_*() is never successfully called for a
>>>> *sub-range* of the vmalloc'ed region, then for all of the above issues, the
>>>> memory must still be RW at vfree-time, so this issue should be benign... I
>>>> think?
>>> Yes, it is true.
>> So to summarise, all freshly vmalloc'ed memory starts as RW. set_memory_*() may
>> only be called if VM_FLUSH_RESET_PERMS has already been set. If set_memory_*()
>> is called at all, the first call MUST be for the whole range.
>
> Whether the default permission is RW or not depends on the type passed in to
> execmem_alloc(). It is defined by execmem_info in arch/arm64/mm/init.c. For
> ARM64, module and BPF have PAGE_KERNEL permission (RW) by default, but kprobes
> is PAGE_KERNEL_ROX (ROX).
Perhaps I missed it, but as far as I could tell the prot that the arch sets for
the type only determines the prot that is set for the vmalloc map. It doesn't
look like the linear map is modified at all... which feels like a bug to me
since the linear map will be RW while the vmalloc map will be ROX... I guess I
must have missed something...
>
>> If those requirements are all met, then if VM_FLUSH_RESET_PERMS was set but
>> set_memory_*() was never called, the worst that can happen is for both the
>> set_direct_map_invalid() and set_direct_map_default() calls to fail due to not
>> enough memory. But that is safe because the memory was always RW. If
>> set_memory_*() was called for the whole range and failed, it's the same as if it
>> was never called. If it was called for the whole range and succeeded, then the
>> split must have happened already and set_direct_map_invalid() and
>> set_direct_map_default() will therefore definitely succeed.
>>
>> The only way this could be a problem is if someone vmallocs a range then
>> performs a set_memory_*() on a sub-region without having first done it for the
>> whole region. But we have not found any evidence that there are any users that
>> do that.
>
> Yes, exactly.
>
>>
>> In fact, by that logic, I think alloc_insn_page() must also be safe; it only
>> allocates 1 page, so if set_memory_*() is subsequently called for it, it must by
>> definition be covering the whole allocation; 1 page is the smallest amount that
>> can be protected.
>
> Yes, but kprobes default permission is ROX.
>
>>
>> So I agree we are safe.
>>
>>
>>>> In summary this all looks horribly fragile. But I *think* it works. It would be
>>>> good to clean it all up and have some clearly documented rules regardless.
>>>> But I
>>>> think that could be a follow up series.
>>> Yeah, absolutely agreed.
>>>
>>>>> Shall we move forward with v8?
>>>> Yes; Do you want me to post that or would you prefer to do it? I'm happy to do
>>>> it; there are a few other tidy ups in pageattr.c I want to make which I
>>>> spotted.
>>> I actually just had v8 ready in my tree. I removed pageattr_pgd_entry and
>>> pageattr_pud_entry in pageattr.c and fixed pmd_leaf/pud_leaf as you suggested.
>>> Is it the cleanup you are supposed to do?
>> I was also going to fix up the comment in change_memory_common() which is now
>> stale.
>
> Oops, I missed that in my v8. Please just comment for v8, I can fix it up later.
Ahh no biggy. If there is a chance Will will take the series, let's not hold it
up for a comment.
>
> Thanks,
> Yang
>
>
>>
>>> And I also rebased it on top of
>>> Shijie's series (https://git.kernel.org/pub/scm/linux/kernel/git/arm64/
>>> linux.git/commit/?id=bfbbb0d3215f) which has been picked up by Will.
>>>
>>>>> We can include the
>>>>> fix to kprobes in v8 or I can send it separately, either is fine to me.
>>>> Post it on list, and I'll also incorporate into the series.
>>> I can include it in v8 series.
>>>
>>>>> Hopefully we can make v6.18.
>>>> It's probably getting a bit late now. Anyway, I'll aim to get v8 out
>>>> tomorrow or
>>>> Friday and we will see what Will thinks.
>>> Thank you. I can post v8 today.
>> OK great - I'll leave it all to you then - thanks!
>>
>>> Thanks,
>>> Yang
>>>
>>>> Thanks,
>>>> Ryan
>>>>
>>>>> Thanks,
>>>>> Yang
>>>>>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-17 19:40 ` Ryan Roberts
@ 2025-09-17 19:59 ` Yang Shi
0 siblings, 0 replies; 51+ messages in thread
From: Yang Shi @ 2025-09-17 19:59 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, Catalin Marinas, Will Deacon,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
On 9/17/25 12:40 PM, Ryan Roberts wrote:
> On 17/09/2025 20:15, Yang Shi wrote:
>>
>> On 9/17/25 11:58 AM, Ryan Roberts wrote:
>>> On 17/09/2025 18:21, Yang Shi wrote:
>>>> On 9/17/25 9:28 AM, Ryan Roberts wrote:
>>>>> Hi Yang,
>>>>>
>>>>> Sorry for the slow reply; I'm just getting back to this...
>>>>>
>>>>> On 11/09/2025 23:03, Yang Shi wrote:
>>>>>> Hi Ryan & Catalin,
>>>>>>
>>>>>> Any more concerns about this?
>>>>> I've been trying to convince myself of your assertion that all users that set
>>>>> VM_FLUSH_RESET_PERMS also call set_memory_*() for the entire range that was
>>>>> returned by vmalloc. I agree that if that is the contract and everyone is
>>>>> following it, then there is no problem here.
>>>>>
>>>>> But I haven't been able to convince myself...
>>>>>
>>>>> Some examples (these might intersect with examples you previously raised):
>>>>>
>>>>> 1. bpf_dispatcher_change_prog() -> bpf_jit_alloc_exec() -> execmem_alloc() ->
>>>>> sets VM_FLUSH_RESET_PERMS. But I don't see it calling set_memory_*() for
>>>>> rw_image.
>>>> Yes, it doesn't call set_memory_*(). I spotted this in the earlier email. But it
>>>> is actually RW, so it should be ok to miss the call. The later
>>>> set_direct_map_invalid call in vfree() may fail, but set_direct_map_default call
>>>> will set RW permission back. But I think it doesn't have to use execmem_alloc(),
>>>> the plain vmalloc() should be good enough.
>>>>
>>>>> 2. module_memory_alloc() -> execmem_alloc_rw() -> execmem_alloc() -> sets
>>>>> VM_FLUSH_RESET_PERMS (note that execmem_force_rw() is nop for arm64).
>>>>> set_memory_*() is not called until much later on in module_set_memory().
>>>>> Another
>>>>> error in the meantime could cause the memory to be vfreed before that point.
>>>> IIUC, execmem_alloc_rw() is used to allocate memory for modules' text section
>>>> and data section. The code will set mod->mem[type].is_rox according to the type
>>>> of the section. It is true for text, false for data. Then set_memory_rox() will
>>>> be called later if it is true *after* insns are copied to the memory. So it is
>>>> still RW before that point.
>>>>
>>>>> 3. When set_vm_flush_reset_perms() is set for the range, it is called before
>>>>> set_memory_*() which might then fail to split prior to vfree.
>>>> Yes, all call sites check the return value and bail out if set_memory_*() failed
>>>> if I don't miss anything.
>>>>
>>>>> But I guess as long as set_memory_*() is never successfully called for a
>>>>> *sub-range* of the vmalloc'ed region, then for all of the above issues, the
>>>>> memory must still be RW at vfree-time, so this issue should be benign... I
>>>>> think?
>>>> Yes, it is true.
>>> So to summarise, all freshly vmalloc'ed memory starts as RW. set_memory_*() may
>>> only be called if VM_FLUSH_RESET_PERMS has already been set. If set_memory_*()
>>> is called at all, the first call MUST be for the whole range.
>> Whether the default permission is RW or not depends on the type passed in to
>> execmem_alloc(). It is defined by execmem_info in arch/arm64/mm/init.c. For
>> ARM64, module and BPF have PAGE_KERNEL permission (RW) by default, but kprobes
>> is PAGE_KERNEL_ROX (ROX).
> Perhaps I missed it, but as far as I could tell the prot that the arch sets for
> the type only determines the prot that is set for the vmalloc map. It doesn't
> look like the linear map is modified at all... which feels like a bug to me
> since the linear map will be RW while the vmalloc map will be ROX... I guess I
> must have missed something...
Yes, it just sets the permission for the vmalloc area. set_memory_*()
must be called to change the permission of the direct map.
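In other words (illustrative only, reusing the kprobes example from earlier in the
thread):

    p = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);  /* vmalloc alias: ROX, linear map: still RW */
    set_memory_rox((unsigned long)p, 1);            /* also restricts (and, if needed, splits)
                                                     * the linear map alias under rodata=full   */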
>
>>> If those requirements are all met, then if VM_FLUSH_RESET_PERMS was set but
>>> set_memory_*() was never called, the worst that can happen is for both the
>>> set_direct_map_invalid() and set_direct_map_default() calls to fail due to not
>>> enough memory. But that is safe because the memory was always RW. If
>>> set_memory_*() was called for the whole range and failed, it's the same as if it
>>> was never called. If it was called for the whole range and succeeded, then the
>>> split must have happened already and set_direct_map_invalid() and
>>> set_direct_map_default() will therefore definitely succeed.
>>>
>>> The only way this could be a problem is if someone vmallocs a range then
>>> performs a set_memory_*() on a sub-region without having first done it for the
>>> whole region. But we have not found any evidence that there are any users that
>>> do that.
>> Yes, exactly.
>>
>>> In fact, by that logic, I think alloc_insn_page() must also be safe; it only
>>> allocates 1 page, so if set_memory_*() is subsequently called for it, it must by
>>> definition be covering the whole allocation; 1 page is the smallest amount that
>>> can be protected.
>> Yes, but kprobes default permission is ROX.
>>
>>> So I agree we are safe.
>>>
>>>
>>>>> In summary this all looks horribly fragile. But I *think* it works. It would be
>>>>> good to clean it all up and have some clearly documented rules regardless.
>>>>> But I
>>>>> think that could be a follow up series.
>>>> Yeah, absolutely agreed.
>>>>
>>>>>> Shall we move forward with v8?
>>>>> Yes; Do you want me to post that or would you prefer to do it? I'm happy to do
>>>>> it; there are a few other tidy ups in pageattr.c I want to make which I
>>>>> spotted.
>>>> I actually just had v8 ready in my tree. I removed pageattr_pgd_entry and
>>>> pageattr_pud_entry in pageattr.c and fixed pmd_leaf/pud_leaf as you suggested.
>>>> Is it the cleanup you are supposed to do?
>>> I was also going to fix up the comment in change_memory_common() which is now
>>> stale.
>> Oops, I missed that in my v8. Please just comment for v8, I can fix it up later.
> Ahh no biggy. If there is a chance Will will take the series, let's not hold it
> up for a comment.
Yeah, sure, thank you.
Yang
>
>> Thanks,
>> Yang
>>
>>
>>>> And I also rebased it on top of
>>>> Shijie's series (https://git.kernel.org/pub/scm/linux/kernel/git/arm64/
>>>> linux.git/commit/?id=bfbbb0d3215f) which has been picked up by Will.
>>>>
>>>>>> We can include the
>>>>>> fix to kprobes in v8 or I can send it separately, either is fine to me.
>>>>> Post it on list, and I'll also incorporate into the series.
>>>> I can include it in v8 series.
>>>>
>>>>>> Hopefully we can make v6.18.
>>>>> It's probably getting a bit late now. Anyway, I'll aim to get v8 out
>>>>> tomorrow or
>>>>> Friday and we will see what Will thinks.
>>>> Thank you. I can post v8 today.
>>> OK great - I'll leave it all to you then - thanks!
>>>
>>>> Thanks,
>>>> Yang
>>>>
>>>>> Thanks,
>>>>> Ryan
>>>>>
>>>>>> Thanks,
>>>>>> Yang
>>>>>>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-04 17:47 ` Yang Shi
2025-09-04 21:49 ` Yang Shi
@ 2025-09-16 23:44 ` Yang Shi
1 sibling, 0 replies; 51+ messages in thread
From: Yang Shi @ 2025-09-16 23:44 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, Catalin Marinas, Will Deacon,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Ard Biesheuvel,
scott, cl
Cc: linux-arm-kernel, linux-kernel, linux-mm
>
>
> 3. kprobe. S390's alloc_insn_page() does call set_memory_rox(), x86
> also called set_memory_rox() before switching to execmem cache. The
> execmem cache calls set_memory_rox(). I don't know why ARM64 doesn't
> call it.
When I was trying to find the proper Fixes tag for this, I happened to
figure out why ARM64 doesn't call it. ARM64 actually called
set_memory_ro() before commit 10d5e97c1bf8 ("arm64: use PAGE_KERNEL_ROX
directly in alloc_insn_page"), but that commit removed it. It seems the
author and reviewers overlooked that set_memory_ro() also changes the
direct map permission. So I believe adding set_memory_rox() is the right fix.
Thanks,
Yang
>
> So I think we just need to fix #1 and #3 per the above analysis. If
> this analysis looks correct to you guys, I will prepare two patches to
> fix them.
>
> Thanks,
> Yang
>
>>
>>
>>> Thanks,
>>> Ryan
>>>
>>>> Thanks,
>>>> Yang
>>>>
>>>>> Thanks,
>>>>> Yang
>>>>>
>>>>>> Thanks,
>>>>>> Ryan
>>>>>>
>>>>>>
>
^ permalink raw reply [flat|nested] 51+ messages in thread