* [PATCH 0/2] Two minor fixes for BBML2_NOABORT
@ 2025-10-13 23:27 Yang Shi
2025-10-13 23:27 ` [PATCH 1/2] arm64: mm: make linear mapping permission update more robust for partial range Yang Shi
2025-10-13 23:27 ` [PATCH 2/2] arm64: mm: relax VM_ALLOW_HUGE_VMAP if BBML2_NOABORT is supported Yang Shi
0 siblings, 2 replies; 14+ messages in thread
From: Yang Shi @ 2025-10-13 23:27 UTC (permalink / raw)
To: ryan.roberts, dev.jain, cl, catalin.marinas, will
Cc: yang, linux-arm-kernel, linux-kernel
Yang Shi (2):
arm64: mm: make linear mapping permission update more robust for partial range
arm64: mm: relax VM_ALLOW_HUGE_VMAP if BBML2_NOABORT is supported
arch/arm64/mm/pageattr.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 1/2] arm64: mm: make linear mapping permission update more robust for partial range
2025-10-13 23:27 [PATCH 0/2] Two minor fixes for BBML2_NOABORT Yang Shi
@ 2025-10-13 23:27 ` Yang Shi
2025-10-14 8:05 ` Ryan Roberts
2025-10-15 6:46 ` Dev Jain
2025-10-13 23:27 ` [PATCH 2/2] arm64: mm: relax VM_ALLOW_HUGE_VMAP if BBML2_NOABORT is supported Yang Shi
1 sibling, 2 replies; 14+ messages in thread
From: Yang Shi @ 2025-10-13 23:27 UTC (permalink / raw)
To: ryan.roberts, dev.jain, cl, catalin.marinas, will
Cc: yang, linux-arm-kernel, linux-kernel
Commit fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing
permissions for vmalloc_huge mappings") made permission updates for
partial ranges more robust. But the linear mapping permission update
still assumes the whole range is updated, iterating from the first page
all the way to the last page of the area.
Make it more robust by updating the linear mapping permissions starting
from the page mapped by the start address and walking only numpages pages.
Fixes: fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing permissions for vmalloc_huge mappings")
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
arch/arm64/mm/pageattr.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 5135f2d66958..c21a2c319028 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,7 +148,6 @@ static int change_memory_common(unsigned long addr, int numpages,
unsigned long size = PAGE_SIZE * numpages;
unsigned long end = start + size;
struct vm_struct *area;
- int i;
if (!PAGE_ALIGNED(addr)) {
start &= PAGE_MASK;
@@ -184,8 +183,9 @@ static int change_memory_common(unsigned long addr, int numpages,
*/
if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
pgprot_val(clear_mask) == PTE_RDONLY)) {
- for (i = 0; i < area->nr_pages; i++) {
- __change_memory_common((u64)page_address(area->pages[i]),
+ unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
+ for (int i = 0; i < numpages; i++) {
+ __change_memory_common((u64)page_address(area->pages[idx++]),
PAGE_SIZE, set_mask, clear_mask);
}
}
--
2.47.0
^ permalink raw reply related [flat|nested] 14+ messages in thread
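The arithmetic at the heart of this patch is easy to check in isolation: the
index of the first backing page touched is the page offset of start within
the area, and exactly numpages entries are walked from there. A minimal
standalone sketch, using simplified stand-in types rather than the real
kernel structures:

    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)

    /* Hypothetical stand-in for the vm_struct fields the patch touches. */
    struct fake_area {
            unsigned long addr;     /* base VA of the area */
            unsigned int nr_pages;  /* pages backing the whole area */
    };

    /* Walk only the pages backing [start, start + numpages * PAGE_SIZE). */
    static void walk_partial(const struct fake_area *area,
                             unsigned long start, int numpages)
    {
            unsigned long idx = (start - area->addr) >> PAGE_SHIFT;

            for (int i = 0; i < numpages; i++)
                    printf("update backing page %lu\n", idx++);
    }

    int main(void)
    {
            struct fake_area area = { .addr = 0x10000000UL, .nr_pages = 8 };

            /* Partial update covering pages 2..4 of the 8-page area. */
            walk_partial(&area, area.addr + 2 * PAGE_SIZE, 3);
            return 0;
    }

Before the fix, the equivalent loop iterated 0..nr_pages-1 and ignored both
start and numpages.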
* [PATCH 2/2] arm64: mm: relax VM_ALLOW_HUGE_VMAP if BBML2_NOABORT is supported
2025-10-13 23:27 [PATCH 0/2] Two minor fixes for BBML2_NOABORT Yang Shi
2025-10-13 23:27 ` [PATCH 1/2] arm64: mm: make linear mapping permission update more robust for partial range Yang Shi
@ 2025-10-13 23:27 ` Yang Shi
2025-10-14 8:08 ` Ryan Roberts
2025-10-15 6:50 ` Dev Jain
1 sibling, 2 replies; 14+ messages in thread
From: Yang Shi @ 2025-10-13 23:27 UTC (permalink / raw)
To: ryan.roberts, dev.jain, cl, catalin.marinas, will
Cc: yang, linux-arm-kernel, linux-kernel
When changing permissions for a vmalloc area, VM_ALLOW_HUGE_VMAP areas are
excluded because the kernel can't split the VA mapping if the update covers
only a partial range.
This is no longer true on machines that support BBML2_NOABORT, since commit
a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full").
So we can relax this restriction and update the comments accordingly.
Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
arch/arm64/mm/pageattr.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index c21a2c319028..b4dcae6273a8 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -157,13 +157,13 @@ static int change_memory_common(unsigned long addr, int numpages,
/*
* Kernel VA mappings are always live, and splitting live section
- * mappings into page mappings may cause TLB conflicts. This means
- * we have to ensure that changing the permission bits of the range
- * we are operating on does not result in such splitting.
+ * mappings into page mappings may cause TLB conflicts on the machines
+ * which don't support BBML2_NOABORT.
*
* Let's restrict ourselves to mappings created by vmalloc (or vmap).
- * Disallow VM_ALLOW_HUGE_VMAP mappings to guarantee that only page
- * mappings are updated and splitting is never needed.
+ * Disallow VM_ALLOW_HUGE_VMAP mappings if the systems don't support
+ * BBML2_NOABORT to guarantee that only page mappings are updated and
+ * splitting is never needed on those machines.
*
* So check whether the [addr, addr + size) interval is entirely
* covered by precisely one VM area that has the VM_ALLOC flag set.
@@ -171,7 +171,8 @@ static int change_memory_common(unsigned long addr, int numpages,
area = find_vm_area((void *)addr);
if (!area ||
end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
- ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
+ !(area->flags & VM_ALLOC) || ((area->flags & VM_ALLOW_HUGE_VMAP) &&
+ !system_supports_bbml2_noabort()))
return -EINVAL;
if (!numpages)
--
2.47.0
^ permalink raw reply related [flat|nested] 14+ messages in thread
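The shape of the relaxed check is easier to see outside the diff: the area
must still be a vmalloc/vmap allocation, but VM_ALLOW_HUGE_VMAP no longer
disqualifies it when the CPU can safely split live block mappings. A
simplified sketch with stubbed flag values and a stubbed capability query
(illustrative only, not the kernel's definitions):

    #include <stdbool.h>

    /* Stand-in flag values; the real ones live in include/linux/vmalloc.h. */
    #define VM_ALLOC           0x2UL
    #define VM_ALLOW_HUGE_VMAP 0x400UL

    /* Stub for system_supports_bbml2_noabort(); really a CPU feature check. */
    static bool bbml2_noabort;

    static bool change_permitted(unsigned long flags)
    {
            /* Only vmalloc/vmap areas are eligible at all. */
            if (!(flags & VM_ALLOC))
                    return false;

            /* Huge mappings are fine only if live splitting is safe. */
            if ((flags & VM_ALLOW_HUGE_VMAP) && !bbml2_noabort)
                    return false;

            return true;
    }

The previous predicate was the stricter
(flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) == VM_ALLOC, which rejected
huge-vmap areas unconditionally.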
* Re: [PATCH 1/2] arm64: mm: make linear mapping permission update more robust for partial range
2025-10-13 23:27 ` [PATCH 1/2] arm64: mm: make linear mapping permission update more robust for partial range Yang Shi
@ 2025-10-14 8:05 ` Ryan Roberts
2025-10-14 20:15 ` Yang Shi
2025-10-15 6:46 ` Dev Jain
1 sibling, 1 reply; 14+ messages in thread
From: Ryan Roberts @ 2025-10-14 8:05 UTC (permalink / raw)
To: Yang Shi, dev.jain, cl, catalin.marinas, will
Cc: linux-arm-kernel, linux-kernel
On 14/10/2025 00:27, Yang Shi wrote:
> Commit fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing
> permissions for vmalloc_huge mappings") made permission updates for
> partial ranges more robust. But the linear mapping permission update
> still assumes the whole range is updated, iterating from the first page
> all the way to the last page of the area.
>
> Make it more robust by updating the linear mapping permissions starting
> from the page mapped by the start address and walking only numpages pages.
>
> Fixes: fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing permissions for vmalloc_huge mappings")
I don't think this is the correct commit. I think this theoretical issue was
present before that. But it is only theoretical AFAIK? In which case, I'd be
inclined to just drop the tag.
Otherwise, LGTM:
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
> arch/arm64/mm/pageattr.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 5135f2d66958..c21a2c319028 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -148,7 +148,6 @@ static int change_memory_common(unsigned long addr, int numpages,
> unsigned long size = PAGE_SIZE * numpages;
> unsigned long end = start + size;
> struct vm_struct *area;
> - int i;
>
> if (!PAGE_ALIGNED(addr)) {
> start &= PAGE_MASK;
> @@ -184,8 +183,9 @@ static int change_memory_common(unsigned long addr, int numpages,
> */
> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
> pgprot_val(clear_mask) == PTE_RDONLY)) {
> - for (i = 0; i < area->nr_pages; i++) {
> - __change_memory_common((u64)page_address(area->pages[i]),
> + unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
> + for (int i = 0; i < numpages; i++) {
> + __change_memory_common((u64)page_address(area->pages[idx++]),
> PAGE_SIZE, set_mask, clear_mask);
> }
> }
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 2/2] arm64: mm: relax VM_ALLOW_HUGE_VMAP if BBML2_NOABORT is supported
2025-10-13 23:27 ` [PATCH 2/2] arm64: mm: relax VM_ALLOW_HUGE_VMAP if BBML2_NOABORT is supported Yang Shi
@ 2025-10-14 8:08 ` Ryan Roberts
2025-10-14 20:23 ` Yang Shi
2025-10-15 6:50 ` Dev Jain
1 sibling, 1 reply; 14+ messages in thread
From: Ryan Roberts @ 2025-10-14 8:08 UTC (permalink / raw)
To: Yang Shi, dev.jain, cl, catalin.marinas, will
Cc: linux-arm-kernel, linux-kernel
On 14/10/2025 00:27, Yang Shi wrote:
> When changing permissions for a vmalloc area, VM_ALLOW_HUGE_VMAP areas are
> excluded because the kernel can't split the VA mapping if the update covers
> only a partial range.
> This is no longer true on machines that support BBML2_NOABORT, since commit
> a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full").
> So we can relax this restriction and update the comments accordingly.
Is there actually any user that benefits from this modified behaviour in the
current kernel? If not, then I'd prefer to leave this for Dev to modify
systematically as part of his series to enable VM_ALLOW_HUGE_VMAP by default for
arm64. I believe he's planning to post that soon.
Thanks,
Ryan
>
> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
> arch/arm64/mm/pageattr.c | 13 +++++++------
> 1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index c21a2c319028..b4dcae6273a8 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -157,13 +157,13 @@ static int change_memory_common(unsigned long addr, int numpages,
>
> /*
> * Kernel VA mappings are always live, and splitting live section
> - * mappings into page mappings may cause TLB conflicts. This means
> - * we have to ensure that changing the permission bits of the range
> - * we are operating on does not result in such splitting.
> + * mappings into page mappings may cause TLB conflicts on the machines
> + * which don't support BBML2_NOABORT.
> *
> * Let's restrict ourselves to mappings created by vmalloc (or vmap).
> - * Disallow VM_ALLOW_HUGE_VMAP mappings to guarantee that only page
> - * mappings are updated and splitting is never needed.
> + * Disallow VM_ALLOW_HUGE_VMAP mappings if the systems don't support
> + * BBML2_NOABORT to guarantee that only page mappings are updated and
> + * splitting is never needed on those machines.
> *
> * So check whether the [addr, addr + size) interval is entirely
> * covered by precisely one VM area that has the VM_ALLOC flag set.
> @@ -171,7 +171,8 @@ static int change_memory_common(unsigned long addr, int numpages,
> area = find_vm_area((void *)addr);
> if (!area ||
> end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
> - ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
> + !(area->flags & VM_ALLOC) || ((area->flags & VM_ALLOW_HUGE_VMAP) &&
> + !system_supports_bbml2_noabort()))
> return -EINVAL;
>
> if (!numpages)
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 1/2] arm64: mm: make linear mapping permission update more robust for partial range
2025-10-13 23:27 ` [PATCH 1/2] arm64: mm: make linear mapping permission update more robust for partial range Yang Shi
@ 2025-10-14 20:15 ` Yang Shi
0 siblings, 0 replies; 14+ messages in thread
From: Yang Shi @ 2025-10-14 20:15 UTC (permalink / raw)
To: Ryan Roberts, dev.jain, cl, catalin.marinas, will
Cc: linux-arm-kernel, linux-kernel
On 10/14/25 1:05 AM, Ryan Roberts wrote:
> On 14/10/2025 00:27, Yang Shi wrote:
>> Commit fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing
>> permissions for vmalloc_huge mappings") made permission updates for
>> partial ranges more robust. But the linear mapping permission update
>> still assumes the whole range is updated, iterating from the first page
>> all the way to the last page of the area.
>>
>> Make it more robust by updating the linear mapping permissions starting
>> from the page mapped by the start address and walking only numpages pages.
>>
>> Fixes: fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing permissions for vmalloc_huge mappings")
> I don't think this is the correct commit. I think this theoretical issue was
> present before that. But it is only theoretical AFAIK? In which case, I'd be
> inclined to just drop the tag.
OK, I will drop the tag.
>
> Otherwise, LGTM:
>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Thank you.
Yang
>
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>> arch/arm64/mm/pageattr.c | 6 +++---
>> 1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 5135f2d66958..c21a2c319028 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -148,7 +148,6 @@ static int change_memory_common(unsigned long addr, int numpages,
>> unsigned long size = PAGE_SIZE * numpages;
>> unsigned long end = start + size;
>> struct vm_struct *area;
>> - int i;
>>
>> if (!PAGE_ALIGNED(addr)) {
>> start &= PAGE_MASK;
>> @@ -184,8 +183,9 @@ static int change_memory_common(unsigned long addr, int numpages,
>> */
>> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>> pgprot_val(clear_mask) == PTE_RDONLY)) {
>> - for (i = 0; i < area->nr_pages; i++) {
>> - __change_memory_common((u64)page_address(area->pages[i]),
>> + unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
>> + for (int i = 0; i < numpages; i++) {
>> + __change_memory_common((u64)page_address(area->pages[idx++]),
>> PAGE_SIZE, set_mask, clear_mask);
>> }
>> }
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 2/2] arm64: mm: relax VM_ALLOW_HUGE_VMAP if BBML2_NOABORT is supported
2025-10-14 8:08 ` Ryan Roberts
@ 2025-10-14 20:23 ` Yang Shi
0 siblings, 0 replies; 14+ messages in thread
From: Yang Shi @ 2025-10-14 20:23 UTC (permalink / raw)
To: Ryan Roberts, dev.jain, cl, catalin.marinas, will
Cc: linux-arm-kernel, linux-kernel
On 10/14/25 1:08 AM, Ryan Roberts wrote:
> On 14/10/2025 00:27, Yang Shi wrote:
>> When changing permissions for a vmalloc area, VM_ALLOW_HUGE_VMAP areas are
>> excluded because the kernel can't split the VA mapping if the update covers
>> only a partial range.
>> This is no longer true on machines that support BBML2_NOABORT, since commit
>> a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full").
>> So we can relax this restriction and update the comments accordingly.
> Is there actually any user that benefits from this modified behaviour in the
> current kernel? If not, then I'd prefer to leave this for Dev to modify
> systematically as part of his series to enable VM_ALLOW_HUGE_VMAP by default for
> arm64. I believe he's planning to post that soon.
I actually just wanted to fix the stale comment about "splitting is
never needed" in the first place, as we discussed in the earlier review,
but I realized it doesn't make much sense to update the comment without
updating the behavior. The VM_ALLOW_HUGE_VMAP areas were skipped because
the VA mapping couldn't be split, so it makes more sense to relax the
restriction now that the mapping can be split.
I see this patch more as a follow-up fix for kernel page table splitting
than as an enhancement for VM_ALLOW_HUGE_VMAP. It also seems like a
prerequisite for Dev's series IMHO.
Thanks,
Yang
>
> Thanks,
> Ryan
>
>> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>> arch/arm64/mm/pageattr.c | 13 +++++++------
>> 1 file changed, 7 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index c21a2c319028..b4dcae6273a8 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -157,13 +157,13 @@ static int change_memory_common(unsigned long addr, int numpages,
>>
>> /*
>> * Kernel VA mappings are always live, and splitting live section
>> - * mappings into page mappings may cause TLB conflicts. This means
>> - * we have to ensure that changing the permission bits of the range
>> - * we are operating on does not result in such splitting.
>> + * mappings into page mappings may cause TLB conflicts on the machines
>> + * which don't support BBML2_NOABORT.
>> *
>> * Let's restrict ourselves to mappings created by vmalloc (or vmap).
>> - * Disallow VM_ALLOW_HUGE_VMAP mappings to guarantee that only page
>> - * mappings are updated and splitting is never needed.
>> + * Disallow VM_ALLOW_HUGE_VMAP mappings if the systems don't support
>> + * BBML2_NOABORT to guarantee that only page mappings are updated and
>> + * splitting is never needed on those machines.
>> *
>> * So check whether the [addr, addr + size) interval is entirely
>> * covered by precisely one VM area that has the VM_ALLOC flag set.
>> @@ -171,7 +171,8 @@ static int change_memory_common(unsigned long addr, int numpages,
>> area = find_vm_area((void *)addr);
>> if (!area ||
>> end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
>> - ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
>> + !(area->flags & VM_ALLOC) || ((area->flags & VM_ALLOW_HUGE_VMAP) &&
>> + !system_supports_bbml2_noabort()))
>> return -EINVAL;
>>
>> if (!numpages)
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 1/2] arm64: mm: make linear mapping permission update more robust for partial range
2025-10-13 23:27 ` [PATCH 1/2] arm64: mm: make linear mapping permission update more robust for partial range Yang Shi
2025-10-14 8:05 ` Ryan Roberts
@ 2025-10-15 6:46 ` Dev Jain
2025-10-16 18:45 ` Yang Shi
1 sibling, 1 reply; 14+ messages in thread
From: Dev Jain @ 2025-10-15 6:46 UTC (permalink / raw)
To: Yang Shi, ryan.roberts, cl, catalin.marinas, will
Cc: linux-arm-kernel, linux-kernel
On 14/10/25 4:57 am, Yang Shi wrote:
> Commit fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing
> permissions for vmalloc_huge mappings") made permission updates for
> partial ranges more robust. But the linear mapping permission update
> still assumes the whole range is updated, iterating from the first page
> all the way to the last page of the area.
>
> Make it more robust by updating the linear mapping permissions starting
> from the page mapped by the start address and walking only numpages pages.
>
> Fixes: fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing permissions for vmalloc_huge mappings")
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
> arch/arm64/mm/pageattr.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 5135f2d66958..c21a2c319028 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -148,7 +148,6 @@ static int change_memory_common(unsigned long addr, int numpages,
> unsigned long size = PAGE_SIZE * numpages;
> unsigned long end = start + size;
> struct vm_struct *area;
> - int i;
>
> if (!PAGE_ALIGNED(addr)) {
> start &= PAGE_MASK;
> @@ -184,8 +183,9 @@ static int change_memory_common(unsigned long addr, int numpages,
> */
> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
> pgprot_val(clear_mask) == PTE_RDONLY)) {
> - for (i = 0; i < area->nr_pages; i++) {
> - __change_memory_common((u64)page_address(area->pages[i]),
> + unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
> + for (int i = 0; i < numpages; i++) {
> + __change_memory_common((u64)page_address(area->pages[idx++]),
> PAGE_SIZE, set_mask, clear_mask);
Why not just use idx as the iterator in the for loop? Using i and then incrementing
idx is confusing.
As noted by Ryan, the Fixes commit is wrong. The issue persists from commit c55191e.
After fixing these:
Reviewed-by: Dev Jain <dev.jain@arm.com>
> }
> }
^ permalink raw reply [flat|nested] 14+ messages in thread
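One possible shape of the single-index loop Dev is suggesting, applied to
the hunk above (hypothetical; the exact form was left to the author):

    unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
    unsigned long end_idx = idx + numpages;

    for (; idx < end_idx; idx++)
            __change_memory_common((u64)page_address(area->pages[idx]),
                                   PAGE_SIZE, set_mask, clear_mask);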
* Re: [PATCH 2/2] arm64: mm: relax VM_ALLOW_HUGE_VMAP if BBML2_NOABORT is supported
2025-10-13 23:27 ` [PATCH 2/2] arm64: mm: relax VM_ALLOW_HUGE_VMAP if BBML2_NOABORT is supported Yang Shi
2025-10-14 8:08 ` Ryan Roberts
@ 2025-10-15 6:50 ` Dev Jain
2025-10-16 18:50 ` Yang Shi
1 sibling, 1 reply; 14+ messages in thread
From: Dev Jain @ 2025-10-15 6:50 UTC (permalink / raw)
To: Yang Shi, ryan.roberts, cl, catalin.marinas, will
Cc: linux-arm-kernel, linux-kernel
On 14/10/25 4:57 am, Yang Shi wrote:
> When changing permissions for a vmalloc area, VM_ALLOW_HUGE_VMAP areas are
> excluded because the kernel can't split the VA mapping if the update covers
> only a partial range.
> This is no longer true on machines that support BBML2_NOABORT, since commit
> a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full").
> So we can relax this restriction and update the comments accordingly.
>
> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
> arch/arm64/mm/pageattr.c | 13 +++++++------
> 1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index c21a2c319028..b4dcae6273a8 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -157,13 +157,13 @@ static int change_memory_common(unsigned long addr, int numpages,
>
> /*
> * Kernel VA mappings are always live, and splitting live section
> - * mappings into page mappings may cause TLB conflicts. This means
> - * we have to ensure that changing the permission bits of the range
> - * we are operating on does not result in such splitting.
> + * mappings into page mappings may cause TLB conflicts on the machines
> + * which don't support BBML2_NOABORT.
> *
> * Let's restrict ourselves to mappings created by vmalloc (or vmap).
> - * Disallow VM_ALLOW_HUGE_VMAP mappings to guarantee that only page
> - * mappings are updated and splitting is never needed.
> + * Disallow VM_ALLOW_HUGE_VMAP mappings if the systems don't support
> + * BBML2_NOABORT to guarantee that only page mappings are updated and
> + * splitting is never needed on those machines.
> *
> * So check whether the [addr, addr + size) interval is entirely
> * covered by precisely one VM area that has the VM_ALLOC flag set.
> @@ -171,7 +171,8 @@ static int change_memory_common(unsigned long addr, int numpages,
> area = find_vm_area((void *)addr);
> if (!area ||
> end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
> - ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
> + !(area->flags & VM_ALLOC) || ((area->flags & VM_ALLOW_HUGE_VMAP) &&
> + !system_supports_bbml2_noabort()))
> return -EINVAL;
>
> if (!numpages)
This will conflict with my upcoming vmalloc-huge series, so best to leave it to me,
I already have this included :)
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 1/2] arm64: mm: make linear mapping permission update more robust for partial range
2025-10-15 6:46 ` Dev Jain
@ 2025-10-16 18:45 ` Yang Shi
2025-10-16 18:47 ` Dev Jain
0 siblings, 1 reply; 14+ messages in thread
From: Yang Shi @ 2025-10-16 18:45 UTC (permalink / raw)
To: Dev Jain, ryan.roberts, cl, catalin.marinas, will
Cc: linux-arm-kernel, linux-kernel
On 10/14/25 11:46 PM, Dev Jain wrote:
>
> On 14/10/25 4:57 am, Yang Shi wrote:
>> Commit fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing
>> permissions for vmalloc_huge mappings") made permission updates for
>> partial ranges more robust. But the linear mapping permission update
>> still assumes the whole range is updated, iterating from the first page
>> all the way to the last page of the area.
>>
>> Make it more robust by updating the linear mapping permissions starting
>> from the page mapped by the start address and walking only numpages pages.
>>
>> Fixes: fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing permissions for vmalloc_huge mappings")
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>> arch/arm64/mm/pageattr.c | 6 +++---
>> 1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 5135f2d66958..c21a2c319028 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -148,7 +148,6 @@ static int change_memory_common(unsigned long addr, int numpages,
>> unsigned long size = PAGE_SIZE * numpages;
>> unsigned long end = start + size;
>> struct vm_struct *area;
>> - int i;
>> if (!PAGE_ALIGNED(addr)) {
>> start &= PAGE_MASK;
>> @@ -184,8 +183,9 @@ static int change_memory_common(unsigned long addr, int numpages,
>> */
>> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>> pgprot_val(clear_mask) == PTE_RDONLY)) {
>> - for (i = 0; i < area->nr_pages; i++) {
>> - __change_memory_common((u64)page_address(area->pages[i]),
>> + unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
>> + for (int i = 0; i < numpages; i++) {
>> + __change_memory_common((u64)page_address(area->pages[idx++]),
>> PAGE_SIZE, set_mask, clear_mask);
>
> Why not just use idx as the iterator in the for loop? Using i and then
> incrementing idx is confusing.
You meant something like:
while (idx < end_idx)
with end_idx precomputed as idx + numpages? It is fine by me.
>
> As noted by Ryan, the Fixes commit is wrong. The issue persists from
> commit c55191e.
>
> After fixing these:
>
> Reviewed-by: Dev Jain <dev.jain@arm.com>
Thank you.
Yang
>
>> }
>> }
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 1/2] arm64: mm: make linear mapping permission update more robust for partial range
2025-10-16 18:45 ` Yang Shi
@ 2025-10-16 18:47 ` Dev Jain
0 siblings, 0 replies; 14+ messages in thread
From: Dev Jain @ 2025-10-16 18:47 UTC (permalink / raw)
To: Yang Shi, ryan.roberts, cl, catalin.marinas, will
Cc: linux-arm-kernel, linux-kernel
On 17/10/25 12:15 am, Yang Shi wrote:
>
>
> On 10/14/25 11:46 PM, Dev Jain wrote:
>>
>> On 14/10/25 4:57 am, Yang Shi wrote:
>>> Commit fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing
>>> permissions for vmalloc_huge mappings") made permission updates for
>>> partial ranges more robust. But the linear mapping permission update
>>> still assumes the whole range is updated, iterating from the first page
>>> all the way to the last page of the area.
>>>
>>> Make it more robust by updating the linear mapping permissions starting
>>> from the page mapped by the start address and walking only numpages pages.
>>>
>>> Fixes: fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing permissions for vmalloc_huge mappings")
>>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>>> ---
>>> arch/arm64/mm/pageattr.c | 6 +++---
>>> 1 file changed, 3 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>>> index 5135f2d66958..c21a2c319028 100644
>>> --- a/arch/arm64/mm/pageattr.c
>>> +++ b/arch/arm64/mm/pageattr.c
>>> @@ -148,7 +148,6 @@ static int change_memory_common(unsigned long addr, int numpages,
>>> unsigned long size = PAGE_SIZE * numpages;
>>> unsigned long end = start + size;
>>> struct vm_struct *area;
>>> - int i;
>>> if (!PAGE_ALIGNED(addr)) {
>>> start &= PAGE_MASK;
>>> @@ -184,8 +183,9 @@ static int change_memory_common(unsigned long addr, int numpages,
>>> */
>>> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>>> pgprot_val(clear_mask) == PTE_RDONLY)) {
>>> - for (i = 0; i < area->nr_pages; i++) {
>>> - __change_memory_common((u64)page_address(area->pages[i]),
>>> + unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
>>> + for (int i = 0; i < numpages; i++) {
>>> + __change_memory_common((u64)page_address(area->pages[idx++]),
>>> PAGE_SIZE, set_mask, clear_mask);
>>
>> Why not just use idx as the iterator in the for loop? Using i and then
>> incrementing idx is confusing.
>
> You meant something like:
>
> while (idx < end_idx)
>
> with end_idx precomputed as idx + numpages? It is fine by me.
for loop is simpler :)
>
>>
>> As noted by Ryan, the Fixes commit is wrong. The issue persists from
>> commit c55191e.
>>
>> After fixing these:
>>
>> Reviewed-by: Dev Jain <dev.jain@arm.com>
>
> Thank you.
>
> Yang
>
>>
>>> }
>>> }
>
^ permalink raw reply [flat|nested] 14+ messages in thread
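Spelled out, the while form floated above needs the bound captured once
before the loop, after which it is equivalent to the for loop Dev prefers
(a sketch with an assumed end_idx, not code from either patch):

    unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
    unsigned long end_idx = idx + numpages;

    while (idx < end_idx) {
            __change_memory_common((u64)page_address(area->pages[idx]),
                                   PAGE_SIZE, set_mask, clear_mask);
            idx++;
    }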
* Re: [PATCH 2/2] arm64: mm: relax VM_ALLOW_HUGE_VMAP if BBML2_NOABORT is supported
2025-10-15 6:50 ` Dev Jain
@ 2025-10-16 18:50 ` Yang Shi
2025-10-17 15:50 ` Dev Jain
0 siblings, 1 reply; 14+ messages in thread
From: Yang Shi @ 2025-10-16 18:50 UTC (permalink / raw)
To: Dev Jain, ryan.roberts, cl, catalin.marinas, will
Cc: linux-arm-kernel, linux-kernel
On 10/14/25 11:50 PM, Dev Jain wrote:
>
> On 14/10/25 4:57 am, Yang Shi wrote:
>> When changing permissions for a vmalloc area, VM_ALLOW_HUGE_VMAP areas are
>> excluded because the kernel can't split the VA mapping if the update covers
>> only a partial range.
>> This is no longer true on machines that support BBML2_NOABORT, since commit
>> a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full").
>> So we can relax this restriction and update the comments accordingly.
>>
>> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>> arch/arm64/mm/pageattr.c | 13 +++++++------
>> 1 file changed, 7 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index c21a2c319028..b4dcae6273a8 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -157,13 +157,13 @@ static int change_memory_common(unsigned long addr, int numpages,
>> /*
>> * Kernel VA mappings are always live, and splitting live section
>> - * mappings into page mappings may cause TLB conflicts. This means
>> - * we have to ensure that changing the permission bits of the range
>> - * we are operating on does not result in such splitting.
>> + * mappings into page mappings may cause TLB conflicts on the machines
>> + * which don't support BBML2_NOABORT.
>> *
>> * Let's restrict ourselves to mappings created by vmalloc (or vmap).
>> - * Disallow VM_ALLOW_HUGE_VMAP mappings to guarantee that only page
>> - * mappings are updated and splitting is never needed.
>> + * Disallow VM_ALLOW_HUGE_VMAP mappings if the systems don't support
>> + * BBML2_NOABORT to guarantee that only page mappings are updated and
>> + * splitting is never needed on those machines.
>> *
>> * So check whether the [addr, addr + size) interval is entirely
>> * covered by precisely one VM area that has the VM_ALLOC flag set.
>> @@ -171,7 +171,8 @@ static int change_memory_common(unsigned long addr, int numpages,
>> area = find_vm_area((void *)addr);
>> if (!area ||
>> end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
>> - ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
>> + !(area->flags & VM_ALLOC) || ((area->flags & VM_ALLOW_HUGE_VMAP) &&
>> + !system_supports_bbml2_noabort()))
>> return -EINVAL;
>> if (!numpages)
>
> This will conflict with my upcoming vmalloc-huge series, so best to
> leave it to me,
> I already have this included :)
My point is that I hope this can be merged as a hotfix for 6.18. I have
no strong opinion on whether the maintainers take this one or the version
from your series. But if this goes into 6.18 as a hotfix, it should also
be a standalone prerequisite patch in your series, and the rest of your
series should be based on top of it. Of course this argument doesn't
stand if we don't care about having it fixed for 6.18.
Thanks,
Yang
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 2/2] arm64: mm: relax VM_ALLOW_HUGE_VMAP if BBML2_NOABORT is supported
2025-10-16 18:50 ` Yang Shi
@ 2025-10-17 15:50 ` Dev Jain
2025-10-30 13:16 ` Will Deacon
0 siblings, 1 reply; 14+ messages in thread
From: Dev Jain @ 2025-10-17 15:50 UTC (permalink / raw)
To: Yang Shi, ryan.roberts, cl, catalin.marinas, will
Cc: linux-arm-kernel, linux-kernel
On 17/10/25 12:20 am, Yang Shi wrote:
>
>
> On 10/14/25 11:50 PM, Dev Jain wrote:
>>
>> On 14/10/25 4:57 am, Yang Shi wrote:
>>> When changing permissions for a vmalloc area, VM_ALLOW_HUGE_VMAP areas are
>>> excluded because the kernel can't split the VA mapping if the update covers
>>> only a partial range.
>>> This is no longer true on machines that support BBML2_NOABORT, since commit
>>> a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full").
>>> So we can relax this restriction and update the comments accordingly.
>>>
>>> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
>>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>>> ---
>>> arch/arm64/mm/pageattr.c | 13 +++++++------
>>> 1 file changed, 7 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>>> index c21a2c319028..b4dcae6273a8 100644
>>> --- a/arch/arm64/mm/pageattr.c
>>> +++ b/arch/arm64/mm/pageattr.c
>>> @@ -157,13 +157,13 @@ static int change_memory_common(unsigned long addr, int numpages,
>>> /*
>>> * Kernel VA mappings are always live, and splitting live section
>>> - * mappings into page mappings may cause TLB conflicts. This means
>>> - * we have to ensure that changing the permission bits of the range
>>> - * we are operating on does not result in such splitting.
>>> + * mappings into page mappings may cause TLB conflicts on the machines
>>> + * which don't support BBML2_NOABORT.
>>> *
>>> * Let's restrict ourselves to mappings created by vmalloc (or vmap).
>>> - * Disallow VM_ALLOW_HUGE_VMAP mappings to guarantee that only page
>>> - * mappings are updated and splitting is never needed.
>>> + * Disallow VM_ALLOW_HUGE_VMAP mappings if the systems don't support
>>> + * BBML2_NOABORT to guarantee that only page mappings are updated and
>>> + * splitting is never needed on those machines.
>>> *
>>> * So check whether the [addr, addr + size) interval is entirely
>>> * covered by precisely one VM area that has the VM_ALLOC flag set.
>>> @@ -171,7 +171,8 @@ static int change_memory_common(unsigned long addr, int numpages,
>>> area = find_vm_area((void *)addr);
>>> if (!area ||
>>> end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
>>> - ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
>>> + !(area->flags & VM_ALLOC) || ((area->flags & VM_ALLOW_HUGE_VMAP) &&
>>> + !system_supports_bbml2_noabort()))
>>> return -EINVAL;
>>> if (!numpages)
>>
>> This will conflict with my upcoming vmalloc-huge series, so best to
>> leave it to me,
>> I already have this included :)
>
> My point is that I hope this can be merged as a hotfix for 6.18. I have
> no strong opinion on whether the maintainers take this one or the version
> from your series. But if this goes into 6.18 as a hotfix, it should also
> be a standalone prerequisite patch in your series, and the rest of your
> series should be based on top of it. Of course this argument doesn't
> stand if we don't care about having it fixed for 6.18.
I see what you mean, but I don't think this patch should be treated as a
hotfix. We forgot to relax a restriction - that's fine. That is not an
incorrectness in the linear map series. A fix usually fixes an
incorrectness.
>
> Thanks,
> Yang
>
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 2/2] arm64: mm: relax VM_ALLOW_HUGE_VMAP if BBML2_NOABORT is supported
2025-10-17 15:50 ` Dev Jain
@ 2025-10-30 13:16 ` Will Deacon
0 siblings, 0 replies; 14+ messages in thread
From: Will Deacon @ 2025-10-30 13:16 UTC (permalink / raw)
To: Dev Jain
Cc: Yang Shi, ryan.roberts, cl, catalin.marinas, linux-arm-kernel,
linux-kernel
On Fri, Oct 17, 2025 at 09:20:10PM +0530, Dev Jain wrote:
> On 17/10/25 12:20 am, Yang Shi wrote:
> > On 10/14/25 11:50 PM, Dev Jain wrote:
> > > On 14/10/25 4:57 am, Yang Shi wrote:
> > > > When changing permissions for a vmalloc area, VM_ALLOW_HUGE_VMAP areas are
> > > > excluded because the kernel can't split the VA mapping if the update covers
> > > > only a partial range.
> > > > This is no longer true on machines that support BBML2_NOABORT, since commit
> > > > a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full").
> > > > So we can relax this restriction and update the comments accordingly.
> > > >
> > > > Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> > > > Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> > > > ---
> > > > arch/arm64/mm/pageattr.c | 13 +++++++------
> > > > 1 file changed, 7 insertions(+), 6 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> > > > index c21a2c319028..b4dcae6273a8 100644
> > > > --- a/arch/arm64/mm/pageattr.c
> > > > +++ b/arch/arm64/mm/pageattr.c
> > > > @@ -157,13 +157,13 @@ static int change_memory_common(unsigned long addr, int numpages,
> > > > /*
> > > > * Kernel VA mappings are always live, and splitting live section
> > > > - * mappings into page mappings may cause TLB conflicts. This means
> > > > - * we have to ensure that changing the permission bits of the range
> > > > - * we are operating on does not result in such splitting.
> > > > + * mappings into page mappings may cause TLB conflicts on the machines
> > > > + * which don't support BBML2_NOABORT.
> > > > *
> > > > * Let's restrict ourselves to mappings created by vmalloc (or vmap).
> > > > - * Disallow VM_ALLOW_HUGE_VMAP mappings to guarantee that only page
> > > > - * mappings are updated and splitting is never needed.
> > > > + * Disallow VM_ALLOW_HUGE_VMAP mappings if the systems don't support
> > > > + * BBML2_NOABORT to guarantee that only page mappings are updated and
> > > > + * splitting is never needed on those machines.
> > > > *
> > > > * So check whether the [addr, addr + size) interval is entirely
> > > > * covered by precisely one VM area that has the VM_ALLOC flag set.
> > > > @@ -171,7 +171,8 @@ static int change_memory_common(unsigned long addr, int numpages,
> > > > area = find_vm_area((void *)addr);
> > > > if (!area ||
> > > > end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
> > > > - ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
> > > > + !(area->flags & VM_ALLOC) || ((area->flags & VM_ALLOW_HUGE_VMAP) &&
> > > > + !system_supports_bbml2_noabort()))
> > > > return -EINVAL;
> > > > if (!numpages)
> > >
> > > This will conflict with my upcoming vmalloc-huge series, so best to
> > > leave it to me,
> > > I already have this included :)
> >
> > My point is that I hope this can be merged as a hotfix for 6.18. I have
> > no strong opinion on whether the maintainers take this one or the version
> > from your series. But if this goes into 6.18 as a hotfix, it should also
> > be a standalone prerequisite patch in your series, and the rest of your
> > series should be based on top of it. Of course this argument doesn't
> > stand if we don't care about having it fixed for 6.18.
>
> I see what you mean, but I don't think this patch should be treated as a
> hotfix. We forgot to relax a restriction - that's fine. That is not an
> incorrectness in the linear map series. A fix usually fixes an
> incorrectness.
Right, unless there's a real functional issue that is affecting people
then I don't see the need to take any of this for 6.18.
Will
^ permalink raw reply [flat|nested] 14+ messages in thread