* [PATCH 0/2] arm64/mm: A fix and a documentation bit for change_memory_common()
@ 2025-11-12 6:27 Dev Jain
2025-11-12 6:27 ` [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common Dev Jain
` (2 more replies)
0 siblings, 3 replies; 16+ messages in thread
From: Dev Jain @ 2025-11-12 6:27 UTC (permalink / raw)
To: catalin.marinas, will
Cc: ryan.roberts, rppt, shijie, yang, linux-arm-kernel, linux-kernel,
Dev Jain
The first patch is a fix to prevent suppression of an error encountered during
the linear map alias permission change. This is needed because the rodata=on
security measure requires us to proceed only if the linear map alias
protection change succeeds first.
The second patch documents why linear map split failure during the vfreeing
path isn't problematic.
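In other words (a minimal illustration; the va_* names are made up, not
kernel symbols):

/*
 * One physical page, two kernel mappings:
 *
 *   vmalloc alias:    va_vmap -> pa  (what set_memory_ro() is asked to seal)
 *   linear map alias: va_lin  -> pa  (must be sealed too, otherwise the page
 *                                     stays writable via va_lin and rodata=on
 *                                     is defeated)
 */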
Dev Jain (2):
arm64/pageattr: Propagate return value from __change_memory_common
arm64/mm: Document why linear map split failure upon vm_reset_perms is
not problematic
arch/arm64/mm/pageattr.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
--
2.30.2
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-12 6:27 [PATCH 0/2] arm64/mm: A fix and a documentation bit for change_memory_common() Dev Jain
@ 2025-11-12 6:27 ` Dev Jain
2025-11-12 22:27 ` Yang Shi
` (2 more replies)
2025-11-12 6:27 ` [PATCH 2/2] arm64/mm: Document why linear map split failure upon vm_reset_perms is not problematic Dev Jain
2025-11-28 15:52 ` [PATCH 0/2] arm64/mm: A fix and a documentation bit for change_memory_common() Catalin Marinas
2 siblings, 3 replies; 16+ messages in thread
From: Dev Jain @ 2025-11-12 6:27 UTC (permalink / raw)
To: catalin.marinas, will
Cc: ryan.roberts, rppt, shijie, yang, linux-arm-kernel, linux-kernel,
Dev Jain
The rodata=on security measure requires that any code path which does
vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
too. Therefore, if such a call fails, we must abort set_memory_* and caller
must take appropriate action; currently we are suppressing the error, and
there is a real chance of such an error arising post commit a166563e7ec3
("arm64: mm: support large block mapping when rodata=full"). Therefore,
propagate any error to the caller.
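A minimal sketch of the calling pattern this fix affects; seal_buffer() and
its parameters are illustrative, while vmalloc(), set_vm_flush_reset_perms(),
set_memory_ro() and vfree() are the real APIs:

#include <linux/set_memory.h>
#include <linux/vmalloc.h>

static int seal_buffer(void **out, int npages)
{
        void *buf = vmalloc(npages * PAGE_SIZE);
        int err;

        if (!buf)
                return -ENOMEM;
        /* Ask vfree() to reset linear map permissions later. */
        set_vm_flush_reset_perms(buf);
        /* Before this patch, a linear map alias failure was swallowed here,
         * leaving buf RO in vmalloc space but still RW via the linear map. */
        err = set_memory_ro((unsigned long)buf, npages);
        if (err) {
                vfree(buf);
                return err;
        }
        *out = buf;
        return 0;
}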
Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
v1 of this patch: https://lore.kernel.org/all/20251103061306.82034-1-dev.jain@arm.com/
I have dropped stable since there was no real chance of failure.
arch/arm64/mm/pageattr.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 5135f2d66958..b4ea86cd3a71 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
         unsigned long size = PAGE_SIZE * numpages;
         unsigned long end = start + size;
         struct vm_struct *area;
+        int ret;
         int i;
 
         if (!PAGE_ALIGNED(addr)) {
@@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
         if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
                             pgprot_val(clear_mask) == PTE_RDONLY)) {
                 for (i = 0; i < area->nr_pages; i++) {
-                        __change_memory_common((u64)page_address(area->pages[i]),
+                        ret = __change_memory_common((u64)page_address(area->pages[i]),
                                                PAGE_SIZE, set_mask, clear_mask);
+                        if (ret)
+                                return ret;
                 }
         }
--
2.30.2
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH 2/2] arm64/mm: Document why linear map split failure upon vm_reset_perms is not problematic
2025-11-12 6:27 [PATCH 0/2] arm64/mm: A fix and a documentation bit for change_memory_common() Dev Jain
2025-11-12 6:27 ` [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common Dev Jain
@ 2025-11-12 6:27 ` Dev Jain
2025-11-12 22:34 ` Yang Shi
2025-11-13 12:25 ` Ryan Roberts
2025-11-28 15:52 ` [PATCH 0/2] arm64/mm: A fix and a documentation bit for change_memory_common() Catalin Marinas
2 siblings, 2 replies; 16+ messages in thread
From: Dev Jain @ 2025-11-12 6:27 UTC (permalink / raw)
To: catalin.marinas, will
Cc: ryan.roberts, rppt, shijie, yang, linux-arm-kernel, linux-kernel,
Dev Jain
Consider the following code path:
(1) vmalloc -> (2) set_vm_flush_reset_perms -> (3) set_memory_ro/set_memory_rox
-> .... (4) use the mapping .... -> (5) vfree -> (6) vm_reset_perms
-> (7) set_area_direct_map.
Or, it may happen that we encounter failure at (3) and directly jump to (5).
In both cases, (7) may fail due to linear map split failure. But, we care
about its success *only* for the region which got successfully changed by
(3). Such a region is guaranteed to be pte-mapped.
The TLDR is that (7) will surely succeed for the regions we care about.
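The same path as a hedged code fragment (buf and the 4M size are
illustrative), with the step numbers from above:

buf = vmalloc(SZ_4M);                                         /* (1) */
set_vm_flush_reset_perms(buf);                                /* (2) */
err = set_memory_rox((unsigned long)buf, SZ_4M / PAGE_SIZE);  /* (3) */
if (err)
        goto out;                                /* straight to (5) */
/* (4) use the mapping ... */
out:
vfree(buf);     /* (5), which runs (6) vm_reset_perms() and hence
                   (7) set_area_direct_map() */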
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
arch/arm64/mm/pageattr.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index b4ea86cd3a71..dc05f06a47f2 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -185,6 +185,15 @@ static int change_memory_common(unsigned long addr, int numpages,
          */
         if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
                             pgprot_val(clear_mask) == PTE_RDONLY)) {
+                /*
+                 * Note: One may wonder what happens if the calls to
+                 * set_area_direct_map() in vm_reset_perms() fail with -ENOMEM
+                 * due to linear map split failure. Observe that we need those
+                 * calls to succeed *only* for the region whose permissions
+                 * are not default. Such a region is guaranteed to be
+                 * pte-mapped, because the call below can change those
+                 * permissions to non-default only after splitting that region.
+                 */
                 for (i = 0; i < area->nr_pages; i++) {
                         ret = __change_memory_common((u64)page_address(area->pages[i]),
                                                PAGE_SIZE, set_mask, clear_mask);
--
2.30.2
^ permalink raw reply related [flat|nested] 16+ messages in thread
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-12 6:27 ` [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common Dev Jain
@ 2025-11-12 22:27 ` Yang Shi
2025-11-19 18:06 ` Catalin Marinas
2025-11-13 11:55 ` Ryan Roberts
2025-11-19 18:03 ` Catalin Marinas
2 siblings, 1 reply; 16+ messages in thread
From: Yang Shi @ 2025-11-12 22:27 UTC (permalink / raw)
To: Dev Jain, catalin.marinas, will
Cc: ryan.roberts, rppt, shijie, linux-arm-kernel, linux-kernel
On 11/11/25 10:27 PM, Dev Jain wrote:
> The rodata=on security measure requires that any code path which does
> vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
> too. Therefore, if such a call fails, we must abort set_memory_* and caller
> must take appropriate action; currently we are suppressing the error, and
> there is a real chance of such an error arising post commit a166563e7ec3
> ("arm64: mm: support large block mapping when rodata=full"). Therefore,
> propagate any error to the caller.
>
> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> Signed-off-by: Dev Jain <dev.jain@arm.com>
Thanks for fixing this. My old patches propagated the return value of the
page table split, which was done outside __change_memory_common(), but
that got missed in the final patches.
The fix looks good to me. Reviewed-by: Yang Shi
<yang@os.amperecomputing.com>
Yang
> ---
> v1 of this patch: https://lore.kernel.org/all/20251103061306.82034-1-dev.jain@arm.com/
> I have dropped stable since there was no real chance of failure.
>
> arch/arm64/mm/pageattr.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 5135f2d66958..b4ea86cd3a71 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
> unsigned long size = PAGE_SIZE * numpages;
> unsigned long end = start + size;
> struct vm_struct *area;
> + int ret;
> int i;
>
> if (!PAGE_ALIGNED(addr)) {
> @@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
> pgprot_val(clear_mask) == PTE_RDONLY)) {
> for (i = 0; i < area->nr_pages; i++) {
> - __change_memory_common((u64)page_address(area->pages[i]),
> + ret = __change_memory_common((u64)page_address(area->pages[i]),
> PAGE_SIZE, set_mask, clear_mask);
> + if (ret)
> + return ret;
> }
> }
>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 2/2] arm64/mm: Document why linear map split failure upon vm_reset_perms is not problematic
2025-11-12 6:27 ` [PATCH 2/2] arm64/mm: Document why linear map split failure upon vm_reset_perms is not problematic Dev Jain
@ 2025-11-12 22:34 ` Yang Shi
2025-11-13 12:25 ` Ryan Roberts
1 sibling, 0 replies; 16+ messages in thread
From: Yang Shi @ 2025-11-12 22:34 UTC (permalink / raw)
To: Dev Jain, catalin.marinas, will
Cc: ryan.roberts, rppt, shijie, linux-arm-kernel, linux-kernel
On 11/11/25 10:27 PM, Dev Jain wrote:
> Consider the following code path:
>
> (1) vmalloc -> (2) set_vm_flush_reset_perms -> (3) set_memory_ro/set_memory_rox
> -> .... (4) use the mapping .... -> (5) vfree -> (6) vm_reset_perms
> -> (7) set_area_direct_map.
> Or, it may happen that we encounter failure at (3) and directly jump to (5).
>
> In both cases, (7) may fail due to linear map split failure. But, we care
> about its success *only* for the region which got successfully changed by
> (3). Such a region is guaranteed to be pte-mapped.
>
> The TLDR is that (7) will surely succeed for the regions we care about.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
Thanks for documenting this. Looks good to me. Reviewed-by: Yang Shi
<yang@os.amperecomputing.com>
Yang
> ---
> arch/arm64/mm/pageattr.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index b4ea86cd3a71..dc05f06a47f2 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -185,6 +185,15 @@ static int change_memory_common(unsigned long addr, int numpages,
> */
> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
> pgprot_val(clear_mask) == PTE_RDONLY)) {
> + /*
> + * Note: One may wonder what happens if the calls to
> + * set_area_direct_map() in vm_reset_perms() fail with -ENOMEM
> + * due to linear map split failure. Observe that we need those
> + * calls to succeed *only* for the region whose permissions
> + * are not default. Such a region is guaranteed to be
> + * pte-mapped, because the call below can change those
> + * permissions to non-default only after splitting that region.
> + */
> for (i = 0; i < area->nr_pages; i++) {
> ret = __change_memory_common((u64)page_address(area->pages[i]),
> PAGE_SIZE, set_mask, clear_mask);
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-12 6:27 ` [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common Dev Jain
2025-11-12 22:27 ` Yang Shi
@ 2025-11-13 11:55 ` Ryan Roberts
2025-11-13 17:38 ` Yang Shi
2025-11-24 15:11 ` Will Deacon
2025-11-19 18:03 ` Catalin Marinas
2 siblings, 2 replies; 16+ messages in thread
From: Ryan Roberts @ 2025-11-13 11:55 UTC (permalink / raw)
To: Dev Jain, catalin.marinas, will
Cc: rppt, shijie, yang, linux-arm-kernel, linux-kernel
On 12/11/2025 06:27, Dev Jain wrote:
> The rodata=on security measure requires that any code path which does
> vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
> too. Therefore, if such a call fails, we must abort set_memory_* and caller
> must take appropriate action; currently we are suppressing the error, and
> there is a real chance of such an error arising post commit a166563e7ec3
> ("arm64: mm: support large block mapping when rodata=full"). Therefore,
> propagate any error to the caller.
>
> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> Signed-off-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
It would be good to get this into v6.18 I guess?
Although I think this will conflict with a patch from Yang that makes this work
with a partial vm area range - But I think that one will only go to v6.19.
Thanks,
Ryan
> ---
> v1 of this patch: https://lore.kernel.org/all/20251103061306.82034-1-dev.jain@arm.com/
> I have dropped stable since there was no real chance of failure.
>
> arch/arm64/mm/pageattr.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 5135f2d66958..b4ea86cd3a71 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
> unsigned long size = PAGE_SIZE * numpages;
> unsigned long end = start + size;
> struct vm_struct *area;
> + int ret;
> int i;
>
> if (!PAGE_ALIGNED(addr)) {
> @@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
> pgprot_val(clear_mask) == PTE_RDONLY)) {
> for (i = 0; i < area->nr_pages; i++) {
> - __change_memory_common((u64)page_address(area->pages[i]),
> + ret = __change_memory_common((u64)page_address(area->pages[i]),
> PAGE_SIZE, set_mask, clear_mask);
> + if (ret)
> + return ret;
> }
> }
>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 2/2] arm64/mm: Document why linear map split failure upon vm_reset_perms is not problematic
2025-11-12 6:27 ` [PATCH 2/2] arm64/mm: Document why linear map split failure upon vm_reset_perms is not problematic Dev Jain
2025-11-12 22:34 ` Yang Shi
@ 2025-11-13 12:25 ` Ryan Roberts
1 sibling, 0 replies; 16+ messages in thread
From: Ryan Roberts @ 2025-11-13 12:25 UTC (permalink / raw)
To: Dev Jain, catalin.marinas, will
Cc: rppt, shijie, yang, linux-arm-kernel, linux-kernel
On 12/11/2025 06:27, Dev Jain wrote:
> Consider the following code path:
>
> (1) vmalloc -> (2) set_vm_flush_reset_perms -> (3) set_memory_ro/set_memory_rox
> -> .... (4) use the mapping .... -> (5) vfree -> (6) vm_reset_perms
> -> (7) set_area_direct_map.
> Or, it may happen that we encounter failure at (3) and directly jump to (5).
>
> In both cases, (7) may fail due to linear map split failure. But, we care
> about its success *only* for the region which got successfully changed by
> (3). Such a region is guaranteed to be pte-mapped.
I keep tripping over this bit because I keep forgetting that even if the vmalloc
region whose permissions are being changed contains a huge mapping, we still
change the linear map page-by-page. So that portion of the linear map is
guaranteed to be pte-mapped.
>
> The TLDR is that (7) will surely succeed for the regions we care about.
Apologies, we have definitely discussed this before, but I still can't quite
convince myself. Consider this:
static void vm_reset_perms(struct vm_struct *area)
{
        ...
        /*
         * Set direct map to something invalid so that it won't be cached if
         * there are any accesses after the TLB flush, then flush the TLB and
         * reset the direct map permissions to the default.
         */
        set_area_direct_map(area, set_direct_map_invalid_noflush);
        _vm_unmap_aliases(start, end, flush_dmap);
        set_area_direct_map(area, set_direct_map_default_noflush);
}
If we have a situation where a region of 4M is allocated with vmalloc, which is
PMD-mapped, then only the second 2M has its permissions changed by
set_memory_ro[x], we end up with the first 2M not pte-mapped in the linear map
with default permissions and the second 2M pte-mapped in the linear with
non-default permissions.
The above code tries to set the whole vm area to invalid in the linear map
before issuing a TLB flush. This could fail for the first half of the area if we
are unable to allocate the PTE table needed to split down to PTEs (since
set_direct_map_invalid_noflush() is called page-by-page). So we end up with the
first half of the region valid with default permissions in the linear map and
the second half invalid when we do the TLB flush.
(The above code does it this way to a) reduce the number of TLB flushes to the
minimum - the same one covers both the linear map and the vmalloc invalidation,
and b) to ensure there is no window in which memory has both X and W aliases.)
Given it is guaranteed that, if any of the linear map is still valid, its
permissions are the default ones and the vmalloc alias was never changed to
non-default permissions for that part, I agree this is safe.
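Laying out the linear map state in that 4M example (offsets illustrative):

/*
 * Linear map alias of the 4M area after set_memory_ro() on its second half:
 *
 *   [0M, 2M): still PMD block-mapped, default RW permissions
 *   [2M, 4M): PTE-mapped (split done by set_memory_ro()), RO
 *
 * set_direct_map_invalid_noflush() may then fail with -ENOMEM on [0M, 2M)
 * (a split is still needed there), but cannot fail on [2M, 4M), which is
 * the only range whose permissions actually need resetting.
 */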
I've convinced myself that this is all safe and correct. Sorry for the babble:
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
> arch/arm64/mm/pageattr.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index b4ea86cd3a71..dc05f06a47f2 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -185,6 +185,15 @@ static int change_memory_common(unsigned long addr, int numpages,
> */
> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
> pgprot_val(clear_mask) == PTE_RDONLY)) {
> + /*
> + * Note: One may wonder what happens if the calls to
> + * set_area_direct_map() in vm_reset_perms() fail with -ENOMEM
> + * due to linear map split failure. Observe that we need those
> + * calls to succeed *only* for the region whose permissions
> + * are not default. Such a region is guaranteed to be
> + * pte-mapped, because the call below can change those
> + * permissions to non-default only after splitting that region.
> + */
> for (i = 0; i < area->nr_pages; i++) {
> ret = __change_memory_common((u64)page_address(area->pages[i]),
> PAGE_SIZE, set_mask, clear_mask);
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-13 11:55 ` Ryan Roberts
@ 2025-11-13 17:38 ` Yang Shi
2025-11-24 15:11 ` Will Deacon
1 sibling, 0 replies; 16+ messages in thread
From: Yang Shi @ 2025-11-13 17:38 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, catalin.marinas, will
Cc: rppt, shijie, linux-arm-kernel, linux-kernel
On 11/13/25 3:55 AM, Ryan Roberts wrote:
> On 12/11/2025 06:27, Dev Jain wrote:
>> The rodata=on security measure requires that any code path which does
>> vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
>> too. Therefore, if such a call fails, we must abort set_memory_* and caller
>> must take appropriate action; currently we are suppressing the error, and
>> there is a real chance of such an error arising post commit a166563e7ec3
>> ("arm64: mm: support large block mapping when rodata=full"). Therefore,
>> propagate any error to the caller.
>>
>> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
>
> It would be good to get this into v6.18 I guess?
Yeah, agreed.
>
> Although I think this will conflict with a patch from Yang that makes this work
> with a partial vm area range - But I think that one will only go to v6.19.
Yes, it will conflict, but it is quite easy to resolve. I will rebase my
patch on top of it once it is picked up.
Thanks,
Yang
>
> Thanks,
> Ryan
>
>> ---
>> v1 of this patch: https://lore.kernel.org/all/20251103061306.82034-1-dev.jain@arm.com/
>> I have dropped stable since there was no real chance of failure.
>>
>> arch/arm64/mm/pageattr.c | 5 ++++-
>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 5135f2d66958..b4ea86cd3a71 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
>> unsigned long size = PAGE_SIZE * numpages;
>> unsigned long end = start + size;
>> struct vm_struct *area;
>> + int ret;
>> int i;
>>
>> if (!PAGE_ALIGNED(addr)) {
>> @@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
>> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>> pgprot_val(clear_mask) == PTE_RDONLY)) {
>> for (i = 0; i < area->nr_pages; i++) {
>> - __change_memory_common((u64)page_address(area->pages[i]),
>> + ret = __change_memory_common((u64)page_address(area->pages[i]),
>> PAGE_SIZE, set_mask, clear_mask);
>> + if (ret)
>> + return ret;
>> }
>> }
>>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-12 6:27 ` [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common Dev Jain
2025-11-12 22:27 ` Yang Shi
2025-11-13 11:55 ` Ryan Roberts
@ 2025-11-19 18:03 ` Catalin Marinas
2025-11-20 4:10 ` Dev Jain
2 siblings, 1 reply; 16+ messages in thread
From: Catalin Marinas @ 2025-11-19 18:03 UTC (permalink / raw)
To: Dev Jain
Cc: will, ryan.roberts, rppt, shijie, yang, linux-arm-kernel,
linux-kernel
On Wed, Nov 12, 2025 at 11:57:15AM +0530, Dev Jain wrote:
> The rodata=on security measure requires that any code path which does
> vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
> too. Therefore, if such a call fails, we must abort set_memory_* and caller
> must take appropriate action; currently we are suppressing the error, and
> there is a real chance of such an error arising post commit a166563e7ec3
> ("arm64: mm: support large block mapping when rodata=full"). Therefore,
> propagate any error to the caller.
>
> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
> v1 of this patch: https://lore.kernel.org/all/20251103061306.82034-1-dev.jain@arm.com/
> I have dropped stable since there was no real chance of failure.
I couldn't figure out from the comments on v1 whether Will's concern was
addressed:
https://lore.kernel.org/all/aQn4EwKar66UZ7rz@willie-the-truck/
IOW, is the patch necessary? What are the failure scenarios and what
does the caller do? It's good to propagate an error to the caller, but the
patch also changes the current behaviour by bailing out earlier.
--
Catalin
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-12 22:27 ` Yang Shi
@ 2025-11-19 18:06 ` Catalin Marinas
2025-11-19 19:33 ` Yang Shi
0 siblings, 1 reply; 16+ messages in thread
From: Catalin Marinas @ 2025-11-19 18:06 UTC (permalink / raw)
To: Yang Shi
Cc: Dev Jain, will, ryan.roberts, rppt, shijie, linux-arm-kernel,
linux-kernel
On Wed, Nov 12, 2025 at 02:27:49PM -0800, Yang Shi wrote:
> The fix looks good to me. Reviewed-by: Yang Shi
> <yang@os.amperecomputing.com>
BTW, I think tools like b4, which I use to apply patches, will ignore the
Reviewed-by tag as it's not on a separate line (I haven't checked the
latest b4 version but I've seen it happen in the past).
--
Catalin
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-19 18:06 ` Catalin Marinas
@ 2025-11-19 19:33 ` Yang Shi
0 siblings, 0 replies; 16+ messages in thread
From: Yang Shi @ 2025-11-19 19:33 UTC (permalink / raw)
To: Catalin Marinas
Cc: Dev Jain, will, ryan.roberts, rppt, shijie, linux-arm-kernel,
linux-kernel
On 11/19/25 10:06 AM, Catalin Marinas wrote:
> On Wed, Nov 12, 2025 at 02:27:49PM -0800, Yang Shi wrote:
>> The fix looks good to me. Reviewed-by: Yang Shi
>> <yang@os.amperecomputing.com>
> BTW, I think tools like b4, which I use to apply patches, will ignore the
> Reviewed-by tag as it's not on a separate line (I haven't checked the
> latest b4 version but I've seen it happen in the past).
Thank you for the reminder. I will put my tag on a separate line next time.
Yang
>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-19 18:03 ` Catalin Marinas
@ 2025-11-20 4:10 ` Dev Jain
0 siblings, 0 replies; 16+ messages in thread
From: Dev Jain @ 2025-11-20 4:10 UTC (permalink / raw)
To: Catalin Marinas
Cc: will, ryan.roberts, rppt, shijie, yang, linux-arm-kernel,
linux-kernel
On 19/11/25 11:33 pm, Catalin Marinas wrote:
> On Wed, Nov 12, 2025 at 11:57:15AM +0530, Dev Jain wrote:
>> The rodata=on security measure requires that any code path which does
>> vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
>> too. Therefore, if such a call fails, we must abort set_memory_* and caller
>> must take appropriate action; currently we are suppressing the error, and
>> there is a real chance of such an error arising post commit a166563e7ec3
>> ("arm64: mm: support large block mapping when rodata=full"). Therefore,
>> propagate any error to the caller.
>>
>> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>> ---
>> v1 of this patch: https://lore.kernel.org/all/20251103061306.82034-1-dev.jain@arm.com/
>> I have dropped stable since there was no real chance of failure.
> I couldn't figure out from the comments on v1 whether Will's concern was
> addressed:
>
> https://lore.kernel.org/all/aQn4EwKar66UZ7rz@willie-the-truck/
>
> IOW, is the patch necessary? What are the failure scenarios and what
The patch is necessary because rodata=on requires us to proceed with
set_memory_* *only* if the linear map alias permission change was successful.
If we go ahead with set_memory_* anyway, we end up with memory in the RO
state while the linear map alias stays RW, defeating the purpose of
rodata=on.
> does the caller do? It's good to propagate an error to the caller, but the
> patch also changes the current behaviour by bailing out earlier.
The caller gets to know that set_memory_* failed. Since we may have bailed
out partway through the linear map alias permission change, we must also
ensure that the affected memory is changed back to RW. vm_reset_perms()
ensures that, and the change back to RW is *guaranteed* to succeed in the
calls to set_area_direct_map() in vm_reset_perms(): as explained in the
next patch, since that memory is non-RW, it is guaranteed to be
pte-mapped.
Now, it is the responsibility of the caller to set VM_FLUSH_RESET_PERMS
*before* it does set_memory_ro/set_memory_rox. If it doesn't, the code is
broken as is (Ryan noted earlier that this is a fragility in the vmalloc
API and we should do something about it).
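A hedged sketch of the fragile ordering being referred to (buf and npages
are illustrative):

/* Broken pattern: flag set only after the permission change. */
err = set_memory_rox((unsigned long)buf, npages);
if (err) {
        vfree(buf);     /* VM_FLUSH_RESET_PERMS not set yet, so
                           vm_reset_perms() never runs and any partial
                           linear map change is left behind */
        return err;
}
set_vm_flush_reset_perms(buf);  /* too late to cover the error path above */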
>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-13 11:55 ` Ryan Roberts
2025-11-13 17:38 ` Yang Shi
@ 2025-11-24 15:11 ` Will Deacon
2025-11-24 18:09 ` Ryan Roberts
1 sibling, 1 reply; 16+ messages in thread
From: Will Deacon @ 2025-11-24 15:11 UTC (permalink / raw)
To: Ryan Roberts
Cc: Dev Jain, catalin.marinas, rppt, shijie, yang, linux-arm-kernel,
linux-kernel
On Thu, Nov 13, 2025 at 11:55:48AM +0000, Ryan Roberts wrote:
> On 12/11/2025 06:27, Dev Jain wrote:
> > The rodata=on security measure requires that any code path which does
> > vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
> > too. Therefore, if such a call fails, we must abort set_memory_* and caller
> > must take appropriate action; currently we are suppressing the error, and
> > there is a real chance of such an error arising post commit a166563e7ec3
> > ("arm64: mm: support large block mapping when rodata=full"). Therefore,
> > propagate any error to the caller.
> >
> > Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> > Signed-off-by: Dev Jain <dev.jain@arm.com>
>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
>
> It would be good to get this into v6.18 I guess?
I'm not sure I see the urgency. When the commit message says:
"there is a real chance of such an error arising post commit a166563e7ec3"
afaict that's either due to -ENOMEM or some hideous issue with the
page-tables (e.g. the -EINVALs in pageattr_pXd_entry() seem completely
unnecessary to me).
Do you think failure is actually likely and recoverable?
Will
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-24 15:11 ` Will Deacon
@ 2025-11-24 18:09 ` Ryan Roberts
2025-11-24 19:10 ` Will Deacon
0 siblings, 1 reply; 16+ messages in thread
From: Ryan Roberts @ 2025-11-24 18:09 UTC (permalink / raw)
To: Will Deacon
Cc: Dev Jain, catalin.marinas, rppt, shijie, yang, linux-arm-kernel,
linux-kernel
On 24/11/2025 15:11, Will Deacon wrote:
> On Thu, Nov 13, 2025 at 11:55:48AM +0000, Ryan Roberts wrote:
>> On 12/11/2025 06:27, Dev Jain wrote:
>>> The rodata=on security measure requires that any code path which does
>>> vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
>>> too. Therefore, if such a call fails, we must abort set_memory_* and caller
>>> must take appropriate action; currently we are suppressing the error, and
>>> there is a real chance of such an error arising post commit a166563e7ec3
>>> ("arm64: mm: support large block mapping when rodata=full"). Therefore,
>>> propagate any error to the caller.
>>>
>>> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>
>> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
>>
>> It would be good to get this into v6.18 I guess?
>
> I'm not sure I see the urgency. When the commit message says:
>
> "there is a real chance of such an error arising post commit a166563e7ec3"
>
> afaict that's either due to -ENOMEM
Yes this is the main risk; failing to allocate an intermediate page table during
split_kernel_leaf_mapping(). I have no idea how likely that is in production.
The only data point I have is that for the theoretical memory hotplug -ENOMEM
that Chaitanya and Linu fixed recently, we tried to provoke it by hotplugging a
lot of memory on a system under high memory pressure; no matter what we did, we
couldn't get that 4K pgtable allocation to fail. So on that basis, I think we
are unlikely to see it.
> or some hideous issue with the
> page-tables (e.g. the -EINVALs in pageattr_pXd_entry() seem completely
> unnecessary to me).
I thought the -EINVALs were trying to catch the case where someone tries to set
permissions on a sub-portion of a vmalloc_huge() area on a system that doesn't
support BBML2_NOABORT. But looking again, we already refuse vmalloc_huge in
change_memory_common.
>
> Do you think failure is actually likely and recoverable?
As above, I think failure is unlikely, but not impossible. I guess the result
would be memory that remains RW when it should have been set RO or RX. But I
think it will all work itself out at vfree. I think.
We can defer this to next cycle if you prefer.
Thanks,
Ryan
>
> Will
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-24 18:09 ` Ryan Roberts
@ 2025-11-24 19:10 ` Will Deacon
0 siblings, 0 replies; 16+ messages in thread
From: Will Deacon @ 2025-11-24 19:10 UTC (permalink / raw)
To: Ryan Roberts
Cc: Dev Jain, catalin.marinas, rppt, shijie, yang, linux-arm-kernel,
linux-kernel
On Mon, Nov 24, 2025 at 06:09:31PM +0000, Ryan Roberts wrote:
> On 24/11/2025 15:11, Will Deacon wrote:
> > On Thu, Nov 13, 2025 at 11:55:48AM +0000, Ryan Roberts wrote:
> >> On 12/11/2025 06:27, Dev Jain wrote:
> >>> The rodata=on security measure requires that any code path which does
> >>> vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
> >>> too. Therefore, if such a call fails, we must abort set_memory_* and caller
> >>> must take appropriate action; currently we are suppressing the error, and
> >>> there is a real chance of such an error arising post commit a166563e7ec3
> >>> ("arm64: mm: support large block mapping when rodata=full"). Therefore,
> >>> propagate any error to the caller.
> >>>
> >>> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> >>> Signed-off-by: Dev Jain <dev.jain@arm.com>
> >>
> >> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
> >>
> >> It would be good to get this into v6.18 I guess?
> >
> > I'm not sure I see the urgency. When the commit message says:
> >
> > "there is a real chance of such an error arising post commit a166563e7ec3"
> >
> > afaict that's either due to -ENOMEM
>
> Yes this is the main risk; failing to allocate an intermediate page table during
> split_kernel_leaf_mapping(). I have no idea how likely that is in production.
> The only data point I have is that for the theoretical memory hotplug -ENOMEM
> that Chaitanya and Linu fixed recently, we tried to provoke it by hotplugging a
> lot of memory on a system under high memory pressure; no matter what we did, we
> couldn't get that 4K pgtable allocation to fail. So on that basis, I think we
> are unlikely to see it.
>
> > or some hideous issue with the
> > page-tables (e.g. the -EINVALs in pageattr_pXd_entry() seem completely
> > unnecessary to me).
>
> I thought the -EINVALs were trying to catch the case where someone tries to set
> permissions on a sub-portion of a vmalloc_huge() area on a system that doesn't
> support BBML2_NOABORT. But looking again, we already refuse vmalloc_huge in
> change_memory_common.
>
> >
> > Do you think failure is actually likely and recoverable?
>
> As above, I think failure is unlikely, but not impossible. I guess the result
> would be memory that remains RW when it should have been set RO or RX. But I
> think it will all work itself out at vfree. I think.
>
> We can defer this to next cycle if you prefer.
Yeah, I think we have bigger problems if we can't find memory to allocate
page tables tbh.
Will
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 0/2] arm64/mm: A fix and a documentation bit for change_memory_common()
2025-11-12 6:27 [PATCH 0/2] arm64/mm: A fix and a documentation bit for change_memory_common() Dev Jain
2025-11-12 6:27 ` [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common Dev Jain
2025-11-12 6:27 ` [PATCH 2/2] arm64/mm: Document why linear map split failure upon vm_reset_perms is not problematic Dev Jain
@ 2025-11-28 15:52 ` Catalin Marinas
2 siblings, 0 replies; 16+ messages in thread
From: Catalin Marinas @ 2025-11-28 15:52 UTC (permalink / raw)
To: will, Dev Jain
Cc: ryan.roberts, rppt, shijie, yang, linux-arm-kernel, linux-kernel
On Wed, 12 Nov 2025 11:57:14 +0530, Dev Jain wrote:
> The first patch is a fix to prevent suppression of an error encountered during
> the linear map alias permission change. This is needed because the rodata=on
> security measure requires us to proceed only if the linear map alias
> protection change succeeds first.
>
> The second patch documents why linear map split failure during the vfreeing
> path isn't problematic.
>
> [...]
Applied to arm64 (for-next/set_memory), thanks!
[1/2] arm64/pageattr: Propagate return value from __change_memory_common
https://git.kernel.org/arm64/c/e5efd56fa157
[2/2] arm64/mm: Document why linear map split failure upon vm_reset_perms is not problematic
https://git.kernel.org/arm64/c/0c2988aaa4d3
--
Catalin
^ permalink raw reply [flat|nested] 16+ messages in thread