* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-12 6:27 ` [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common Dev Jain
@ 2025-11-12 22:27 ` Yang Shi
2025-11-19 18:06 ` Catalin Marinas
2025-11-13 11:55 ` Ryan Roberts
2025-11-19 18:03 ` Catalin Marinas
2 siblings, 1 reply; 16+ messages in thread
From: Yang Shi @ 2025-11-12 22:27 UTC (permalink / raw)
To: Dev Jain, catalin.marinas, will
Cc: ryan.roberts, rppt, shijie, linux-arm-kernel, linux-kernel
On 11/11/25 10:27 PM, Dev Jain wrote:
> The rodata=on security measure requires that any code path which does
> vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
> too. Therefore, if such a call fails, we must abort set_memory_* and the caller
> must take appropriate action; currently we are suppressing the error, and
> there is a real chance of such an error arising post commit a166563e7ec3
> ("arm64: mm: support large block mapping when rodata=full"). Therefore,
> propagate any error to the caller.
>
> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> Signed-off-by: Dev Jain <dev.jain@arm.com>
Thanks for fixing this. My old patches propagated the return value of
the page table split, which was called outside __change_memory_common(),
but that got missed in the final patches.
The fix looks good to me. Reviewed-by: Yang Shi
<yang@os.amperecomputing.com>
Yang
> ---
> v1 of this patch: https://lore.kernel.org/all/20251103061306.82034-1-dev.jain@arm.com/
> I have dropped stable since there was no real chance of failure.
>
> arch/arm64/mm/pageattr.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 5135f2d66958..b4ea86cd3a71 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
> unsigned long size = PAGE_SIZE * numpages;
> unsigned long end = start + size;
> struct vm_struct *area;
> + int ret;
> int i;
>
> if (!PAGE_ALIGNED(addr)) {
> @@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
> pgprot_val(clear_mask) == PTE_RDONLY)) {
> for (i = 0; i < area->nr_pages; i++) {
> - __change_memory_common((u64)page_address(area->pages[i]),
> + ret = __change_memory_common((u64)page_address(area->pages[i]),
> PAGE_SIZE, set_mask, clear_mask);
> + if (ret)
> + return ret;
> }
> }
>
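For illustration (a hypothetical caller, not taken from this series), the
propagated return value lets a set_memory_* user abort instead of carrying
on with a writable linear map alias:

	err = set_memory_ro((unsigned long)buf, nr_pages);
	if (err)
		return err;	/* the linear map alias could not be protected */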
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-12 22:27 ` Yang Shi
@ 2025-11-19 18:06 ` Catalin Marinas
2025-11-19 19:33 ` Yang Shi
0 siblings, 1 reply; 16+ messages in thread
From: Catalin Marinas @ 2025-11-19 18:06 UTC (permalink / raw)
To: Yang Shi
Cc: Dev Jain, will, ryan.roberts, rppt, shijie, linux-arm-kernel,
linux-kernel
On Wed, Nov 12, 2025 at 02:27:49PM -0800, Yang Shi wrote:
> The fix looks good to me. Reviewed-by: Yang Shi
> <yang@os.amperecomputing.com>
BTW, I think tools like b4, which I use to apply patches, will ignore the
Reviewed-by tag as it's not on a separate line (I haven't checked the
latest b4 version, but I've seen it happen in the past).
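For example, with the tag on its own line so b4 can pick it up:

  Reviewed-by: Yang Shi <yang@os.amperecomputing.com>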
--
Catalin
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-19 18:06 ` Catalin Marinas
@ 2025-11-19 19:33 ` Yang Shi
0 siblings, 0 replies; 16+ messages in thread
From: Yang Shi @ 2025-11-19 19:33 UTC (permalink / raw)
To: Catalin Marinas
Cc: Dev Jain, will, ryan.roberts, rppt, shijie, linux-arm-kernel,
linux-kernel
On 11/19/25 10:06 AM, Catalin Marinas wrote:
> On Wed, Nov 12, 2025 at 02:27:49PM -0800, Yang Shi wrote:
>> The fix looks good to me. Reviewed-by: Yang Shi
>> <yang@os.amperecomputing.com>
> BTW, I think tools like b4, which I use to apply patches, will ignore the
> Reviewed-by tag as it's not on a separate line (I haven't checked the
> latest b4 version, but I've seen it happen in the past).
Thank you for the reminder. I will put my tag on a separate line next time.
Yang
>
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-12 6:27 ` [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common Dev Jain
2025-11-12 22:27 ` Yang Shi
@ 2025-11-13 11:55 ` Ryan Roberts
2025-11-13 17:38 ` Yang Shi
2025-11-24 15:11 ` Will Deacon
2025-11-19 18:03 ` Catalin Marinas
2 siblings, 2 replies; 16+ messages in thread
From: Ryan Roberts @ 2025-11-13 11:55 UTC (permalink / raw)
To: Dev Jain, catalin.marinas, will
Cc: rppt, shijie, yang, linux-arm-kernel, linux-kernel
On 12/11/2025 06:27, Dev Jain wrote:
> The rodata=on security measure requires that any code path which does
> vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
> too. Therefore, if such a call fails, we must abort set_memory_* and the caller
> must take appropriate action; currently we are suppressing the error, and
> there is a real chance of such an error arising post commit a166563e7ec3
> ("arm64: mm: support large block mapping when rodata=full"). Therefore,
> propagate any error to the caller.
>
> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> Signed-off-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
It would be good to get this into v6.18 I guess?
Although this will conflict with a patch from Yang that makes this work
with a partial vm area range, I think that one will only go to v6.19.
Thanks,
Ryan
> ---
> v1 of this patch: https://lore.kernel.org/all/20251103061306.82034-1-dev.jain@arm.com/
> I have dropped stable since there was no real chance of failure.
>
> arch/arm64/mm/pageattr.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 5135f2d66958..b4ea86cd3a71 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
> unsigned long size = PAGE_SIZE * numpages;
> unsigned long end = start + size;
> struct vm_struct *area;
> + int ret;
> int i;
>
> if (!PAGE_ALIGNED(addr)) {
> @@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
> pgprot_val(clear_mask) == PTE_RDONLY)) {
> for (i = 0; i < area->nr_pages; i++) {
> - __change_memory_common((u64)page_address(area->pages[i]),
> + ret = __change_memory_common((u64)page_address(area->pages[i]),
> PAGE_SIZE, set_mask, clear_mask);
> + if (ret)
> + return ret;
> }
> }
>
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-13 11:55 ` Ryan Roberts
@ 2025-11-13 17:38 ` Yang Shi
2025-11-24 15:11 ` Will Deacon
1 sibling, 0 replies; 16+ messages in thread
From: Yang Shi @ 2025-11-13 17:38 UTC (permalink / raw)
To: Ryan Roberts, Dev Jain, catalin.marinas, will
Cc: rppt, shijie, linux-arm-kernel, linux-kernel
On 11/13/25 3:55 AM, Ryan Roberts wrote:
> On 12/11/2025 06:27, Dev Jain wrote:
>> The rodata=on security measure requires that any code path which does
>> vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
>> too. Therefore, if such a call fails, we must abort set_memory_* and the caller
>> must take appropriate action; currently we are suppressing the error, and
>> there is a real chance of such an error arising post commit a166563e7ec3
>> ("arm64: mm: support large block mapping when rodata=full"). Therefore,
>> propagate any error to the caller.
>>
>> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
>
> It would be good to get this into v6.18 I guess?
Yeah, agreed.
>
> Although this will conflict with a patch from Yang that makes this work
> with a partial vm area range, I think that one will only go to v6.19.
Yes, it will conflict, but it is quite easy to resolve. I will rebase my
patch on top of this one once it is picked up.
Thanks,
Yang
>
> Thanks,
> Ryan
>
>> ---
>> v1 of this patch: https://lore.kernel.org/all/20251103061306.82034-1-dev.jain@arm.com/
>> I have dropped stable since there was no real chance of failure.
>>
>> arch/arm64/mm/pageattr.c | 5 ++++-
>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 5135f2d66958..b4ea86cd3a71 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
>> unsigned long size = PAGE_SIZE * numpages;
>> unsigned long end = start + size;
>> struct vm_struct *area;
>> + int ret;
>> int i;
>>
>> if (!PAGE_ALIGNED(addr)) {
>> @@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
>> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>> pgprot_val(clear_mask) == PTE_RDONLY)) {
>> for (i = 0; i < area->nr_pages; i++) {
>> - __change_memory_common((u64)page_address(area->pages[i]),
>> + ret = __change_memory_common((u64)page_address(area->pages[i]),
>> PAGE_SIZE, set_mask, clear_mask);
>> + if (ret)
>> + return ret;
>> }
>> }
>>
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-13 11:55 ` Ryan Roberts
2025-11-13 17:38 ` Yang Shi
@ 2025-11-24 15:11 ` Will Deacon
2025-11-24 18:09 ` Ryan Roberts
1 sibling, 1 reply; 16+ messages in thread
From: Will Deacon @ 2025-11-24 15:11 UTC (permalink / raw)
To: Ryan Roberts
Cc: Dev Jain, catalin.marinas, rppt, shijie, yang, linux-arm-kernel,
linux-kernel
On Thu, Nov 13, 2025 at 11:55:48AM +0000, Ryan Roberts wrote:
> On 12/11/2025 06:27, Dev Jain wrote:
> > The rodata=on security measure requires that any code path which does
> > vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
> > too. Therefore, if such a call fails, we must abort set_memory_* and the caller
> > must take appropriate action; currently we are suppressing the error, and
> > there is a real chance of such an error arising post commit a166563e7ec3
> > ("arm64: mm: support large block mapping when rodata=full"). Therefore,
> > propagate any error to the caller.
> >
> > Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> > Signed-off-by: Dev Jain <dev.jain@arm.com>
>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
>
> It would be good to get this into v6.18 I guess?
I'm not sure I see the urgency. When the commit message says:
"there is a real chance of such an error arising post commit a166563e7ec3"
afaict that's either due to -ENOMEM or some hideous issue with the
page-tables (e.g. the -EINVALs in pageattr_pXd_entry() seem completely
unnecessary to me).
Do you think failure is actually likely and recoverable?
Will
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-24 15:11 ` Will Deacon
@ 2025-11-24 18:09 ` Ryan Roberts
2025-11-24 19:10 ` Will Deacon
0 siblings, 1 reply; 16+ messages in thread
From: Ryan Roberts @ 2025-11-24 18:09 UTC (permalink / raw)
To: Will Deacon
Cc: Dev Jain, catalin.marinas, rppt, shijie, yang, linux-arm-kernel,
linux-kernel
On 24/11/2025 15:11, Will Deacon wrote:
> On Thu, Nov 13, 2025 at 11:55:48AM +0000, Ryan Roberts wrote:
>> On 12/11/2025 06:27, Dev Jain wrote:
>>> The rodata=on security measure requires that any code path which does
>>> vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
>>> too. Therefore, if such a call fails, we must abort set_memory_* and the caller
>>> must take appropriate action; currently we are suppressing the error, and
>>> there is a real chance of such an error arising post commit a166563e7ec3
>>> ("arm64: mm: support large block mapping when rodata=full"). Therefore,
>>> propagate any error to the caller.
>>>
>>> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>
>> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
>>
>> It would be good to get this into v6.18 I guess?
>
> I'm not sure I see the urgency. When the commit message says:
>
> "there is a real chance of such an error arising post commit a166563e7ec3"
>
> afaict that's either due to -ENOMEM
Yes, this is the main risk: failing to allocate an intermediate page table during
split_kernel_leaf_mapping(). I have no idea how likely that is in production.
The only data point I have is that for the theoretical memory hotplug -ENOMEM
that Chaitanya and Linu fixed recently, we tried to provoke it by hotplugging a
lot of memory on a system under high memory pressure; no matter what we did, we
couldn't get that 4K pgtable allocation to fail. So on that basis, I think we
are unlikely to see it.
> or some hideous issue with the
> page-tables (e.g. the -EINVALs in pageattr_pXd_entry() seem completely
> unnecessary to me).
I thought the -EINVALs were trying to catch the case where someone tries to set
permissions on a sub-portion of a vmalloc_huge() area on a system that doesn't
support BBML2_NOABORT. But looking again, we already refuse vmalloc_huge in
change_memory_common.
>
> Do you think failure is actually likely and recoverable?
As above, I think failure is unlikely, but not impossible. I guess the result
would be memory that remains RW when it should have been set RO or RX. But I
think it will all work itself out at vfree. I think.
We can defer this to next cycle if you prefer.
Thanks,
Ryan
>
> Will
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-24 18:09 ` Ryan Roberts
@ 2025-11-24 19:10 ` Will Deacon
0 siblings, 0 replies; 16+ messages in thread
From: Will Deacon @ 2025-11-24 19:10 UTC (permalink / raw)
To: Ryan Roberts
Cc: Dev Jain, catalin.marinas, rppt, shijie, yang, linux-arm-kernel,
linux-kernel
On Mon, Nov 24, 2025 at 06:09:31PM +0000, Ryan Roberts wrote:
> On 24/11/2025 15:11, Will Deacon wrote:
> > On Thu, Nov 13, 2025 at 11:55:48AM +0000, Ryan Roberts wrote:
> >> On 12/11/2025 06:27, Dev Jain wrote:
> >>> The rodata=on security measure requires that any code path which does
> >>> vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
> >>> too. Therefore, if such a call fails, we must abort set_memory_* and the caller
> >>> must take appropriate action; currently we are suppressing the error, and
> >>> there is a real chance of such an error arising post commit a166563e7ec3
> >>> ("arm64: mm: support large block mapping when rodata=full"). Therefore,
> >>> propagate any error to the caller.
> >>>
> >>> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> >>> Signed-off-by: Dev Jain <dev.jain@arm.com>
> >>
> >> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
> >>
> >> It would be good to get this into v6.18 I guess?
> >
> > I'm not sure I see the urgency. When the commit message says:
> >
> > "there is a real chance of such an error arising post commit a166563e7ec3"
> >
> > afaict that's either due to -ENOMEM
>
> Yes, this is the main risk: failing to allocate an intermediate page table during
> split_kernel_leaf_mapping(). I have no idea how likely that is in production.
> The only data point I have is that for the theoretical memory hotplug -ENOMEM
> that Chaitanya and Linu fixed recently, we tried to provoke it by hotplugging a
> lot of memory on a system under high memory pressure; no matter what we did, we
> couldn't get that 4K pgtable allocation to fail. So on that basis, I think we
> are unlikely to see it.
>
> > or some hideous issue with the
> > page-tables (e.g. the -EINVALs in pageattr_pXd_entry() seem completely
> > unnecessary to me).
>
> I thought the -EINVALs were trying to catch the case where someone tries to set
> permissions on a sub-portion of a vmalloc_huge() area on a system that doesn't
> support BBML2_NOABORT. But looking again, we already refuse vmalloc_huge in
> change_memory_common.
>
> >
> > Do you think failure is actually likely and recoverable?
>
> As above, I think failure is unlikely, but not impossible. I guess the result
> would be memory that remains RW when it should have been set RO or RX. But I
> think it will all work itself out at vfree. I think.
>
> We can defer this to next cycle if you prefer.
Yeah, I think we have bigger problems if we can't find memory to allocate
page tables tbh.
Will
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-12 6:27 ` [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common Dev Jain
2025-11-12 22:27 ` Yang Shi
2025-11-13 11:55 ` Ryan Roberts
@ 2025-11-19 18:03 ` Catalin Marinas
2025-11-20 4:10 ` Dev Jain
2 siblings, 1 reply; 16+ messages in thread
From: Catalin Marinas @ 2025-11-19 18:03 UTC (permalink / raw)
To: Dev Jain
Cc: will, ryan.roberts, rppt, shijie, yang, linux-arm-kernel,
linux-kernel
On Wed, Nov 12, 2025 at 11:57:15AM +0530, Dev Jain wrote:
> The rodata=on security measure requires that any code path which does
> vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
> too. Therefore, if such a call fails, we must abort set_memory_* and the caller
> must take appropriate action; currently we are suppressing the error, and
> there is a real chance of such an error arising post commit a166563e7ec3
> ("arm64: mm: support large block mapping when rodata=full"). Therefore,
> propagate any error to the caller.
>
> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
> v1 of this patch: https://lore.kernel.org/all/20251103061306.82034-1-dev.jain@arm.com/
> I have dropped stable since no real chance of failure was there.
I couldn't figure out from the comments on v1 whether Will's concern was
addressed:
https://lore.kernel.org/all/aQn4EwKar66UZ7rz@willie-the-truck/
IOW, is the patch necessary? What are the failure scenarios and what
does the caller do? It's good to propagate an error to the caller but
the patch also changes the current behaviour by bailing out earlier.
--
Catalin
* Re: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
2025-11-19 18:03 ` Catalin Marinas
@ 2025-11-20 4:10 ` Dev Jain
0 siblings, 0 replies; 16+ messages in thread
From: Dev Jain @ 2025-11-20 4:10 UTC (permalink / raw)
To: Catalin Marinas
Cc: will, ryan.roberts, rppt, shijie, yang, linux-arm-kernel,
linux-kernel
On 19/11/25 11:33 pm, Catalin Marinas wrote:
> On Wed, Nov 12, 2025 at 11:57:15AM +0530, Dev Jain wrote:
>> The rodata=on security measure requires that any code path which does
>> vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
>> too. Therefore, if such a call fails, we must abort set_memory_* and the caller
>> must take appropriate action; currently we are suppressing the error, and
>> there is a real chance of such an error arising post commit a166563e7ec3
>> ("arm64: mm: support large block mapping when rodata=full"). Therefore,
>> propagate any error to the caller.
>>
>> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>> ---
>> v1 of this patch: https://lore.kernel.org/all/20251103061306.82034-1-dev.jain@arm.com/
>> I have dropped stable since there was no real chance of failure.
> I couldn't figure out from the comments on v1 whether Will's concern was
> addressed:
>
> https://lore.kernel.org/all/aQn4EwKar66UZ7rz@willie-the-truck/
>
> IOW, is the patch necessary? What are the failure scenarios and what
The patch is necessary because rodata=on requires us to proceed with
set_memory_* *only* if the linear map alias permission change was successful.
If we go ahead with set_memory_* anyway, we end up with memory in the
RO state while its linear map alias is still RW, defeating the purpose
of rodata=on.
> does the caller do? It's good to propagate an error to the caller but
> the patch also changes the current behaviour by bailing out earlier.
The caller will learn that set_memory_* failed. Since we may have
bailed out partway through the linear map alias permission change,
we must also ensure that the affected memory is changed back to RW.
vm_reset_perms() will ensure that, and that change back to RW is
*guaranteed* to succeed in the calls to set_area_direct_map() in
vm_reset_perms(): as explained in the next patch, since that memory
is non-RW, it is guaranteed to be pte-mapped.
Now, it is the responsibility of the caller to pass VM_FLUSH_RESET_PERMS
*before* it does set_memory_ro/set_memory_rox. If it doesn't, the code
is broken as it stands (Ryan noted earlier that this is a fragility in
the vmalloc API and we should do something about it).
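As a minimal sketch of the expected caller pattern (assuming a generic
vmalloc user; set_vm_flush_reset_perms(), set_memory_ro() and vfree() as
in mainline):

	void *buf = vmalloc(size);
	int err;

	if (!buf)
		return -ENOMEM;

	/* Must be set *before* set_memory_* so vm_reset_perms() runs at vfree() */
	set_vm_flush_reset_perms(buf);

	err = set_memory_ro((unsigned long)buf, size >> PAGE_SHIFT);
	if (err) {
		/*
		 * The linear map alias change may have failed partway;
		 * vfree() resets anything already made non-RW back to RW.
		 */
		vfree(buf);
		return err;
	}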
>