public inbox for linux-arm-kernel@lists.infradead.org
* [v2 PATCH] arm64: mm: make linear mapping permission update more robust for partial range
@ 2025-10-23 20:44 Yang Shi
  2025-11-10 23:08 ` Yang Shi
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Yang Shi @ 2025-10-23 20:44 UTC (permalink / raw)
  To: ryan.roberts, dev.jain, cl, catalin.marinas, will
  Cc: yang, linux-arm-kernel, linux-kernel

The commit fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing
permissions for vmalloc_huge mappings") made permission updates for
partial ranges more robust. But the linear mapping permission update
still assumes the whole area is updated, iterating from the first page
of the area all the way to its last page.

Make it more robust by starting the linear mapping permission update at
the page mapped by the start address, and updating only numpages pages.

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
v2: * Dropped the fixes tag per Ryan and Dev
    * Simplified the loop per Dev
    * Collected R-bs

 arch/arm64/mm/pageattr.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 5135f2d66958..08ac96b9f846 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,7 +148,6 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long size = PAGE_SIZE * numpages;
 	unsigned long end = start + size;
 	struct vm_struct *area;
-	int i;
 
 	if (!PAGE_ALIGNED(addr)) {
 		start &= PAGE_MASK;
@@ -184,8 +183,9 @@ static int change_memory_common(unsigned long addr, int numpages,
 	 */
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
-		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+		unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
+		for (; numpages; idx++, numpages--) {
+			__change_memory_common((u64)page_address(area->pages[idx]),
 					       PAGE_SIZE, set_mask, clear_mask);
 		}
 	}
-- 
2.47.0





Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-10-23 20:44 [v2 PATCH] arm64: mm: make linear mapping permission update more robust for partial range Yang Shi
2025-11-10 23:08 ` Yang Shi
2025-11-13 18:59 ` Catalin Marinas
2025-11-18 16:41 ` Nathan Chancellor
2025-11-18 17:35   ` Yang Shi
2025-11-18 23:07     ` Nathan Chancellor
2025-11-18 23:34       ` Yang Shi
