public inbox for linux-arm-kernel@lists.infradead.org
* [RFC PATCH] arm64: mm: support set_memory_encrypted/decrypted for vmalloc addresses
@ 2026-04-06 21:33 Kameron Carr
  0 siblings, 0 replies; only message in thread
From: Kameron Carr @ 2026-04-06 21:33 UTC (permalink / raw)
  To: catalin.marinas, will
  Cc: suzuki.poulose, steven.price, ryan.roberts, dev.jain, yang,
	shijie, kevin.brodsky, linux-arm-kernel, linux-kernel

Currently __set_memory_enc_dec() only handles linear map (lm) addresses
and returns -EINVAL for anything else. This means callers using
vmalloc'd buffers cannot mark memory as shared/protected with the RMM
via set_memory_decrypted()/set_memory_encrypted().
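
For context, the caller pattern this enables would look roughly like the
sketch below (hypothetical driver code, not part of this patch; error
handling abbreviated):

```c
/* Hypothetical caller: share a vmalloc'd buffer with the untrusted host. */
void *buf = vmalloc(PAGE_ALIGN(len));

if (!buf)
	return -ENOMEM;

/* Before this patch, a vmalloc address here gets -EINVAL. */
ret = set_memory_decrypted((unsigned long)buf,
			   PAGE_ALIGN(len) >> PAGE_SHIFT);
if (ret) {
	vfree(buf);
	return ret;
}
```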

Extend the implementation to handle vmalloc (non-linear-map) addresses
by introducing __set_va_addr_enc_dec(). For vmalloc addresses the
backing pages are not physically contiguous, so the function walks the
vm_area's pages array and issues a per-page RSI call to transition each
page between the shared and protected states.

The original linear-map path is factored out into __set_lm_addr_enc_dec(),
and __set_memory_enc_dec() now dispatches to the appropriate helper based
on whether the address is a linear map address.

Signed-off-by: Kameron Carr <kameroncarr@linux.microsoft.com>
---
 arch/arm64/mm/pageattr.c | 74 +++++++++++++++++++++++++++++++++++-----
 1 file changed, 65 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index ce035e1b4eaf..45058f61b957 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -275,20 +275,12 @@ int set_direct_map_default_noflush(struct page *page)
 				 PAGE_SIZE, set_mask, clear_mask);
 }
 
-static int __set_memory_enc_dec(unsigned long addr,
-				int numpages,
-				bool encrypt)
+static int __set_lm_addr_enc_dec(unsigned long addr, int numpages, bool encrypt)
 {
 	unsigned long set_prot = 0, clear_prot = 0;
 	phys_addr_t start, end;
 	int ret;
 
-	if (!is_realm_world())
-		return 0;
-
-	if (!__is_lm_address(addr))
-		return -EINVAL;
-
 	start = __virt_to_phys(addr);
 	end = start + numpages * PAGE_SIZE;
 
@@ -321,6 +313,70 @@ static int __set_memory_enc_dec(unsigned long addr,
 				      __pgprot(PTE_PRESENT_INVALID));
 }
 
+static int __set_va_addr_enc_dec(unsigned long addr, int numpages, bool encrypt)
+{
+	unsigned long set_prot = 0, clear_prot = 0, start_idx;
+	struct vm_struct *area;
+	int i, ret;
+
+	if (encrypt)
+		clear_prot = PROT_NS_SHARED;
+	else
+		set_prot = PROT_NS_SHARED;
+
+	area = find_vm_area((void *)addr);
+	if (!area)
+		return -EINVAL;
+
+	start_idx = ((unsigned long)kasan_reset_tag((void *)addr) -
+		     (unsigned long)kasan_reset_tag(area->addr)) >>
+		    PAGE_SHIFT;
+
+	if (start_idx + numpages > area->nr_pages)
+		return -EINVAL;
+
+	/*
+	 * Break the mapping before we make any changes to avoid stale TLB
+	 * entries or Synchronous External Aborts caused by RIPAS_EMPTY
+	 */
+	ret = change_memory_common(addr, numpages,
+		__pgprot(set_prot | PTE_PRESENT_INVALID),
+		__pgprot(clear_prot | PTE_PRESENT_VALID_KERNEL));
+
+	if (ret)
+		return ret;
+
+	for (i = 0; i < numpages; i++) {
+		struct page *page = area->pages[start_idx + i];
+		phys_addr_t phys = page_to_phys(page);
+
+		if (encrypt) {
+			ret = rsi_set_memory_range_protected(phys,
+							     phys + PAGE_SIZE);
+		} else {
+			ret = rsi_set_memory_range_shared(phys,
+							  phys + PAGE_SIZE);
+		}
+		if (ret)
+			return ret;
+	}
+
+	return change_memory_common(addr, numpages,
+				    __pgprot(PTE_PRESENT_VALID_KERNEL),
+				    __pgprot(PTE_PRESENT_INVALID));
+}
+
+static int __set_memory_enc_dec(unsigned long addr, int numpages, bool encrypt)
+{
+	if (!is_realm_world())
+		return 0;
+
+	if (!__is_lm_address(addr))
+		return __set_va_addr_enc_dec(addr, numpages, encrypt);
+
+	return __set_lm_addr_enc_dec(addr, numpages, encrypt);
+}
+
 static int realm_set_memory_encrypted(unsigned long addr, int numpages)
 {
 	int ret = __set_memory_enc_dec(addr, numpages, true);

base-commit: 25f6040ca43a78048466b949994b8d637fe2fe07
-- 
2.45.4
