public inbox for linux-kernel@vger.kernel.org
* [RFC PATCH] arm64: mm: support set_memory_encrypted/decrypted for vmalloc addresses
@ 2026-04-06 21:33 Kameron Carr
  2026-04-10 11:06 ` Catalin Marinas
  0 siblings, 1 reply; 2+ messages in thread
From: Kameron Carr @ 2026-04-06 21:33 UTC (permalink / raw)
  To: catalin.marinas, will
  Cc: suzuki.poulose, steven.price, ryan.roberts, dev.jain, yang,
	shijie, kevin.brodsky, linux-arm-kernel, linux-kernel

Currently __set_memory_enc_dec() only handles linear map (lm) addresses
and returns -EINVAL for anything else. This means callers using
vmalloc'd buffers cannot mark memory as shared/protected with the RMM
via set_memory_decrypted()/set_memory_encrypted().

Extend the implementation to handle vmalloc (non-linear-map) addresses
by introducing __set_va_addr_enc_dec(). For vmalloc addresses, the
backing pages are not contiguous in physical address space, so the
function walks the vm_area's pages array and issues per-page RSI calls
to transition each page between the shared and protected states.

The original linear-map path is factored out into __set_lm_addr_enc_dec(),
and __set_memory_enc_dec() now dispatches to the appropriate helper based
on whether the address is a linear map address.
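
As an illustration of the caller side this enables (hypothetical driver
code, not part of this patch; buf_size and the error handling are
purely illustrative):

	void *buf = vmalloc(buf_size);
	int ret;

	if (!buf)
		return -ENOMEM;

	/* Transition the buffer to shared before handing it to the host. */
	ret = set_memory_decrypted((unsigned long)buf, buf_size >> PAGE_SHIFT);
	if (ret) {
		vfree(buf);
		return ret;
	}

	/* ... use the shared buffer ... */

	/* Transition back to protected before freeing it to the kernel. */
	ret = set_memory_encrypted((unsigned long)buf, buf_size >> PAGE_SHIFT);

Previously the set_memory_decrypted() call above would fail with
-EINVAL because buf is not a linear map address.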

Signed-off-by: Kameron Carr <kameroncarr@linux.microsoft.com>
---
 arch/arm64/mm/pageattr.c | 74 +++++++++++++++++++++++++++++++++++-----
 1 file changed, 65 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index ce035e1b4eaf..45058f61b957 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -275,20 +275,12 @@ int set_direct_map_default_noflush(struct page *page)
 				 PAGE_SIZE, set_mask, clear_mask);
 }
 
-static int __set_memory_enc_dec(unsigned long addr,
-				int numpages,
-				bool encrypt)
+static int __set_lm_addr_enc_dec(unsigned long addr, int numpages, bool encrypt)
 {
 	unsigned long set_prot = 0, clear_prot = 0;
 	phys_addr_t start, end;
 	int ret;
 
-	if (!is_realm_world())
-		return 0;
-
-	if (!__is_lm_address(addr))
-		return -EINVAL;
-
 	start = __virt_to_phys(addr);
 	end = start + numpages * PAGE_SIZE;
 
@@ -321,6 +313,70 @@ static int __set_memory_enc_dec(unsigned long addr,
 				      __pgprot(PTE_PRESENT_INVALID));
 }
 
+static int __set_va_addr_enc_dec(unsigned long addr, int numpages, bool encrypt)
+{
+	unsigned long set_prot = 0, clear_prot = 0, start_idx;
+	struct vm_struct *area;
+	int i, ret;
+
+	if (encrypt)
+		clear_prot = PROT_NS_SHARED;
+	else
+		set_prot = PROT_NS_SHARED;
+
+	area = find_vm_area((void *)addr);
+	if (!area)
+		return -EINVAL;
+
+	start_idx = ((unsigned long)kasan_reset_tag((void *)addr) -
+		     (unsigned long)kasan_reset_tag(area->addr)) >>
+		    PAGE_SHIFT;
+
+	if (start_idx + numpages > area->nr_pages)
+		return -EINVAL;
+
+	/*
+	 * Break the mapping before we make any changes to avoid stale TLB
+	 * entries or Synchronous External Aborts caused by RIPAS_EMPTY
+	 */
+	ret = change_memory_common(addr, numpages,
+		__pgprot(set_prot | PTE_PRESENT_INVALID),
+		__pgprot(clear_prot | PTE_PRESENT_VALID_KERNEL));
+
+	if (ret)
+		return ret;
+
+	for (i = 0; i < numpages; i++) {
+		struct page *page = area->pages[start_idx + i];
+		phys_addr_t phys = page_to_phys(page);
+
+		if (encrypt) {
+			ret = rsi_set_memory_range_protected(phys,
+							     phys + PAGE_SIZE);
+		} else {
+			ret = rsi_set_memory_range_shared(phys,
+							  phys + PAGE_SIZE);
+		}
+		if (ret)
+			return ret;
+	}
+
+	return change_memory_common(addr, numpages,
+				    __pgprot(PTE_PRESENT_VALID_KERNEL),
+				    __pgprot(PTE_PRESENT_INVALID));
+}
+
+static int __set_memory_enc_dec(unsigned long addr, int numpages, bool encrypt)
+{
+	if (!is_realm_world())
+		return 0;
+
+	if (!__is_lm_address(addr))
+		return __set_va_addr_enc_dec(addr, numpages, encrypt);
+
+	return __set_lm_addr_enc_dec(addr, numpages, encrypt);
+}
+
 static int realm_set_memory_encrypted(unsigned long addr, int numpages)
 {
 	int ret = __set_memory_enc_dec(addr, numpages, true);

base-commit: 25f6040ca43a78048466b949994b8d637fe2fe07
-- 
2.45.4



* Re: [RFC PATCH] arm64: mm: support set_memory_encrypted/decrypted for vmalloc addresses
  2026-04-06 21:33 [RFC PATCH] arm64: mm: support set_memory_encrypted/decrypted for vmalloc addresses Kameron Carr
@ 2026-04-10 11:06 ` Catalin Marinas
  0 siblings, 0 replies; 2+ messages in thread
From: Catalin Marinas @ 2026-04-10 11:06 UTC (permalink / raw)
  To: Kameron Carr
  Cc: will, suzuki.poulose, steven.price, ryan.roberts, dev.jain, yang,
	shijie, kevin.brodsky, linux-arm-kernel, linux-kernel

On Mon, Apr 06, 2026 at 02:33:17PM -0700, Kameron Carr wrote:
> Currently __set_memory_enc_dec() only handles linear map (lm) addresses
> and returns -EINVAL for anything else. This means callers using
> vmalloc'd buffers cannot mark memory as shared/protected with the RMM
> via set_memory_decrypted()/set_memory_encrypted().
> 
> Extend the implementation to handle vmalloc (non-linear-map) addresses
> by introducing __set_va_addr_enc_dec(). For vmalloc addresses, the
> backing pages are not contiguous in physical address space, so the
> function walks the vm_area's pages array and issues per-page RSI calls
> to transition each page between the shared and protected states.
> 
> The original linear-map path is factored out into __set_lm_addr_enc_dec(),
> and __set_memory_enc_dec() now dispatches to the appropriate helper based
> on whether the address is a linear map address.

Could you give more details about the user of set_memory_decrypted() on
vmalloc()'ed addresses? I think this came up in the past and I wondered
whether something like GFP_DECRYPTED would be simpler to implement (I
even posted a hack, though without vmalloc() support). If it is known
upfront that the memory will be decrypted, it is easier/cheaper to do
this at page allocation time: change the linear map then and just use
pgprot_decrypted() for vmap(). There is no need to rewrite the page
tables after mapping the pages.
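
Roughly this (untested sketch; GFP_DECRYPTED does not exist upstream,
so it is open-coded here with set_memory_decrypted() on the linear map
alias at allocation time, error handling omitted):

	struct page **pages;
	void *buf;
	int i;

	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
	for (i = 0; i < nr_pages; i++) {
		pages[i] = alloc_page(GFP_KERNEL);
		/* Fix up the linear map alias while the page is still unused. */
		set_memory_decrypted((unsigned long)page_address(pages[i]), 1);
	}

	/* Map with decrypted attributes from the start; nothing to rewrite. */
	buf = vmap(pages, nr_pages, VM_MAP, pgprot_decrypted(PAGE_KERNEL));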

-- 
Catalin

