From: Kameron Carr <kameroncarr@linux.microsoft.com>
To: catalin.marinas@arm.com, will@kernel.org
Cc: suzuki.poulose@arm.com, steven.price@arm.com,
ryan.roberts@arm.com, dev.jain@arm.com,
yang@os.amperecomputing.com, shijie@os.amperecomputing.com,
kevin.brodsky@arm.com, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org
Subject: [RFC PATCH] arm64: mm: support set_memory_encrypted/decrypted for vmalloc addresses
Date: Mon, 6 Apr 2026 14:33:17 -0700
Message-ID: <20260406213317.216171-1-kameroncarr@linux.microsoft.com>
Currently __set_memory_enc_dec() only handles linear map (lm) addresses
and returns -EINVAL for anything else. As a result, callers holding
vmalloc'd buffers cannot transition that memory between shared and
protected states with the RMM via
set_memory_decrypted()/set_memory_encrypted().
Extend the implementation to handle vmalloc (non-linear-map) addresses
by introducing __set_va_addr_enc_dec(). For vmalloc addresses, the
backing pages are not physically contiguous, so the function walks the
vm_area's pages array and issues a per-page RSI call to transition each
page between the shared and protected states.
The original linear-map path is factored out into __set_lm_addr_enc_dec(),
and __set_memory_enc_dec() now dispatches to the appropriate helper based
on whether the address is a linear map address.
Signed-off-by: Kameron Carr <kameroncarr@linux.microsoft.com>
---
arch/arm64/mm/pageattr.c | 74 +++++++++++++++++++++++++++++++++++-----
1 file changed, 65 insertions(+), 9 deletions(-)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index ce035e1b4eaf..45058f61b957 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -275,20 +275,12 @@ int set_direct_map_default_noflush(struct page *page)
PAGE_SIZE, set_mask, clear_mask);
}
-static int __set_memory_enc_dec(unsigned long addr,
- int numpages,
- bool encrypt)
+static int __set_lm_addr_enc_dec(unsigned long addr, int numpages, bool encrypt)
{
unsigned long set_prot = 0, clear_prot = 0;
phys_addr_t start, end;
int ret;
- if (!is_realm_world())
- return 0;
-
- if (!__is_lm_address(addr))
- return -EINVAL;
-
start = __virt_to_phys(addr);
end = start + numpages * PAGE_SIZE;
@@ -321,6 +313,70 @@ static int __set_memory_enc_dec(unsigned long addr,
__pgprot(PTE_PRESENT_INVALID));
}
+static int __set_va_addr_enc_dec(unsigned long addr, int numpages, bool encrypt)
+{
+ unsigned long set_prot = 0, clear_prot = 0, start_idx;
+ struct vm_struct *area;
+ int i, ret;
+
+ if (encrypt)
+ clear_prot = PROT_NS_SHARED;
+ else
+ set_prot = PROT_NS_SHARED;
+
+ area = find_vm_area((void *)addr);
+ if (!area)
+ return -EINVAL;
+
+ start_idx = ((unsigned long)kasan_reset_tag((void *)addr) -
+ (unsigned long)kasan_reset_tag(area->addr)) >>
+ PAGE_SHIFT;
+
+ if (start_idx + numpages > area->nr_pages)
+ return -EINVAL;
+
+ /*
+ * Break the mapping before we make any changes to avoid stale TLB
+ * entries or Synchronous External Aborts caused by RIPAS_EMPTY
+ */
+ ret = change_memory_common(addr, numpages,
+ __pgprot(set_prot | PTE_PRESENT_INVALID),
+ __pgprot(clear_prot | PTE_PRESENT_VALID_KERNEL));
+
+ if (ret)
+ return ret;
+
+ for (i = 0; i < numpages; i++) {
+ struct page *page = area->pages[start_idx + i];
+ phys_addr_t phys = page_to_phys(page);
+
+ if (encrypt) {
+ ret = rsi_set_memory_range_protected(phys,
+ phys + PAGE_SIZE);
+ } else {
+ ret = rsi_set_memory_range_shared(phys,
+ phys + PAGE_SIZE);
+ }
+ if (ret)
+ return ret;
+ }
+
+ return change_memory_common(addr, numpages,
+ __pgprot(PTE_PRESENT_VALID_KERNEL),
+ __pgprot(PTE_PRESENT_INVALID));
+}
+
+static int __set_memory_enc_dec(unsigned long addr, int numpages, bool encrypt)
+{
+ if (!is_realm_world())
+ return 0;
+
+ if (!__is_lm_address(addr))
+ return __set_va_addr_enc_dec(addr, numpages, encrypt);
+
+ return __set_lm_addr_enc_dec(addr, numpages, encrypt);
+}
+
static int realm_set_memory_encrypted(unsigned long addr, int numpages)
{
int ret = __set_memory_enc_dec(addr, numpages, true);
base-commit: 25f6040ca43a78048466b949994b8d637fe2fe07
--
2.45.4