public inbox for iommu@lists.linux-foundation.org
From: Mostafa Saleh <smostafa@google.com>
To: iommu@lists.linux.dev, linux-kernel@vger.kernel.org
Cc: robin.murphy@arm.com, m.szyprowski@samsung.com, will@kernel.org,
	 maz@kernel.org, suzuki.poulose@arm.com, catalin.marinas@arm.com,
	 jiri@resnulli.us, jgg@ziepe.ca, aneesh.kumar@kernel.org,
	 Mostafa Saleh <smostafa@google.com>
Subject: [RFC PATCH v2 4/5] dma-mapping: Refactor memory encryption usage
Date: Mon, 30 Mar 2026 14:50:42 +0000	[thread overview]
Message-ID: <20260330145043.1586623-5-smostafa@google.com> (raw)
In-Reply-To: <20260330145043.1586623-1-smostafa@google.com>

At the moment dma-direct deals with memory encryption in 2 cases:
- Pre-decrypted restricted DMA pools
- Arch code, through force_dma_unencrypted()

In the first case, the memory is owned by the pool and its decryption
is not managed by dma-direct.

However, dma-direct still has to be aware of it in order to use the
appropriate phys_to_dma*() helper and page table prot.

In the second case, it is dma-direct's job to manage the decryption
of the allocated memory.

As there have been bugs in this code due to wrong or missing checks,
and more use cases for memory decryption are coming, we need more
robust checks that abstract the core logic, so introduce some local
helpers:
- dma_external_decryption(): the pages are decrypted but managed
  externally
- dma_owns_decryption(): the pages need to be decrypted and are
  managed by dma-direct
- is_dma_decrypted(): check whether the memory is decrypted

Note that this patch is not a no-op: there are some subtle changes,
which are actually theoretical bug fixes, in dma_direct_mmap() and
dma_direct_alloc(), where the wrong prot might be used for remap.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
---
 kernel/dma/direct.c | 37 +++++++++++++++++++++++++++----------
 1 file changed, 27 insertions(+), 10 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index a4260689bcc8..1078e1b38a34 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -23,10 +23,27 @@
  */
 u64 zone_dma_limit __ro_after_init = DMA_BIT_MASK(24);
 
+/* Memory is decrypted and managed externally. */
+static inline bool dma_external_decryption(struct device *dev)
+{
+	return is_swiotlb_for_alloc(dev);
+}
+
+/* Memory needs to be decrypted by the dma-direct layer. */
+static inline bool dma_owns_decryption(struct device *dev)
+{
+	return force_dma_unencrypted(dev) && !dma_external_decryption(dev);
+}
+
+static inline bool is_dma_decrypted(struct device *dev)
+{
+	return force_dma_unencrypted(dev) || dma_external_decryption(dev);
+}
+
 static inline dma_addr_t phys_to_dma_direct(struct device *dev,
 		phys_addr_t phys)
 {
-	if (force_dma_unencrypted(dev) || is_swiotlb_for_alloc(dev))
+	if (is_dma_decrypted(dev))
 		return phys_to_dma_unencrypted(dev, phys);
 	return phys_to_dma(dev, phys);
 }
@@ -79,7 +96,7 @@ bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 
 static int dma_set_decrypted(struct device *dev, void *vaddr, size_t size)
 {
-	if (!force_dma_unencrypted(dev) || is_swiotlb_for_alloc(dev))
+	if (!dma_owns_decryption(dev))
 		return 0;
 	return set_memory_decrypted((unsigned long)vaddr, PFN_UP(size));
 }
@@ -88,7 +105,7 @@ static int dma_set_encrypted(struct device *dev, void *vaddr, size_t size)
 {
 	int ret;
 
-	if (!force_dma_unencrypted(dev) || is_swiotlb_for_alloc(dev))
+	if (!dma_owns_decryption(dev))
 		return 0;
 	ret = set_memory_encrypted((unsigned long)vaddr, PFN_UP(size));
 	if (ret)
@@ -203,7 +220,7 @@ static void *dma_direct_alloc_no_mapping(struct device *dev, size_t size,
 void *dma_direct_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
-	bool allow_highmem = !force_dma_unencrypted(dev);
+	bool allow_highmem = !dma_owns_decryption(dev);
 	bool remap = false, set_uncached = false;
 	struct page *page;
 	void *ret;
@@ -213,7 +230,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev))
+	    !is_dma_decrypted(dev))
 		return dma_direct_alloc_no_mapping(dev, size, dma_handle, gfp);
 
 	if (!dev_is_dma_coherent(dev)) {
@@ -247,7 +264,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	 * Remapping or decrypting memory may block, allocate the memory from
 	 * the atomic pools instead if we aren't allowed block.
 	 */
-	if ((remap || force_dma_unencrypted(dev)) &&
+	if ((remap || dma_owns_decryption(dev)) &&
 	    dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
@@ -272,7 +289,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	if (remap) {
 		pgprot_t prot = dma_pgprot(dev, PAGE_KERNEL, attrs);
 
-		if (force_dma_unencrypted(dev))
+		if (is_dma_decrypted(dev))
 			prot = pgprot_decrypted(prot);
 
 		/* remove any dirty cache lines on the kernel alias */
@@ -314,7 +331,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
+	    !is_dma_decrypted(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
@@ -362,7 +379,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	struct page *page;
 	void *ret;
 
-	if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
+	if (dma_owns_decryption(dev) && dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp, false);
@@ -530,7 +547,7 @@ int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
 	int ret = -ENXIO;
 
 	vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
-	if (force_dma_unencrypted(dev))
+	if (is_dma_decrypted(dev))
 		vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
 
 	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
-- 
2.53.0.1185.g05d4b7b318-goog



Thread overview: 17+ messages
2026-03-30 14:50 [RFC PATCH v2 0/5] dma-mapping: Fixes for memory encryption Mostafa Saleh
2026-03-30 14:50 ` [RFC PATCH v2 1/5] dma-mapping: Avoid double decrypting with DMA_RESTRICTED_POOL Mostafa Saleh
2026-03-30 15:06   ` Jason Gunthorpe
2026-03-30 20:43     ` Mostafa Saleh
2026-03-31 11:34       ` Suzuki K Poulose
2026-03-31 12:50         ` Mostafa Saleh
2026-03-30 14:50 ` [RFC PATCH v2 2/5] dma-mapping: Use the correct phys_to_dma() for DMA_RESTRICTED_POOL Mostafa Saleh
2026-03-30 15:09   ` Jason Gunthorpe
2026-03-30 20:47     ` Mostafa Saleh
2026-03-30 22:28       ` Jason Gunthorpe
2026-03-30 14:50 ` [RFC PATCH v2 3/5] dma-mapping: Decrypt memory on remap Mostafa Saleh
2026-03-30 15:19   ` Jason Gunthorpe
2026-03-30 20:49     ` Mostafa Saleh
2026-03-30 22:30       ` Jason Gunthorpe
2026-03-30 14:50 ` Mostafa Saleh [this message]
2026-03-30 15:27   ` [RFC PATCH v2 4/5] dma-mapping: Refactor memory encryption usage Jason Gunthorpe
2026-03-30 14:50 ` [RFC PATCH v2 5/5] dma-mapping: Add doc for memory encryption Mostafa Saleh
