* [PATCH v2] dma-direct: set decrypted flag for remapped DMA allocations
@ 2026-01-02 15:50 Aneesh Kumar K.V (Arm)
2026-01-05 17:53 ` Jason Gunthorpe
2026-01-14 9:47 ` Aneesh Kumar K.V
0 siblings, 2 replies; 7+ messages in thread
From: Aneesh Kumar K.V (Arm) @ 2026-01-02 15:50 UTC (permalink / raw)
To: iommu, linux-kernel
Cc: Marek Szyprowski, Robin Murphy, Arnd Bergmann, Linus Walleij,
Matthew Wilcox, Suzuki K Poulose, Aneesh Kumar K.V (Arm)
Devices that are DMA non-coherent and require a remap were skipping
dma_set_decrypted(), leaving DMA buffers encrypted even when the device
requires unencrypted access. Move the call after the if (remap) branch
so that both the direct and remapped allocation paths correctly mark the
allocation as decrypted (or fail cleanly) before use.
Architectures such as arm64 cannot mark vmap addresses as decrypted, and
highmem pages necessarily require a vmap remap. As a result, such
allocations cannot be safely used for unencrypted DMA. Therefore, when
an unencrypted DMA buffer is requested, avoid allocating high PFNs from
__dma_direct_alloc_pages().
Other architectures (e.g. x86) do not have this limitation. However,
rather than making this architecture-specific, apply the restriction
only when the device requires unencrypted DMA access, for simplicity.
Fixes: f3c962226dbe ("dma-direct: clean up the remapping checks in dma_direct_alloc")
Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
---
kernel/dma/direct.c | 31 ++++++++++++++++++++++++++++---
1 file changed, 28 insertions(+), 3 deletions(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index ffa267020a1e..faf1e41afde8 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -204,6 +204,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
{
bool remap = false, set_uncached = false;
+ bool allow_highmem = true;
struct page *page;
void *ret;
@@ -250,8 +251,18 @@ void *dma_direct_alloc(struct device *dev, size_t size,
dma_direct_use_pool(dev, gfp))
return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
+
+ if (force_dma_unencrypted(dev))
+ /*
+ * Unencrypted/shared DMA requires a linear-mapped buffer
+ * address to look up the PFN and set architecture-required PFN
+ * attributes. This is not possible with HighMem. Avoid HighMem
+ * allocation.
+ */
+ allow_highmem = false;
+
/* we always manually zero the memory once we are done */
- page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
+ page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, allow_highmem);
if (!page)
return NULL;
@@ -282,7 +293,13 @@ void *dma_direct_alloc(struct device *dev, size_t size,
goto out_free_pages;
} else {
ret = page_address(page);
- if (dma_set_decrypted(dev, ret, size))
+ }
+
+ if (force_dma_unencrypted(dev)) {
+ void *lm_addr;
+
+ lm_addr = page_address(page);
+ if (set_memory_decrypted((unsigned long)lm_addr, PFN_UP(size)))
goto out_leak_pages;
}
@@ -344,8 +361,16 @@ void dma_direct_free(struct device *dev, size_t size,
} else {
if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
arch_dma_clear_uncached(cpu_addr, size);
- if (dma_set_encrypted(dev, cpu_addr, size))
+ }
+
+ if (force_dma_unencrypted(dev)) {
+ void *lm_addr;
+
+ lm_addr = phys_to_virt(dma_to_phys(dev, dma_addr));
+ if (set_memory_encrypted((unsigned long)lm_addr, PFN_UP(size))) {
+ pr_warn_ratelimited("leaking DMA memory that can't be re-encrypted\n");
return;
+ }
}
__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
--
2.43.0
* Re: [PATCH v2] dma-direct: set decrypted flag for remapped DMA allocations
2026-01-02 15:50 [PATCH v2] dma-direct: set decrypted flag for remapped DMA allocations Aneesh Kumar K.V (Arm)
@ 2026-01-05 17:53 ` Jason Gunthorpe
2026-01-07 14:26 ` Linus Walleij
2026-01-14 9:47 ` Aneesh Kumar K.V
1 sibling, 1 reply; 7+ messages in thread
From: Jason Gunthorpe @ 2026-01-05 17:53 UTC (permalink / raw)
To: Aneesh Kumar K.V (Arm)
Cc: iommu, linux-kernel, Marek Szyprowski, Robin Murphy,
Arnd Bergmann, Linus Walleij, Matthew Wilcox, Suzuki K Poulose
On Fri, Jan 02, 2026 at 09:20:37PM +0530, Aneesh Kumar K.V (Arm) wrote:
> Devices that are DMA non-coherent and require a remap were skipping
> dma_set_decrypted(), leaving DMA buffers encrypted even when the device
> requires unencrypted access. Move the call after the if (remap) branch
> so that both the direct and remapped allocation paths correctly mark the
> allocation as decrypted (or fail cleanly) before use.
This is probably fine, but IMHO, we should be excluding the
combination of highmem and CC at the kconfig level :\
Jason
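[Purely as an illustration of Jason's suggestion, a Kconfig-level exclusion might look like the fragment below. ARCH_HAS_MEM_ENCRYPT is a real symbol, but the dependency shown is hypothetical and not in the tree:]

```
# Hypothetical sketch only: keep memory-encryption support and HIGHMEM
# mutually exclusive, so DMA buffers always have a linear-map address
# that can be marked decrypted.
config ARCH_HAS_MEM_ENCRYPT
	bool
	depends on !HIGHMEM
```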
* Re: [PATCH v2] dma-direct: set decrypted flag for remapped DMA allocations
2026-01-05 17:53 ` Jason Gunthorpe
@ 2026-01-07 14:26 ` Linus Walleij
2026-01-07 15:03 ` Arnd Bergmann
0 siblings, 1 reply; 7+ messages in thread
From: Linus Walleij @ 2026-01-07 14:26 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Aneesh Kumar K.V (Arm), iommu, linux-kernel, Marek Szyprowski,
Robin Murphy, Arnd Bergmann, Matthew Wilcox, Suzuki K Poulose
On Mon, Jan 5, 2026 at 6:53 PM Jason Gunthorpe <jgg@ziepe.ca> wrote:
>
> On Fri, Jan 02, 2026 at 09:20:37PM +0530, Aneesh Kumar K.V (Arm) wrote:
> > Devices that are DMA non-coherent and require a remap were skipping
> > dma_set_decrypted(), leaving DMA buffers encrypted even when the device
> > requires unencrypted access. Move the call after the if (remap) branch
> > so that both the direct and remapped allocation paths correctly mark the
> > allocation as decrypted (or fail cleanly) before use.
>
> This is probably fine, but IMHO, we should be excluding the
> combination of highmem and CC at the kconfig level :\
The only way you can get CMA in highmem is by passing in a highmem
location to the allocator from the command line.
I have a strong urge to just patch CMA to not allow that and see what
happens. Or at least have it print a big fat warning that this will go
away soon.
I think this is only used on legacy ARM32 products that are no longer
maintained, but I might be wrong.
Yours,
Linus Walleij
* Re: [PATCH v2] dma-direct: set decrypted flag for remapped DMA allocations
2026-01-07 14:26 ` Linus Walleij
@ 2026-01-07 15:03 ` Arnd Bergmann
2026-01-07 17:41 ` Linus Walleij
0 siblings, 1 reply; 7+ messages in thread
From: Arnd Bergmann @ 2026-01-07 15:03 UTC (permalink / raw)
To: Linus Walleij, Jason Gunthorpe
Cc: Aneesh Kumar K.V (Arm), iommu, linux-kernel, Marek Szyprowski,
Robin Murphy, Matthew Wilcox, Suzuki K Poulose, Dmitry Osipenko,
Thierry Reding
On Wed, Jan 7, 2026, at 15:26, Linus Walleij wrote:
> On Mon, Jan 5, 2026 at 6:53 PM Jason Gunthorpe <jgg@ziepe.ca> wrote:
>>
>> On Fri, Jan 02, 2026 at 09:20:37PM +0530, Aneesh Kumar K.V (Arm) wrote:
>> > Devices that are DMA non-coherent and require a remap were skipping
>> > dma_set_decrypted(), leaving DMA buffers encrypted even when the device
>> > requires unencrypted access. Move the call after the if (remap) branch
>> > so that both the direct and remapped allocation paths correctly mark the
>> > allocation as decrypted (or fail cleanly) before use.
>>
>> This is probably fine, but IMHO, we should be excluding the
>> combination of highmem and CC at the kconfig level :\
>
> The only way you can get CMA in highmem is by passing in a highmem
> location to the allocator from the command line.
What about those that declare a "shared-dma-pool" node in DT?
I don't quite understand how the alloc-ranges are picked here, but
from what I can tell, most of them are intentionally limiting
themselves to the smallest lowmem area (CONFIG_VMSPLIT_3G), while
at least three others seem to intentionally pick a highmem area,
specifically these tegra114 and tegra20 (but not tegra30) devices:
arch/arm/boot/dts/nvidia/tegra114-asus-tf701t.dts- alloc-ranges = <0x80000000 0x30000000>;
arch/arm/boot/dts/nvidia/tegra20-acer-a500-picasso.dts- alloc-ranges = <0x30000000 0x10000000>;
arch/arm/boot/dts/nvidia/tegra20-asus-transformer-common.dtsi- alloc-ranges = <0x30000000 0x10000000>;
[cc Dmitry and Thierry in case they remember why this was done]
With my proposed change to increase the default lowmem size,
all of these would be in the lowmem area after all, so it does
not actually matter, but there are probably other out-of-tree
dtbs doing the same thing.
> I have a strong urge to just patch CMA to not allow that and see what
> happens. Or at least have it print a big fat warning that this will go
> away soon.
>
> I think this is only used on legacy ARM32 products that are no longer
> maintained, but I might be wrong.
It's hard to know, I can definitely think of reasons to do this
intentionally on machines that are still supported, using either
a custom dtb or a custom command line. We can of course decide to not
support this configuration any more, and move the CMA area down
in those cases.
Arnd
* Re: [PATCH v2] dma-direct: set decrypted flag for remapped DMA allocations
2026-01-07 15:03 ` Arnd Bergmann
@ 2026-01-07 17:41 ` Linus Walleij
2026-01-07 21:23 ` Arnd Bergmann
0 siblings, 1 reply; 7+ messages in thread
From: Linus Walleij @ 2026-01-07 17:41 UTC (permalink / raw)
To: Arnd Bergmann
Cc: Jason Gunthorpe, Aneesh Kumar K.V (Arm), iommu, linux-kernel,
Marek Szyprowski, Robin Murphy, Matthew Wilcox, Suzuki K Poulose,
Dmitry Osipenko, Thierry Reding
On Wed, Jan 7, 2026 at 4:04 PM Arnd Bergmann <arnd@kernel.org> wrote:
> On Wed, Jan 7, 2026, at 15:26, Linus Walleij wrote:
> > On Mon, Jan 5, 2026 at 6:53 PM Jason Gunthorpe <jgg@ziepe.ca> wrote:
> >>
> >> On Fri, Jan 02, 2026 at 09:20:37PM +0530, Aneesh Kumar K.V (Arm) wrote:
> >> > Devices that are DMA non-coherent and require a remap were skipping
> >> > dma_set_decrypted(), leaving DMA buffers encrypted even when the device
> >> > requires unencrypted access. Move the call after the if (remap) branch
> >> > so that both the direct and remapped allocation paths correctly mark the
> >> > allocation as decrypted (or fail cleanly) before use.
> >>
> >> This is probably fine, but IMHO, we should be excluding the
> >> combination of highmem and CC at the kconfig level :\
> >
> > The only way you can get CMA in highmem is by passing in a highmem
> > location to the allocator from the command line.
>
> What about those that declare a "shared-dma-pool" node in DT?
Yeah that thing is really obscuring the view :(
That's usually just defining a size and alignment and picking out of the
existing core memory, sometimes with an 'alloc-ranges' property.
The actual memory extents resolution happens in
__reserved_mem_alloc_size in drivers/of/of_reserved_mem.c.
If you didn't define an alloc-range it will pick from base 0
so as low in lowmem as it can get, essentially. (AFAICT)
And I guess it will usually get some really low lowmem
then?
If an alloc-range is specified __reserved_mem_alloc_in_range()
is called and that is considerably less predictable and seems
to mostly concern itself with other fixed allocations and not
whether they are in highmem or lowmem, and might even
theoretically cross over the lowmem/highmem border
AFAICT, which would be a major headache so I don't think
that happens in practice.
I did a grep 'alloc-ranges' but now I'm a bit slow in the head
so cannot really figure out if these will end up outside the
linear map or not.
Yours,
Linus Walleij
* Re: [PATCH v2] dma-direct: set decrypted flag for remapped DMA allocations
2026-01-07 17:41 ` Linus Walleij
@ 2026-01-07 21:23 ` Arnd Bergmann
0 siblings, 0 replies; 7+ messages in thread
From: Arnd Bergmann @ 2026-01-07 21:23 UTC (permalink / raw)
To: Linus Walleij
Cc: Jason Gunthorpe, Aneesh Kumar K.V (Arm), iommu, linux-kernel,
Marek Szyprowski, Robin Murphy, Matthew Wilcox, Suzuki K Poulose,
Dmitry Osipenko, Thierry Reding
On Wed, Jan 7, 2026, at 18:41, Linus Walleij wrote:
> I did a grep 'alloc-ranges' but now I'm a bit slow in the head
> so cannot really figure out if these will end up outside the
> linear map or not.
The way I see it, we have two ways of doing it:
1GB of RAM, CMA in the last 256MB:
arch/arm/boot/dts/nvidia/tegra20-acer-a500-picasso.dts:
memory@0 {
reg = <0x00000000 0x40000000>;
};
reserved-memory {
#address-cells = <1>;
#size-cells = <1>;
ranges;
linux,cma@30000000 {
compatible = "shared-dma-pool";
alloc-ranges = <0x30000000 0x10000000>;
size = <0x10000000>; /* 256MiB */
linux,cma-default;
reusable;
};
};
2GB of RAM, CMA in the first 768MB:
arch/arm/boot/dts/nvidia/tegra30-asus-p1801-t.dts
memory@80000000 {
reg = <0x80000000 0x80000000>;
};
reserved-memory {
#address-cells = <1>;
#size-cells = <1>;
ranges;
linux,cma@80000000 {
compatible = "shared-dma-pool";
alloc-ranges = <0x80000000 0x30000000>;
size = <0x10000000>; /* 256MiB */
linux,cma-default;
reusable;
};
};
With CONFIG_VMSPLIT_3G, the first example spans all of highmem,
i.e. 0x30000000-0x3fffffff, the second example spans all of
lowmem at 0x80000000-0xafffffff.
Arnd
* Re: [PATCH v2] dma-direct: set decrypted flag for remapped DMA allocations
2026-01-02 15:50 [PATCH v2] dma-direct: set decrypted flag for remapped DMA allocations Aneesh Kumar K.V (Arm)
2026-01-05 17:53 ` Jason Gunthorpe
@ 2026-01-14 9:47 ` Aneesh Kumar K.V
1 sibling, 0 replies; 7+ messages in thread
From: Aneesh Kumar K.V @ 2026-01-14 9:47 UTC (permalink / raw)
To: iommu, linux-kernel
Cc: Marek Szyprowski, Robin Murphy, Arnd Bergmann, Linus Walleij,
Matthew Wilcox, Suzuki K Poulose
"Aneesh Kumar K.V (Arm)" <aneesh.kumar@kernel.org> writes:
> Devices that are DMA non-coherent and require a remap were skipping
> dma_set_decrypted(), leaving DMA buffers encrypted even when the device
> requires unencrypted access. Move the call after the if (remap) branch
> so that both the direct and remapped allocation paths correctly mark the
> allocation as decrypted (or fail cleanly) before use.
>
> Architectures such as arm64 cannot mark vmap addresses as decrypted, and
> highmem pages necessarily require a vmap remap. As a result, such
> allocations cannot be safely used for unencrypted DMA. Therefore, when
> an unencrypted DMA buffer is requested, avoid allocating high PFNs from
> __dma_direct_alloc_pages().
>
> Other architectures (e.g. x86) do not have this limitation. However,
> rather than making this architecture-specific, apply the restriction
> only when the device requires unencrypted DMA access, for simplicity.
>
Considering that we don’t expect to use HighMem on systems that support
memory encryption or Confidential Compute, should we go ahead and merge
this change so that the behavior is technically correct? We can address
the separate question of whether DMA allocations should ever return
HighMem independently.
>
> Fixes: f3c962226dbe ("dma-direct: clean up the remapping checks in dma_direct_alloc")
> Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
> ---
> kernel/dma/direct.c | 31 ++++++++++++++++++++++++++++---
> 1 file changed, 28 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index ffa267020a1e..faf1e41afde8 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -204,6 +204,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
> {
> bool remap = false, set_uncached = false;
> + bool allow_highmem = true;
> struct page *page;
> void *ret;
>
> @@ -250,8 +251,18 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> dma_direct_use_pool(dev, gfp))
> return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>
> +
> + if (force_dma_unencrypted(dev))
> + /*
> + * Unencrypted/shared DMA requires a linear-mapped buffer
> + * address to look up the PFN and set architecture-required PFN
> + * attributes. This is not possible with HighMem. Avoid HighMem
> + * allocation.
> + */
> + allow_highmem = false;
> +
> /* we always manually zero the memory once we are done */
> - page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
> + page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, allow_highmem);
> if (!page)
> return NULL;
>
> @@ -282,7 +293,13 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> goto out_free_pages;
> } else {
> ret = page_address(page);
> - if (dma_set_decrypted(dev, ret, size))
> + }
> +
> + if (force_dma_unencrypted(dev)) {
> + void *lm_addr;
> +
> + lm_addr = page_address(page);
> + if (set_memory_decrypted((unsigned long)lm_addr, PFN_UP(size)))
> goto out_leak_pages;
> }
>
> @@ -344,8 +361,16 @@ void dma_direct_free(struct device *dev, size_t size,
> } else {
> if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
> arch_dma_clear_uncached(cpu_addr, size);
> - if (dma_set_encrypted(dev, cpu_addr, size))
> + }
> +
> + if (force_dma_unencrypted(dev)) {
> + void *lm_addr;
> +
> + lm_addr = phys_to_virt(dma_to_phys(dev, dma_addr));
> + if (set_memory_encrypted((unsigned long)lm_addr, PFN_UP(size))) {
> + pr_warn_ratelimited("leaking DMA memory that can't be re-encrypted\n");
> return;
> + }
> }
>
> __dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
> --
> 2.43.0
-aneesh