* [RFC PATCH 0/2] dma-mapping: DMA_RESTRICTED_POOL and encryption
@ 2026-03-05 17:03 Mostafa Saleh
2026-03-05 17:03 ` [RFC PATCH 1/2] dma-mapping: Avoid double decrypting with DMA_RESTRICTED_POOL Mostafa Saleh
2026-03-05 17:03 ` [RFC PATCH 2/2] dma-mapping: Use the correct phys_to_dma() for DMA_RESTRICTED_POOL Mostafa Saleh
0 siblings, 2 replies; 10+ messages in thread
From: Mostafa Saleh @ 2026-03-05 17:03 UTC (permalink / raw)
To: iommu, linux-kernel
Cc: robin.murphy, m.szyprowski, will, maz, suzuki.poulose,
catalin.marinas, Mostafa Saleh
I have been looking into the DMA code with DMA_RESTRICTED_POOL and how
it interacts with the memory encryption API, mainly in the context of
protected KVM (pKVM) on arm64.
While trying to extend force_dma_unencrypted() to be pKVM aware, I
noticed some inconsistencies in the dma-direct code which look like
bugs. I am not sure whether any architectures are affected by this at
the moment, as some of the memory encryption logic is forwarded to the
hypervisor as hypercalls or realm calls.
I have written some fixes based on my simplistic understanding.
However, looking forward, I feel we need a more solid API for memory
encryption and decryption that can be used consistently from SWIOTLB
(so we can also avoid decrypting per-device pools by default),
dma-direct and other subsystems.
That would be useful in cases (at least for pKVM) where a device needs
a private encrypted pool, if it has to bounce memory for any reason
without leaking information by decrypting the data.
I am not sure how other CCA solutions deal with this in Linux; I am
assuming they won't need to bounce at all?
I can send another series for this which adds a property for SWIOTLB
buffers to be decrypted by default, if that makes sense.
Mostafa Saleh (2):
dma-mapping: Avoid double decrypting with DMA_RESTRICTED_POOL
dma-mapping: Use the correct phys_to_dma() for DMA_RESTRICTED_POOL
kernel/dma/direct.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply [flat|nested] 10+ messages in thread
* [RFC PATCH 1/2] dma-mapping: Avoid double decrypting with DMA_RESTRICTED_POOL
2026-03-05 17:03 [RFC PATCH 0/2] dma-mapping: DMA_RESTRICTED_POOL and encryption Mostafa Saleh
@ 2026-03-05 17:03 ` Mostafa Saleh
2026-03-10 13:36 ` Catalin Marinas
2026-03-05 17:03 ` [RFC PATCH 2/2] dma-mapping: Use the correct phys_to_dma() for DMA_RESTRICTED_POOL Mostafa Saleh
1 sibling, 1 reply; 10+ messages in thread
From: Mostafa Saleh @ 2026-03-05 17:03 UTC (permalink / raw)
To: iommu, linux-kernel
Cc: robin.murphy, m.szyprowski, will, maz, suzuki.poulose,
catalin.marinas, Mostafa Saleh
If a device has a restricted DMA pool, the pool memory is decrypted.
However, in the dma_direct_alloc() path, memory can be allocated from
this pool via __dma_direct_alloc_pages() =>
dma_direct_alloc_swiotlb().
After that, the same function will attempt to decrypt it again using
dma_set_decrypted() if force_dma_unencrypted() is true, which results
in the memory being decrypted twice.
It's not clear how realm worlds/hypervisors deal with that, for
example:
- Arm CCA: Clears a bit in the page table and issues a realm
  IPA_STATE_SET call.
- TDX: Seems to issue a hypercall as well.
- pKVM: Doesn't implement force_dma_unencrypted() at the moment, but
  uses a share hypercall which is definitely not idempotent.
This patch only encrypts/decrypts memory that is not allocated from
the restricted DMA pools.
Signed-off-by: Mostafa Saleh <smostafa@google.com>
---
kernel/dma/direct.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 8f43a930716d..27d804f0473f 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -79,7 +79,7 @@ bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
static int dma_set_decrypted(struct device *dev, void *vaddr, size_t size)
{
- if (!force_dma_unencrypted(dev))
+ if (!force_dma_unencrypted(dev) || is_swiotlb_for_alloc(dev))
return 0;
return set_memory_decrypted((unsigned long)vaddr, PFN_UP(size));
}
@@ -88,7 +88,7 @@ static int dma_set_encrypted(struct device *dev, void *vaddr, size_t size)
{
int ret;
- if (!force_dma_unencrypted(dev))
+ if (!force_dma_unencrypted(dev) || is_swiotlb_for_alloc(dev))
return 0;
ret = set_memory_encrypted((unsigned long)vaddr, PFN_UP(size));
if (ret)
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [RFC PATCH 2/2] dma-mapping: Use the correct phys_to_dma() for DMA_RESTRICTED_POOL
2026-03-05 17:03 [RFC PATCH 0/2] dma-mapping: DMA_RESTRICTED_POOL and encryption Mostafa Saleh
2026-03-05 17:03 ` [RFC PATCH 1/2] dma-mapping: Avoid double decrypting with DMA_RESTRICTED_POOL Mostafa Saleh
@ 2026-03-05 17:03 ` Mostafa Saleh
2026-03-10 13:08 ` Catalin Marinas
1 sibling, 1 reply; 10+ messages in thread
From: Mostafa Saleh @ 2026-03-05 17:03 UTC (permalink / raw)
To: iommu, linux-kernel
Cc: robin.murphy, m.szyprowski, will, maz, suzuki.poulose,
catalin.marinas, Mostafa Saleh
As restricted DMA pools are always decrypted, swiotlb.c uses
phys_to_dma_unencrypted() for address conversion.
However, in dma-direct, calls to phys_to_dma_direct() with
force_dma_unencrypted() returning false will fall back to
phys_to_dma(), which is inconsistent for memory allocated from
restricted DMA pools.
Signed-off-by: Mostafa Saleh <smostafa@google.com>
---
kernel/dma/direct.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 27d804f0473f..1a402bb956d9 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -26,7 +26,7 @@ u64 zone_dma_limit __ro_after_init = DMA_BIT_MASK(24);
static inline dma_addr_t phys_to_dma_direct(struct device *dev,
phys_addr_t phys)
{
- if (force_dma_unencrypted(dev))
+ if (force_dma_unencrypted(dev) || is_swiotlb_for_alloc(dev))
return phys_to_dma_unencrypted(dev, phys);
return phys_to_dma(dev, phys);
}
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [RFC PATCH 2/2] dma-mapping: Use the correct phys_to_dma() for DMA_RESTRICTED_POOL
2026-03-05 17:03 ` [RFC PATCH 2/2] dma-mapping: Use the correct phys_to_dma() for DMA_RESTRICTED_POOL Mostafa Saleh
@ 2026-03-10 13:08 ` Catalin Marinas
2026-03-10 13:20 ` Suzuki K Poulose
2026-03-11 12:28 ` Mostafa Saleh
0 siblings, 2 replies; 10+ messages in thread
From: Catalin Marinas @ 2026-03-10 13:08 UTC (permalink / raw)
To: Mostafa Saleh
Cc: iommu, linux-kernel, robin.murphy, m.szyprowski, will, maz,
suzuki.poulose
On Thu, Mar 05, 2026 at 05:03:35PM +0000, Mostafa Saleh wrote:
> As restricted DMA pools are always decrypted, swiotlb.c uses
> phys_to_dma_unencrypted() for address conversion.
>
> However, in dma-direct, calls to phys_to_dma_direct() with
> force_dma_unencrypted() returning false will fall back to
> phys_to_dma(), which is inconsistent for memory allocated from
> restricted DMA pools.
>
> Signed-off-by: Mostafa Saleh <smostafa@google.com>
> ---
> kernel/dma/direct.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 27d804f0473f..1a402bb956d9 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -26,7 +26,7 @@ u64 zone_dma_limit __ro_after_init = DMA_BIT_MASK(24);
> static inline dma_addr_t phys_to_dma_direct(struct device *dev,
> phys_addr_t phys)
> {
> - if (force_dma_unencrypted(dev))
> + if (force_dma_unencrypted(dev) || is_swiotlb_for_alloc(dev))
> return phys_to_dma_unencrypted(dev, phys);
> return phys_to_dma(dev, phys);
> }
I couldn't fully get my head around the DMA API but I think all the
pools and bounce buffers are decrypted and protected guests (or realms
for Arm CCA) should always return true for force_dma_unencrypted(). If
that's the case, the above change wouldn't be necessary. I can see that
arm64 only does this for CCA and not pKVM guests.
Device assignment is another story that requires reworking those DMA
pools to support encrypted buffers.
--
Catalin
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [RFC PATCH 2/2] dma-mapping: Use the correct phys_to_dma() for DMA_RESTRICTED_POOL
2026-03-10 13:08 ` Catalin Marinas
@ 2026-03-10 13:20 ` Suzuki K Poulose
2026-03-11 12:28 ` Mostafa Saleh
1 sibling, 0 replies; 10+ messages in thread
From: Suzuki K Poulose @ 2026-03-10 13:20 UTC (permalink / raw)
To: Catalin Marinas, Mostafa Saleh
Cc: iommu, linux-kernel, robin.murphy, m.szyprowski, will, maz
On 10/03/2026 13:08, Catalin Marinas wrote:
> On Thu, Mar 05, 2026 at 05:03:35PM +0000, Mostafa Saleh wrote:
>> As restricted DMA pools are always decrypted, swiotlb.c uses
>> phys_to_dma_unencrypted() for address conversion.
>>
>> However, in dma-direct, calls to phys_to_dma_direct() with
>> force_dma_unencrypted() returning false will fall back to
>> phys_to_dma(), which is inconsistent for memory allocated from
>> restricted DMA pools.
>>
>> Signed-off-by: Mostafa Saleh <smostafa@google.com>
>> ---
>> kernel/dma/direct.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
>> index 27d804f0473f..1a402bb956d9 100644
>> --- a/kernel/dma/direct.c
>> +++ b/kernel/dma/direct.c
>> @@ -26,7 +26,7 @@ u64 zone_dma_limit __ro_after_init = DMA_BIT_MASK(24);
>> static inline dma_addr_t phys_to_dma_direct(struct device *dev,
>> phys_addr_t phys)
>> {
>> - if (force_dma_unencrypted(dev))
>> + if (force_dma_unencrypted(dev) || is_swiotlb_for_alloc(dev))
>> return phys_to_dma_unencrypted(dev, phys);
>> return phys_to_dma(dev, phys);
>> }
>
> I couldn't fully get my head around the DMA API but I think all the
> pools and bounce buffers are decrypted and protected guests (or realms
> for Arm CCA) should always return true for force_dma_unencrypted(). If
> that's the case, the above change wouldn't be necessary. I can see that
> arm64 only does this for CCA and not pKVM guests.
That is correct. Why would force_dma_unencrypted() return false for a
device? As far as I can see, all CC guests treat all devices as
"untrusted" for now (and there is a series available that adds support
for "trusted devices" [0]).
>
> Device assignment is another story that requires reworking those DMA
> pools to support encrypted buffers.
>
[0]
https://lkml.kernel.org/r/20260303000207.1836586-1-dan.j.williams@intel.com
Suzuki
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [RFC PATCH 1/2] dma-mapping: Avoid double decrypting with DMA_RESTRICTED_POOL
2026-03-05 17:03 ` [RFC PATCH 1/2] dma-mapping: Avoid double decrypting with DMA_RESTRICTED_POOL Mostafa Saleh
@ 2026-03-10 13:36 ` Catalin Marinas
2026-03-10 13:55 ` Catalin Marinas
0 siblings, 1 reply; 10+ messages in thread
From: Catalin Marinas @ 2026-03-10 13:36 UTC (permalink / raw)
To: Mostafa Saleh
Cc: iommu, linux-kernel, robin.murphy, m.szyprowski, will, maz,
suzuki.poulose
On Thu, Mar 05, 2026 at 05:03:34PM +0000, Mostafa Saleh wrote:
> If a device has a restricted DMA pool, the pool memory is decrypted.
> However, in the dma_direct_alloc() path, memory can be allocated from
> this pool via __dma_direct_alloc_pages() =>
> dma_direct_alloc_swiotlb().
>
> After that, the same function will attempt to decrypt it again using
> dma_set_decrypted() if force_dma_unencrypted() is true, which results
> in the memory being decrypted twice.
>
> It's not clear how realm worlds/hypervisors deal with that, for
> example:
> - Arm CCA: Clears a bit in the page table and issues a realm
>   IPA_STATE_SET call.
> - TDX: Seems to issue a hypercall as well.
> - pKVM: Doesn't implement force_dma_unencrypted() at the moment, but
>   uses a share hypercall which is definitely not idempotent.
>
> This patch only encrypts/decrypts memory that is not allocated from
> the restricted DMA pools.
>
> Signed-off-by: Mostafa Saleh <smostafa@google.com>
> ---
> kernel/dma/direct.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 8f43a930716d..27d804f0473f 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -79,7 +79,7 @@ bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
>
> static int dma_set_decrypted(struct device *dev, void *vaddr, size_t size)
> {
> - if (!force_dma_unencrypted(dev))
> + if (!force_dma_unencrypted(dev) || is_swiotlb_for_alloc(dev))
> return 0;
> return set_memory_decrypted((unsigned long)vaddr, PFN_UP(size));
> }
> @@ -88,7 +88,7 @@ static int dma_set_encrypted(struct device *dev, void *vaddr, size_t size)
> {
> int ret;
>
> - if (!force_dma_unencrypted(dev))
> + if (!force_dma_unencrypted(dev) || is_swiotlb_for_alloc(dev))
> return 0;
> ret = set_memory_encrypted((unsigned long)vaddr, PFN_UP(size));
> if (ret)
I think that's functionally correct for rmem buffers. Normally I'd
have moved the is_swiotlb_for_alloc() condition into the caller, but
even dma_direct_alloc() doesn't know where the buffer came from; it's
hidden in __dma_direct_alloc_pages().
However, it's unclear to me whether we can get encrypted pages when
is_swiotlb_for_alloc() == false, remap == true and
force_dma_unencrypted() == true in dma_direct_alloc().
dma_set_decrypted() is only called on the !remap path.
--
Catalin
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [RFC PATCH 1/2] dma-mapping: Avoid double decrypting with DMA_RESTRICTED_POOL
2026-03-10 13:36 ` Catalin Marinas
@ 2026-03-10 13:55 ` Catalin Marinas
2026-03-11 12:25 ` Mostafa Saleh
0 siblings, 1 reply; 10+ messages in thread
From: Catalin Marinas @ 2026-03-10 13:55 UTC (permalink / raw)
To: Mostafa Saleh
Cc: iommu, linux-kernel, robin.murphy, m.szyprowski, will, maz,
suzuki.poulose, Aneesh Kumar K.V
On Tue, Mar 10, 2026 at 01:36:08PM +0000, Catalin Marinas wrote:
> However, it's unclear to me whether we can get encrypted pages when
> is_swiotlb_for_alloc() == false, remap == true and
> force_dma_unencrypted() == true in dma_direct_alloc().
> dma_set_decrypted() is only called on the !remap path.
Ah, I can see Aneesh trying to address this here:
https://lore.kernel.org/r/yq5abjjl4o0j.fsf@kernel.org
--
Catalin
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [RFC PATCH 1/2] dma-mapping: Avoid double decrypting with DMA_RESTRICTED_POOL
2026-03-10 13:55 ` Catalin Marinas
@ 2026-03-11 12:25 ` Mostafa Saleh
2026-03-13 7:36 ` Aneesh Kumar K.V
0 siblings, 1 reply; 10+ messages in thread
From: Mostafa Saleh @ 2026-03-11 12:25 UTC (permalink / raw)
To: Catalin Marinas
Cc: iommu, linux-kernel, robin.murphy, m.szyprowski, will, maz,
suzuki.poulose, Aneesh Kumar K.V
On Tue, Mar 10, 2026 at 01:55:52PM +0000, Catalin Marinas wrote:
> On Tue, Mar 10, 2026 at 01:36:08PM +0000, Catalin Marinas wrote:
> > However, it's unclear to me whether we can get encrypted pages when
> > is_swiotlb_for_alloc() == false, remap == true and
> > force_dma_unencrypted() == true in dma_direct_alloc().
> > dma_set_decrypted() is only called on the !remap path.
>
> Ah, I can see Aneesh trying to address this here:
>
> https://lore.kernel.org/r/yq5abjjl4o0j.fsf@kernel.org
I see, thanks for pointing that out. The case Aneesh is fixing is the
missing decryption in the remap case. However, it’s not clear to me
how we can get there for CCA; I left a comment on his patch.
I could move the is_swiotlb_for_alloc() checks out to the callers, but
I believe adding this at the lowest level is better, as the memory is
indeed decrypted and we don’t have to open-code the check in other
places like dma_direct_alloc_pages().
Thanks,
Mostafa
>
> --
> Catalin
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [RFC PATCH 2/2] dma-mapping: Use the correct phys_to_dma() for DMA_RESTRICTED_POOL
2026-03-10 13:08 ` Catalin Marinas
2026-03-10 13:20 ` Suzuki K Poulose
@ 2026-03-11 12:28 ` Mostafa Saleh
1 sibling, 0 replies; 10+ messages in thread
From: Mostafa Saleh @ 2026-03-11 12:28 UTC (permalink / raw)
To: Catalin Marinas
Cc: iommu, linux-kernel, robin.murphy, m.szyprowski, will, maz,
suzuki.poulose
On Tue, Mar 10, 2026 at 01:08:00PM +0000, Catalin Marinas wrote:
> On Thu, Mar 05, 2026 at 05:03:35PM +0000, Mostafa Saleh wrote:
> > As restricted DMA pools are always decrypted, swiotlb.c uses
> > phys_to_dma_unencrypted() for address conversion.
> >
> > However, in dma-direct, calls to phys_to_dma_direct() with
> > force_dma_unencrypted() returning false will fall back to
> > phys_to_dma(), which is inconsistent for memory allocated from
> > restricted DMA pools.
> >
> > Signed-off-by: Mostafa Saleh <smostafa@google.com>
> > ---
> > kernel/dma/direct.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> > index 27d804f0473f..1a402bb956d9 100644
> > --- a/kernel/dma/direct.c
> > +++ b/kernel/dma/direct.c
> > @@ -26,7 +26,7 @@ u64 zone_dma_limit __ro_after_init = DMA_BIT_MASK(24);
> > static inline dma_addr_t phys_to_dma_direct(struct device *dev,
> > phys_addr_t phys)
> > {
> > - if (force_dma_unencrypted(dev))
> > + if (force_dma_unencrypted(dev) || is_swiotlb_for_alloc(dev))
> > return phys_to_dma_unencrypted(dev, phys);
> > return phys_to_dma(dev, phys);
> > }
>
> I couldn't fully get my head around the DMA API but I think all the
> pools and bounce buffers are decrypted and protected guests (or realms
> for Arm CCA) should always return true for force_dma_unencrypted(). If
> that's the case, the above change wouldn't be necessary. I can see that
> arm64 only does this for CCA and not pKVM guests.
>
Yes, that’s the problem: pKVM relies on SWIOTLB to use decrypted
buffers and not on force_dma_unencrypted() in dma-direct.
So, at the moment pKVM guests actually call:
- phys_to_dma_unencrypted(): from SWIOTLB code
- phys_to_dma(): from dma-direct code
This is inconsistent, but it only works because pKVM memory
encryption/decryption is in-place, so there is no address conversion.
I was looking into setting force_dma_unencrypted() to true for pKVM,
which is what led me to the double-decryption bug I am trying to
solve with patch 1.
I think the main problem is that SWIOTLB (restricted DMA) decrypts
buffers unconditionally, so we have to treat is_swiotlb_for_alloc()
the same way as force_dma_unencrypted(). That is what these two
patches do; the alternative would be to teach the SWIOTLB code about
force_dma_unencrypted().
Thanks,
Mostafa
> Device assignment is another story that requires reworking those DMA
> pools to support encrypted buffers.
>
> --
> Catalin
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [RFC PATCH 1/2] dma-mapping: Avoid double decrypting with DMA_RESTRICTED_POOL
2026-03-11 12:25 ` Mostafa Saleh
@ 2026-03-13 7:36 ` Aneesh Kumar K.V
0 siblings, 0 replies; 10+ messages in thread
From: Aneesh Kumar K.V @ 2026-03-13 7:36 UTC (permalink / raw)
To: Mostafa Saleh, Catalin Marinas
Cc: iommu, linux-kernel, robin.murphy, m.szyprowski, will, maz,
suzuki.poulose
Mostafa Saleh <smostafa@google.com> writes:
> On Tue, Mar 10, 2026 at 01:55:52PM +0000, Catalin Marinas wrote:
>> On Tue, Mar 10, 2026 at 01:36:08PM +0000, Catalin Marinas wrote:
>> > However, it's unclear to me whether we can get encrypted pages when
>> > is_swiotlb_for_alloc() == false, remap == true and
>> > force_dma_unencrypted() == true in dma_direct_alloc().
>> > dma_set_decrypted() is only called on the !remap path.
>>
>> Ah, I can see Anneesh trying to address this here:
>>
>> https://lore.kernel.org/r/yq5abjjl4o0j.fsf@kernel.org
>
> I see, thanks for pointing that out, the case Aneesh is fixing is the
> missing decryption in the remap case. However, it’s not clear to me
> how we can get there for CCA, I left a comment on his patch.
>
> I can inline the is_swiotlb_for_alloc() checks outside, but I believe
> adding this in the lowest level is better as indeed the memory is
> decrypted and we don’t have to open code the check in other places are
> dma_direct_alloc_pages()
>
There are a few related changes that I have posted. However, I am
wondering whether it would be simpler to treat the swiotlb pool as
always decrypted. In that case, even when allocating from swiotlb we
would not need to toggle between decrypt/encrypt.
Another reason to treat swiotlb as special is the alignment requirement
when toggling between decrypted and encrypted states.
The patch implementing this approach is here
https://lore.kernel.org/all/20260309102625.2315725-2-aneesh.kumar@kernel.org
With respect to remapping, there are two conditions that can currently
trigger a remap: when the device is non-coherent, or when we receive a
HighMem allocation. Neither of these conditions applies to CCA. We
could rule out the HighMem case by using the following hunk in the
patch:
+
+ if (force_dma_unencrypted(dev))
+ /*
+ * Unencrypted/shared DMA requires a linear-mapped buffer
+ * address to look up the PFN and set architecture-required PFN
+ * attributes. This is not possible with HighMem. Avoid HighMem
+ * allocation.
+ */
+ allow_highmem = false;
+
/* we always manually zero the memory once we are done */
- page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
+ page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, allow_highmem);
if (!page)
return NULL;
https://lore.kernel.org/all/20260102155037.2551524-1-aneesh.kumar@kernel.org
I haven't got much feedback on that patch yet.
-aneesh
^ permalink raw reply [flat|nested] 10+ messages in thread
end of thread, other threads:[~2026-03-13 7:36 UTC | newest]
Thread overview: 10+ messages
2026-03-05 17:03 [RFC PATCH 0/2] dma-mapping: DMA_RESTRICTED_POOL and encryption Mostafa Saleh
2026-03-05 17:03 ` [RFC PATCH 1/2] dma-mapping: Avoid double decrypting with DMA_RESTRICTED_POOL Mostafa Saleh
2026-03-10 13:36 ` Catalin Marinas
2026-03-10 13:55 ` Catalin Marinas
2026-03-11 12:25 ` Mostafa Saleh
2026-03-13 7:36 ` Aneesh Kumar K.V
2026-03-05 17:03 ` [RFC PATCH 2/2] dma-mapping: Use the correct phys_to_dma() for DMA_RESTRICTED_POOL Mostafa Saleh
2026-03-10 13:08 ` Catalin Marinas
2026-03-10 13:20 ` Suzuki K Poulose
2026-03-11 12:28 ` Mostafa Saleh