From: Aneesh Kumar K.V <aneesh.kumar@kernel.org>
To: Mostafa Saleh <smostafa@google.com>
Cc: iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev,
Robin Murphy <robin.murphy@arm.com>,
Marek Szyprowski <m.szyprowski@samsung.com>,
Will Deacon <will@kernel.org>, Marc Zyngier <maz@kernel.org>,
Steven Price <steven.price@arm.com>,
Suzuki K Poulose <Suzuki.Poulose@arm.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Jiri Pirko <jiri@resnulli.us>, Jason Gunthorpe <jgg@ziepe.ca>,
Petr Tesarik <ptesarik@suse.com>,
Alexey Kardashevskiy <aik@amd.com>,
Dan Williams <dan.j.williams@intel.com>,
Xu Yilun <yilun.xu@linux.intel.com>,
linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
Madhavan Srinivasan <maddy@linux.ibm.com>,
Michael Ellerman <mpe@ellerman.id.au>,
Nicholas Piggin <npiggin@gmail.com>,
"Christophe Leroy (CS GROUP)" <chleroy@kernel.org>,
Alexander Gordeev <agordeev@linux.ibm.com>,
Gerald Schaefer <gerald.schaefer@linux.ibm.com>,
Heiko Carstens <hca@linux.ibm.com>,
Vasily Gorbik <gor@linux.ibm.com>,
Christian Borntraeger <borntraeger@linux.ibm.com>,
Sven Schnelle <svens@linux.ibm.com>,
x86@kernel.org
Subject: Re: [PATCH v4 01/13] dma-direct: swiotlb: handle swiotlb alloc/free outside __dma_direct_alloc_pages
Date: Thu, 14 May 2026 10:24:48 +0530
Message-ID: <yq5amry2a8vb.fsf@kernel.org>
In-Reply-To: <agSDPJMZkcI-uH8f@google.com>
Mostafa Saleh <smostafa@google.com> writes:
> On Tue, May 12, 2026 at 02:33:56PM +0530, Aneesh Kumar K.V (Arm) wrote:
>> Move swiotlb allocation out of __dma_direct_alloc_pages() and handle it in
>> dma_direct_alloc() / dma_direct_alloc_pages().
>>
>> This is needed for follow-up changes that simplify the handling of
>> memory encryption/decryption based on the DMA attribute flags.
>>
>> swiotlb backing pages are already mapped decrypted by
>> swiotlb_update_mem_attributes() and rmem_swiotlb_device_init(), so
>> dma-direct should not call dma_set_decrypted() on allocation nor
>> dma_set_encrypted() on free for swiotlb-backed memory.
>>
>> Update alloc/free paths to detect swiotlb-backed pages and skip
>> encrypt/decrypt transitions for those paths. Keep the existing highmem
>> rejection in dma_direct_alloc_pages() for swiotlb allocations.
>>
>> Only for "restricted-dma-pool", we currently set `for_alloc = true`, while
>> rmem_swiotlb_device_init() decrypts the whole pool up front. This pool is
>> typically used together with "shared-dma-pool", where the shared region is
>> accessed after remap/ioremap and the returned address is suitable for
>> decrypted memory access. So existing code paths remain valid.
>>
>> Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
>> ---
>> kernel/dma/direct.c | 44 +++++++++++++++++++++++++++++++++++++-------
>> 1 file changed, 37 insertions(+), 7 deletions(-)
>>
>> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
>> index ec887f443741..b958f150718a 100644
>> --- a/kernel/dma/direct.c
>> +++ b/kernel/dma/direct.c
>> @@ -125,9 +125,6 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
>>
>> WARN_ON_ONCE(!PAGE_ALIGNED(size));
>>
>> - if (is_swiotlb_for_alloc(dev))
>> - return dma_direct_alloc_swiotlb(dev, size);
>> -
>> gfp |= dma_direct_optimal_gfp_mask(dev, &phys_limit);
>> page = dma_alloc_contiguous(dev, size, gfp);
>> if (page) {
>> @@ -204,6 +201,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>> dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
>> {
>> bool remap = false, set_uncached = false;
>> + bool mark_mem_decrypt = true;
>> struct page *page;
>> void *ret;
>>
>> @@ -250,11 +248,21 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>> dma_direct_use_pool(dev, gfp))
>> return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>>
>> + if (is_swiotlb_for_alloc(dev)) {
>> + page = dma_direct_alloc_swiotlb(dev, size);
>> + if (page) {
>> + mark_mem_decrypt = false;
>> + goto setup_page;
>> + }
>> + return NULL;
>> + }
>> +
>> /* we always manually zero the memory once we are done */
>> page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
>> if (!page)
>> return NULL;
>>
>> +setup_page:
>> /*
>> * dma_alloc_contiguous can return highmem pages depending on a
>> * combination the cma= arguments and per-arch setup. These need to be
>> @@ -281,7 +289,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>> goto out_free_pages;
>> } else {
>> ret = page_address(page);
>> - if (dma_set_decrypted(dev, ret, size))
>> + if (mark_mem_decrypt && dma_set_decrypted(dev, ret, size))
>
> I am ok with that approach, but Jason was mentioning we shouldn't
> special-case swiotlb and should instead make the allocator return the
> memory state (similar to the dma_page approach [1]). I am also OK if you
> want to merge that part of my series with this.
>
> [1] https://lore.kernel.org/linux-iommu/20260408194750.2280873-1-smostafa@google.com/
>
I was not sure whether we need dma_page. As shown in this series, we can
simplify the allocation and free paths without adding new abstractions
like dma_page.
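
Roughly, the alloc side of this patch boils down to the condensed sketch
below (helper names are the ones used in the diff above; this is not a
literal drop-in, and error unwinding is omitted):

/*
 * Condensed sketch of the alloc-side logic: swiotlb-backed pages were
 * already decrypted when the pool was set up, so we only need to
 * remember that no encrypt/decrypt transition is required.
 */
static void *alloc_sketch(struct device *dev, size_t size,
			  dma_addr_t *dma_handle, gfp_t gfp)
{
	bool mark_mem_decrypt = true;
	struct page *page;
	void *ret;

	if (is_swiotlb_for_alloc(dev)) {
		page = dma_direct_alloc_swiotlb(dev, size);
		if (!page)
			return NULL;
		mark_mem_decrypt = false;	/* pool decrypted at init */
	} else {
		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
		if (!page)
			return NULL;
	}

	ret = page_address(page);
	if (mark_mem_decrypt && dma_set_decrypted(dev, ret, size))
		return NULL;

	memset(ret, 0, size);
	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
	return ret;
}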
>
>> goto out_leak_pages;
>> }
>>
>> @@ -298,7 +306,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>> return ret;
>>
>> out_encrypt_pages:
>> - if (dma_set_encrypted(dev, page_address(page), size))
>> + if (mark_mem_decrypt && dma_set_encrypted(dev, page_address(page), size))
>> return NULL;
>> out_free_pages:
>> __dma_direct_free_pages(dev, page, size);
>> @@ -310,6 +318,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>> void dma_direct_free(struct device *dev, size_t size,
>> void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
>> {
>> + bool mark_mem_encrypted = true;
>> unsigned int page_order = get_order(size);
>>
>> if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
>> @@ -338,12 +347,15 @@ void dma_direct_free(struct device *dev, size_t size,
>> dma_free_from_pool(dev, cpu_addr, PAGE_ALIGN(size)))
>> return;
>>
>> + if (swiotlb_find_pool(dev, dma_to_phys(dev, dma_addr)))
>> + mark_mem_encrypted = false;
>> +
>> if (is_vmalloc_addr(cpu_addr)) {
>> vunmap(cpu_addr);
>> } else {
>> if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
>> arch_dma_clear_uncached(cpu_addr, size);
>> - if (dma_set_encrypted(dev, cpu_addr, size))
>> + if (mark_mem_encrypted && dma_set_encrypted(dev, cpu_addr, size))
>> return;
>> }
>>
>> @@ -359,6 +371,19 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
>> if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
>> return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>>
>> + if (is_swiotlb_for_alloc(dev)) {
>> + page = dma_direct_alloc_swiotlb(dev, size);
>> + if (!page)
>> + return NULL;
>> +
>> + if (PageHighMem(page)) {
>
> My understanding is that rmem_swiotlb_device_init() asserts that there
> is no PageHighMem()? Also a similar check doesn’t exist in
> dma_direct_alloc().
>
The reason I added that HighMem check is that __dma_direct_alloc_pages()
already has that check.
	page = dma_alloc_contiguous(dev, size, gfp);
	if (page) {
		if (dma_coherent_ok(dev, page_to_phys(page), size) &&
		    (allow_highmem || !PageHighMem(page)))
			return page;
		dma_free_contiguous(dev, page, size);
	}
I understand that swiotlb alloc is currently only used with restricted DMA
pools, which will not return HighMem pages, so I will drop this hunk from
the patch.
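
With that hunk dropped, the swiotlb branch in dma_direct_alloc_pages() would
reduce to roughly the following (sketch only; exact placement and error
handling will follow the rest of the function in v5):

	if (is_swiotlb_for_alloc(dev)) {
		page = dma_direct_alloc_swiotlb(dev, size);
		if (!page)
			return NULL;
		/*
		 * Restricted pools are decrypted at init time and never come
		 * from highmem, so no PageHighMem() handling and no
		 * dma_set_decrypted() transition is needed here.
		 */
		void *ret = page_address(page);

		memset(ret, 0, size);
		*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
		return page;
	}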
-aneesh