From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Aneesh Kumar K.V (Arm)" <aneesh.kumar@kernel.org>
To: linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
	linux-coco@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev
Cc: "Aneesh Kumar K.V (Arm)" <aneesh.kumar@kernel.org>,
	Catalin Marinas, Jason Gunthorpe, Marc Zyngier,
	Marek Szyprowski, Robin Murphy, Steven Price,
	Suzuki K Poulose, Thomas Gleixner, Will Deacon
Subject: [PATCH v4 1/3] dma-direct: swiotlb: handle swiotlb alloc/free outside __dma_direct_alloc_pages
Date: Mon, 27 Apr 2026 12:01:06 +0530
Message-ID: <20260427063108.909019-2-aneesh.kumar@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260427063108.909019-1-aneesh.kumar@kernel.org>
References: <20260427063108.909019-1-aneesh.kumar@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Move swiotlb allocation out of __dma_direct_alloc_pages() and handle it
in dma_direct_alloc() / dma_direct_alloc_pages(). This is needed for
follow-up changes that align shared decrypted buffers to the hypervisor
page size; swiotlb pool memory is decrypted as a whole and needs no
per-allocation alignment handling.

swiotlb backing pages are already mapped decrypted by
swiotlb_update_mem_attributes() and rmem_swiotlb_device_init(), so
dma-direct must not call dma_set_decrypted() on allocation nor
dma_set_encrypted() on free for swiotlb-backed memory. Update the
alloc/free paths to detect swiotlb-backed pages and skip the
encrypt/decrypt transitions for them. Keep the existing highmem
rejection in dma_direct_alloc_pages() for swiotlb allocations.

We currently set `for_alloc = true` only for "restricted-dma-pool", and
rmem_swiotlb_device_init() decrypts that whole pool up front. Such a
pool is typically used together with "shared-dma-pool", where the
shared region is accessed through a remap/ioremap and the returned
address is suitable for decrypted memory access, so the existing code
paths remain valid.

Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
---
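Notes (review aid, not part of the commit message):

Both free paths now gate dma_set_encrypted() on the same ownership
test: does the memory belong to a swiotlb pool? A condensed sketch of
that test follows. swiotlb_backed() is a hypothetical name used only in
this note; the patch open-codes the check, passing
dma_to_phys(dev, dma_addr) in dma_direct_free() and page_to_phys(page)
in dma_direct_free_pages().

/* Sketch only; assumes kernel context and <linux/swiotlb.h>. */
#include <linux/swiotlb.h>

/* Hypothetical helper mirroring the open-coded checks in the patch. */
static bool swiotlb_backed(struct device *dev, phys_addr_t phys)
{
	/*
	 * swiotlb pool memory was decrypted wholesale when the pool was
	 * initialised (swiotlb_update_mem_attributes() or
	 * rmem_swiotlb_device_init()), so freeing an allocation from it
	 * must not transition the pages back to encrypted.
	 */
	return swiotlb_find_pool(dev, phys) != NULL;
}

The alloc side mirrors this: when is_swiotlb_for_alloc(dev) is true,
the page comes from dma_direct_alloc_swiotlb() and the
dma_set_decrypted() call is skipped for the same reason.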
 kernel/dma/direct.c | 44 +++++++++++++++++++++++++++++++++++++-------
 1 file changed, 37 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 8f43a930716d..c2a43e4ef902 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -125,9 +125,6 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	WARN_ON_ONCE(!PAGE_ALIGNED(size));
 
-	if (is_swiotlb_for_alloc(dev))
-		return dma_direct_alloc_swiotlb(dev, size);
-
 	gfp |= dma_direct_optimal_gfp_mask(dev, &phys_limit);
 	page = dma_alloc_contiguous(dev, size, gfp);
 	if (page) {
@@ -204,6 +201,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
 	bool remap = false, set_uncached = false;
+	bool mark_mem_decrypt = true;
 	struct page *page;
 	void *ret;
 
@@ -250,11 +248,21 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	    dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
+	if (is_swiotlb_for_alloc(dev)) {
+		page = dma_direct_alloc_swiotlb(dev, size);
+		if (page) {
+			mark_mem_decrypt = false;
+			goto setup_page;
+		}
+		return NULL;
+	}
+
 	/* we always manually zero the memory once we are done */
 	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
 	if (!page)
 		return NULL;
 
+setup_page:
 	/*
 	 * dma_alloc_contiguous can return highmem pages depending on a
 	 * combination the cma= arguments and per-arch setup.  These need to be
@@ -281,7 +289,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			goto out_free_pages;
 	} else {
 		ret = page_address(page);
-		if (dma_set_decrypted(dev, ret, size))
+		if (mark_mem_decrypt && dma_set_decrypted(dev, ret, size))
 			goto out_leak_pages;
 	}
 
@@ -298,7 +306,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	return ret;
 
 out_encrypt_pages:
-	if (dma_set_encrypted(dev, page_address(page), size))
+	if (mark_mem_decrypt && dma_set_encrypted(dev, page_address(page), size))
 		return NULL;
 out_free_pages:
 	__dma_direct_free_pages(dev, page, size);
@@ -310,6 +318,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 void dma_direct_free(struct device *dev, size_t size,
 		void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
 {
+	bool mark_mem_encrypted = true;
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
@@ -338,12 +347,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	    dma_free_from_pool(dev, cpu_addr, PAGE_ALIGN(size)))
 		return;
 
+	if (swiotlb_find_pool(dev, dma_to_phys(dev, dma_addr)))
+		mark_mem_encrypted = false;
+
 	if (is_vmalloc_addr(cpu_addr)) {
 		vunmap(cpu_addr);
 	} else {
 		if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 			arch_dma_clear_uncached(cpu_addr, size);
-		if (dma_set_encrypted(dev, cpu_addr, size))
+		if (mark_mem_encrypted && dma_set_encrypted(dev, cpu_addr, size))
 			return;
 	}
 
@@ -359,6 +371,19 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
+	if (is_swiotlb_for_alloc(dev)) {
+		page = dma_direct_alloc_swiotlb(dev, size);
+		if (!page)
+			return NULL;
+
+		if (PageHighMem(page)) {
+			swiotlb_free(dev, page, size);
+			return NULL;
+		}
+		ret = page_address(page);
+		goto setup_page;
+	}
+
 	page = __dma_direct_alloc_pages(dev, size, gfp, false);
 	if (!page)
 		return NULL;
@@ -366,6 +391,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	ret = page_address(page);
 	if (dma_set_decrypted(dev, ret, size))
 		goto out_leak_pages;
+setup_page:
 	memset(ret, 0, size);
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
@@ -378,13 +404,17 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 		enum dma_data_direction dir)
 {
 	void *vaddr = page_address(page);
+	bool mark_mem_encrypted = true;
 
 	/* If cpu_addr is not from an atomic pool, dma_free_from_pool() fails */
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    dma_free_from_pool(dev, vaddr, size))
 		return;
 
-	if (dma_set_encrypted(dev, vaddr, size))
+	if (swiotlb_find_pool(dev, page_to_phys(page)))
+		mark_mem_encrypted = false;
+
+	if (mark_mem_encrypted && dma_set_encrypted(dev, vaddr, size))
 		return;
 	__dma_direct_free_pages(dev, page, size);
 }
-- 
2.43.0