From: "Aneesh Kumar K.V (Arm)"
To: iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev
Cc: "Aneesh Kumar K.V (Arm)", Robin Murphy, Marek Szyprowski, Will Deacon,
	Marc Zyngier, Steven Price, Suzuki K Poulose, Catalin Marinas,
	Jiri Pirko, Jason Gunthorpe, Mostafa Saleh, Petr Tesarik,
	Alexey Kardashevskiy, Dan Williams, Xu Yilun,
	linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
	Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	"Christophe Leroy (CS GROUP)", Alexander Gordeev, Gerald Schaefer,
	Heiko Carstens, Vasily Gorbik, Christian Borntraeger, Sven Schnelle,
	x86@kernel.org
Subject: [PATCH v4 07/13] dma-direct: make dma_direct_map_phys() honor DMA_ATTR_CC_SHARED
Date: Tue, 12 May 2026 14:34:02 +0530
Message-ID: <20260512090408.794195-8-aneesh.kumar@kernel.org>
In-Reply-To: <20260512090408.794195-1-aneesh.kumar@kernel.org>
References: <20260512090408.794195-1-aneesh.kumar@kernel.org>

Teach dma_direct_map_phys() to select the DMA address encoding based on
DMA_ATTR_CC_SHARED. Use phys_to_dma_unencrypted() for decrypted mappings
and phys_to_dma_encrypted() otherwise. If a device requires unencrypted
DMA but the source physical address is still encrypted, force the mapping
through swiotlb so the DMA address and backing memory attributes remain
consistent.
Update the arm64, x86, s390 and powerpc secure-guest setup to no longer
pass the SWIOTLB_FORCE option.

Signed-off-by: Aneesh Kumar K.V (Arm)
---
Changes from v3:
* Handle DMA_ATTR_MMIO
---
 arch/arm64/mm/init.c                 |  4 +--
 arch/powerpc/platforms/pseries/svm.c |  2 +-
 arch/s390/mm/init.c                  |  2 +-
 arch/x86/kernel/pci-dma.c            |  4 +--
 kernel/dma/direct.c                  |  4 ++-
 kernel/dma/direct.h                  | 38 +++++++++++++---------------
 6 files changed, 24 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 97987f850a33..acf67c7064db 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -338,10 +338,8 @@ void __init arch_mm_preinit(void)
 	unsigned int flags = SWIOTLB_VERBOSE;
 	bool swiotlb = max_pfn > PFN_DOWN(arm64_dma_phys_limit);
 
-	if (is_realm_world()) {
+	if (is_realm_world())
 		swiotlb = true;
-		flags |= SWIOTLB_FORCE;
-	}
 
 	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) {
 		/*
diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index 384c9dc1899a..7a403dbd35ee 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -29,7 +29,7 @@ static int __init init_svm(void)
 	 * need to use the SWIOTLB buffer for DMA even if dma_capable() says
 	 * otherwise.
 	 */
-	ppc_swiotlb_flags |= SWIOTLB_ANY | SWIOTLB_FORCE;
+	ppc_swiotlb_flags |= SWIOTLB_ANY;
 
 	/* Share the SWIOTLB buffer with the host. */
 	swiotlb_update_mem_attributes();
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 1f72efc2a579..843dbd445124 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -149,7 +149,7 @@ static void __init pv_init(void)
 	virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
 
 	/* make sure bounce buffers are shared */
-	swiotlb_init(true, SWIOTLB_FORCE | SWIOTLB_VERBOSE);
+	swiotlb_init(true, SWIOTLB_VERBOSE);
 	swiotlb_update_mem_attributes();
 }
diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index 6267363e0189..75cf8f6ae8cd 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -59,10 +59,8 @@ static void __init pci_swiotlb_detect(void)
 	 * bounce buffers as the hypervisor can't access arbitrary VM memory
 	 * that is not explicitly shared with it.
 	 */
-	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
+	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
 		x86_swiotlb_enable = true;
-		x86_swiotlb_flags |= SWIOTLB_FORCE;
-	}
 }
 #else
 static inline void __init pci_swiotlb_detect(void)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index ac315dd046c4..5aaa813c5509 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -691,8 +691,10 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
 	if (is_swiotlb_active(dev) &&
-	    (dma_addressing_limited(dev) || is_swiotlb_force_bounce(dev)))
+	    (dma_addressing_limited(dev) || is_swiotlb_force_bounce(dev) ||
+	     force_dma_unencrypted(dev)))
 		return swiotlb_max_mapping_size(dev);
+
 	return SIZE_MAX;
 }
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index e05dc7649366..4e35264ab6f8 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -89,36 +89,32 @@ static inline dma_addr_t dma_direct_map_phys(struct device *dev,
 	dma_addr_t dma_addr;
 
 	if (is_swiotlb_force_bounce(dev)) {
-		if (!(attrs & DMA_ATTR_CC_SHARED)) {
-			if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT))
-				return DMA_MAPPING_ERROR;
+		if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT))
+			return DMA_MAPPING_ERROR;
 
-			return swiotlb_map(dev, phys, size, dir, attrs);
-		}
-	} else if (attrs & DMA_ATTR_CC_SHARED) {
-		return DMA_MAPPING_ERROR;
+		return swiotlb_map(dev, phys, size, dir, attrs);
 	}
 
-	if (attrs & DMA_ATTR_MMIO) {
-		dma_addr = phys;
-		if (unlikely(!dma_capable(dev, dma_addr, size, false, attrs)))
-			goto err_overflow;
-	} else if (attrs & DMA_ATTR_CC_SHARED) {
+	if (attrs & DMA_ATTR_CC_SHARED)
 		dma_addr = phys_to_dma_unencrypted(dev, phys);
+	else
+		dma_addr = phys_to_dma_encrypted(dev, phys);
+
+	if (attrs & DMA_ATTR_MMIO) {
 		if (unlikely(!dma_capable(dev, dma_addr, size, false, attrs)))
 			goto err_overflow;
-	} else {
-		dma_addr = phys_to_dma(dev, phys);
-		if (unlikely(!dma_capable(dev, dma_addr, size, true, attrs)) ||
-		    dma_kmalloc_needs_bounce(dev, size, dir)) {
-			if (is_swiotlb_active(dev) &&
-			    !(attrs & DMA_ATTR_REQUIRE_COHERENT))
-				return swiotlb_map(dev, phys, size, dir, attrs);
+		goto dma_mapped;
+	}
 
-			goto err_overflow;
-		}
+	if (unlikely(!dma_capable(dev, dma_addr, size, true, attrs)) ||
+	    dma_kmalloc_needs_bounce(dev, size, dir)) {
+		if (is_swiotlb_active(dev) &&
+		    !(attrs & DMA_ATTR_REQUIRE_COHERENT))
+			return swiotlb_map(dev, phys, size, dir, attrs);
+		goto err_overflow;
 	}
 
+dma_mapped:
 	if (!dev_is_dma_coherent(dev) &&
 	    !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO))) {
 		arch_sync_dma_for_device(phys, size, dir);
-- 
2.43.0