From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Aneesh Kumar K.V (Arm)"
To: iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev
Cc: "Aneesh Kumar K.V (Arm)", Robin Murphy, Marek Szyprowski,
	Will Deacon, Marc Zyngier, Steven Price, Suzuki K Poulose,
	Catalin Marinas, Jiri Pirko, Jason Gunthorpe, Mostafa Saleh,
	Petr Tesarik, Alexey Kardashevskiy, Dan Williams, Xu Yilun,
	linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
	Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	"Christophe Leroy (CS GROUP)", Alexander Gordeev, Gerald Schaefer,
	Heiko Carstens, Vasily Gorbik, Christian Borntraeger,
	Sven Schnelle, x86@kernel.org
Subject: [PATCH v4 07/13] dma-direct: make dma_direct_map_phys() honor DMA_ATTR_CC_SHARED
Date: Tue, 12 May 2026 14:34:02 +0530
Message-ID: <20260512090408.794195-8-aneesh.kumar@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260512090408.794195-1-aneesh.kumar@kernel.org>
References: <20260512090408.794195-1-aneesh.kumar@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Teach dma_direct_map_phys() to select the DMA address encoding based on
DMA_ATTR_CC_SHARED: use phys_to_dma_unencrypted() for decrypted mappings
and phys_to_dma_encrypted() otherwise.

If a device requires unencrypted DMA but the source physical address is
still encrypted, force the mapping through swiotlb so that the DMA
address and the backing memory attributes remain consistent.
Update the arm64, x86, s390 and powerpc secure-guest setup code so it
no longer passes the SWIOTLB_FORCE option.

Signed-off-by: Aneesh Kumar K.V (Arm)
---
Changes from v3:
* Handle DMA_ATTR_MMIO
---
 arch/arm64/mm/init.c                 | 4 +--
 arch/powerpc/platforms/pseries/svm.c | 2 +-
 arch/s390/mm/init.c                  | 2 +-
 arch/x86/kernel/pci-dma.c            | 4 +--
 kernel/dma/direct.c                  | 4 ++-
 kernel/dma/direct.h                  | 38 +++++++++++++---------------
 6 files changed, 24 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 97987f850a33..acf67c7064db 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -338,10 +338,8 @@ void __init arch_mm_preinit(void)
 	unsigned int flags = SWIOTLB_VERBOSE;
 	bool swiotlb = max_pfn > PFN_DOWN(arm64_dma_phys_limit);
 
-	if (is_realm_world()) {
+	if (is_realm_world())
 		swiotlb = true;
-		flags |= SWIOTLB_FORCE;
-	}
 
 	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) {
 		/*
diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index 384c9dc1899a..7a403dbd35ee 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -29,7 +29,7 @@ static int __init init_svm(void)
 	 * need to use the SWIOTLB buffer for DMA even if dma_capable() says
 	 * otherwise.
 	 */
-	ppc_swiotlb_flags |= SWIOTLB_ANY | SWIOTLB_FORCE;
+	ppc_swiotlb_flags |= SWIOTLB_ANY;
 
 	/* Share the SWIOTLB buffer with the host. */
 	swiotlb_update_mem_attributes();
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 1f72efc2a579..843dbd445124 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -149,7 +149,7 @@ static void __init pv_init(void)
 	virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
 
 	/* make sure bounce buffers are shared */
-	swiotlb_init(true, SWIOTLB_FORCE | SWIOTLB_VERBOSE);
+	swiotlb_init(true, SWIOTLB_VERBOSE);
 	swiotlb_update_mem_attributes();
 }
diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index 6267363e0189..75cf8f6ae8cd 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -59,10 +59,8 @@ static void __init pci_swiotlb_detect(void)
 	 * bounce buffers as the hypervisor can't access arbitrary VM memory
 	 * that is not explicitly shared with it.
 	 */
-	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
+	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
 		x86_swiotlb_enable = true;
-		x86_swiotlb_flags |= SWIOTLB_FORCE;
-	}
 }
 #else
 static inline void __init pci_swiotlb_detect(void)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index ac315dd046c4..5aaa813c5509 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -691,8 +691,10 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
 	if (is_swiotlb_active(dev) &&
-	    (dma_addressing_limited(dev) || is_swiotlb_force_bounce(dev)))
+	    (dma_addressing_limited(dev) || is_swiotlb_force_bounce(dev) ||
+	     force_dma_unencrypted(dev)))
 		return swiotlb_max_mapping_size(dev);
+
 	return SIZE_MAX;
 }
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index e05dc7649366..4e35264ab6f8 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -89,36 +89,32 @@ static inline dma_addr_t dma_direct_map_phys(struct device *dev,
 	dma_addr_t dma_addr;
 
 	if (is_swiotlb_force_bounce(dev)) {
-		if (!(attrs & DMA_ATTR_CC_SHARED)) {
-			if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT))
-				return DMA_MAPPING_ERROR;
+		if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT))
+			return DMA_MAPPING_ERROR;
 
-			return swiotlb_map(dev, phys, size, dir, attrs);
-		}
-	} else if (attrs & DMA_ATTR_CC_SHARED) {
-		return DMA_MAPPING_ERROR;
+		return swiotlb_map(dev, phys, size, dir, attrs);
 	}
 
-	if (attrs & DMA_ATTR_MMIO) {
-		dma_addr = phys;
-		if (unlikely(!dma_capable(dev, dma_addr, size, false, attrs)))
-			goto err_overflow;
-	} else if (attrs & DMA_ATTR_CC_SHARED) {
+	if (attrs & DMA_ATTR_CC_SHARED)
 		dma_addr = phys_to_dma_unencrypted(dev, phys);
+	else
+		dma_addr = phys_to_dma_encrypted(dev, phys);
+
+	if (attrs & DMA_ATTR_MMIO) {
 		if (unlikely(!dma_capable(dev, dma_addr, size, false, attrs)))
 			goto err_overflow;
-	} else {
-		dma_addr = phys_to_dma(dev, phys);
-		if (unlikely(!dma_capable(dev, dma_addr, size, true, attrs)) ||
-		    dma_kmalloc_needs_bounce(dev, size, dir)) {
-			if (is_swiotlb_active(dev) &&
-			    !(attrs & DMA_ATTR_REQUIRE_COHERENT))
-				return swiotlb_map(dev, phys, size, dir, attrs);
+		goto dma_mapped;
+	}
 
-			goto err_overflow;
-		}
+	if (unlikely(!dma_capable(dev, dma_addr, size, true, attrs)) ||
+	    dma_kmalloc_needs_bounce(dev, size, dir)) {
+		if (is_swiotlb_active(dev) &&
+		    !(attrs & DMA_ATTR_REQUIRE_COHERENT))
+			return swiotlb_map(dev, phys, size, dir, attrs);
+		goto err_overflow;
 	}
 
+dma_mapped:
 	if (!dev_is_dma_coherent(dev) &&
 	    !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO))) {
 		arch_sync_dma_for_device(phys, size, dir);
-- 
2.43.0