From: "Aneesh Kumar K.V (Arm)" <aneesh.kumar@kernel.org>
To: iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev
Cc: "Aneesh Kumar K.V (Arm)", Robin Murphy, Marek Szyprowski,
	Will Deacon, Marc Zyngier, Steven Price, Suzuki K Poulose,
	Catalin Marinas, Jiri Pirko, Jason Gunthorpe, Mostafa Saleh,
	Petr Tesarik, Alexey Kardashevskiy, Dan Williams, Xu Yilun,
	linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
	Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	"Christophe Leroy (CS GROUP)", Alexander Gordeev,
	Gerald Schaefer, Heiko Carstens, Vasily Gorbik,
	Christian Borntraeger, Sven Schnelle, x86@kernel.org
Subject: [PATCH v4 02/13] dma-direct: use DMA_ATTR_CC_SHARED in alloc/free paths
Date: Tue, 12 May 2026 14:33:57 +0530
Message-ID: <20260512090408.794195-3-aneesh.kumar@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260512090408.794195-1-aneesh.kumar@kernel.org>
References: <20260512090408.794195-1-aneesh.kumar@kernel.org>

Propagate force_dma_unencrypted() into DMA_ATTR_CC_SHARED in the
dma-direct allocation path and use the attribute to drive the related
decisions. This updates dma_direct_alloc(), dma_direct_free(), and
dma_direct_alloc_pages() to fold the forced unencrypted case into
attrs.

Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
---
 kernel/dma/direct.c | 44 ++++++++++++++++++++++++++++++++++++--------
 1 file changed, 36 insertions(+), 8 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index b958f150718a..0c2e1f8436ce 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -201,16 +201,31 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
 	bool remap = false, set_uncached = false;
-	bool mark_mem_decrypt = true;
+	bool mark_mem_decrypt = false;
 	struct page *page;
 	void *ret;
 
+	/*
+	 * DMA_ATTR_CC_SHARED is not a caller-visible dma_alloc_*()
+	 * attribute. The direct allocator uses it internally after it has
+	 * decided that the backing pages must be shared/decrypted, so the
+	 * rest of the allocation path can consistently select DMA addresses,
+	 * choose compatible pools and restore encryption on free.
+	 */
+	if (attrs & DMA_ATTR_CC_SHARED)
+		return NULL;
+
+	if (force_dma_unencrypted(dev)) {
+		attrs |= DMA_ATTR_CC_SHARED;
+		mark_mem_decrypt = true;
+	}
+
 	size = PAGE_ALIGN(size);
 	if (attrs & DMA_ATTR_NO_WARN)
 		gfp |= __GFP_NOWARN;
 
-	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev))
+	if (((attrs & (DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_CC_SHARED)) ==
+	      DMA_ATTR_NO_KERNEL_MAPPING) && !is_swiotlb_for_alloc(dev))
 		return dma_direct_alloc_no_mapping(dev, size, dma_handle, gfp);
 
 	if (!dev_is_dma_coherent(dev)) {
@@ -244,7 +259,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	 * Remapping or decrypting memory may block, allocate the memory from
 	 * the atomic pools instead if we aren't allowed block.
 	 */
-	if ((remap || force_dma_unencrypted(dev)) &&
+	if ((remap || (attrs & DMA_ATTR_CC_SHARED)) &&
 	    dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
@@ -318,11 +333,20 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 void dma_direct_free(struct device *dev, size_t size,
 		void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
 {
-	bool mark_mem_encrypted = true;
+	bool mark_mem_encrypted = false;
 	unsigned int page_order = get_order(size);
 
-	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
+	/*
+	 * If the device requested an unencrypted buffer, convert it
+	 * back to encrypted on free.
+	 */
+	if (force_dma_unencrypted(dev)) {
+		attrs |= DMA_ATTR_CC_SHARED;
+		mark_mem_encrypted = true;
+	}
+
+	if (((attrs & (DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_CC_SHARED)) ==
+	      DMA_ATTR_NO_KERNEL_MAPPING) && !is_swiotlb_for_alloc(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
@@ -365,10 +389,14 @@ void dma_direct_free(struct device *dev, size_t size,
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
 {
+	unsigned long attrs = 0;
 	struct page *page;
 	void *ret;
 
-	if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
+	if (force_dma_unencrypted(dev))
+		attrs |= DMA_ATTR_CC_SHARED;
+
+	if ((attrs & DMA_ATTR_CC_SHARED) && dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	if (is_swiotlb_for_alloc(dev)) {
-- 
2.43.0
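
For reference, a minimal user-space sketch (not part of the patch) of
the mask idiom the patch relies on: once force_dma_unencrypted() has
been folded into attrs as DMA_ATTR_CC_SHARED, testing

	(attrs & (DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_CC_SHARED)) ==
		DMA_ATTR_NO_KERNEL_MAPPING

is equivalent to the old "(attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
!force_dma_unencrypted(dev)" check. The attribute values below are
hypothetical stand-ins, not the <linux/dma-mapping.h> definitions, and
the sketch assumes callers never pass DMA_ATTR_CC_SHARED themselves,
which the patch enforces by returning NULL in that case:

#include <assert.h>
#include <stdbool.h>

#define DMA_ATTR_NO_KERNEL_MAPPING	(1UL << 0)	/* hypothetical value */
#define DMA_ATTR_CC_SHARED		(1UL << 1)	/* hypothetical value */

/* Old form: two independent conditions. */
static bool old_check(unsigned long attrs, bool force_unencrypted)
{
	return (attrs & DMA_ATTR_NO_KERNEL_MAPPING) && !force_unencrypted;
}

/* New form: fold force_unencrypted into attrs, then test both bits at once. */
static bool new_check(unsigned long attrs, bool force_unencrypted)
{
	if (force_unencrypted)
		attrs |= DMA_ATTR_CC_SHARED;
	return (attrs & (DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_CC_SHARED)) ==
		DMA_ATTR_NO_KERNEL_MAPPING;
}

int main(void)
{
	/* Caller-visible attrs never include DMA_ATTR_CC_SHARED. */
	unsigned long caller_attrs[] = { 0, DMA_ATTR_NO_KERNEL_MAPPING };

	for (int i = 0; i < 2; i++)
		for (int f = 0; f < 2; f++)
			assert(old_check(caller_attrs[i], f) ==
			       new_check(caller_attrs[i], f));
	return 0;
}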