From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Aneesh Kumar K.V (Arm)"
To: iommu@lists.linux.dev, linux-kernel@vger.kernel.org
Cc: "Aneesh Kumar K.V (Arm)", Robin Murphy, Marek Szyprowski, Will Deacon,
	Marc Zyngier, Steven Price, Suzuki K Poulose, Catalin Marinas,
	Jiri Pirko, Jason Gunthorpe, Mostafa Saleh, Petr Tesarik,
	Alexey Kardashevskiy, Dan Williams, Xu Yilun
Subject: [PATCH v3 2/9] dma-direct: use DMA_ATTR_CC_SHARED in alloc/free paths
Date: Mon, 27 Apr 2026 11:25:02 +0530
Message-ID: <20260427055509.898190-3-aneesh.kumar@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260427055509.898190-1-aneesh.kumar@kernel.org>
References: <20260427055509.898190-1-aneesh.kumar@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Propagate the force_dma_unencrypted() decision into the DMA_ATTR_CC_SHARED
attribute in the dma-direct allocation path and use that attribute to drive
the related decisions: dma_direct_alloc(), dma_direct_free() and
dma_direct_alloc_pages() now fold the forced-unencrypted case into attrs,
so the rest of the path keys off attrs instead of re-evaluating
force_dma_unencrypted().
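For readers less familiar with the attrs bit games, here is a minimal,
self-contained sketch of the folding pattern applied in the diff below.
DMA_ATTR_CC_SHARED and DMA_ATTR_NO_KERNEL_MAPPING are the names used by
the patch, but the bit values, the force_dma_unencrypted() stub and the
no_mapping_fast_path() helper are stand-ins invented for this illustration
only:

#include <stdbool.h>
#include <stdio.h>

/*
 * Stand-in bit values for this sketch only; the real definitions live in
 * include/linux/dma-mapping.h.
 */
#define DMA_ATTR_NO_KERNEL_MAPPING	(1UL << 0)
#define DMA_ATTR_CC_SHARED		(1UL << 1)

/* Pretend policy helper: does this "device" need shared/decrypted pages? */
static bool force_dma_unencrypted(bool dev_in_cc_guest)
{
	return dev_in_cc_guest;
}

/*
 * The mask-and-compare used by the patch reads as "NO_KERNEL_MAPPING is
 * set AND CC_SHARED is clear", i.e. the page-cookie fast path is only
 * taken when the buffer can stay encrypted.
 */
static bool no_mapping_fast_path(unsigned long attrs)
{
	return (attrs & (DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_CC_SHARED)) ==
	       DMA_ATTR_NO_KERNEL_MAPPING;
}

int main(void)
{
	unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING;

	/* Fold the decision into attrs once ... */
	if (force_dma_unencrypted(true))
		attrs |= DMA_ATTR_CC_SHARED;

	/* ... and let every later check key off attrs. Prints "no" here. */
	printf("fast path: %s\n", no_mapping_fast_path(attrs) ? "yes" : "no");
	return 0;
}

The same mask-and-compare appears in both dma_direct_alloc() and
dma_direct_free() in the diff below.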
Signed-off-by: Aneesh Kumar K.V (Arm)
---
 kernel/dma/direct.c | 44 ++++++++++++++++++++++++++++++++++++--------
 1 file changed, 36 insertions(+), 8 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index b958f150718a..0c2e1f8436ce 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -201,16 +201,31 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
 	bool remap = false, set_uncached = false;
-	bool mark_mem_decrypt = true;
+	bool mark_mem_decrypt = false;
 	struct page *page;
 	void *ret;
 
+	/*
+	 * DMA_ATTR_CC_SHARED is not a caller-visible dma_alloc_*()
+	 * attribute. The direct allocator uses it internally after it has
+	 * decided that the backing pages must be shared/decrypted, so the
+	 * rest of the allocation path can consistently select DMA addresses,
+	 * choose compatible pools and restore encryption on free.
+	 */
+	if (attrs & DMA_ATTR_CC_SHARED)
+		return NULL;
+
+	if (force_dma_unencrypted(dev)) {
+		attrs |= DMA_ATTR_CC_SHARED;
+		mark_mem_decrypt = true;
+	}
+
 	size = PAGE_ALIGN(size);
 	if (attrs & DMA_ATTR_NO_WARN)
 		gfp |= __GFP_NOWARN;
 
-	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev))
+	if (((attrs & (DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_CC_SHARED)) ==
+	     DMA_ATTR_NO_KERNEL_MAPPING) && !is_swiotlb_for_alloc(dev))
 		return dma_direct_alloc_no_mapping(dev, size, dma_handle, gfp);
 
 	if (!dev_is_dma_coherent(dev)) {
@@ -244,7 +259,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	 * Remapping or decrypting memory may block, allocate the memory from
 	 * the atomic pools instead if we aren't allowed block.
 	 */
-	if ((remap || force_dma_unencrypted(dev)) &&
+	if ((remap || (attrs & DMA_ATTR_CC_SHARED)) &&
 	    dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
@@ -318,11 +333,20 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 void dma_direct_free(struct device *dev, size_t size,
 		void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
 {
-	bool mark_mem_encrypted = true;
+	bool mark_mem_encrypted = false;
 	unsigned int page_order = get_order(size);
 
-	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
+	/*
+	 * If the device requested an unencrypted buffer, convert it
+	 * back to encrypted on free.
+	 */
+	if (force_dma_unencrypted(dev)) {
+		attrs |= DMA_ATTR_CC_SHARED;
+		mark_mem_encrypted = true;
+	}
+
+	if (((attrs & (DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_CC_SHARED)) ==
+	     DMA_ATTR_NO_KERNEL_MAPPING) && !is_swiotlb_for_alloc(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
@@ -365,10 +389,14 @@ void dma_direct_free(struct device *dev, size_t size,
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
 {
+	unsigned long attrs = 0;
 	struct page *page;
 	void *ret;
 
-	if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
+	if (force_dma_unencrypted(dev))
+		attrs |= DMA_ATTR_CC_SHARED;
+
+	if ((attrs & DMA_ATTR_CC_SHARED) && dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	if (is_swiotlb_for_alloc(dev)) {
-- 
2.43.0