From: "Aneesh Kumar K.V (Arm)"
To: iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev
Cc: Robin Murphy, Marek Szyprowski, Will Deacon, Marc Zyngier,
	Steven Price, Suzuki K Poulose, Catalin Marinas, Jiri Pirko,
	Jason Gunthorpe, Mostafa Saleh, Petr Tesarik, Alexey Kardashevskiy,
	Dan Williams, Xu Yilun, linuxppc-dev@lists.ozlabs.org,
	linux-s390@vger.kernel.org, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, "Christophe Leroy (CS GROUP)", Alexander Gordeev,
	Gerald Schaefer, Heiko Carstens, Vasily Gorbik,
	Christian Borntraeger, Sven Schnelle, x86@kernel.org
Subject: [PATCH v4 08/13] dma-direct: set decrypted flag for remapped DMA allocations
Date: Tue, 12 May 2026 14:34:03 +0530
Message-ID: <20260512090408.794195-9-aneesh.kumar@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260512090408.794195-1-aneesh.kumar@kernel.org>
References: <20260512090408.794195-1-aneesh.kumar@kernel.org>

Devices that are DMA non-coherent and require a remap were skipping
dma_set_decrypted(),
leaving DMA buffers encrypted even when the device requires unencrypted
access. Move the call before the if (remap) branch so that both the
direct and remapped allocation paths correctly mark the allocation as
decrypted (or fail cleanly) before use.

Fix dma_direct_alloc() and dma_direct_free() to apply set_memory_*() to
the linear-map alias of the backing pages instead of the remapped CPU
address. Also disallow highmem pages for DMA_ATTR_CC_SHARED, because
highmem buffers do not provide a usable linear-map address.

Fixes: f3c962226dbe ("dma-direct: clean up the remapping checks in dma_direct_alloc")
Signed-off-by: Aneesh Kumar K.V (Arm)
---
 kernel/dma/direct.c | 56 ++++++++++++++++++++++++++++++++++++---------
 1 file changed, 45 insertions(+), 11 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 5aaa813c5509..f5da6e992d83 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -204,6 +204,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 {
 	bool remap = false, set_uncached = false;
 	bool mark_mem_decrypt = false;
+	bool allow_highmem = true;
 	struct page *page;
 	void *ret;
 
@@ -222,6 +223,15 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		mark_mem_decrypt = true;
 	}
 
+	if (attrs & DMA_ATTR_CC_SHARED)
+		/*
+		 * Unencrypted/shared DMA requires a linear-mapped buffer
+		 * address to look up the PFN and set architecture-required PFN
+		 * attributes. This is not possible with HighMem. Avoid HighMem
+		 * allocation.
+		 */
+		allow_highmem = false;
+
 	size = PAGE_ALIGN(size);
 	if (attrs & DMA_ATTR_NO_WARN)
 		gfp |= __GFP_NOWARN;
@@ -280,7 +290,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	}
 
 	/* we always manually zero the memory once we are done */
-	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
+	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, allow_highmem);
 	if (!page)
 		return NULL;
 
@@ -295,6 +305,14 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		set_uncached = false;
 	}
 
+	if (mark_mem_decrypt) {
+		void *lm_addr;
+
+		lm_addr = page_address(page);
+		if (set_memory_decrypted((unsigned long)lm_addr, PFN_UP(size)))
+			goto out_leak_pages;
+	}
+
 	if (remap) {
 		pgprot_t prot = dma_pgprot(dev, PAGE_KERNEL, attrs);
 
@@ -305,31 +323,39 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		ret = dma_common_contiguous_remap(page, size, prot,
 				__builtin_return_address(0));
 		if (!ret)
-			goto out_free_pages;
+			goto out_encrypt_pages;
 	} else {
 		ret = page_address(page);
-		if (mark_mem_decrypt && dma_set_decrypted(dev, ret, size))
-			goto out_leak_pages;
 	}
 
 	memset(ret, 0, size);
 
 	if (set_uncached) {
+		void *uncached_cpu_addr;
+
 		arch_dma_prep_coherent(page, size);
-		ret = arch_dma_set_uncached(ret, size);
-		if (IS_ERR(ret))
-			goto out_encrypt_pages;
+		uncached_cpu_addr = arch_dma_set_uncached(ret, size);
+		if (IS_ERR(uncached_cpu_addr))
+			goto out_free_remap_pages;
+		ret = uncached_cpu_addr;
 	}
 
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return ret;
+
+out_free_remap_pages:
+	if (remap)
+		dma_common_free_remap(ret, size);
+
 out_encrypt_pages:
-	if (mark_mem_decrypt && dma_set_encrypted(dev, page_address(page), size))
-		return NULL;
-out_free_pages:
+	if (mark_mem_decrypt &&
+	    dma_set_encrypted(dev, page_address(page), size))
+		goto out_leak_pages;
+
 	__dma_direct_free_pages(dev, page, size);
 	return NULL;
+
 out_leak_pages:
 	return NULL;
 }
@@ -384,8 +410,16 @@ void dma_direct_free(struct device *dev, size_t size,
 	} else {
 		if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 			arch_dma_clear_uncached(cpu_addr, size);
-		if (mark_mem_encrypted && dma_set_encrypted(dev, cpu_addr, size))
+	}
+
+	if (mark_mem_encrypted) {
+		void *lm_addr;
+
+		lm_addr = phys_to_virt(dma_to_phys(dev, dma_addr));
+		if (set_memory_encrypted((unsigned long)lm_addr, PFN_UP(size))) {
+			pr_warn_ratelimited("leaking DMA memory that can't be re-encrypted\n");
 			return;
+		}
 	}
 
 	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
-- 
2.43.0