From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 8 Apr 2026 19:47:40 +0000
In-Reply-To: <20260408194750.2280873-1-smostafa@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version:
1.0
References: <20260408194750.2280873-1-smostafa@google.com>
X-Mailer: git-send-email 2.53.0.1213.gd9a14994de-goog
Message-ID: <20260408194750.2280873-4-smostafa@google.com>
Subject: [RFC PATCH v3 3/5] dma-mapping: Decrypt memory on remap
From: Mostafa Saleh
To: iommu@lists.linux.dev, linux-kernel@vger.kernel.org
Cc: robin.murphy@arm.com, m.szyprowski@samsung.com, will@kernel.org,
	maz@kernel.org, suzuki.poulose@arm.com, catalin.marinas@arm.com,
	jiri@resnulli.us, jgg@ziepe.ca, aneesh.kumar@kernel.org,
	Mostafa Saleh
Content-Type: text/plain; charset="UTF-8"

When memory needs to be remapped on systems with force_dma_unencrypted(),
and that memory is not allocated from a restricted DMA pool, decryption
was silently skipped: only the decrypted pgprot was set for the remapped
alias, while the underlying memory still needs to be decrypted.

With memory decryption, highmem allocations are no longer allowed, but
that should not be a problem on the modern systems that require
decryption.

Also, move the force_dma_unencrypted() check out of the dma_set_*
helpers, so callers can use more generic logic to decide the memory
state.
Reported-by: Catalin Marinas
Fixes: f3c962226dbe ("dma-direct: clean up the remapping checks in dma_direct_alloc")
Signed-off-by: Mostafa Saleh
---
 kernel/dma/direct.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index ce74f213ec40..de63e0449700 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -79,8 +79,6 @@ bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 
 static int dma_set_decrypted(struct device *dev, void *vaddr, size_t size)
 {
-	if (!force_dma_unencrypted(dev))
-		return 0;
 	return set_memory_decrypted((unsigned long)vaddr, PFN_UP(size));
 }
 
@@ -88,8 +86,6 @@ static int dma_set_encrypted(struct device *dev, void *vaddr, size_t size)
 {
 	int ret;
 
-	if (!force_dma_unencrypted(dev))
-		return 0;
 	ret = set_memory_encrypted((unsigned long)vaddr, PFN_UP(size));
 	if (ret)
 		pr_warn_ratelimited("leaking DMA memory that can't be re-encrypted\n");
@@ -206,7 +202,7 @@ static void *dma_direct_alloc_no_mapping(struct device *dev, size_t size,
 void *dma_direct_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
-	bool remap = false, set_uncached = false, encrypt = false;
+	bool remap = false, set_uncached = false, decrypt = force_dma_unencrypted(dev);
 	struct page *page;
 	void *ret;
 
@@ -215,7 +211,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev))
+	    !decrypt && !is_swiotlb_for_alloc(dev))
 		return dma_direct_alloc_no_mapping(dev, size, dma_handle, gfp);
 
 	if (!dev_is_dma_coherent(dev)) {
@@ -249,12 +245,15 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	 * Remapping or decrypting memory may block, allocate the memory from
 	 * the atomic pools instead if we aren't allowed block.
 	 */
-	if ((remap || force_dma_unencrypted(dev)) &&
+	if ((remap || decrypt) &&
 	    dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
-	/* we always manually zero the memory once we are done */
-	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
+	/*
+	 * We always manually zero the memory once we are done, and only allow
+	 * highmem if the pages don't need decryption.
+	 */
+	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, !decrypt);
 	if (!page)
 		return NULL;
 
@@ -268,10 +267,12 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		set_uncached = false;
 	}
 
+	if (decrypt && dma_set_decrypted(dev, page_address(page), size))
+		goto out_leak_pages;
 	if (remap) {
 		pgprot_t prot = dma_pgprot(dev, PAGE_KERNEL, attrs);
 
-		if (force_dma_unencrypted(dev))
+		if (decrypt)
 			prot = pgprot_decrypted(prot);
 
 		/* remove any dirty cache lines on the kernel alias */
@@ -281,11 +282,9 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		ret = dma_common_contiguous_remap(page, size, prot,
 				__builtin_return_address(0));
 		if (!ret)
-			goto out_free_pages;
+			goto out_encrypt_pages;
 	} else {
 		ret = page_address(page);
-		if (dma_set_decrypted(dev, ret, size))
-			goto out_leak_pages;
 	}
 
 	memset(ret, 0, size);
@@ -301,9 +300,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	return ret;
 
 out_encrypt_pages:
-	encrypt = true;
-out_free_pages:
-	__dma_direct_free_pages(dev, page, size, encrypt);
+	__dma_direct_free_pages(dev, page, size, decrypt);
 	return NULL;
 out_leak_pages:
 	return NULL;
@@ -366,7 +363,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 		return NULL;
 
 	ret = page_address(page);
-	if (dma_set_decrypted(dev, ret, size))
+	if (force_dma_unencrypted(dev) && dma_set_decrypted(dev, ret, size))
 		goto out_leak_pages;
 	memset(ret, 0, size);
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
-- 
2.53.0.1213.gd9a14994de-goog