Date: Wed, 8 Apr 2026 19:47:41 +0000
In-Reply-To: <20260408194750.2280873-1-smostafa@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260408194750.2280873-1-smostafa@google.com>
X-Mailer: git-send-email 2.53.0.1213.gd9a14994de-goog
Message-ID: <20260408194750.2280873-5-smostafa@google.com>
Subject: [RFC PATCH v3 4/5] dma-mapping: Encapsulate memory state during allocation
From: Mostafa Saleh
To: iommu@lists.linux.dev, linux-kernel@vger.kernel.org
Cc: robin.murphy@arm.com, m.szyprowski@samsung.com, will@kernel.org,
    maz@kernel.org, suzuki.poulose@arm.com, catalin.marinas@arm.com,
    jiri@resnulli.us, jgg@ziepe.ca, aneesh.kumar@kernel.org,
    Mostafa Saleh
Content-Type: text/plain; charset="UTF-8"

Introduce a new dma-direct internal type, dma_page, which is a
"struct page" pointer plus a bit indicating whether the memory has
been decrypted. This is useful for passing that information,
encapsulated, through the allocation functions; it is currently set
from swiotlb_alloc().

No functional changes.

Signed-off-by: Mostafa Saleh
---
 kernel/dma/direct.c | 58 +++++++++++++++++++++++++++++++++++----------
 1 file changed, 46 insertions(+), 12 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index de63e0449700..204bc566480c 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -16,6 +16,33 @@
 #include
 #include "direct.h"

+/*
+ * Represent a DMA allocation and a 1-bit flag for its state
+ */
+struct dma_page {
+	unsigned long val;
+};
+
+#define DMA_PAGE_DECRYPTED_FLAG	BIT(0)
+
+#define DMA_PAGE_NULL ((struct dma_page){ .val = 0 })
+
+static inline struct dma_page page_to_dma_page(struct page *page, bool decrypted)
+{
+	struct dma_page dma_page;
+
+	dma_page.val = (unsigned long)page;
+	if (decrypted)
+		dma_page.val |= DMA_PAGE_DECRYPTED_FLAG;
+
+	return dma_page;
+}
+
+static inline struct page *dma_page_to_page(struct dma_page dma_page)
+{
+	return (struct page *)(dma_page.val & ~DMA_PAGE_DECRYPTED_FLAG);
+}
+
 /*
  * Most architectures use ZONE_DMA for the first 16 Megabytes, but some use
  * it for entirely different regions. In that case the arch code needs to
@@ -103,20 +130,21 @@ static void __dma_direct_free_pages(struct device *dev, struct page *page,
 	dma_free_contiguous(dev, page, size);
 }

-static struct page *dma_direct_alloc_swiotlb(struct device *dev, size_t size)
+static struct dma_page dma_direct_alloc_swiotlb(struct device *dev, size_t size)
 {
-	struct page *page = swiotlb_alloc(dev, size, NULL);
+	enum swiotlb_page_state state;
+	struct page *page = swiotlb_alloc(dev, size, &state);

 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		swiotlb_free(dev, page, size);
-		return NULL;
+		return DMA_PAGE_NULL;
 	}

-	return page;
+	return page_to_dma_page(page, state == SWIOTLB_PAGE_DECRYPTED);
 }

-static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
-		gfp_t gfp, bool allow_highmem)
+static struct dma_page __dma_direct_alloc_pages(struct device *dev, size_t size,
+		gfp_t gfp, bool allow_highmem)
 {
 	int node = dev_to_node(dev);
 	struct page *page;
@@ -132,7 +160,7 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 	if (page) {
 		if (dma_coherent_ok(dev, page_to_phys(page), size) &&
 		    (allow_highmem || !PageHighMem(page)))
-			return page;
+			return page_to_dma_page(page, false);

 		dma_free_contiguous(dev, page, size);
 	}
@@ -148,10 +176,10 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		else if (IS_ENABLED(CONFIG_ZONE_DMA) && !(gfp & GFP_DMA))
 			gfp = (gfp & ~GFP_DMA32) | GFP_DMA;
 		else
-			return NULL;
+			return DMA_PAGE_NULL;
 	}

-	return page;
+	return page_to_dma_page(page, false);
 }

 /*
@@ -184,9 +212,11 @@ static void *dma_direct_alloc_from_pool(struct device *dev, size_t size,
 static void *dma_direct_alloc_no_mapping(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp)
 {
+	struct dma_page dma_page;
 	struct page *page;

-	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
+	dma_page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
+	page = dma_page_to_page(dma_page);
 	if (!page)
 		return NULL;

@@ -203,6 +233,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
 	bool remap = false, set_uncached = false, decrypt = force_dma_unencrypted(dev);
+	struct dma_page dma_page;
 	struct page *page;
 	void *ret;

@@ -253,7 +284,8 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	 * we always manually zero the memory once we are done, and only allow
 	 * high mem if pages doesn't need decryption.
 	 */
-	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, !decrypt);
+	dma_page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, !decrypt);
+	page = dma_page_to_page(dma_page);
 	if (!page)
 		return NULL;

@@ -352,13 +384,15 @@ void dma_direct_free(struct device *dev, size_t size,
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
 {
+	struct dma_page dma_page;
 	struct page *page;
 	void *ret;

 	if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);

-	page = __dma_direct_alloc_pages(dev, size, gfp, false);
+	dma_page = __dma_direct_alloc_pages(dev, size, gfp, false);
+	page = dma_page_to_page(dma_page);
 	if (!page)
 		return NULL;
-- 
2.53.0.1213.gd9a14994de-goog