The Linux Kernel Mailing List
* Re: [PATCH v3 2/9] dma-direct: use DMA_ATTR_CC_SHARED in alloc/free paths
       [not found] ` <20260427055509.898190-3-aneesh.kumar@kernel.org>
@ 2026-05-08  9:26   ` Catalin Marinas
  2026-05-11  5:38     ` Aneesh Kumar K.V
  0 siblings, 1 reply; 8+ messages in thread
From: Catalin Marinas @ 2026-05-08  9:26 UTC (permalink / raw)
  To: Aneesh Kumar K.V (Arm)
  Cc: iommu, linux-kernel, Robin Murphy, Marek Szyprowski, Will Deacon,
	Marc Zyngier, Steven Price, Suzuki K Poulose, Jiri Pirko,
	Jason Gunthorpe, Mostafa Saleh, Petr Tesarik,
	Alexey Kardashevskiy, Dan Williams, Xu Yilun, Christoph Hellwig

On Mon, Apr 27, 2026 at 11:25:02AM +0530, Aneesh Kumar K.V (Arm) wrote:
> @@ -365,10 +389,14 @@ void dma_direct_free(struct device *dev, size_t size,
>  struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
>  		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
>  {
> +	unsigned long attrs = 0;
>  	struct page *page;
>  	void *ret;
>  
> -	if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
> +	if (force_dma_unencrypted(dev))
> +		attrs |= DMA_ATTR_CC_SHARED;
> +
> +	if ((attrs & DMA_ATTR_CC_SHARED) && dma_direct_use_pool(dev, gfp))
>  		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);

I was looking at Sashiko's reports and noticed the wrong type
returned here. It's not something your patch introduces, but I think it
should be fixed rather than continuing to propagate it. It's been around
since 5.10, commit 5b138c534fda ("dma-direct: factor out a
dma_direct_alloc_from_pool helper"). This code path isn't used much, I
guess.

-- 
Catalin

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v3 4/9] dma: swiotlb: track pool encryption state and honor DMA_ATTR_CC_SHARED
       [not found] ` <20260427055509.898190-5-aneesh.kumar@kernel.org>
@ 2026-05-08 16:49   ` Catalin Marinas
  2026-05-11  5:14     ` Aneesh Kumar K.V
  0 siblings, 1 reply; 8+ messages in thread
From: Catalin Marinas @ 2026-05-08 16:49 UTC (permalink / raw)
  To: Aneesh Kumar K.V (Arm)
  Cc: iommu, linux-kernel, Robin Murphy, Marek Szyprowski, Will Deacon,
	Marc Zyngier, Steven Price, Suzuki K Poulose, Jiri Pirko,
	Jason Gunthorpe, Mostafa Saleh, Petr Tesarik,
	Alexey Kardashevskiy, Dan Williams, Xu Yilun

On Mon, Apr 27, 2026 at 11:25:04AM +0530, Aneesh Kumar K.V (Arm) wrote:
> @@ -1408,6 +1429,17 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
>  	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
>  		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
>  
> +	/*
> +	 * if we are trying to swiotlb map a decrypted paddr or the paddr is encrypted
> +	 * but the device is forcing decryption, use decrypted io_tlb_mem
> +	 */
> +	if ((attrs & DMA_ATTR_CC_SHARED) ||
> +	    (!(attrs & DMA_ATTR_CC_SHARED) && force_dma_unencrypted(dev)))
> +		require_decrypted = true;

Nit: just this should do:

	if ((attrs & DMA_ATTR_CC_SHARED) || force_dma_unencrypted(dev))

> +	if (require_decrypted != mem->decrypted)
> +		return (phys_addr_t)DMA_MAPPING_ERROR;

I wonder whether io_tlb_mem should store the attrs that were used when
created (just DMA_ATTR_CC_SHARED for now) and use that to check here. In
patch 7, this hunk in swiotlb_map() confused me:

	if (dev->dma_io_tlb_mem->decrypted) {
		dma_addr = phys_to_dma_unencrypted(dev, swiotlb_addr);
		attrs |= DMA_ATTR_CC_SHARED;
	} else {
		dma_addr = phys_to_dma_encrypted(dev, swiotlb_addr);
	}

as I thought we'd not update the attributes on the streaming API path.
But what you meant here is for dma_capable() to be checked against the
device with the actual io_tlb_mem attributes.

Anyway, the new swiotlb_tbl_map_single() rejects kmalloc-minalign
bouncing if the device is private while the bounce buffer is shared.
Unlikely we'll need such bouncing if the devices are coherent but it's
good as a safety check.

-- 
Catalin


* Re: [PATCH v3 0/9] dma-mapping: Use DMA_ATTR_CC_SHARED through direct, pool and swiotlb paths
       [not found] <20260427055509.898190-1-aneesh.kumar@kernel.org>
       [not found] ` <20260427055509.898190-3-aneesh.kumar@kernel.org>
       [not found] ` <20260427055509.898190-5-aneesh.kumar@kernel.org>
@ 2026-05-08 17:28 ` Catalin Marinas
  2026-05-10  0:36   ` Jason Gunthorpe
                     ` (2 more replies)
  2 siblings, 3 replies; 8+ messages in thread
From: Catalin Marinas @ 2026-05-08 17:28 UTC (permalink / raw)
  To: Aneesh Kumar K.V (Arm)
  Cc: iommu, linux-kernel, Robin Murphy, Marek Szyprowski, Will Deacon,
	Marc Zyngier, Steven Price, Suzuki K Poulose, Jiri Pirko,
	Jason Gunthorpe, Mostafa Saleh, Petr Tesarik,
	Alexey Kardashevskiy, Dan Williams, Xu Yilun

On Mon, Apr 27, 2026 at 11:25:00AM +0530, Aneesh Kumar K.V (Arm) wrote:
> This series propagates DMA_ATTR_CC_SHARED through the dma-direct,
> dma-pool, and swiotlb paths so that encrypted and decrypted DMA buffers
> are handled consistently.

I think this series makes sense, using DMA_ATTR_CC_SHARED throughout the
DMA API, either for alloc or for streaming, to decide/check what bouncing
does. Sashiko has a few interesting reports; it probably breaks s390 as
well (it might be similar to the pKVM case).

I don't think it addresses Mostafa's earlier issues with pKVM, although
I'd rather base additional pKVM-related fixes on top of this series.
With pKVM, cc_platform_has(CC_ATTR_MEM_ENCRYPT) returns false, as does
force_dma_unencrypted(). I think we should update protected guests to
return true for these if they need shared buffers (the whole
decrypted/shared terminology is messy but in most places it just means
buffer not private to the protected guest, whether encryption is
available or not).

That said, does CC_ATTR_GUEST_MEM_ENCRYPT actually make more sense than
CC_ATTR_MEM_ENCRYPT throughout this series? We'd need to change arm64
realms as well to use this one.

-- 
Catalin


* Re: [PATCH v3 0/9] dma-mapping: Use DMA_ATTR_CC_SHARED through direct, pool and swiotlb paths
  2026-05-08 17:28 ` [PATCH v3 0/9] dma-mapping: Use DMA_ATTR_CC_SHARED through direct, pool and swiotlb paths Catalin Marinas
@ 2026-05-10  0:36   ` Jason Gunthorpe
  2026-05-11 11:13   ` Mostafa Saleh
  2026-05-11 11:18   ` Aneesh Kumar K.V
  2 siblings, 0 replies; 8+ messages in thread
From: Jason Gunthorpe @ 2026-05-10  0:36 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Aneesh Kumar K.V (Arm), iommu, linux-kernel, Robin Murphy,
	Marek Szyprowski, Will Deacon, Marc Zyngier, Steven Price,
	Suzuki K Poulose, Jiri Pirko, Mostafa Saleh, Petr Tesarik,
	Alexey Kardashevskiy, Dan Williams, Xu Yilun

On Fri, May 08, 2026 at 06:28:11PM +0100, Catalin Marinas wrote:

> That said, does CC_ATTR_GUEST_MEM_ENCRYPT actually make more sense than
> CC_ATTR_MEM_ENCRYPT throughout this series? We'd need to change arm64
> realms as well to use this one.

It is often quite confusing in the DMA API and the IOMMU code what is
guest logic and what is host logic, so this does sound nice. AFAIK none
of this forced bouncing, T=1, or CC_SHARED logic applies to the host in
the DMA API.

Jason


* Re: [PATCH v3 4/9] dma: swiotlb: track pool encryption state and honor DMA_ATTR_CC_SHARED
  2026-05-08 16:49   ` [PATCH v3 4/9] dma: swiotlb: track pool encryption state and honor DMA_ATTR_CC_SHARED Catalin Marinas
@ 2026-05-11  5:14     ` Aneesh Kumar K.V
  0 siblings, 0 replies; 8+ messages in thread
From: Aneesh Kumar K.V @ 2026-05-11  5:14 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: iommu, linux-kernel, Robin Murphy, Marek Szyprowski, Will Deacon,
	Marc Zyngier, Steven Price, Suzuki K Poulose, Jiri Pirko,
	Jason Gunthorpe, Mostafa Saleh, Petr Tesarik,
	Alexey Kardashevskiy, Dan Williams, Xu Yilun

Catalin Marinas <catalin.marinas@arm.com> writes:

> On Mon, Apr 27, 2026 at 11:25:04AM +0530, Aneesh Kumar K.V (Arm) wrote:
>> @@ -1408,6 +1429,17 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
>>  	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
>>  		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
>>  
>> +	/*
>> +	 * if we are trying to swiotlb map a decrypted paddr or the paddr is encrypted
>> +	 * but the device is forcing decryption, use decrypted io_tlb_mem
>> +	 */
>> +	if ((attrs & DMA_ATTR_CC_SHARED) ||
>> +	    (!(attrs & DMA_ATTR_CC_SHARED) && force_dma_unencrypted(dev)))
>> +		require_decrypted = true;
>
> Nit: just this should do:
>
> 	if ((attrs & DMA_ATTR_CC_SHARED) || force_dma_unencrypted(dev))
>

I will update this in the next version.

>> +	if (require_decrypted != mem->decrypted)
>> +		return (phys_addr_t)DMA_MAPPING_ERROR;
>
> I wonder whether io_tlb_mem should store the attrs that were used when
> created (just DMA_ATTR_CC_SHARED for now) and use that to check here. In
> patch 7, this hunk in swiotlb_map() confused me:
>

We already added io_tlb_mem->decrypted. Are you suggesting that this
should instead be io_tlb_mem->attrs and use DMA_ATTR_CC_SHARED? Do we
foresee the need to use any other attributes (other than shared) with
respect to io_tlb_mem?

>
> 	if (dev->dma_io_tlb_mem->decrypted) {
> 		dma_addr = phys_to_dma_unencrypted(dev, swiotlb_addr);
> 		attrs |= DMA_ATTR_CC_SHARED;
> 	} else {
> 		dma_addr = phys_to_dma_encrypted(dev, swiotlb_addr);
> 	}
>
> as I thought we'd not update the attributes on the streaming API path.
> But what you meant here is for dma_capable() to be checked against the
> device with the actual io_tlb_mem attributes.
>

If we allocated/mapped from a decrypted io_tlb_mem, we should return an
unencrypted dma_addr_t.

> Anyway, the new swiotlb_tbl_map_single() rejects kmalloc-minalign
> bouncing if the device is private while the bounce buffer is shared.
> Unlikely we'll need such bouncing if the devices are coherent but it's
> good as a safety check.
>

-aneesh


* Re: [PATCH v3 2/9] dma-direct: use DMA_ATTR_CC_SHARED in alloc/free paths
  2026-05-08  9:26   ` [PATCH v3 2/9] dma-direct: use DMA_ATTR_CC_SHARED in alloc/free paths Catalin Marinas
@ 2026-05-11  5:38     ` Aneesh Kumar K.V
  0 siblings, 0 replies; 8+ messages in thread
From: Aneesh Kumar K.V @ 2026-05-11  5:38 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: iommu, linux-kernel, Robin Murphy, Marek Szyprowski, Will Deacon,
	Marc Zyngier, Steven Price, Suzuki K Poulose, Jiri Pirko,
	Jason Gunthorpe, Mostafa Saleh, Petr Tesarik,
	Alexey Kardashevskiy, Dan Williams, Xu Yilun, Christoph Hellwig

Catalin Marinas <catalin.marinas@arm.com> writes:

> On Mon, Apr 27, 2026 at 11:25:02AM +0530, Aneesh Kumar K.V (Arm) wrote:
>> @@ -365,10 +389,14 @@ void dma_direct_free(struct device *dev, size_t size,
>>  struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
>>  		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
>>  {
>> +	unsigned long attrs = 0;
>>  	struct page *page;
>>  	void *ret;
>>  
>> -	if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
>> +	if (force_dma_unencrypted(dev))
>> +		attrs |= DMA_ATTR_CC_SHARED;
>> +
>> +	if ((attrs & DMA_ATTR_CC_SHARED) && dma_direct_use_pool(dev, gfp))
>>  		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>
> I was looking at Sashiko's reports and noticed the wrong type
> returned here. It's not something your patch introduces, but I think it
> should be fixed rather than continuing to propagate it. It's been around
> since 5.10, commit 5b138c534fda ("dma-direct: factor out a
> dma_direct_alloc_from_pool helper"). This code path isn't used much, I
> guess.
>

I can add this change as one of the patches:

modified    kernel/dma/direct.c
@@ -165,24 +165,24 @@
 	return !gfpflags_allow_blocking(gfp) && !is_swiotlb_for_alloc(dev);
 }
 
-static void *dma_direct_alloc_from_pool(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
+static struct page *dma_direct_alloc_from_pool(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, void **cpu_addr, gfp_t gfp,
+		unsigned long attrs)
 {
 	struct page *page;
 	u64 phys_limit;
-	void *ret;
 
 	if (WARN_ON_ONCE(!IS_ENABLED(CONFIG_DMA_COHERENT_POOL)))
 		return NULL;
 
 	gfp |= dma_direct_optimal_gfp_mask(dev, &phys_limit);
-	page = dma_alloc_from_pool(dev, size, &ret, gfp, attrs,
-				  dma_coherent_ok);
+	page = dma_alloc_from_pool(dev, size, cpu_addr, gfp, attrs,
+				   dma_coherent_ok);
 	if (!page)
 		return NULL;
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page),
 					 !!(attrs & DMA_ATTR_CC_SHARED));
-	return ret;
+	return page;
 }
 
 static void *dma_direct_alloc_no_mapping(struct device *dev, size_t size,
@@ -212,7 +212,7 @@
 	bool mark_mem_decrypt = false;
 	bool allow_highmem = true;
 	struct page *page;
-	void *ret;
+	void *cpu_addr;
 
 	/*
 	 * DMA_ATTR_CC_SHARED is not a caller-visible dma_alloc_*()
@@ -278,9 +278,12 @@
 	 * the atomic pools instead if we aren't allowed block.
 	 */
 	if ((remap || (attrs & DMA_ATTR_CC_SHARED)) &&
-	    dma_direct_use_pool(dev, gfp))
-		return dma_direct_alloc_from_pool(dev, size, dma_handle,
-					  gfp, attrs);
+	    dma_direct_use_pool(dev, gfp)) {
+		page = dma_direct_alloc_from_pool(dev, size,
+					dma_handle, &cpu_addr,
+					gfp, attrs);
+		return page ? cpu_addr : NULL;
+	}
 
 	if (is_swiotlb_for_alloc(dev)) {
 		page = dma_direct_alloc_swiotlb(dev, size, attrs);
@@ -318,12 +321,12 @@
 		arch_dma_prep_coherent(page, size);
 
 		/* create a coherent mapping */
-		ret = dma_common_contiguous_remap(page, size, prot,
+		cpu_addr = dma_common_contiguous_remap(page, size, prot,
 				__builtin_return_address(0));
-		if (!ret)
+		if (!cpu_addr)
 			goto out_free_pages;
 	} else {
-		ret = page_address(page);
+		cpu_addr = page_address(page);
 	}
 
 	if (mark_mem_decrypt) {
@@ -334,18 +337,18 @@
 			goto out_leak_pages;
 	}
 
-	memset(ret, 0, size);
+	memset(cpu_addr, 0, size);
 
 	if (set_uncached) {
 		arch_dma_prep_coherent(page, size);
-		ret = arch_dma_set_uncached(ret, size);
-		if (IS_ERR(ret))
+		cpu_addr = arch_dma_set_uncached(cpu_addr, size);
+		if (IS_ERR(cpu_addr))
 			goto out_encrypt_pages;
 	}
 
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page),
 					 !!(attrs & DMA_ATTR_CC_SHARED));
-	return ret;
+	return cpu_addr;
 
 out_encrypt_pages:
 	if (mark_mem_decrypt && dma_set_encrypted(dev, page_address(page), size))
@@ -427,14 +430,14 @@
 {
 	unsigned long attrs = 0;
 	struct page *page;
-	void *ret;
+	void *cpu_addr;
 
 	if (force_dma_unencrypted(dev))
 		attrs |= DMA_ATTR_CC_SHARED;
 
 	if ((attrs & DMA_ATTR_CC_SHARED) && dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle,
-					  gfp, attrs);
+					&cpu_addr, gfp, attrs);
 
 	if (is_swiotlb_for_alloc(dev)) {
 		page = dma_direct_alloc_swiotlb(dev, size, attrs);
@@ -445,7 +448,7 @@
 			swiotlb_free(dev, page, size);
 			return NULL;
 		}
-		ret = page_address(page);
+		cpu_addr = page_address(page);
 		goto setup_page;
 	}
 
@@ -453,11 +456,11 @@
 	if (!page)
 		return NULL;
 
-	ret = page_address(page);
-	if ((attrs & DMA_ATTR_CC_SHARED) && dma_set_decrypted(dev, ret, size))
+	cpu_addr = page_address(page);
+	if ((attrs & DMA_ATTR_CC_SHARED) && dma_set_decrypted(dev, cpu_addr, size))
 		goto out_leak_pages;
 setup_page:
-	memset(ret, 0, size);
+	memset(cpu_addr, 0, size);
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page),
 					 !!(attrs & DMA_ATTR_CC_SHARED));
 	return page;





* Re: [PATCH v3 0/9] dma-mapping: Use DMA_ATTR_CC_SHARED through direct, pool and swiotlb paths
  2026-05-08 17:28 ` [PATCH v3 0/9] dma-mapping: Use DMA_ATTR_CC_SHARED through direct, pool and swiotlb paths Catalin Marinas
  2026-05-10  0:36   ` Jason Gunthorpe
@ 2026-05-11 11:13   ` Mostafa Saleh
  2026-05-11 11:18   ` Aneesh Kumar K.V
  2 siblings, 0 replies; 8+ messages in thread
From: Mostafa Saleh @ 2026-05-11 11:13 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Aneesh Kumar K.V (Arm), iommu, linux-kernel, Robin Murphy,
	Marek Szyprowski, Will Deacon, Marc Zyngier, Steven Price,
	Suzuki K Poulose, Jiri Pirko, Jason Gunthorpe, Petr Tesarik,
	Alexey Kardashevskiy, Dan Williams, Xu Yilun

On Fri, May 08, 2026 at 06:28:11PM +0100, Catalin Marinas wrote:
> On Mon, Apr 27, 2026 at 11:25:00AM +0530, Aneesh Kumar K.V (Arm) wrote:
> > This series propagates DMA_ATTR_CC_SHARED through the dma-direct,
> > dma-pool, and swiotlb paths so that encrypted and decrypted DMA buffers
> > are handled consistently.
> 
> I think this series makes sense, using DMA_ATTR_CC_SHARED throughout the
> DMA API, either for alloc or for streaming to decide/check what bouncing
> does. Sashiko has a few interesting reports, it probably breaks s390 as
> well (it might be similar to the pKVM case).

I have this series on my review list. I believe there is an overlap with
my series, so I can rebase mine on top of this if that makes sense. I
will probably wait for a new version that addresses the current comments
and Sashiko's notes.

Thanks,
Mostafa

> 
> I don't think it addresses Mostafa's earlier issues with pKVM, although
> I'd rather base additional pKVM-related fixes on top of this series.
> With pKVM, cc_platform_has(CC_ATTR_MEM_ENCRYPT) returns false, as does
> force_dma_unencrypted(). I think we should update protected guests to
> return true for these if they need shared buffers (the whole
> decrypted/shared terminology is messy but in most places it just means
> buffer not private to the protected guest, whether encryption is
> available or not).
> 
> That said, does CC_ATTR_GUEST_MEM_ENCRYPT actually make more sense than
> CC_ATTR_MEM_ENCRYPT throughout this series? We'd need to change arm64
> realms as well to use this one.
> 
> -- 
> Catalin


* Re: [PATCH v3 0/9] dma-mapping: Use DMA_ATTR_CC_SHARED through direct, pool and swiotlb paths
  2026-05-08 17:28 ` [PATCH v3 0/9] dma-mapping: Use DMA_ATTR_CC_SHARED through direct, pool and swiotlb paths Catalin Marinas
  2026-05-10  0:36   ` Jason Gunthorpe
  2026-05-11 11:13   ` Mostafa Saleh
@ 2026-05-11 11:18   ` Aneesh Kumar K.V
  2 siblings, 0 replies; 8+ messages in thread
From: Aneesh Kumar K.V @ 2026-05-11 11:18 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: iommu, linux-kernel, Robin Murphy, Marek Szyprowski, Will Deacon,
	Marc Zyngier, Steven Price, Suzuki K Poulose, Jiri Pirko,
	Jason Gunthorpe, Mostafa Saleh, Petr Tesarik,
	Alexey Kardashevskiy, Dan Williams, Xu Yilun

Catalin Marinas <catalin.marinas@arm.com> writes:

> On Mon, Apr 27, 2026 at 11:25:00AM +0530, Aneesh Kumar K.V (Arm) wrote:
>> This series propagates DMA_ATTR_CC_SHARED through the dma-direct,
>> dma-pool, and swiotlb paths so that encrypted and decrypted DMA buffers
>> are handled consistently.
>
> I think this series makes sense, using DMA_ATTR_CC_SHARED throughout the
> DMA API, either for alloc or for streaming to decide/check what bouncing
> does. Sashiko has a few interesting reports, it probably breaks s390 as
> well (it might be similar to the pKVM case).
>

I will address Sashiko's review comments in the next revision.

With respect to s390/powerpc, I can drop SWIOTLB_FORCE from this series.
However, I am not sure whether we would get enough testing for that
soon. Both architectures are similar to x86/arm in forcing DMA to be
unencrypted:

powerpc:
static inline bool force_dma_unencrypted(struct device *dev)
{
	return is_secure_guest();
}

s390:
/* are we a protected virtualization guest? */
bool force_dma_unencrypted(struct device *dev)
{
	return is_prot_virt_guest();
}


>
> I don't think it addresses Mostafa's earlier issues with pKVM, although
> I'd rather base additional pKVM-related fixes on top of this series.
> With pKVM, cc_platform_has(CC_ATTR_MEM_ENCRYPT) returns false, as does
> force_dma_unencrypted(). I think we should update protected guests to
> return true for these if they need shared buffers (the whole
> decrypted/shared terminology is messy but in most places it just means
> buffer not private to the protected guest, whether encryption is
> available or not).
>
> That said, does CC_ATTR_GUEST_MEM_ENCRYPT actually make more sense than
> CC_ATTR_MEM_ENCRYPT throughout this series? We'd need to change arm64
> realms as well to use this one.
>

x86 memory encryption can use swiotlb bouncing even on the host, right?

+       if (cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
+               io_tlb_default_mem.decrypted = true;
+               set_memory_decrypted((unsigned long)mem->vaddr, bytes >> PAGE_SHIFT);
+       } else {
+               io_tlb_default_mem.decrypted = false;
+       }
+


-aneesh

