public inbox for linux-arm-kernel@lists.infradead.org
* [PATCH 1/2] arm64: dma-mapping: implement dma_get_sgtable()
@ 2015-07-17 15:58 Robin Murphy
  2015-07-17 15:58 ` [PATCH 2/2] arm64: dma-mapping: consolidate __swiotlb_mmap() Robin Murphy
  2015-07-20 16:36 ` [PATCH 1/2] arm64: dma-mapping: implement dma_get_sgtable() Will Deacon
  0 siblings, 2 replies; 4+ messages in thread
From: Robin Murphy @ 2015-07-17 15:58 UTC (permalink / raw)
  To: linux-arm-kernel

The default dma_common_get_sgtable() implementation relies on the CPU
address of the buffer being a regular lowmem address. This is not always
the case on arm64, since allocations from the various DMA pools may have
remapped vmalloc addresses, rendering the use of virt_to_page() invalid.

Fix this by providing our own implementation based on the fact that we
can safely derive a physical address from the DMA address in both cases.

CC: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
 arch/arm64/mm/dma-mapping.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index d16a1ce..4b9b600 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -337,10 +337,24 @@ static int __swiotlb_mmap(struct device *dev,
 	return __dma_common_mmap(dev, vma, cpu_addr, dma_addr, size);
 }
 
+int __swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
+			  void *cpu_addr, dma_addr_t handle, size_t size,
+			  struct dma_attrs *attrs)
+{
+	int ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+
+	if (!ret)
+		sg_set_page(sgt->sgl, phys_to_page(dma_to_phys(dev, handle)),
+			    PAGE_ALIGN(size), 0);
+
+	return ret;
+}
+
 static struct dma_map_ops swiotlb_dma_ops = {
 	.alloc = __dma_alloc,
 	.free = __dma_free,
 	.mmap = __swiotlb_mmap,
+	.get_sgtable = __swiotlb_get_sgtable,
 	.map_page = __swiotlb_map_page,
 	.unmap_page = __swiotlb_unmap_page,
 	.map_sg = __swiotlb_map_sg_attrs,
-- 
1.9.1


* [PATCH 2/2] arm64: dma-mapping: consolidate __swiotlb_mmap()
  2015-07-17 15:58 [PATCH 1/2] arm64: dma-mapping: implement dma_get_sgtable() Robin Murphy
@ 2015-07-17 15:58 ` Robin Murphy
  2015-07-20 16:36 ` [PATCH 1/2] arm64: dma-mapping: implement dma_get_sgtable() Will Deacon
  1 sibling, 0 replies; 4+ messages in thread
From: Robin Murphy @ 2015-07-17 15:58 UTC (permalink / raw)
  To: linux-arm-kernel

Since commit 9d3bfbb4df58 ("arm64: Combine coherent and non-coherent
swiotlb dma_ops"), __dma_common_mmap() is no longer shared between two
callers, so roll it into the remaining one.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
 arch/arm64/mm/dma-mapping.c | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 4b9b600..07c6976 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -303,9 +303,9 @@ static void __swiotlb_sync_sg_for_device(struct device *dev,
 				       sg->length, dir);
 }
 
-/* vma->vm_page_prot must be set appropriately before calling this function */
-static int __dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
-			     void *cpu_addr, dma_addr_t dma_addr, size_t size)
+static int __swiotlb_mmap(struct device *dev, struct vm_area_struct *vma,
+			  void *cpu_addr, dma_addr_t dma_addr, size_t size,
+			  struct dma_attrs *attrs)
 {
 	int ret = -ENXIO;
 	unsigned long nr_vma_pages = (vma->vm_end - vma->vm_start) >>
@@ -314,6 +314,9 @@ static int __dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 	unsigned long pfn = dma_to_phys(dev, dma_addr) >> PAGE_SHIFT;
 	unsigned long off = vma->vm_pgoff;
 
+	vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot,
+					     is_device_dma_coherent(dev));
+
 	if (dma_mmap_from_coherent(dev, vma, cpu_addr, size, &ret))
 		return ret;
 
@@ -327,16 +330,6 @@ static int __dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 	return ret;
 }
 
-static int __swiotlb_mmap(struct device *dev,
-			  struct vm_area_struct *vma,
-			  void *cpu_addr, dma_addr_t dma_addr, size_t size,
-			  struct dma_attrs *attrs)
-{
-	vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot,
-					     is_device_dma_coherent(dev));
-	return __dma_common_mmap(dev, vma, cpu_addr, dma_addr, size);
-}
-
 int __swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
 			  void *cpu_addr, dma_addr_t handle, size_t size,
 			  struct dma_attrs *attrs)
-- 
1.9.1


* [PATCH 1/2] arm64: dma-mapping: implement dma_get_sgtable()
  2015-07-17 15:58 [PATCH 1/2] arm64: dma-mapping: implement dma_get_sgtable() Robin Murphy
  2015-07-17 15:58 ` [PATCH 2/2] arm64: dma-mapping: consolidate __swiotlb_mmap() Robin Murphy
@ 2015-07-20 16:36 ` Will Deacon
  2015-07-20 17:00   ` Robin Murphy
  1 sibling, 1 reply; 4+ messages in thread
From: Will Deacon @ 2015-07-20 16:36 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Jul 17, 2015 at 04:58:21PM +0100, Robin Murphy wrote:
> The default dma_common_get_sgtable() implementation relies on the CPU
> address of the buffer being a regular lowmem address. This is not always
> the case on arm64, since allocations from the various DMA pools may have
> remapped vmalloc addresses, rendering the use of virt_to_page() invalid.
> 
> Fix this by providing our own implementation based on the fact that we
> can safely derive a physical address from the DMA address in both cases.
> 
> CC: Jon Medhurst <tixy@linaro.org>
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---
>  arch/arm64/mm/dma-mapping.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
> index d16a1ce..4b9b600 100644
> --- a/arch/arm64/mm/dma-mapping.c
> +++ b/arch/arm64/mm/dma-mapping.c
> @@ -337,10 +337,24 @@ static int __swiotlb_mmap(struct device *dev,
>  	return __dma_common_mmap(dev, vma, cpu_addr, dma_addr, size);
>  }
>  
> +int __swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
> +			  void *cpu_addr, dma_addr_t handle, size_t size,
> +			  struct dma_attrs *attrs)
> +{
> +	int ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> +
> +	if (!ret)
> +		sg_set_page(sgt->sgl, phys_to_page(dma_to_phys(dev, handle)),
> +			    PAGE_ALIGN(size), 0);
> +
> +	return ret;
> +}

Any reason not to do this in dma_common_get_sgtable?

Will


* [PATCH 1/2] arm64: dma-mapping: implement dma_get_sgtable()
  2015-07-20 16:36 ` [PATCH 1/2] arm64: dma-mapping: implement dma_get_sgtable() Will Deacon
@ 2015-07-20 17:00   ` Robin Murphy
  0 siblings, 0 replies; 4+ messages in thread
From: Robin Murphy @ 2015-07-20 17:00 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Will,

On 20/07/15 17:36, Will Deacon wrote:
> On Fri, Jul 17, 2015 at 04:58:21PM +0100, Robin Murphy wrote:
>> The default dma_common_get_sgtable() implementation relies on the CPU
>> address of the buffer being a regular lowmem address. This is not always
>> the case on arm64, since allocations from the various DMA pools may have
>> remapped vmalloc addresses, rendering the use of virt_to_page() invalid.
>>
>> Fix this by providing our own implementation based on the fact that we
>> can safely derive a physical address from the DMA address in both cases.
>>
>> CC: Jon Medhurst <tixy@linaro.org>
>> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
>> ---
>>   arch/arm64/mm/dma-mapping.c | 14 ++++++++++++++
>>   1 file changed, 14 insertions(+)
>>
>> diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
>> index d16a1ce..4b9b600 100644
>> --- a/arch/arm64/mm/dma-mapping.c
>> +++ b/arch/arm64/mm/dma-mapping.c
>> @@ -337,10 +337,24 @@ static int __swiotlb_mmap(struct device *dev,
>>   	return __dma_common_mmap(dev, vma, cpu_addr, dma_addr, size);
>>   }
>>
>> +int __swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
>> +			  void *cpu_addr, dma_addr_t handle, size_t size,
>> +			  struct dma_attrs *attrs)
>> +{
>> +	int ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
>> +
>> +	if (!ret)
>> +		sg_set_page(sgt->sgl, phys_to_page(dma_to_phys(dev, handle)),
>> +			    PAGE_ALIGN(size), 0);
>> +
>> +	return ret;
>> +}
>
> Any reason not to do this in dma_common_get_sgtable?

Summarising the discussion over at [1], most architectures seem to 
depend on dma_common_get_sgtable, but only a handful implement 
dma_to_phys (plus this approach seems to match the original intent). 
There doesn't seem to be a nice solution for doing this in common code 
without a big cross-architecture patch, and it's somewhat questionable 
how widely this is actually needed.

Robin.

[1]:http://thread.gmane.org/gmane.linux.kernel/1998795

>
> Will
>

