* [PATCH 0/4] Per device coherent DMA map ops
From: Rob Herring @ 2012-08-14 3:49 UTC
To: linux-arm-kernel
From: Rob Herring <rob.herring@calxeda.com>
This series adds coherent DMA map operations which can be set per device.
Expanding on the first RFC[1], this version adds iommu ops and implements
support on the highbank platform.
arch_is_coherent is never defined true since removing ixp2xxx, so we can
remove it altogether.
Rob
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2012-August/113921.html
Rob Herring (4):
ARM: add coherent dma ops
ARM: add coherent iommu dma ops
ARM: kill off arch_is_coherent
ARM: highbank: add coherent DMA setup
.../devicetree/bindings/ata/ahci-platform.txt | 3 +
.../devicetree/bindings/dma/arm-pl330.txt | 3 +
.../devicetree/bindings/net/calxeda-xgmac.txt | 3 +
arch/arm/boot/dts/highbank.dts | 1 +
arch/arm/include/asm/barrier.h | 7 +-
arch/arm/include/asm/dma-mapping.h | 1 +
arch/arm/include/asm/memory.h | 8 -
arch/arm/mach-highbank/highbank.c | 52 ++++
arch/arm/mm/dma-mapping.c | 254 ++++++++++++++++----
arch/arm/mm/mmu.c | 17 +-
10 files changed, 271 insertions(+), 78 deletions(-)
--
1.7.9.5
* [PATCH 1/4] ARM: add coherent dma ops
From: Rob Herring @ 2012-08-14 3:49 UTC
To: linux-arm-kernel
From: Rob Herring <rob.herring@calxeda.com>
arch_is_coherent is problematic as it is a global symbol. This
doesn't work for multi-platform kernels or platforms which can support
per-device coherent DMA.
This adds arm_coherent_dma_ops to be used for devices which are connected
coherently (i.e. to the ACP port on Cortex-A9 or A15). The arm_dma_ops
are modified at boot when arch_is_coherent is true.
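For illustration, here is a minimal sketch of how platform code might opt an
individual device into the coherent ops, modelled on what patch 4 later does
for highbank; the helper name and the use of a "dma-coherent" device tree
property are illustrative assumptions, not something this patch itself adds:

#include <linux/dma-mapping.h>
#include <linux/of.h>

/*
 * Sketch only: pick the coherent DMA ops for a device whose device tree
 * node carries a "dma-coherent" property; otherwise the default
 * arm_dma_ops stay in effect. Platform code would typically call this
 * from a bus notifier before the device is probed (patch 4 does the
 * equivalent for highbank).
 */
static void example_setup_dma_ops(struct device *dev)
{
	if (dev->of_node && of_property_read_bool(dev->of_node, "dma-coherent"))
		set_dma_ops(dev, &arm_coherent_dma_ops);
}

Whether this runs from a notifier, from platform init code, or eventually
from generic device tree code is a platform decision; this patch only
provides the ops themselves.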
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
---
arch/arm/include/asm/dma-mapping.h | 1 +
arch/arm/mm/dma-mapping.c | 71 ++++++++++++++++++++++++++++++------
2 files changed, 60 insertions(+), 12 deletions(-)
diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index 2ae842d..c27d855 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -13,6 +13,7 @@
#define DMA_ERROR_CODE (~0)
extern struct dma_map_ops arm_dma_ops;
+extern struct dma_map_ops arm_coherent_dma_ops;
static inline struct dma_map_ops *get_dma_ops(struct device *dev)
{
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index c2cdf65..c14e37e 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -73,11 +73,18 @@ static dma_addr_t arm_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
- if (!arch_is_coherent() && !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+ if (!dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
__dma_page_cpu_to_dev(page, offset, size, dir);
return pfn_to_dma(dev, page_to_pfn(page)) + offset;
}
+static dma_addr_t arm_coherent_dma_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs)
+{
+ return pfn_to_dma(dev, page_to_pfn(page)) + offset;
+}
+
/**
* arm_dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
@@ -96,7 +103,7 @@ static void arm_dma_unmap_page(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
- if (!arch_is_coherent() && !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+ if (!dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
__dma_page_dev_to_cpu(pfn_to_page(dma_to_pfn(dev, handle)),
handle & ~PAGE_MASK, size, dir);
}
@@ -106,8 +113,7 @@ static void arm_dma_sync_single_for_cpu(struct device *dev,
{
unsigned int offset = handle & (PAGE_SIZE - 1);
struct page *page = pfn_to_page(dma_to_pfn(dev, handle-offset));
- if (!arch_is_coherent())
- __dma_page_dev_to_cpu(page, offset, size, dir);
+ __dma_page_dev_to_cpu(page, offset, size, dir);
}
static void arm_dma_sync_single_for_device(struct device *dev,
@@ -115,8 +121,7 @@ static void arm_dma_sync_single_for_device(struct device *dev,
{
unsigned int offset = handle & (PAGE_SIZE - 1);
struct page *page = pfn_to_page(dma_to_pfn(dev, handle-offset));
- if (!arch_is_coherent())
- __dma_page_cpu_to_dev(page, offset, size, dir);
+ __dma_page_cpu_to_dev(page, offset, size, dir);
}
static int arm_dma_set_mask(struct device *dev, u64 dma_mask);
@@ -138,6 +143,22 @@ struct dma_map_ops arm_dma_ops = {
};
EXPORT_SYMBOL(arm_dma_ops);
+static void *arm_coherent_dma_alloc(struct device *dev, size_t size,
+ dma_addr_t *handle, gfp_t gfp, struct dma_attrs *attrs);
+static void arm_coherent_dma_free(struct device *dev, size_t size, void *cpu_addr,
+ dma_addr_t handle, struct dma_attrs *attrs);
+
+struct dma_map_ops arm_coherent_dma_ops = {
+ .alloc = arm_coherent_dma_alloc,
+ .free = arm_coherent_dma_free,
+ .mmap = arm_dma_mmap,
+ .get_sgtable = arm_dma_get_sgtable,
+ .map_page = arm_coherent_dma_map_page,
+ .map_sg = arm_dma_map_sg,
+ .set_dma_mask = arm_dma_set_mask,
+};
+EXPORT_SYMBOL(arm_coherent_dma_ops);
+
static u64 get_coherent_dma_mask(struct device *dev)
{
u64 mask = (u64)arm_dma_limit;
@@ -538,7 +559,7 @@ static void *__alloc_simple_buffer(struct device *dev, size_t size, gfp_t gfp,
static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
- gfp_t gfp, pgprot_t prot, const void *caller)
+ gfp_t gfp, pgprot_t prot, bool is_coherent, const void *caller)
{
u64 mask = get_coherent_dma_mask(dev);
struct page *page;
@@ -571,7 +592,7 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
*handle = DMA_ERROR_CODE;
size = PAGE_ALIGN(size);
- if (arch_is_coherent() || nommu())
+ if (is_coherent || nommu())
addr = __alloc_simple_buffer(dev, size, gfp, &page);
else if (gfp & GFP_ATOMIC)
addr = __alloc_from_pool(size, &page);
@@ -599,7 +620,20 @@ void *arm_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
if (dma_alloc_from_coherent(dev, size, handle, &memory))
return memory;
- return __dma_alloc(dev, size, handle, gfp, prot,
+ return __dma_alloc(dev, size, handle, gfp, prot, false,
+ __builtin_return_address(0));
+}
+
+static void *arm_coherent_dma_alloc(struct device *dev, size_t size,
+ dma_addr_t *handle, gfp_t gfp, struct dma_attrs *attrs)
+{
+ pgprot_t prot = __get_dma_pgprot(attrs, pgprot_kernel);
+ void *memory;
+
+ if (dma_alloc_from_coherent(dev, size, handle, &memory))
+ return memory;
+
+ return __dma_alloc(dev, size, handle, gfp, prot, true,
__builtin_return_address(0));
}
@@ -636,8 +670,9 @@ int arm_dma_mmap(struct device *dev, struct vm_area_struct *vma,
/*
* Free a buffer as defined by the above mapping.
*/
-void arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
- dma_addr_t handle, struct dma_attrs *attrs)
+static void __arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
+ dma_addr_t handle, struct dma_attrs *attrs,
+ bool is_coherent)
{
struct page *page = pfn_to_page(dma_to_pfn(dev, handle));
@@ -646,7 +681,7 @@ void arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
size = PAGE_ALIGN(size);
- if (arch_is_coherent() || nommu()) {
+ if (is_coherent || nommu()) {
__dma_free_buffer(page, size);
} else if (!IS_ENABLED(CONFIG_CMA)) {
__dma_free_remap(cpu_addr, size);
@@ -662,6 +697,18 @@ void arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
}
}
+void arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
+ dma_addr_t handle, struct dma_attrs *attrs)
+{
+ __arm_dma_free(dev, size, cpu_addr, handle, attrs, false);
+}
+
+static void arm_coherent_dma_free(struct device *dev, size_t size, void *cpu_addr,
+ dma_addr_t handle, struct dma_attrs *attrs)
+{
+ __arm_dma_free(dev, size, cpu_addr, handle, attrs, true);
+}
+
int arm_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t handle, size_t size,
struct dma_attrs *attrs)
--
1.7.9.5
* [PATCH 2/4] ARM: add coherent iommu dma ops
From: Rob Herring @ 2012-08-14 3:49 UTC
To: linux-arm-kernel
From: Rob Herring <rob.herring@calxeda.com>
Remove arch_is_coherent() from iommu dma ops and implement separate
coherent ops functions.
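As a rough sketch of how the coherent IOMMU ops might be used, the fragment
below creates and attaches an IOMMU mapping and then installs the coherent
variant of the ops. Note that iommu_coherent_ops is only defined in
dma-mapping.c by this patch, so the extern declaration, the helper name and
the IOVA range below are assumptions for illustration only:

#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/platform_device.h>
#include <asm/dma-iommu.h>

/* Assumed visible to platform code; this patch adds no header declaration. */
extern struct dma_map_ops iommu_coherent_ops;

/*
 * Sketch: give the device a 1 GiB IOVA space starting at 0, attach it to
 * the IOMMU mapping, then override the default iommu_ops with the coherent
 * variant for a master wired to a coherent port.
 */
static int example_attach_coherent_iommu(struct device *dev)
{
	struct dma_iommu_mapping *mapping;
	int ret;

	mapping = arm_iommu_create_mapping(&platform_bus_type, 0, 0x40000000, 0);
	if (IS_ERR(mapping))
		return PTR_ERR(mapping);

	ret = arm_iommu_attach_device(dev, mapping);
	if (ret)
		return ret;

	set_dma_ops(dev, &iommu_coherent_ops);
	return 0;
}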
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
---
arch/arm/mm/dma-mapping.c | 183 +++++++++++++++++++++++++++++++++++----------
1 file changed, 143 insertions(+), 40 deletions(-)
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index c14e37e..29c3b3a 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1254,7 +1254,8 @@ static int arm_iommu_get_sgtable(struct device *dev, struct sg_table *sgt,
*/
static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
size_t size, dma_addr_t *handle,
- enum dma_data_direction dir, struct dma_attrs *attrs)
+ enum dma_data_direction dir, struct dma_attrs *attrs,
+ bool is_coherent)
{
struct dma_iommu_mapping *mapping = dev->archdata.mapping;
dma_addr_t iova, iova_base;
@@ -1273,8 +1274,8 @@ static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
phys_addr_t phys = page_to_phys(sg_page(s));
unsigned int len = PAGE_ALIGN(s->offset + s->length);
- if (!arch_is_coherent() &&
- !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+ if (!is_coherent &&
+ !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
ret = iommu_map(mapping->domain, iova, phys, len, 0);
@@ -1292,20 +1293,9 @@ fail:
return ret;
}
-/**
- * arm_iommu_map_sg - map a set of SG buffers for streaming mode DMA
- * @dev: valid struct device pointer
- * @sg: list of buffers
- * @nents: number of buffers to map
- * @dir: DMA transfer direction
- *
- * Map a set of buffers described by scatterlist in streaming mode for DMA.
- * The scatter gather list elements are merged together (if possible) and
- * tagged with the appropriate dma address and length. They are obtained via
- * sg_dma_{address,length}.
- */
-int arm_iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
- enum dma_data_direction dir, struct dma_attrs *attrs)
+static int __iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+ enum dma_data_direction dir, struct dma_attrs *attrs,
+ bool is_coherent)
{
struct scatterlist *s = sg, *dma = sg, *start = sg;
int i, count = 0;
@@ -1321,7 +1311,7 @@ int arm_iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
if (s->offset || (size & ~PAGE_MASK) || size + s->length > max) {
if (__map_sg_chunk(dev, start, size, &dma->dma_address,
- dir, attrs) < 0)
+ dir, attrs, is_coherent) < 0)
goto bad_mapping;
dma->dma_address += offset;
@@ -1334,7 +1324,8 @@ int arm_iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
}
size += s->length;
}
- if (__map_sg_chunk(dev, start, size, &dma->dma_address, dir, attrs) < 0)
+ if (__map_sg_chunk(dev, start, size, &dma->dma_address, dir, attrs,
+ is_coherent) < 0)
goto bad_mapping;
dma->dma_address += offset;
@@ -1349,17 +1340,44 @@ bad_mapping:
}
/**
- * arm_iommu_unmap_sg - unmap a set of SG buffers mapped by dma_map_sg
+ * arm_coherent_iommu_map_sg - map a set of SG buffers for streaming mode DMA
* @dev: valid struct device pointer
* @sg: list of buffers
- * @nents: number of buffers to unmap (same as was passed to dma_map_sg)
- * @dir: DMA transfer direction (same as was passed to dma_map_sg)
+ * @nents: number of buffers to map
+ * @dir: DMA transfer direction
*
- * Unmap a set of streaming mode DMA translations. Again, CPU access
- * rules concerning calls here are the same as for dma_unmap_single().
+ * Map a set of i/o coherent buffers described by scatterlist in streaming
+ * mode for DMA. The scatter gather list elements are merged together (if
+ * possible) and tagged with the appropriate dma address and length. They are
+ * obtained via sg_dma_{address,length}.
*/
-void arm_iommu_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
- enum dma_data_direction dir, struct dma_attrs *attrs)
+int arm_coherent_iommu_map_sg(struct device *dev, struct scatterlist *sg,
+ int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
+{
+ return __iommu_map_sg(dev, sg, nents, dir, attrs, true);
+}
+
+/**
+ * arm_iommu_map_sg - map a set of SG buffers for streaming mode DMA
+ * @dev: valid struct device pointer
+ * @sg: list of buffers
+ * @nents: number of buffers to map
+ * @dir: DMA transfer direction
+ *
+ * Map a set of buffers described by scatterlist in streaming mode for DMA.
+ * The scatter gather list elements are merged together (if possible) and
+ * tagged with the appropriate dma address and length. They are obtained via
+ * sg_dma_{address,length}.
+ */
+int arm_iommu_map_sg(struct device *dev, struct scatterlist *sg,
+ int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
+{
+ return __iommu_map_sg(dev, sg, nents, dir, attrs, false);
+}
+
+static void __iommu_unmap_sg(struct device *dev, struct scatterlist *sg,
+ int nents, enum dma_data_direction dir, struct dma_attrs *attrs,
+ bool is_coherent)
{
struct scatterlist *s;
int i;
@@ -1368,7 +1386,7 @@ void arm_iommu_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
if (sg_dma_len(s))
__iommu_remove_mapping(dev, sg_dma_address(s),
sg_dma_len(s));
- if (!arch_is_coherent() &&
+ if (!is_coherent &&
!dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
__dma_page_dev_to_cpu(sg_page(s), s->offset,
s->length, dir);
@@ -1376,6 +1394,38 @@ void arm_iommu_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
}
/**
+ * arm_coherent_iommu_unmap_sg - unmap a set of SG buffers mapped by dma_map_sg
+ * @dev: valid struct device pointer
+ * @sg: list of buffers
+ * @nents: number of buffers to unmap (same as was passed to dma_map_sg)
+ * @dir: DMA transfer direction (same as was passed to dma_map_sg)
+ *
+ * Unmap a set of streaming mode DMA translations. Again, CPU access
+ * rules concerning calls here are the same as for dma_unmap_single().
+ */
+void arm_coherent_iommu_unmap_sg(struct device *dev, struct scatterlist *sg,
+ int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
+{
+ __iommu_unmap_sg(dev, sg, nents, dir, attrs, true);
+}
+
+/**
+ * arm_iommu_unmap_sg - unmap a set of SG buffers mapped by dma_map_sg
+ * @dev: valid struct device pointer
+ * @sg: list of buffers
+ * @nents: number of buffers to unmap (same as was passed to dma_map_sg)
+ * @dir: DMA transfer direction (same as was passed to dma_map_sg)
+ *
+ * Unmap a set of streaming mode DMA translations. Again, CPU access
+ * rules concerning calls here are the same as for dma_unmap_single().
+ */
+void arm_iommu_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
+ enum dma_data_direction dir, struct dma_attrs *attrs)
+{
+ __iommu_unmap_sg(dev, sg, nents, dir, attrs, false);
+}
+
+/**
* arm_iommu_sync_sg_for_cpu
* @dev: valid struct device pointer
* @sg: list of buffers
@@ -1389,8 +1439,7 @@ void arm_iommu_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
int i;
for_each_sg(sg, s, nents, i)
- if (!arch_is_coherent())
- __dma_page_dev_to_cpu(sg_page(s), s->offset, s->length, dir);
+ __dma_page_dev_to_cpu(sg_page(s), s->offset, s->length, dir);
}
@@ -1408,22 +1457,21 @@ void arm_iommu_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
int i;
for_each_sg(sg, s, nents, i)
- if (!arch_is_coherent())
- __dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
+ __dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
}
/**
- * arm_iommu_map_page
+ * arm_coherent_iommu_map_page
* @dev: valid struct device pointer
* @page: page that buffer resides in
* @offset: offset into page for start of buffer
* @size: size of buffer to map
* @dir: DMA transfer direction
*
- * IOMMU aware version of arm_dma_map_page()
+ * Coherent IOMMU aware version of arm_dma_map_page()
*/
-static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
+static dma_addr_t arm_coherent_iommu_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
@@ -1431,9 +1479,6 @@ static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
dma_addr_t dma_addr;
int ret, len = PAGE_ALIGN(size + offset);
- if (!arch_is_coherent() && !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
- __dma_page_cpu_to_dev(page, offset, size, dir);
-
dma_addr = __alloc_iova(mapping, len);
if (dma_addr == DMA_ERROR_CODE)
return dma_addr;
@@ -1449,6 +1494,52 @@ fail:
}
/**
+ * arm_iommu_map_page
+ * @dev: valid struct device pointer
+ * @page: page that buffer resides in
+ * @offset: offset into page for start of buffer
+ * @size: size of buffer to map
+ * @dir: DMA transfer direction
+ *
+ * IOMMU aware version of arm_dma_map_page()
+ */
+static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs)
+{
+ if (!dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+ __dma_page_cpu_to_dev(page, offset, size, dir);
+
+ return arm_coherent_iommu_map_page(dev, page, offset, size, dir, attrs);
+}
+
+/**
+ * arm_coherent_iommu_unmap_page
+ * @dev: valid struct device pointer
+ * @handle: DMA address of buffer
+ * @size: size of buffer (same as passed to dma_map_page)
+ * @dir: DMA transfer direction (same as passed to dma_map_page)
+ *
+ * Coherent IOMMU aware version of arm_dma_unmap_page()
+ */
+static void arm_coherent_iommu_unmap_page(struct device *dev, dma_addr_t handle,
+ size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs)
+{
+ struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+ dma_addr_t iova = handle & PAGE_MASK;
+ struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
+ int offset = handle & ~PAGE_MASK;
+ int len = PAGE_ALIGN(size + offset);
+
+ if (!iova)
+ return;
+
+ iommu_unmap(mapping->domain, iova, len);
+ __free_iova(mapping, iova, len);
+}
+
+/**
* arm_iommu_unmap_page
* @dev: valid struct device pointer
* @handle: DMA address of buffer
@@ -1470,7 +1561,7 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
if (!iova)
return;
- if (!arch_is_coherent() && !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+ if (!dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
__dma_page_dev_to_cpu(page, offset, size, dir);
iommu_unmap(mapping->domain, iova, len);
@@ -1488,8 +1579,7 @@ static void arm_iommu_sync_single_for_cpu(struct device *dev,
if (!iova)
return;
- if (!arch_is_coherent())
- __dma_page_dev_to_cpu(page, offset, size, dir);
+ __dma_page_dev_to_cpu(page, offset, size, dir);
}
static void arm_iommu_sync_single_for_device(struct device *dev,
@@ -1523,6 +1613,19 @@ struct dma_map_ops iommu_ops = {
.sync_sg_for_device = arm_iommu_sync_sg_for_device,
};
+struct dma_map_ops iommu_coherent_ops = {
+ .alloc = arm_iommu_alloc_attrs,
+ .free = arm_iommu_free_attrs,
+ .mmap = arm_iommu_mmap_attrs,
+ .get_sgtable = arm_iommu_get_sgtable,
+
+ .map_page = arm_coherent_iommu_map_page,
+ .unmap_page = arm_coherent_iommu_unmap_page,
+
+ .map_sg = arm_coherent_iommu_map_sg,
+ .unmap_sg = arm_coherent_iommu_unmap_sg,
+};
+
/**
* arm_iommu_create_mapping
* @bus: pointer to the bus holding the client device (for IOMMU calls)
--
1.7.9.5
* [PATCH 3/4] ARM: kill off arch_is_coherent
From: Rob Herring @ 2012-08-14 3:49 UTC
To: linux-arm-kernel
From: Rob Herring <rob.herring@calxeda.com>
With ixp2xxx removed, there are no platforms that define arch_is_coherent,
so the last occurrences of arch_is_coherent can be removed. Any new
platform with coherent i/o should use coherent dma mapping functions.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
---
arch/arm/include/asm/barrier.h | 7 +++----
arch/arm/include/asm/memory.h | 8 --------
arch/arm/mm/mmu.c | 17 +++--------------
3 files changed, 6 insertions(+), 26 deletions(-)
diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
index 0511238..8dcd9c7 100644
--- a/arch/arm/include/asm/barrier.h
+++ b/arch/arm/include/asm/barrier.h
@@ -44,10 +44,9 @@
#define rmb() dsb()
#define wmb() mb()
#else
-#include <asm/memory.h>
-#define mb() do { if (arch_is_coherent()) dmb(); else barrier(); } while (0)
-#define rmb() do { if (arch_is_coherent()) dmb(); else barrier(); } while (0)
-#define wmb() do { if (arch_is_coherent()) dmb(); else barrier(); } while (0)
+#define mb() barrier()
+#define rmb() barrier()
+#define wmb() barrier()
#endif
#ifndef CONFIG_SMP
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index e965f1b..93ff0f1 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -272,14 +272,6 @@ static inline __deprecated void *bus_to_virt(unsigned long x)
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
-/*
- * Optional coherency support. Currently used only by selected
- * Intel XSC3-based systems.
- */
-#ifndef arch_is_coherent
-#define arch_is_coherent() 0
-#endif
-
#endif
#include <asm-generic/memory_model.h>
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 4c2d045..ee2a6f0 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -216,7 +216,7 @@ static struct mem_type mem_types[] = {
.prot_l1 = PMD_TYPE_TABLE,
.prot_sect = PROT_SECT_DEVICE | PMD_SECT_WB,
.domain = DOMAIN_IO,
- },
+ },
[MT_DEVICE_WC] = { /* ioremap_wc */
.prot_pte = PROT_PTE_DEVICE | L_PTE_MT_DEV_WC,
.prot_l1 = PMD_TYPE_TABLE,
@@ -422,17 +422,6 @@ static void __init build_mem_type_table(void)
vecs_pgprot = kern_pgprot = user_pgprot = cp->pte;
/*
- * Enable CPU-specific coherency if supported.
- * (Only available on XSC3 at the moment.)
- */
- if (arch_is_coherent() && cpu_is_xsc3()) {
- mem_types[MT_MEMORY].prot_sect |= PMD_SECT_S;
- mem_types[MT_MEMORY].prot_pte |= L_PTE_SHARED;
- mem_types[MT_MEMORY_DMA_READY].prot_pte |= L_PTE_SHARED;
- mem_types[MT_MEMORY_NONCACHED].prot_sect |= PMD_SECT_S;
- mem_types[MT_MEMORY_NONCACHED].prot_pte |= L_PTE_SHARED;
- }
- /*
* ARMv6 and above have extended page tables.
*/
if (cpu_arch >= CPU_ARCH_ARMv6 && (cr & CR_XP)) {
@@ -777,8 +766,8 @@ void __init iotable_init(struct map_desc *io_desc, int nr)
create_mapping(md);
vm->addr = (void *)(md->virtual & PAGE_MASK);
vm->size = PAGE_ALIGN(md->length + (md->virtual & ~PAGE_MASK));
- vm->phys_addr = __pfn_to_phys(md->pfn);
- vm->flags = VM_IOREMAP | VM_ARM_STATIC_MAPPING;
+ vm->phys_addr = __pfn_to_phys(md->pfn);
+ vm->flags = VM_IOREMAP | VM_ARM_STATIC_MAPPING;
vm->flags |= VM_ARM_MTYPE(md->type);
vm->caller = iotable_init;
vm_area_add_early(vm++);
--
1.7.9.5
* [PATCH 4/4] ARM: highbank: add coherent DMA setup
From: Rob Herring @ 2012-08-14 3:49 UTC
To: linux-arm-kernel
From: Rob Herring <rob.herring@calxeda.com>
Some highbank DMA masters can support coherent (ACP) or non-coherent DMA.
This sets up dma_map_ops for masters which are configured for coherent DMA.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
---
.../devicetree/bindings/ata/ahci-platform.txt | 3 ++
.../devicetree/bindings/dma/arm-pl330.txt | 3 ++
.../devicetree/bindings/net/calxeda-xgmac.txt | 3 ++
arch/arm/boot/dts/highbank.dts | 1 +
arch/arm/mach-highbank/highbank.c | 52 ++++++++++++++++++++
5 files changed, 62 insertions(+)
diff --git a/Documentation/devicetree/bindings/ata/ahci-platform.txt b/Documentation/devicetree/bindings/ata/ahci-platform.txt
index 8bb8a76..6c1ad01 100644
--- a/Documentation/devicetree/bindings/ata/ahci-platform.txt
+++ b/Documentation/devicetree/bindings/ata/ahci-platform.txt
@@ -8,6 +8,9 @@ Required properties:
- interrupts : <interrupt mapping for SATA IRQ>
- reg : <registers mapping>
+Optional properties:
+- dma-coherent : Present if dma operations are coherent
+
Example:
sata@ffe08000 {
compatible = "calxeda,hb-ahci";
diff --git a/Documentation/devicetree/bindings/dma/arm-pl330.txt b/Documentation/devicetree/bindings/dma/arm-pl330.txt
index a4cd273..36e27d5 100644
--- a/Documentation/devicetree/bindings/dma/arm-pl330.txt
+++ b/Documentation/devicetree/bindings/dma/arm-pl330.txt
@@ -9,6 +9,9 @@ Required properties:
region.
- interrupts: interrupt number to the cpu.
+Optional properties:
+- dma-coherent : Present if dma operations are coherent
+
Example:
pdma0: pdma@12680000 {
diff --git a/Documentation/devicetree/bindings/net/calxeda-xgmac.txt b/Documentation/devicetree/bindings/net/calxeda-xgmac.txt
index 411727a..c8ae996 100644
--- a/Documentation/devicetree/bindings/net/calxeda-xgmac.txt
+++ b/Documentation/devicetree/bindings/net/calxeda-xgmac.txt
@@ -6,6 +6,9 @@ Required properties:
- interrupts : Should contain 3 xgmac interrupts. The 1st is main interrupt.
The 2nd is pwr mgt interrupt. The 3rd is low power state interrupt.
+Optional properties:
+- dma-coherent : Present if dma operations are coherent
+
Example:
ethernet@fff50000 {
diff --git a/arch/arm/boot/dts/highbank.dts b/arch/arm/boot/dts/highbank.dts
index 9fecf1a..7414577 100644
--- a/arch/arm/boot/dts/highbank.dts
+++ b/arch/arm/boot/dts/highbank.dts
@@ -121,6 +121,7 @@
compatible = "calxeda,hb-ahci";
reg = <0xffe08000 0x10000>;
interrupts = <0 83 4>;
+ dma-coherent;
};
sdhci@ffe0e000 {
diff --git a/arch/arm/mach-highbank/highbank.c b/arch/arm/mach-highbank/highbank.c
index d75b0a7..93617d6 100644
--- a/arch/arm/mach-highbank/highbank.c
+++ b/arch/arm/mach-highbank/highbank.c
@@ -15,6 +15,7 @@
*/
#include <linux/clk.h>
#include <linux/clkdev.h>
+#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
@@ -23,6 +24,7 @@
#include <linux/of_platform.h>
#include <linux/of_address.h>
#include <linux/smp.h>
+#include <linux/amba/bus.h>
#include <asm/cacheflush.h>
#include <asm/smp_plat.h>
@@ -149,10 +151,60 @@ static void highbank_power_off(void)
cpu_do_idle();
}
+static int highbank_platform_notifier(struct notifier_block *nb,
+ unsigned long event, void *__dev)
+{
+ struct resource *res;
+ int reg = -1;
+ struct device *dev = __dev;
+
+ if (event != BUS_NOTIFY_ADD_DEVICE)
+ return NOTIFY_DONE;
+
+ if (of_device_is_compatible(dev->of_node, "calxeda,hb-ahci"))
+ reg = 0xc;
+ else if (of_device_is_compatible(dev->of_node, "calxeda,hb-sdhci"))
+ reg = 0x18;
+ else if (of_device_is_compatible(dev->of_node, "arm,pl330"))
+ reg = 0x20;
+ else if (of_device_is_compatible(dev->of_node, "calxeda,hb-xgmac")) {
+ res = platform_get_resource(to_platform_device(dev),
+ IORESOURCE_MEM, 0);
+ if (res) {
+ if (res->start == 0xfff50000)
+ reg = 0;
+ else if (res->start == 0xfff51000)
+ reg = 4;
+ }
+ }
+
+ if (reg < 0)
+ return NOTIFY_DONE;
+
+ if (of_property_read_bool(dev->of_node, "dma-coherent")) {
+ writel(0xff31, sregs_base + reg);
+ set_dma_ops(dev, &arm_coherent_dma_ops);
+ } else
+ writel(0, sregs_base + reg);
+
+ return NOTIFY_OK;
+}
+
+static struct notifier_block highbank_amba_nb = {
+ .notifier_call = highbank_platform_notifier,
+};
+
+static struct notifier_block highbank_platform_nb = {
+ .notifier_call = highbank_platform_notifier,
+};
+
static void __init highbank_init(void)
{
pm_power_off = highbank_power_off;
+ bus_register_notifier(&platform_bus_type, &highbank_platform_nb);
+ bus_register_notifier(&amba_bustype, &highbank_amba_nb);
+
of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
}
--
1.7.9.5
* [PATCH 0/4] Per device coherent DMA map ops
From: Marek Szyprowski @ 2012-08-20 14:16 UTC
To: linux-arm-kernel
Hello,
On Tuesday, August 14, 2012 5:49 AM Rob Herring wrote:
> From: Rob Herring <rob.herring@calxeda.com>
>
> This series adds coherent DMA map operations which can be set per device.
> Expanding on the first RFC[1], this version adds iommu ops and implements
> support on the highbank platform.
>
> arch_is_coherent is never defined true since removing ixp2xxx, so we can
> remove it altogether.
>
> Rob
>
> [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2012-August/113921.html
>
> Rob Herring (4):
> ARM: add coherent dma ops
> ARM: add coherent iommu dma ops
> ARM: kill off arch_is_coherent
> ARM: highbank: add coherent DMA setup
>
> .../devicetree/bindings/ata/ahci-platform.txt | 3 +
> .../devicetree/bindings/dma/arm-pl330.txt | 3 +
> .../devicetree/bindings/net/calxeda-xgmac.txt | 3 +
> arch/arm/boot/dts/highbank.dts | 1 +
> arch/arm/include/asm/barrier.h | 7 +-
> arch/arm/include/asm/dma-mapping.h | 1 +
> arch/arm/include/asm/memory.h | 8 -
> arch/arm/mach-highbank/highbank.c | 52 ++++
> arch/arm/mm/dma-mapping.c | 254 ++++++++++++++++----
> arch/arm/mm/mmu.c | 17 +-
> 10 files changed, 271 insertions(+), 78 deletions(-)
I like these patches and I would like to take them into my dma-mapping-next tree, but I wonder
how to handle merging of the last patch. Arnd, Olof - could you tell me how to do it? I assume
it should somehow be merged via the arm-soc tree, but it depends on the earlier patches.
Would it be enough if I take them into a topic branch? Then I can continue work on the
dma-mapping subsystem by merging that topic branch into my dma-mapping-next, and you would
also take it as a dependency for the highbank patches. Am I right?
Best regards
--
Marek Szyprowski
Samsung Poland R&D Center
* [PATCH 0/4] Per device coherent DMA map ops
From: Rob Herring @ 2012-08-20 14:54 UTC
To: linux-arm-kernel
On 08/20/2012 09:16 AM, Marek Szyprowski wrote:
> Hello,
>
> On Tuesday, August 14, 2012 5:49 AM Rob Herring wrote:
>
>> From: Rob Herring <rob.herring@calxeda.com>
>>
>> This series adds coherent DMA map operations which can be set per device.
>> Expanding on the first RFC[1], this version adds iommu ops and implements
>> support on the highbank platform.
>>
>> arch_is_coherent is never defined true since removing ixp2xxx, so we can
>> remove it altogether.
>>
>> Rob
>>
>> [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2012-August/113921.html
>>
>> Rob Herring (4):
>> ARM: add coherent dma ops
>> ARM: add coherent iommu dma ops
>> ARM: kill off arch_is_coherent
>> ARM: highbank: add coherent DMA setup
>>
>> .../devicetree/bindings/ata/ahci-platform.txt | 3 +
>> .../devicetree/bindings/dma/arm-pl330.txt | 3 +
>> .../devicetree/bindings/net/calxeda-xgmac.txt | 3 +
>> arch/arm/boot/dts/highbank.dts | 1 +
>> arch/arm/include/asm/barrier.h | 7 +-
>> arch/arm/include/asm/dma-mapping.h | 1 +
>> arch/arm/include/asm/memory.h | 8 -
>> arch/arm/mach-highbank/highbank.c | 52 ++++
>> arch/arm/mm/dma-mapping.c | 254 ++++++++++++++++----
>> arch/arm/mm/mmu.c | 17 +-
>> 10 files changed, 271 insertions(+), 78 deletions(-)
>
> I like these patches and I would like to take them into my dma-mapping-next tree, but I wonder
> how to handle merging of the last patch. Arnd, Olof - could you tell me how to do it? I assume
> it should somehow be merged via the arm-soc tree, but it depends on the earlier patches.
> Would it be enough if I take them into a topic branch? Then I can continue work on the
> dma-mapping subsystem by merging that topic branch into my dma-mapping-next, and you would
> also take it as a dependency for the highbank patches. Am I right?
>
I'm fine with it all going in through your tree. Seems a bit silly to deal
with tracking the dependencies all for one patch. The only possible
conflict I see is with the dts file and some other SATA-related changes
which will go in via the ata tree. But the merge is trivial, and the
conflict will be there whether it goes through your tree or arm-soc.
Rob
* [PATCH 0/4] Per device coherent DMA map ops
From: Arnd Bergmann @ 2012-08-20 20:06 UTC
To: linux-arm-kernel
On Monday 20 August 2012, Rob Herring wrote:
> >
> > I like these patches and I would like to take them into my dma-mapping-next tree, but I wonder
> > how to handle merging of the last patch. Arnd, Olof - could you tell me how to do it? I assume
> > it should somehow be merged via the arm-soc tree, but it depends on the earlier patches.
> > Would it be enough if I take them into a topic branch? Then I can continue work on the
> > dma-mapping subsystem by merging that topic branch into my dma-mapping-next, and you would
> > also take it as a dependency for the highbank patches. Am I right?
> >
>
> I'm fine with it all going in through your tree. Seems a bit silly to deal
> with tracking the dependencies all for one patch. The only possible
> conflict I see is with the dts file and some other SATA-related changes
> which will go in via the ata tree. But the merge is trivial, and the
> conflict will be there whether it goes through your tree or arm-soc.
Right. I see no problem having this patch go through the dma-mapping tree.
Arnd
* [PATCH 0/4] Per device coherent DMA map ops
From: Marek Szyprowski @ 2012-08-21 14:48 UTC
To: linux-arm-kernel
Hello,
On Monday, August 20, 2012 10:06 PM Arnd Bergmann wrote:
> On Monday 20 August 2012, Rob Herring wrote:
> > >
> > > I like these patches and I would like to take them into my dma-mapping-next tree, but I wonder
> > > how to handle merging of the last patch. Arnd, Olof - could you tell me how to do it? I assume
> > > it should somehow be merged via the arm-soc tree, but it depends on the earlier patches.
> > > Would it be enough if I take them into a topic branch? Then I can continue work on the
> > > dma-mapping subsystem by merging that topic branch into my dma-mapping-next, and you would
> > > also take it as a dependency for the highbank patches. Am I right?
> > >
> >
> > I'm fine with it all going in through your tree. Seems a bit silly to deal
> > with tracking the dependencies all for one patch. The only possible
> > conflict I see is with the dts file and some other SATA-related changes
> > which will go in via the ata tree. But the merge is trivial, and the
> > conflict will be there whether it goes through your tree or arm-soc.
>
> Right. I see no problem having this patch go through the dma-mapping tree.
OK, I've taken them into my dma-mapping-next tree and I will handle pushing them to Linus.
Best regards
--
Marek Szyprowski
Samsung Poland R&D Center