linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 0/4] arm:dma-mapping Refactoring iommu dma-mapping code
@ 2014-06-02 10:19 ritesh.harjani at gmail.com
  2014-06-02 10:19 ` [PATCH 1/4] arm: dma-iommu: Move out dma_iommu_mapping struct ritesh.harjani at gmail.com
  2014-06-03 13:01 ` [PATCH 0/4] arm:dma-mapping Refactoring iommu dma-mapping code Will Deacon
  0 siblings, 2 replies; 6+ messages in thread
From: ritesh.harjani at gmail.com @ 2014-06-02 10:19 UTC (permalink / raw)
  To: linux-arm-kernel

From: Ritesh Harjani <ritesh.harjani@gmail.com>

Hi All, 

This patch series refactors IOMMU-related common code from
arch/arm/mm/dma-mapping.c to lib/iommu-helper.c, based on various
discussions with the maintainers/experts [1].

Currently the only users of the common lib/iommu-helper code will
be ARM and ARM64, but later other architectures might use these
iommu helper functions.

The major change of this refactoring is moving the struct
dma_iommu_mapping *mapping member out of arch/arm/include/asm/device.h
into include/linux/device.h, and moving the complete structure
definition of dma_iommu_mapping into include/linux/iommu-helper.h.
Link [2] gives more details on why this was done; this change was
also approved by Will Deacon [2].

There are one or two more function definitions which I can think of
moving out, but those can be done once this patch series is approved,
as they are not very big changes.

[1]: https://www.mail-archive.com/iommu@lists.linux-foundation.org/msg03458.html
[2]: https://www.mail-archive.com/iommu@lists.linux-foundation.org/msg04272.html
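
As a rough illustration of the intended end state (a sketch only, not
part of this series; the function name and the base/size values are
made up for the example), an architecture selects
CONFIG_DMA_USE_IOMMU_HELPER_MAPPING and layers its own dma_map_ops on
top of the generic helpers:

	/* Sketch: arch glue on top of the new lib/iommu-helper API. */
	static int example_setup_iommu_dma(struct device *dev,
					   struct bus_type *bus)
	{
		struct dma_iommu_mapping *mapping;
		int err;

		/* arbitrary example IO window: base 0x10000000, 128 MiB */
		mapping = __iommu_init_mapping(bus, 0x10000000, SZ_128M);
		if (IS_ERR(mapping))
			return PTR_ERR(mapping);

		err = __iommu_attach_device(dev, mapping);
		if (err) {
			__iommu_release_mapping(mapping);
			return err;
		}

		set_dma_ops(dev, &iommu_ops);	/* arch-specific dma_map_ops */
		return 0;
	}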

Ritesh Harjani (4):
  arm: dma-iommu: Move out dma_iommu_mapping struct
  arm: dma-mapping: Refractor attach/detach dev function calls
  arm: dma-mapping: Refractor iommu_alloc/free funcs
  arm:dma-iommu: Move out complete func defs

 arch/arm/Kconfig                          |  46 +--
 arch/arm/include/asm/device.h             |   9 -
 arch/arm/include/asm/dma-iommu.h          |  16 +-
 arch/arm/mm/dma-mapping.c                 | 566 +++---------------------------
 drivers/gpu/drm/exynos/exynos_drm_iommu.c |  10 +-
 include/linux/device.h                    |   4 +
 include/linux/iommu-helper.h              |  64 ++++
 lib/iommu-helper.c                        | 561 +++++++++++++++++++++++++++++
 8 files changed, 705 insertions(+), 571 deletions(-)

-- 
1.8.1.3


* [PATCH 1/4] arm: dma-iommu: Move out dma_iommu_mapping struct
  2014-06-02 10:19 [PATCH 0/4] arm:dma-mapping Refactoring iommu dma-mapping code ritesh.harjani at gmail.com
@ 2014-06-02 10:19 ` ritesh.harjani at gmail.com
  2014-06-02 10:19   ` [PATCH 2/4] arm: dma-mapping: Refractor attach/detach dev function calls ritesh.harjani at gmail.com
  2014-06-03 13:01 ` [PATCH 0/4] arm:dma-mapping Refactoring iommu dma-mapping code Will Deacon
  1 sibling, 1 reply; 6+ messages in thread
From: ritesh.harjani at gmail.com @ 2014-06-02 10:19 UTC (permalink / raw)
  To: linux-arm-kernel

From: Ritesh Harjani <ritesh.harjani@gmail.com>

This patch is the first in the refactoring series for the ARM
dma-iommu mapping code; it moves the dma_iommu_mapping structure
out to include/linux/iommu-helper.h:

1. Move the struct dma_iommu_mapping *mapping member from
arch/arm/include/asm/device.h to include/linux/device.h.

2. Move the complete structure definition of dma_iommu_mapping to
include/linux/iommu-helper.h.

3. Guard this member with a config option and let architectures
select it based on their use.
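
With this in place, common code can reach the mapping through the
generic accessor instead of poking at archdata; e.g. (illustrative
fragment only, with iova/phys/len/prot assumed in scope):

	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);

	if (mapping)
		iommu_map(mapping->domain, iova, phys, len, prot);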

Change-Id: Ic9d5e6e2346258698348153de39ded20d730fe72
Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com>
---
 arch/arm/Kconfig                          |  4 ++++
 arch/arm/include/asm/device.h             |  9 ---------
 arch/arm/include/asm/dma-iommu.h          | 16 +---------------
 arch/arm/mm/dma-mapping.c                 | 20 ++++++++++----------
 drivers/gpu/drm/exynos/exynos_drm_iommu.c | 10 +++++-----
 include/linux/device.h                    |  4 ++++
 include/linux/iommu-helper.h              | 22 ++++++++++++++++++++++
 7 files changed, 46 insertions(+), 39 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c0b31fc..20717fb 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -78,6 +78,7 @@ config ARM_DMA_USE_IOMMU
 	bool
 	select ARM_HAS_SG_CHAIN
 	select NEED_SG_DMA_LENGTH
+	select DMA_USE_IOMMU_HELPER_MAPPING
 
 if ARM_DMA_USE_IOMMU
 
@@ -1945,6 +1946,9 @@ config SWIOTLB
 config IOMMU_HELPER
 	def_bool SWIOTLB
 
+config DMA_USE_IOMMU_HELPER_MAPPING
+	def_bool n
+
 config XEN_DOM0
 	def_bool y
 	depends on XEN
diff --git a/arch/arm/include/asm/device.h b/arch/arm/include/asm/device.h
index dc662fc..6e2cb0e 100644
--- a/arch/arm/include/asm/device.h
+++ b/arch/arm/include/asm/device.h
@@ -14,9 +14,6 @@ struct dev_archdata {
 #ifdef CONFIG_IOMMU_API
 	void *iommu; /* private IOMMU data */
 #endif
-#ifdef CONFIG_ARM_DMA_USE_IOMMU
-	struct dma_iommu_mapping	*mapping;
-#endif
 };
 
 struct omap_device;
@@ -27,10 +24,4 @@ struct pdev_archdata {
 #endif
 };
 
-#ifdef CONFIG_ARM_DMA_USE_IOMMU
-#define to_dma_iommu_mapping(dev) ((dev)->archdata.mapping)
-#else
-#define to_dma_iommu_mapping(dev) NULL
-#endif
-
 #endif
diff --git a/arch/arm/include/asm/dma-iommu.h b/arch/arm/include/asm/dma-iommu.h
index 8e3fcb9..50c010b 100644
--- a/arch/arm/include/asm/dma-iommu.h
+++ b/arch/arm/include/asm/dma-iommu.h
@@ -8,21 +8,7 @@
 #include <linux/dma-debug.h>
 #include <linux/kmemcheck.h>
 #include <linux/kref.h>
-
-struct dma_iommu_mapping {
-	/* iommu specific data */
-	struct iommu_domain	*domain;
-
-	unsigned long		**bitmaps;	/* array of bitmaps */
-	unsigned int		nr_bitmaps;	/* nr of elements in array */
-	unsigned int		extensions;
-	size_t			bitmap_size;	/* size of a single bitmap */
-	size_t			bits;		/* per bitmap */
-	dma_addr_t		base;
-
-	spinlock_t		lock;
-	struct kref		kref;
-};
+#include <linux/iommu-helper.h>
 
 struct dma_iommu_mapping *
 arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t size);
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 3d43c41..b82561e 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1292,7 +1292,7 @@ err:
 static dma_addr_t
 __iommu_create_mapping(struct device *dev, struct page **pages, size_t size)
 {
-	struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+	struct dma_iommu_mapping *mapping = dev->mapping;
 	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
 	dma_addr_t dma_addr, iova;
 	int i, ret = DMA_ERROR_CODE;
@@ -1328,7 +1328,7 @@ fail:
 
 static int __iommu_remove_mapping(struct device *dev, dma_addr_t iova, size_t size)
 {
-	struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+	struct dma_iommu_mapping *mapping = dev->mapping;
 
 	/*
 	 * add optional in-page offset from iova to size and align
@@ -1541,7 +1541,7 @@ static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
 			  enum dma_data_direction dir, struct dma_attrs *attrs,
 			  bool is_coherent)
 {
-	struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+	struct dma_iommu_mapping *mapping = dev->mapping;
 	dma_addr_t iova, iova_base;
 	int ret = 0;
 	unsigned int count;
@@ -1762,7 +1762,7 @@ static dma_addr_t arm_coherent_iommu_map_page(struct device *dev, struct page *p
 	     unsigned long offset, size_t size, enum dma_data_direction dir,
 	     struct dma_attrs *attrs)
 {
-	struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+	struct dma_iommu_mapping *mapping = dev->mapping;
 	dma_addr_t dma_addr;
 	int ret, prot, len = PAGE_ALIGN(size + offset);
 
@@ -1815,7 +1815,7 @@ static void arm_coherent_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir,
 		struct dma_attrs *attrs)
 {
-	struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+	struct dma_iommu_mapping *mapping = dev->mapping;
 	dma_addr_t iova = handle & PAGE_MASK;
 	int offset = handle & ~PAGE_MASK;
 	int len = PAGE_ALIGN(size + offset);
@@ -1840,7 +1840,7 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir,
 		struct dma_attrs *attrs)
 {
-	struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+	struct dma_iommu_mapping *mapping = dev->mapping;
 	dma_addr_t iova = handle & PAGE_MASK;
 	struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
 	int offset = handle & ~PAGE_MASK;
@@ -1859,7 +1859,7 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 static void arm_iommu_sync_single_for_cpu(struct device *dev,
 		dma_addr_t handle, size_t size, enum dma_data_direction dir)
 {
-	struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+	struct dma_iommu_mapping *mapping = dev->mapping;
 	dma_addr_t iova = handle & PAGE_MASK;
 	struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
 	unsigned int offset = handle & ~PAGE_MASK;
@@ -1873,7 +1873,7 @@ static void arm_iommu_sync_single_for_cpu(struct device *dev,
 static void arm_iommu_sync_single_for_device(struct device *dev,
 		dma_addr_t handle, size_t size, enum dma_data_direction dir)
 {
-	struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+	struct dma_iommu_mapping *mapping = dev->mapping;
 	dma_addr_t iova = handle & PAGE_MASK;
 	struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
 	unsigned int offset = handle & ~PAGE_MASK;
@@ -2045,7 +2045,7 @@ int arm_iommu_attach_device(struct device *dev,
 		return err;
 
 	kref_get(&mapping->kref);
-	dev->archdata.mapping = mapping;
+	dev->mapping = mapping;
 	set_dma_ops(dev, &iommu_ops);
 
 	pr_debug("Attached IOMMU controller to %s device.\n", dev_name(dev));
@@ -2072,7 +2072,7 @@ void arm_iommu_detach_device(struct device *dev)
 
 	iommu_detach_device(mapping->domain, dev);
 	kref_put(&mapping->kref, release_iommu_mapping);
-	dev->archdata.mapping = NULL;
+	dev->mapping = NULL;
 	set_dma_ops(dev, NULL);
 
 	pr_debug("Detached IOMMU controller from %s device.\n", dev_name(dev));
diff --git a/drivers/gpu/drm/exynos/exynos_drm_iommu.c b/drivers/gpu/drm/exynos/exynos_drm_iommu.c
index 091068f..2dabdbf 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_iommu.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_iommu.c
@@ -46,7 +46,7 @@ int drm_create_iommu_mapping(struct drm_device *drm_dev)
 	dev->dma_parms = devm_kzalloc(dev, sizeof(*dev->dma_parms),
 					GFP_KERNEL);
 	dma_set_max_seg_size(dev, 0xffffffffu);
-	dev->archdata.mapping = mapping;
+	dev->mapping = mapping;
 
 	return 0;
 }
@@ -63,7 +63,7 @@ void drm_release_iommu_mapping(struct drm_device *drm_dev)
 {
 	struct device *dev = drm_dev->dev;
 
-	arm_iommu_release_mapping(dev->archdata.mapping);
+	arm_iommu_release_mapping(dev->mapping);
 }
 
 /*
@@ -81,7 +81,7 @@ int drm_iommu_attach_device(struct drm_device *drm_dev,
 	struct device *dev = drm_dev->dev;
 	int ret;
 
-	if (!dev->archdata.mapping) {
+	if (!dev->mapping) {
 		DRM_ERROR("iommu_mapping is null.\n");
 		return -EFAULT;
 	}
@@ -91,7 +91,7 @@ int drm_iommu_attach_device(struct drm_device *drm_dev,
 					GFP_KERNEL);
 	dma_set_max_seg_size(subdrv_dev, 0xffffffffu);
 
-	ret = arm_iommu_attach_device(subdrv_dev, dev->archdata.mapping);
+	ret = arm_iommu_attach_device(subdrv_dev, dev->mapping);
 	if (ret < 0) {
 		DRM_DEBUG_KMS("failed iommu attach.\n");
 		return ret;
@@ -124,7 +124,7 @@ void drm_iommu_detach_device(struct drm_device *drm_dev,
 				struct device *subdrv_dev)
 {
 	struct device *dev = drm_dev->dev;
-	struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+	struct dma_iommu_mapping *mapping = dev->mapping;
 
 	if (!mapping || !mapping->domain)
 		return;
diff --git a/include/linux/device.h b/include/linux/device.h
index c0a1261..c73df6f 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -705,6 +705,10 @@ struct device {
 	/* arch specific additions */
 	struct dev_archdata	archdata;
 
+#ifdef CONFIG_DMA_USE_IOMMU_HELPER_MAPPING
+	struct dma_iommu_mapping	*mapping;
+#endif
+
 	struct device_node	*of_node; /* associated device tree node */
 	struct acpi_dev_node	acpi_node; /* associated ACPI device node */
 
diff --git a/include/linux/iommu-helper.h b/include/linux/iommu-helper.h
index 86bdeff..0c5e4c7 100644
--- a/include/linux/iommu-helper.h
+++ b/include/linux/iommu-helper.h
@@ -3,6 +3,28 @@
 
 #include <linux/kernel.h>
 
+#ifdef CONFIG_DMA_USE_IOMMU_HELPER_MAPPING
+struct dma_iommu_mapping {
+	/* iommu specific data */
+	struct iommu_domain	*domain;
+
+	unsigned long		**bitmaps;	/* array of bitmaps */
+	unsigned int		nr_bitmaps;	/* nr of elements in array */
+	unsigned int		extensions;
+	size_t			bitmap_size;	/* size of a single bitmap */
+	size_t			bits;		/* per bitmap */
+	dma_addr_t		base;
+
+	spinlock_t		lock;
+	struct kref		kref;
+};
+
+#define to_dma_iommu_mapping(dev) ((dev)->mapping)
+#else
+#define to_dma_iommu_mapping(dev) NULL
+#endif
+
+
 static inline unsigned long iommu_device_max_index(unsigned long size,
 						   unsigned long offset,
 						   u64 dma_mask)
-- 
1.8.1.3


* [PATCH 2/4] arm: dma-mapping: Refractor attach/detach dev function calls
  2014-06-02 10:19 ` [PATCH 1/4] arm: dma-iommu: Move out dma_iommu_mapping struct ritesh.harjani at gmail.com
@ 2014-06-02 10:19   ` ritesh.harjani at gmail.com
  2014-06-02 10:19     ` [PATCH 3/4] arm: dma-mapping: Refractor iommu_alloc/free funcs ritesh.harjani at gmail.com
  0 siblings, 1 reply; 6+ messages in thread
From: ritesh.harjani at gmail.com @ 2014-06-02 10:19 UTC (permalink / raw)
  To: linux-arm-kernel

From: Ritesh Harjani <ritesh.harjani@gmail.com>

Refactor the following function bodies out to lib/iommu-helper.c:

arm_iommu_attach/detach_device function calls.
arm_iommu_create/release_mapping function calls.
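
For reference, the bitmap sizing in the moved __iommu_init_mapping
works out as follows (worked example, assuming 4 KiB pages and a
256 MiB IO space):

	bits        = SZ_256M >> PAGE_SHIFT;              /* 65536 IOVA pages */
	bitmap_size = BITS_TO_LONGS(bits) * sizeof(long); /* 8192 bytes */
	/*
	 * 8192 > PAGE_SIZE, so bitmap_size is clamped to PAGE_SIZE and
	 * extensions = 2: the IO space is tracked by up to two 4096-byte
	 * bitmaps, the second allocated on demand by extend_iommu_mapping().
	 */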

Change-Id: Ic69a8b6b7008599a6e98b670b11a61ff6a5bac99
Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com>
---
 arch/arm/mm/dma-mapping.c    | 101 +++++--------------------------
 include/linux/iommu-helper.h |  10 ++++
 lib/iommu-helper.c           | 140 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 165 insertions(+), 86 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index b82561e..38fc146 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1924,7 +1924,8 @@ struct dma_map_ops iommu_coherent_ops = {
  * @base: start address of the valid IO address space
  * @size: maximum size of the valid IO address space
  *
- * Creates a mapping structure which holds information about used/unused
+ * Calls the lib/iommu-helper function which creates a mapping
+ * structure holding information about used/unused
  * IO address ranges, which is required to perform memory allocation and
  * mapping with IOMMU aware functions.
  *
@@ -1934,71 +1935,10 @@ struct dma_map_ops iommu_coherent_ops = {
 struct dma_iommu_mapping *
 arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t size)
 {
-	unsigned int bits = size >> PAGE_SHIFT;
-	unsigned int bitmap_size = BITS_TO_LONGS(bits) * sizeof(long);
-	struct dma_iommu_mapping *mapping;
-	int extensions = 1;
-	int err = -ENOMEM;
-
-	if (!bitmap_size)
-		return ERR_PTR(-EINVAL);
-
-	if (bitmap_size > PAGE_SIZE) {
-		extensions = bitmap_size / PAGE_SIZE;
-		bitmap_size = PAGE_SIZE;
-	}
-
-	mapping = kzalloc(sizeof(struct dma_iommu_mapping), GFP_KERNEL);
-	if (!mapping)
-		goto err;
-
-	mapping->bitmap_size = bitmap_size;
-	mapping->bitmaps = kzalloc(extensions * sizeof(unsigned long *),
-				GFP_KERNEL);
-	if (!mapping->bitmaps)
-		goto err2;
-
-	mapping->bitmaps[0] = kzalloc(bitmap_size, GFP_KERNEL);
-	if (!mapping->bitmaps[0])
-		goto err3;
-
-	mapping->nr_bitmaps = 1;
-	mapping->extensions = extensions;
-	mapping->base = base;
-	mapping->bits = BITS_PER_BYTE * bitmap_size;
-
-	spin_lock_init(&mapping->lock);
-
-	mapping->domain = iommu_domain_alloc(bus);
-	if (!mapping->domain)
-		goto err4;
-
-	kref_init(&mapping->kref);
-	return mapping;
-err4:
-	kfree(mapping->bitmaps[0]);
-err3:
-	kfree(mapping->bitmaps);
-err2:
-	kfree(mapping);
-err:
-	return ERR_PTR(err);
+	return __iommu_init_mapping(bus, base, size);
 }
 EXPORT_SYMBOL_GPL(arm_iommu_create_mapping);
 
-static void release_iommu_mapping(struct kref *kref)
-{
-	int i;
-	struct dma_iommu_mapping *mapping =
-		container_of(kref, struct dma_iommu_mapping, kref);
-
-	iommu_domain_free(mapping->domain);
-	for (i = 0; i < mapping->nr_bitmaps; i++)
-		kfree(mapping->bitmaps[i]);
-	kfree(mapping->bitmaps);
-	kfree(mapping);
-}
-
 static int extend_iommu_mapping(struct dma_iommu_mapping *mapping)
 {
 	int next_bitmap;
@@ -2019,8 +1959,7 @@ static int extend_iommu_mapping(struct dma_iommu_mapping *mapping)
 
 void arm_iommu_release_mapping(struct dma_iommu_mapping *mapping)
 {
-	if (mapping)
-		kref_put(&mapping->kref, release_iommu_mapping);
+	__iommu_release_mapping(mapping);
 }
 EXPORT_SYMBOL_GPL(arm_iommu_release_mapping);
 
@@ -2030,8 +1969,9 @@ EXPORT_SYMBOL_GPL(arm_iommu_release_mapping);
  * @mapping: io address space mapping structure (returned from
  *	arm_iommu_create_mapping)
  *
- * Attaches specified io address space mapping to the provided device,
- * this replaces the dma operations (dma_map_ops pointer) with the
+ * Calls into lib/iommu-helper, which attaches the specified io
+ * address space mapping to the provided device; this
+ * replaces the dma operations (dma_map_ops pointer) with the
  * IOMMU aware version. More than one client might be attached to
  * the same io address space mapping.
  */
@@ -2040,13 +1980,12 @@ int arm_iommu_attach_device(struct device *dev,
 {
 	int err;
 
-	err = iommu_attach_device(mapping->domain, dev);
-	if (err)
-		return err;
+	err = __iommu_attach_device(dev, mapping);
 
-	kref_get(&mapping->kref);
-	dev->mapping = mapping;
-	set_dma_ops(dev, &iommu_ops);
+	if (err)
+		return err;
+
+	set_dma_ops(dev, &iommu_ops);
 
 	pr_debug("Attached IOMMU controller to %s device.\n", dev_name(dev));
 	return 0;
@@ -2057,24 +1996,14 @@ EXPORT_SYMBOL_GPL(arm_iommu_attach_device);
  * arm_iommu_detach_device
  * @dev: valid struct device pointer
  *
- * Detaches the provided device from a previously attached map.
+ * Calls into lib/iommu-helper, which detaches the provided
+ * device from a previously attached map.
  * This voids the dma operations (dma_map_ops pointer)
  */
 void arm_iommu_detach_device(struct device *dev)
 {
-	struct dma_iommu_mapping *mapping;
-
-	mapping = to_dma_iommu_mapping(dev);
-	if (!mapping) {
-		dev_warn(dev, "Not attached\n");
-		return;
-	}
-
-	iommu_detach_device(mapping->domain, dev);
-	kref_put(&mapping->kref, release_iommu_mapping);
-	dev->mapping = NULL;
+	__iommu_detach_device(dev);
 	set_dma_ops(dev, NULL);
-
 	pr_debug("Detached IOMMU controller from %s device.\n", dev_name(dev));
 }
 EXPORT_SYMBOL_GPL(arm_iommu_detach_device);
diff --git a/include/linux/iommu-helper.h b/include/linux/iommu-helper.h
index 0c5e4c7..c6a315d 100644
--- a/include/linux/iommu-helper.h
+++ b/include/linux/iommu-helper.h
@@ -19,6 +19,16 @@ struct dma_iommu_mapping {
 	struct kref		kref;
 };
 
+extern void __iommu_detach_device(struct device *dev);
+
+extern void __iommu_release_mapping(struct dma_iommu_mapping *mapping);
+
+extern int __iommu_attach_device(struct device *dev,
+			    struct dma_iommu_mapping *mapping);
+
+extern struct dma_iommu_mapping *
+__iommu_init_mapping(struct bus_type *bus, dma_addr_t base, size_t size);
+
 #define to_dma_iommu_mapping(dev) ((dev)->mapping)
 #else
 #define to_dma_iommu_mapping(dev) NULL
diff --git a/lib/iommu-helper.c b/lib/iommu-helper.c
index c27e269..28daaa5 100644
--- a/lib/iommu-helper.c
+++ b/lib/iommu-helper.c
@@ -6,6 +6,15 @@
 #include <linux/bitmap.h>
 #include <linux/bug.h>
 
+#ifdef CONFIG_DMA_USE_IOMMU_HELPER_MAPPING
+#include <linux/iommu.h>
+#include <linux/device.h>
+#include <linux/iommu-helper.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/errno.h>
+#endif
+
 int iommu_is_span_boundary(unsigned int index, unsigned int nr,
 			   unsigned long shift,
 			   unsigned long boundary_size)
@@ -39,3 +48,134 @@ again:
 	return -1;
 }
 EXPORT_SYMBOL(iommu_area_alloc);
+
+#ifdef CONFIG_DMA_USE_IOMMU_HELPER_MAPPING
+
+/**
+ * __iommu_init_mapping
+ * @bus: pointer to the bus holding the client device (for IOMMU calls)
+ * @base: start address of the valid IO address space
+ * @size: maximum size of the valid IO address space
+ *
+ * Creates a mapping structure which holds information about used/unused
+ * IO address ranges, which is required to perform memory allocation and
+ * mapping with IOMMU aware functions.
+ *
+ */
+
+struct dma_iommu_mapping *
+__iommu_init_mapping(struct bus_type *bus, dma_addr_t base, size_t size)
+{
+	unsigned int bits = size >> PAGE_SHIFT;
+	unsigned int bitmap_size = BITS_TO_LONGS(bits) * sizeof(long);
+	struct dma_iommu_mapping *mapping;
+	int extensions = 1;
+	int err = -ENOMEM;
+
+	if (!bitmap_size)
+		return ERR_PTR(-EINVAL);
+
+	if (bitmap_size > PAGE_SIZE) {
+		extensions = bitmap_size / PAGE_SIZE;
+		bitmap_size = PAGE_SIZE;
+	}
+
+	mapping = kzalloc(sizeof(struct dma_iommu_mapping), GFP_KERNEL);
+	if (!mapping)
+		goto err;
+
+	mapping->bitmap_size = bitmap_size;
+	mapping->bitmaps = kzalloc(extensions * sizeof(unsigned long *),
+				GFP_KERNEL);
+	if (!mapping->bitmaps)
+		goto err2;
+
+	mapping->bitmaps[0] = kzalloc(bitmap_size, GFP_KERNEL);
+	if (!mapping->bitmaps[0])
+		goto err3;
+
+	mapping->nr_bitmaps = 1;
+	mapping->extensions = extensions;
+	mapping->base = base;
+	mapping->bits = BITS_PER_BYTE * bitmap_size;
+
+	spin_lock_init(&mapping->lock);
+
+	mapping->domain = iommu_domain_alloc(bus);
+	if (!mapping->domain)
+		goto err4;
+
+	kref_init(&mapping->kref);
+	return mapping;
+err4:
+	kfree(mapping->bitmaps[0]);
+err3:
+	kfree(mapping->bitmaps);
+err2:
+	kfree(mapping);
+err:
+	return ERR_PTR(err);
+}
+
+static void release_iommu_mapping(struct kref *kref)
+{
+	int i;
+	struct dma_iommu_mapping *mapping =
+		container_of(kref, struct dma_iommu_mapping, kref);
+
+	iommu_domain_free(mapping->domain);
+	for (i = 0; i < mapping->nr_bitmaps; i++)
+		kfree(mapping->bitmaps[i]);
+	kfree(mapping->bitmaps);
+	kfree(mapping);
+}
+
+
+void __iommu_release_mapping(struct dma_iommu_mapping *mapping)
+{
+	if (mapping)
+		kref_put(&mapping->kref, release_iommu_mapping);
+}
+
+/**
+ * __iommu_detach_device
+ * @dev: valid struct device pointer
+ *
+ * Detaches the provided device from a previously attached map.
+ */
+void __iommu_detach_device(struct device *dev)
+{
+	struct dma_iommu_mapping *mapping;
+
+	mapping = to_dma_iommu_mapping(dev);
+	if (!mapping) {
+		dev_warn(dev, "Not attached\n");
+		return;
+	}
+
+	iommu_detach_device(mapping->domain, dev);
+	kref_put(&mapping->kref, release_iommu_mapping);
+	dev->mapping = NULL;
+}
+
+/**
+ * __iommu_attach_device
+ * @dev: valid struct device pointer
+ * @mapping: io address space mapping structure
+ *
+ * Attaches specified io address space mapping to the provided device.
+ */
+int __iommu_attach_device(struct device *dev,
+			    struct dma_iommu_mapping *mapping)
+{
+	int err;
+
+	err = iommu_attach_device(mapping->domain, dev);
+	if (err)
+		return err;
+
+	kref_get(&mapping->kref);
+	dev->mapping = mapping;
+	return 0;
+}
+#endif
-- 
1.8.1.3


* [PATCH 3/4] arm: dma-mapping: Refractor iommu_alloc/free funcs
  2014-06-02 10:19   ` [PATCH 2/4] arm: dma-mapping: Refractor attach/detach dev function calls ritesh.harjani at gmail.com
@ 2014-06-02 10:19     ` ritesh.harjani at gmail.com
  2014-06-02 10:19       ` [PATCH 4/4] arm:dma-iommu: Move out complete func defs ritesh.harjani at gmail.com
  0 siblings, 1 reply; 6+ messages in thread
From: ritesh.harjani at gmail.com @ 2014-06-02 10:19 UTC (permalink / raw)
  To: linux-arm-kernel

From: Ritesh Harjani <ritesh.harjani@gmail.com>

__iommu_alloc_buffer/__iommu_free_buffer can be moved out to
lib/iommu-helper.c as part of refactoring the ARM IOMMU
dma-mapping code.
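
The arch-specific cache maintenance is passed in as a callback, so a
fully coherent architecture could simply skip it (illustrative sketch;
the helper checks for a NULL callback before calling it):

	/* non-coherent arch (ARM): clear/flush the pages via the callback */
	pages = __iommu_alloc_buffer(dev, size, gfp, attrs,
				     __dma_clear_buffer);

	/* hypothetical fully coherent arch: no cache callback needed */
	pages = __iommu_alloc_buffer(dev, size, gfp, attrs, NULL);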

Change-Id: I5fba02f64cb4913f6d0200189267826df47df8d6
Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com>
---
 arch/arm/mm/dma-mapping.c    | 95 +-------------------------------------------
 include/linux/iommu-helper.h |  8 ++++
 lib/iommu-helper.c           | 95 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 105 insertions(+), 93 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 38fc146..268004c 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1161,98 +1161,6 @@ static inline void __free_iova(struct dma_iommu_mapping *mapping,
 	spin_unlock_irqrestore(&mapping->lock, flags);
 }
 
-static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
-					  gfp_t gfp, struct dma_attrs *attrs)
-{
-	struct page **pages;
-	int count = size >> PAGE_SHIFT;
-	int array_size = count * sizeof(struct page *);
-	int i = 0;
-
-	if (array_size <= PAGE_SIZE)
-		pages = kzalloc(array_size, gfp);
-	else
-		pages = vzalloc(array_size);
-	if (!pages)
-		return NULL;
-
-	if (dma_get_attr(DMA_ATTR_FORCE_CONTIGUOUS, attrs))
-	{
-		unsigned long order = get_order(size);
-		struct page *page;
-
-		page = dma_alloc_from_contiguous(dev, count, order);
-		if (!page)
-			goto error;
-
-		__dma_clear_buffer(page, size);
-
-		for (i = 0; i < count; i++)
-			pages[i] = page + i;
-
-		return pages;
-	}
-
-	/*
-	 * IOMMU can map any pages, so himem can also be used here
-	 */
-	gfp |= __GFP_NOWARN | __GFP_HIGHMEM;
-
-	while (count) {
-		int j, order = __fls(count);
-
-		pages[i] = alloc_pages(gfp, order);
-		while (!pages[i] && order)
-			pages[i] = alloc_pages(gfp, --order);
-		if (!pages[i])
-			goto error;
-
-		if (order) {
-			split_page(pages[i], order);
-			j = 1 << order;
-			while (--j)
-				pages[i + j] = pages[i] + j;
-		}
-
-		__dma_clear_buffer(pages[i], PAGE_SIZE << order);
-		i += 1 << order;
-		count -= 1 << order;
-	}
-
-	return pages;
-error:
-	while (i--)
-		if (pages[i])
-			__free_pages(pages[i], 0);
-	if (array_size <= PAGE_SIZE)
-		kfree(pages);
-	else
-		vfree(pages);
-	return NULL;
-}
-
-static int __iommu_free_buffer(struct device *dev, struct page **pages,
-			       size_t size, struct dma_attrs *attrs)
-{
-	int count = size >> PAGE_SHIFT;
-	int array_size = count * sizeof(struct page *);
-	int i;
-
-	if (dma_get_attr(DMA_ATTR_FORCE_CONTIGUOUS, attrs)) {
-		dma_release_from_contiguous(dev, pages[0], count);
-	} else {
-		for (i = 0; i < count; i++)
-			if (pages[i])
-				__free_pages(pages[i], 0);
-	}
-
-	if (array_size <= PAGE_SIZE)
-		kfree(pages);
-	else
-		vfree(pages);
-	return 0;
-}
-
 /*
  * Create a CPU mapping for a specified pages
  */
@@ -1417,7 +1325,8 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
 	 */
 	gfp &= ~(__GFP_COMP);
 
-	pages = __iommu_alloc_buffer(dev, size, gfp, attrs);
+	pages = __iommu_alloc_buffer(dev, size, gfp, attrs,
+					__dma_clear_buffer);
 	if (!pages)
 		return NULL;
 
diff --git a/include/linux/iommu-helper.h b/include/linux/iommu-helper.h
index c6a315d..b27b7cb8 100644
--- a/include/linux/iommu-helper.h
+++ b/include/linux/iommu-helper.h
@@ -2,6 +2,7 @@
 #define _LINUX_IOMMU_HELPER_H
 
 #include <linux/kernel.h>
+#include <linux/dma-attrs.h>
 
 #ifdef CONFIG_DMA_USE_IOMMU_HELPER_MAPPING
 struct dma_iommu_mapping {
@@ -19,6 +20,13 @@ struct dma_iommu_mapping {
 	struct kref		kref;
 };
 
+extern struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
+					  gfp_t gfp, struct dma_attrs *attrs,
+			void (*arch_clear_buffer_cb)(struct page*, size_t));
+
+extern int __iommu_free_buffer(struct device *dev, struct page **pages,
+			       size_t size, struct dma_attrs *attrs);
+
 extern void __iommu_detach_device(struct device *dev);
 
 extern void __iommu_release_mapping(struct dma_iommu_mapping *mapping);
diff --git a/lib/iommu-helper.c b/lib/iommu-helper.c
index 28daaa5..e0f643a 100644
--- a/lib/iommu-helper.c
+++ b/lib/iommu-helper.c
@@ -13,6 +13,8 @@
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
 #include <linux/errno.h>
+#include <linux/dma-contiguous.h>
+#include <linux/mm.h>
 #endif
 
 int iommu_is_span_boundary(unsigned int index, unsigned int nr,
@@ -51,6 +53,99 @@ EXPORT_SYMBOL(iommu_area_alloc);
 
 #ifdef CONFIG_DMA_USE_IOMMU_HELPER_MAPPING
 
+struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
+					  gfp_t gfp, struct dma_attrs *attrs,
+			void (*arch_clear_buffer_cb)(struct page*, size_t))
+{
+	struct page **pages;
+	int count = size >> PAGE_SHIFT;
+	int array_size = count * sizeof(struct page *);
+	int i = 0;
+
+	if (array_size <= PAGE_SIZE)
+		pages = kzalloc(array_size, gfp);
+	else
+		pages = vzalloc(array_size);
+	if (!pages)
+		return NULL;
+
+	if (dma_get_attr(DMA_ATTR_FORCE_CONTIGUOUS, attrs)) {
+		unsigned long order = get_order(size);
+		struct page *page;
+
+		page = dma_alloc_from_contiguous(dev, count, order);
+		if (!page)
+			goto error;
+
+		if (arch_clear_buffer_cb)
+			arch_clear_buffer_cb(page, size);
+
+		for (i = 0; i < count; i++)
+			pages[i] = page + i;
+
+		return pages;
+	}
+
+	/*
+	 * IOMMU can map any pages, so himem can also be used here
+	 */
+	gfp |= __GFP_NOWARN | __GFP_HIGHMEM;
+
+	while (count) {
+		int j, order = __fls(count);
+
+		pages[i] = alloc_pages(gfp, order);
+		while (!pages[i] && order)
+			pages[i] = alloc_pages(gfp, --order);
+		if (!pages[i])
+			goto error;
+
+		if (order) {
+			split_page(pages[i], order);
+			j = 1 << order;
+			while (--j)
+				pages[i + j] = pages[i] + j;
+		}
+		if (arch_clear_buffer_cb)
+			arch_clear_buffer_cb(pages[i], PAGE_SIZE << order);
+		i += 1 << order;
+		count -= 1 << order;
+	}
+
+	return pages;
+error:
+	while (i--)
+		if (pages[i])
+			__free_pages(pages[i], 0);
+	if (array_size <= PAGE_SIZE)
+		kfree(pages);
+	else
+		vfree(pages);
+	return NULL;
+}
+
+int __iommu_free_buffer(struct device *dev, struct page **pages,
+			       size_t size, struct dma_attrs *attrs)
+{
+	int count = size >> PAGE_SHIFT;
+	int array_size = count * sizeof(struct page *);
+	int i;
+
+	if (dma_get_attr(DMA_ATTR_FORCE_CONTIGUOUS, attrs)) {
+		dma_release_from_contiguous(dev, pages[0], count);
+	} else {
+		for (i = 0; i < count; i++)
+			if (pages[i])
+				__free_pages(pages[i], 0);
+	}
+
+	if (array_size <= PAGE_SIZE)
+		kfree(pages);
+	else
+		vfree(pages);
+	return 0;
+}
+
 /**
  * __iommu_init_mapping
  * @bus: pointer to the bus holding the client device (for IOMMU calls)
-- 
1.8.1.3


* [PATCH 4/4] arm:dma-iommu: Move out complete func defs
  2014-06-02 10:19     ` [PATCH 3/4] arm: dma-mapping: Refractor iommu_alloc/free funcs ritesh.harjani at gmail.com
@ 2014-06-02 10:19       ` ritesh.harjani at gmail.com
  0 siblings, 0 replies; 6+ messages in thread
From: ritesh.harjani at gmail.com @ 2014-06-02 10:19 UTC (permalink / raw)
  To: linux-arm-kernel

From: Ritesh Harjani <ritesh.harjani@gmail.com>

Move out complete function definitions from
arch/arm/mm/dma-mapping.c to lib/iommu-helper.c:

1. Move out the iova alloc/free routines and make them static.

2. Move out the complete definitions of the functions calling the
iova alloc/free routines to lib/iommu-helper.c.

3. Separate the cache maintenance from the iommu map/unmap
routines, so that it is done from within arch/arm/mm/dma-mapping.c.
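
With the cache maintenance split out, the non-coherent arch paths
reduce to a sync step around the generic helper, roughly (this mirrors
the arm_iommu_map_page change below):

	/* arch side: sync the CPU view first, then do the generic map */
	if (!dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
		__dma_page_cpu_to_dev(page, offset, size, dir);

	return __iommu_map_page(dev, page, offset, size, dir);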

Change-Id: I6abfa820fe1450b7de70de1a1ace15263a29dfc4
Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com>
---
 arch/arm/Kconfig             |  42 ++---
 arch/arm/mm/dma-mapping.c    | 362 +++----------------------------------------
 include/linux/iommu-helper.h |  28 +++-
 lib/iommu-helper.c           | 329 ++++++++++++++++++++++++++++++++++++++-
 4 files changed, 399 insertions(+), 362 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 20717fb..977427d 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -80,27 +80,6 @@ config ARM_DMA_USE_IOMMU
 	select NEED_SG_DMA_LENGTH
 	select DMA_USE_IOMMU_HELPER_MAPPING
 
-if ARM_DMA_USE_IOMMU
-
-config ARM_DMA_IOMMU_ALIGNMENT
-	int "Maximum PAGE_SIZE order of alignment for DMA IOMMU buffers"
-	range 4 9
-	default 8
-	help
-	  DMA mapping framework by default aligns all buffers to the smallest
-	  PAGE_SIZE order which is greater than or equal to the requested buffer
-	  size. This works well for buffers up to a few hundreds kilobytes, but
-	  for larger buffers it just a waste of address space. Drivers which has
-	  relatively small addressing window (like 64Mib) might run out of
-	  virtual space with just a few allocations.
-
-	  With this parameter you can specify the maximum PAGE_SIZE order for
-	  DMA IOMMU buffers. Larger buffers will be aligned only to this
-	  specified order. The order is expressed as a power of two multiplied
-	  by the PAGE_SIZE.
-
-endif
-
 config HAVE_PWM
 	bool
 
@@ -1949,6 +1928,27 @@ config IOMMU_HELPER
 config DMA_USE_IOMMU_HELPER_MAPPING
 	def_bool n
 
+if DMA_USE_IOMMU_HELPER_MAPPING
+
+config DMA_IOMMU_ALIGNMENT
+	int "Maximum PAGE_SIZE order of alignment for DMA IOMMU buffers"
+	range 4 9
+	default 8
+	help
+	  DMA mapping framework by default aligns all buffers to the smallest
+	  PAGE_SIZE order which is greater than or equal to the requested buffer
+	  size. This works well for buffers up to a few hundred kilobytes, but
+	  for larger buffers it is just a waste of address space. Drivers which
+	  have a relatively small addressing window (like 64 MiB) might run out of
+	  virtual space with just a few allocations.
+
+	  With this parameter you can specify the maximum PAGE_SIZE order for
+	  DMA IOMMU buffers. Larger buffers will be aligned only to this
+	  specified order. The order is expressed as a power of two multiplied
+	  by the PAGE_SIZE.
+
+endif
+
 config XEN_DOM0
 	def_bool y
 	depends on XEN
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 268004c..6546fba 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1066,101 +1066,6 @@ fs_initcall(dma_debug_do_init);
 
 /* IOMMU */
 
-static int extend_iommu_mapping(struct dma_iommu_mapping *mapping);
-
-static inline dma_addr_t __alloc_iova(struct dma_iommu_mapping *mapping,
-				      size_t size)
-{
-	unsigned int order = get_order(size);
-	unsigned int align = 0;
-	unsigned int count, start;
-	size_t mapping_size = mapping->bits << PAGE_SHIFT;
-	unsigned long flags;
-	dma_addr_t iova;
-	int i;
-
-	if (order > CONFIG_ARM_DMA_IOMMU_ALIGNMENT)
-		order = CONFIG_ARM_DMA_IOMMU_ALIGNMENT;
-
-	count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	align = (1 << order) - 1;
-
-	spin_lock_irqsave(&mapping->lock, flags);
-	for (i = 0; i < mapping->nr_bitmaps; i++) {
-		start = bitmap_find_next_zero_area(mapping->bitmaps[i],
-				mapping->bits, 0, count, align);
-
-		if (start > mapping->bits)
-			continue;
-
-		bitmap_set(mapping->bitmaps[i], start, count);
-		break;
-	}
-
-	/*
-	 * No unused range found. Try to extend the existing mapping
-	 * and perform a second attempt to reserve an IO virtual
-	 * address range of size bytes.
-	 */
-	if (i == mapping->nr_bitmaps) {
-		if (extend_iommu_mapping(mapping)) {
-			spin_unlock_irqrestore(&mapping->lock, flags);
-			return DMA_ERROR_CODE;
-		}
-
-		start = bitmap_find_next_zero_area(mapping->bitmaps[i],
-				mapping->bits, 0, count, align);
-
-		if (start > mapping->bits) {
-			spin_unlock_irqrestore(&mapping->lock, flags);
-			return DMA_ERROR_CODE;
-		}
-
-		bitmap_set(mapping->bitmaps[i], start, count);
-	}
-	spin_unlock_irqrestore(&mapping->lock, flags);
-
-	iova = mapping->base + (mapping_size * i);
-	iova += start << PAGE_SHIFT;
-
-	return iova;
-}
-
-static inline void __free_iova(struct dma_iommu_mapping *mapping,
-			       dma_addr_t addr, size_t size)
-{
-	unsigned int start, count;
-	size_t mapping_size = mapping->bits << PAGE_SHIFT;
-	unsigned long flags;
-	dma_addr_t bitmap_base;
-	u32 bitmap_index;
-
-	if (!size)
-		return;
-
-	bitmap_index = (u32) (addr - mapping->base) / (u32) mapping_size;
-	BUG_ON(addr < mapping->base || bitmap_index > mapping->extensions);
-
-	bitmap_base = mapping->base + mapping_size * bitmap_index;
-
-	start = (addr - bitmap_base) >>	PAGE_SHIFT;
-
-	if (addr + size > bitmap_base + mapping_size) {
-		/*
-		 * The address range to be freed reaches into the iova
-		 * range of the next bitmap. This should not happen as
-		 * we don't allow this in __alloc_iova (at the
-		 * moment).
-		 */
-		BUG();
-	} else
-		count = size >> PAGE_SHIFT;
-
-	spin_lock_irqsave(&mapping->lock, flags);
-	bitmap_clear(mapping->bitmaps[bitmap_index], start, count);
-	spin_unlock_irqrestore(&mapping->lock, flags);
-}
-
 /*
  * Create a CPU mapping for a specified pages
  */
@@ -1194,62 +1099,6 @@ err:
 	return NULL;
 }
 
-/*
- * Create a mapping in device IO address space for specified pages
- */
-static dma_addr_t
-__iommu_create_mapping(struct device *dev, struct page **pages, size_t size)
-{
-	struct dma_iommu_mapping *mapping = dev->mapping;
-	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	dma_addr_t dma_addr, iova;
-	int i, ret = DMA_ERROR_CODE;
-
-	dma_addr = __alloc_iova(mapping, size);
-	if (dma_addr == DMA_ERROR_CODE)
-		return dma_addr;
-
-	iova = dma_addr;
-	for (i = 0; i < count; ) {
-		unsigned int next_pfn = page_to_pfn(pages[i]) + 1;
-		phys_addr_t phys = page_to_phys(pages[i]);
-		unsigned int len, j;
-
-		for (j = i + 1; j < count; j++, next_pfn++)
-			if (page_to_pfn(pages[j]) != next_pfn)
-				break;
-
-		len = (j - i) << PAGE_SHIFT;
-		ret = iommu_map(mapping->domain, iova, phys, len,
-				IOMMU_READ|IOMMU_WRITE);
-		if (ret < 0)
-			goto fail;
-		iova += len;
-		i = j;
-	}
-	return dma_addr;
-fail:
-	iommu_unmap(mapping->domain, dma_addr, iova-dma_addr);
-	__free_iova(mapping, dma_addr, size);
-	return DMA_ERROR_CODE;
-}
-
-static int __iommu_remove_mapping(struct device *dev, dma_addr_t iova, size_t size)
-{
-	struct dma_iommu_mapping *mapping = dev->mapping;
-
-	/*
-	 * add optional in-page offset from iova to size and align
-	 * result to page size
-	 */
-	size = PAGE_ALIGN((iova & ~PAGE_MASK) + size);
-	iova &= PAGE_MASK;
-
-	iommu_unmap(mapping->domain, iova, size);
-	__free_iova(mapping, iova, size);
-	return 0;
-}
-
 static struct page **__atomic_get_pages(void *addr)
 {
 	struct dma_pool *pool = &atomic_pool;
@@ -1421,120 +1270,6 @@ static int arm_iommu_get_sgtable(struct device *dev, struct sg_table *sgt,
 					 GFP_KERNEL);
 }
 
-static int __dma_direction_to_prot(enum dma_data_direction dir)
-{
-	int prot;
-
-	switch (dir) {
-	case DMA_BIDIRECTIONAL:
-		prot = IOMMU_READ | IOMMU_WRITE;
-		break;
-	case DMA_TO_DEVICE:
-		prot = IOMMU_READ;
-		break;
-	case DMA_FROM_DEVICE:
-		prot = IOMMU_WRITE;
-		break;
-	default:
-		prot = 0;
-	}
-
-	return prot;
-}
-
-/*
- * Map a part of the scatter-gather list into contiguous io address space
- */
-static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
-			  size_t size, dma_addr_t *handle,
-			  enum dma_data_direction dir, struct dma_attrs *attrs,
-			  bool is_coherent)
-{
-	struct dma_iommu_mapping *mapping = dev->mapping;
-	dma_addr_t iova, iova_base;
-	int ret = 0;
-	unsigned int count;
-	struct scatterlist *s;
-	int prot;
-
-	size = PAGE_ALIGN(size);
-	*handle = DMA_ERROR_CODE;
-
-	iova_base = iova = __alloc_iova(mapping, size);
-	if (iova == DMA_ERROR_CODE)
-		return -ENOMEM;
-
-	for (count = 0, s = sg; count < (size >> PAGE_SHIFT); s = sg_next(s)) {
-		phys_addr_t phys = page_to_phys(sg_page(s));
-		unsigned int len = PAGE_ALIGN(s->offset + s->length);
-
-		if (!is_coherent &&
-			!dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
-			__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
-
-		prot = __dma_direction_to_prot(dir);
-
-		ret = iommu_map(mapping->domain, iova, phys, len, prot);
-		if (ret < 0)
-			goto fail;
-		count += len >> PAGE_SHIFT;
-		iova += len;
-	}
-	*handle = iova_base;
-
-	return 0;
-fail:
-	iommu_unmap(mapping->domain, iova_base, count * PAGE_SIZE);
-	__free_iova(mapping, iova_base, size);
-	return ret;
-}
-
-static int __iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
-		     enum dma_data_direction dir, struct dma_attrs *attrs,
-		     bool is_coherent)
-{
-	struct scatterlist *s = sg, *dma = sg, *start = sg;
-	int i, count = 0;
-	unsigned int offset = s->offset;
-	unsigned int size = s->offset + s->length;
-	unsigned int max = dma_get_max_seg_size(dev);
-
-	for (i = 1; i < nents; i++) {
-		s = sg_next(s);
-
-		s->dma_address = DMA_ERROR_CODE;
-		s->dma_length = 0;
-
-		if (s->offset || (size & ~PAGE_MASK) || size + s->length > max) {
-			if (__map_sg_chunk(dev, start, size, &dma->dma_address,
-			    dir, attrs, is_coherent) < 0)
-				goto bad_mapping;
-
-			dma->dma_address += offset;
-			dma->dma_length = size - offset;
-
-			size = offset = s->offset;
-			start = s;
-			dma = sg_next(dma);
-			count += 1;
-		}
-		size += s->length;
-	}
-	if (__map_sg_chunk(dev, start, size, &dma->dma_address, dir, attrs,
-		is_coherent) < 0)
-		goto bad_mapping;
-
-	dma->dma_address += offset;
-	dma->dma_length = size - offset;
-
-	return count+1;
-
-bad_mapping:
-	for_each_sg(sg, s, count, i)
-		__iommu_remove_mapping(dev, sg_dma_address(s), sg_dma_len(s));
-	return 0;
-}
-
 /**
  * arm_coherent_iommu_map_sg - map a set of SG buffers for streaming mode DMA
  * @dev: valid struct device pointer
@@ -1550,7 +1285,7 @@ bad_mapping:
 int arm_coherent_iommu_map_sg(struct device *dev, struct scatterlist *sg,
 		int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
 {
-	return __iommu_map_sg(dev, sg, nents, dir, attrs, true);
+	return __iommu_map_sg(dev, sg, nents, dir, attrs);
 }
 
 /**
@@ -1568,25 +1303,15 @@ int arm_coherent_iommu_map_sg(struct device *dev, struct scatterlist *sg,
 int arm_iommu_map_sg(struct device *dev, struct scatterlist *sg,
 		int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
 {
-	return __iommu_map_sg(dev, sg, nents, dir, attrs, false);
-}
-
-static void __iommu_unmap_sg(struct device *dev, struct scatterlist *sg,
-		int nents, enum dma_data_direction dir, struct dma_attrs *attrs,
-		bool is_coherent)
-{
 	struct scatterlist *s;
-	int i;
-
-	for_each_sg(sg, s, nents, i) {
-		if (sg_dma_len(s))
-			__iommu_remove_mapping(dev, sg_dma_address(s),
-					       sg_dma_len(s));
-		if (!is_coherent &&
-		    !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
-			__dma_page_dev_to_cpu(sg_page(s), s->offset,
-					      s->length, dir);
+	int i, ret;
+	ret = __iommu_map_sg(dev, sg, nents, dir, attrs);
+	if (!dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs)) {
+		for_each_sg(sg, s, ret, i)
+			__dma_page_cpu_to_dev(sg_page(s), s->offset,
+					s->length, dir);
 	}
+	return ret;
 }
 
 /**
@@ -1602,7 +1327,7 @@ static void __iommu_unmap_sg(struct device *dev, struct scatterlist *sg,
 void arm_coherent_iommu_unmap_sg(struct device *dev, struct scatterlist *sg,
 		int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
 {
-	__iommu_unmap_sg(dev, sg, nents, dir, attrs, true);
+	__iommu_unmap_sg(dev, sg, nents, dir, attrs);
 }
 
 /**
@@ -1618,7 +1343,16 @@ void arm_coherent_iommu_unmap_sg(struct device *dev, struct scatterlist *sg,
 void arm_iommu_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
 			enum dma_data_direction dir, struct dma_attrs *attrs)
 {
-	__iommu_unmap_sg(dev, sg, nents, dir, attrs, false);
+	struct scatterlist *s;
+	int i;
+
+	__iommu_unmap_sg(dev, sg, nents, dir, attrs);
+
+	for_each_sg(sg, s, nents, i) {
+		if (!dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+			__dma_page_dev_to_cpu(sg_page(s), s->offset,
+					      s->length, dir);
+	}
 }
 
 /**
@@ -1671,24 +1405,7 @@ static dma_addr_t arm_coherent_iommu_map_page(struct device *dev, struct page *p
 	     unsigned long offset, size_t size, enum dma_data_direction dir,
 	     struct dma_attrs *attrs)
 {
-	struct dma_iommu_mapping *mapping = dev->mapping;
-	dma_addr_t dma_addr;
-	int ret, prot, len = PAGE_ALIGN(size + offset);
-
-	dma_addr = __alloc_iova(mapping, len);
-	if (dma_addr == DMA_ERROR_CODE)
-		return dma_addr;
-
-	prot = __dma_direction_to_prot(dir);
-
-	ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len, prot);
-	if (ret < 0)
-		goto fail;
-
-	return dma_addr + offset;
-fail:
-	__free_iova(mapping, dma_addr, len);
-	return DMA_ERROR_CODE;
+	return __iommu_map_page(dev, page, offset, size, dir);
 }
 
 /**
@@ -1708,7 +1425,7 @@ static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
 	if (!dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
 		__dma_page_cpu_to_dev(page, offset, size, dir);
 
-	return arm_coherent_iommu_map_page(dev, page, offset, size, dir, attrs);
+	return __iommu_map_page(dev, page, offset, size, dir);
 }
 
 /**
@@ -1724,16 +1441,7 @@ static void arm_coherent_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir,
 		struct dma_attrs *attrs)
 {
-	struct dma_iommu_mapping *mapping = dev->mapping;
-	dma_addr_t iova = handle & PAGE_MASK;
-	int offset = handle & ~PAGE_MASK;
-	int len = PAGE_ALIGN(size + offset);
-
-	if (!iova)
-		return;
-
-	iommu_unmap(mapping->domain, iova, len);
-	__free_iova(mapping, iova, len);
+	__iommu_unmap_page(dev, handle, size, dir);
 }
 
 /**
@@ -1753,16 +1461,12 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 	dma_addr_t iova = handle & PAGE_MASK;
 	struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
 	int offset = handle & ~PAGE_MASK;
-	int len = PAGE_ALIGN(size + offset);
-
-	if (!iova)
-		return;
 
 	if (!dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
 		__dma_page_dev_to_cpu(page, offset, size, dir);
 
-	iommu_unmap(mapping->domain, iova, len);
-	__free_iova(mapping, iova, len);
+	__iommu_unmap_page(dev, handle, size, dir);
+
 }
 
 static void arm_iommu_sync_single_for_cpu(struct device *dev,
@@ -1848,24 +1552,6 @@ arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t size)
 }
 EXPORT_SYMBOL_GPL(arm_iommu_create_mapping);
 
-static int extend_iommu_mapping(struct dma_iommu_mapping *mapping)
-{
-	int next_bitmap;
-
-	if (mapping->nr_bitmaps > mapping->extensions)
-		return -EINVAL;
-
-	next_bitmap = mapping->nr_bitmaps;
-	mapping->bitmaps[next_bitmap] = kzalloc(mapping->bitmap_size,
-						GFP_ATOMIC);
-	if (!mapping->bitmaps[next_bitmap])
-		return -ENOMEM;
-
-	mapping->nr_bitmaps++;
-
-	return 0;
-}
-
 void arm_iommu_release_mapping(struct dma_iommu_mapping *mapping)
 {
 	__iommu_release_mapping(mapping);
diff --git a/include/linux/iommu-helper.h b/include/linux/iommu-helper.h
index b27b7cb8..1e2e5f2 100644
--- a/include/linux/iommu-helper.h
+++ b/include/linux/iommu-helper.h
@@ -5,6 +5,12 @@
 #include <linux/dma-attrs.h>
 
 #ifdef CONFIG_DMA_USE_IOMMU_HELPER_MAPPING
+#include <linux/mm_types.h>
+#include <linux/dma-debug.h>
+#include <linux/kmemcheck.h>
+#include <linux/kref.h>
+#include <linux/dma-mapping.h>
+
 struct dma_iommu_mapping {
 	/* iommu specific data */
 	struct iommu_domain	*domain;
@@ -20,6 +26,25 @@ struct dma_iommu_mapping {
 	struct kref		kref;
 };
 
+extern dma_addr_t __iommu_create_mapping(struct device *dev, struct page **pages,
+					size_t size);
+
+extern int __iommu_remove_mapping(struct device *dev, dma_addr_t iova,
+				size_t size);
+
+extern dma_addr_t __iommu_map_page(struct device *dev, struct page *page,
+		unsigned long offset, size_t size, enum dma_data_direction dir);
+
+extern void __iommu_unmap_page(struct device *dev, dma_addr_t handle,
+		size_t size, enum dma_data_direction dir);
+
+extern int __iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+		     enum dma_data_direction dir, struct dma_attrs *attrs);
+
+extern void __iommu_unmap_sg(struct device *dev, struct scatterlist *sg,
+		int nents, enum dma_data_direction dir,
+		struct dma_attrs *attrs);
+
 extern struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
 					  gfp_t gfp, struct dma_attrs *attrs,
 			void (*arch_clear_buffer_cb)(struct page*, size_t));
@@ -29,14 +54,13 @@ extern int __iommu_free_buffer(struct device *dev, struct page **pages,
 
 extern void __iommu_detach_device(struct device *dev);
 
-extern void __iommu_release_mapping(struct dma_iommu_mapping *mapping);
-
 extern int __iommu_attach_device(struct device *dev,
 			    struct dma_iommu_mapping *mapping);
 
 extern struct dma_iommu_mapping *
 __iommu_init_mapping(struct bus_type *bus, dma_addr_t base, size_t size);
 
+extern void __iommu_release_mapping(struct dma_iommu_mapping *mapping);
 #define to_dma_iommu_mapping(dev) ((dev)->mapping)
 #else
 #define to_dma_iommu_mapping(dev) NULL
diff --git a/lib/iommu-helper.c b/lib/iommu-helper.c
index e0f643a..75d900e 100644
--- a/lib/iommu-helper.c
+++ b/lib/iommu-helper.c
@@ -8,13 +8,14 @@
 
 #ifdef CONFIG_DMA_USE_IOMMU_HELPER_MAPPING
 #include <linux/iommu.h>
-#include <linux/device.h>
 #include <linux/iommu-helper.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
 #include <linux/errno.h>
 #include <linux/dma-contiguous.h>
 #include <linux/mm.h>
+
+#include <asm/dma-mapping.h>
 #endif
 
 int iommu_is_span_boundary(unsigned int index, unsigned int nr,
@@ -53,6 +54,195 @@ EXPORT_SYMBOL(iommu_area_alloc);
 
 #ifdef CONFIG_DMA_USE_IOMMU_HELPER_MAPPING
 
+/* IOMMU */
+static int extend_iommu_mapping(struct dma_iommu_mapping *mapping)
+{
+	int next_bitmap;
+
+	if (mapping->nr_bitmaps > mapping->extensions)
+		return -EINVAL;
+
+	next_bitmap = mapping->nr_bitmaps;
+	mapping->bitmaps[next_bitmap] = kzalloc(mapping->bitmap_size,
+						GFP_ATOMIC);
+	if (!mapping->bitmaps[next_bitmap])
+		return -ENOMEM;
+
+	mapping->nr_bitmaps++;
+
+	return 0;
+}
+
+static int __dma_direction_to_prot(enum dma_data_direction dir)
+{
+	int prot;
+
+	switch (dir) {
+	case DMA_BIDIRECTIONAL:
+		prot = IOMMU_READ | IOMMU_WRITE;
+		break;
+	case DMA_TO_DEVICE:
+		prot = IOMMU_READ;
+		break;
+	case DMA_FROM_DEVICE:
+		prot = IOMMU_WRITE;
+		break;
+	default:
+		prot = 0;
+	}
+
+	return prot;
+}
+
+static inline dma_addr_t __alloc_iova(struct dma_iommu_mapping *mapping,
+				      size_t size)
+{
+	unsigned int order = get_order(size);
+	unsigned int align = 0;
+	unsigned int count, start;
+	size_t mapping_size = mapping->bits << PAGE_SHIFT;
+	unsigned long flags;
+	dma_addr_t iova;
+	int i;
+
+	if (order > CONFIG_DMA_IOMMU_ALIGNMENT)
+		order = CONFIG_DMA_IOMMU_ALIGNMENT;
+
+	count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+	align = (1 << order) - 1;
+
+	spin_lock_irqsave(&mapping->lock, flags);
+	for (i = 0; i < mapping->nr_bitmaps; i++) {
+		start = bitmap_find_next_zero_area(mapping->bitmaps[i],
+				mapping->bits, 0, count, align);
+
+		if (start > mapping->bits)
+			continue;
+
+		bitmap_set(mapping->bitmaps[i], start, count);
+		break;
+	}
+
+	/*
+	 * No unused range found. Try to extend the existing mapping
+	 * and perform a second attempt to reserve an IO virtual
+	 * address range of size bytes.
+	 */
+	if (i == mapping->nr_bitmaps) {
+		if (extend_iommu_mapping(mapping)) {
+			spin_unlock_irqrestore(&mapping->lock, flags);
+			return DMA_ERROR_CODE;
+		}
+
+		start = bitmap_find_next_zero_area(mapping->bitmaps[i],
+				mapping->bits, 0, count, align);
+
+		if (start > mapping->bits) {
+			spin_unlock_irqrestore(&mapping->lock, flags);
+			return DMA_ERROR_CODE;
+		}
+
+		bitmap_set(mapping->bitmaps[i], start, count);
+	}
+	spin_unlock_irqrestore(&mapping->lock, flags);
+
+	iova = mapping->base + (mapping_size * i);
+	iova += start << PAGE_SHIFT;
+
+	return iova;
+}
+
+static inline void __free_iova(struct dma_iommu_mapping *mapping,
+			       dma_addr_t addr, size_t size)
+{
+	unsigned int start, count;
+	size_t mapping_size = mapping->bits << PAGE_SHIFT;
+	unsigned long flags;
+	dma_addr_t bitmap_base;
+	u32 bitmap_index;
+
+	if (!size)
+		return;
+
+	bitmap_index = (u32) (addr - mapping->base) / (u32) mapping_size;
+	BUG_ON(addr < mapping->base || bitmap_index > mapping->extensions);
+
+	bitmap_base = mapping->base + mapping_size * bitmap_index;
+
+	start = (addr - bitmap_base) >>	PAGE_SHIFT;
+
+	if (addr + size > bitmap_base + mapping_size) {
+		/*
+		 * The address range to be freed reaches into the iova
+		 * range of the next bitmap. This should not happen as
+		 * we don't allow this in __alloc_iova (at the
+		 * moment).
+		 */
+		BUG();
+	} else
+		count = size >> PAGE_SHIFT;
+
+	spin_lock_irqsave(&mapping->lock, flags);
+	bitmap_clear(mapping->bitmaps[bitmap_index], start, count);
+	spin_unlock_irqrestore(&mapping->lock, flags);
+}
+
+/*
+ * Create a mapping in device IO address space for specified pages
+ */
+dma_addr_t
+__iommu_create_mapping(struct device *dev, struct page **pages, size_t size)
+{
+	struct dma_iommu_mapping *mapping = dev->mapping;
+	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+	dma_addr_t dma_addr, iova;
+	int i, ret = DMA_ERROR_CODE;
+
+	dma_addr = __alloc_iova(mapping, size);
+	if (dma_addr == DMA_ERROR_CODE)
+		return dma_addr;
+
+	iova = dma_addr;
+	for (i = 0; i < count; ) {
+		unsigned int next_pfn = page_to_pfn(pages[i]) + 1;
+		phys_addr_t phys = page_to_phys(pages[i]);
+		unsigned int len, j;
+
+		for (j = i + 1; j < count; j++, next_pfn++)
+			if (page_to_pfn(pages[j]) != next_pfn)
+				break;
+
+		len = (j - i) << PAGE_SHIFT;
+		ret = iommu_map(mapping->domain, iova, phys, len,
+				IOMMU_READ|IOMMU_WRITE);
+		if (ret < 0)
+			goto fail;
+		iova += len;
+		i = j;
+	}
+	return dma_addr;
+fail:
+	iommu_unmap(mapping->domain, dma_addr, iova-dma_addr);
+	__free_iova(mapping, dma_addr, size);
+	return DMA_ERROR_CODE;
+}
+
+int __iommu_remove_mapping(struct device *dev, dma_addr_t iova, size_t size)
+{
+	struct dma_iommu_mapping *mapping = dev->mapping;
+
+	/*
+	 * add optional in-page offset from iova to size and align
+	 * result to page size
+	 */
+	size = PAGE_ALIGN((iova & ~PAGE_MASK) + size);
+	iova &= PAGE_MASK;
+
+	iommu_unmap(mapping->domain, iova, size);
+	__free_iova(mapping, iova, size);
+	return 0;
+}
+
 struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
 					  gfp_t gfp, struct dma_attrs *attrs,
 			void (*arch_clear_buffer_cb)(struct page*, size_t))
@@ -146,6 +336,143 @@ int __iommu_free_buffer(struct device *dev, struct page **pages,
 	return 0;
 }
 
+/*
+ * Map a part of the scatter-gather list into contiguous io address space
+ */
+static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
+			  size_t size, dma_addr_t *handle,
+			  enum dma_data_direction dir, struct dma_attrs *attrs)
+{
+	struct dma_iommu_mapping *mapping = dev->mapping;
+	dma_addr_t iova, iova_base;
+	int ret = 0;
+	unsigned int count;
+	struct scatterlist *s;
+	int prot;
+
+	size = PAGE_ALIGN(size);
+	*handle = DMA_ERROR_CODE;
+
+	iova_base = iova = __alloc_iova(mapping, size);
+	if (iova == DMA_ERROR_CODE)
+		return -ENOMEM;
+
+	for (count = 0, s = sg; count < (size >> PAGE_SHIFT); s = sg_next(s)) {
+		phys_addr_t phys = page_to_phys(sg_page(s));
+		unsigned int len = PAGE_ALIGN(s->offset + s->length);
+
+		prot = __dma_direction_to_prot(dir);
+
+		ret = iommu_map(mapping->domain, iova, phys, len, prot);
+		if (ret < 0)
+			goto fail;
+		count += len >> PAGE_SHIFT;
+		iova += len;
+	}
+	*handle = iova_base;
+
+	return 0;
+fail:
+	iommu_unmap(mapping->domain, iova_base, count * PAGE_SIZE);
+	__free_iova(mapping, iova_base, size);
+	return ret;
+}
+
+int __iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+		     enum dma_data_direction dir, struct dma_attrs *attrs)
+{
+	struct scatterlist *s = sg, *dma = sg, *start = sg;
+	int i, count = 0;
+	unsigned int offset = s->offset;
+	unsigned int size = s->offset + s->length;
+	unsigned int max = dma_get_max_seg_size(dev);
+
+	for (i = 1; i < nents; i++) {
+		s = sg_next(s);
+
+		s->dma_address = DMA_ERROR_CODE;
+		s->dma_length = 0;
+
+		if (s->offset || (size & ~PAGE_MASK) || size + s->length > max) {
+			if (__map_sg_chunk(dev, start, size, &dma->dma_address,
+			    dir, attrs) < 0)
+				goto bad_mapping;
+
+			dma->dma_address += offset;
+			dma->dma_length = size - offset;
+
+			size = offset = s->offset;
+			start = s;
+			dma = sg_next(dma);
+			count += 1;
+		}
+		size += s->length;
+	}
+	if (__map_sg_chunk(dev, start, size, &dma->dma_address, dir, attrs) < 0)
+		goto bad_mapping;
+
+	dma->dma_address += offset;
+	dma->dma_length = size - offset;
+
+	return count+1;
+
+bad_mapping:
+	for_each_sg(sg, s, count, i)
+		__iommu_remove_mapping(dev, sg_dma_address(s), sg_dma_len(s));
+	return 0;
+}
+
+void __iommu_unmap_sg(struct device *dev, struct scatterlist *sg,
+		int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
+{
+	struct scatterlist *s;
+	int i;
+
+	for_each_sg(sg, s, nents, i) {
+		if (sg_dma_len(s))
+			__iommu_remove_mapping(dev, sg_dma_address(s),
+					       sg_dma_len(s));
+	}
+}
+
+dma_addr_t __iommu_map_page(struct device *dev, struct page *page,
+		unsigned long offset, size_t size, enum dma_data_direction dir)
+{
+	struct dma_iommu_mapping *mapping = dev->mapping;
+	dma_addr_t dma_addr;
+	int ret, prot, len = PAGE_ALIGN(size + offset);
+
+	dma_addr = __alloc_iova(mapping, len);
+	if (dma_addr == DMA_ERROR_CODE)
+		return dma_addr;
+
+	prot = __dma_direction_to_prot(dir);
+
+	ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len, prot);
+	if (ret < 0)
+		goto fail;
+
+	return dma_addr + offset;
+fail:
+	__free_iova(mapping, dma_addr, len);
+	return DMA_ERROR_CODE;
+}
+
+void __iommu_unmap_page(struct device *dev, dma_addr_t handle,
+		size_t size, enum dma_data_direction dir)
+{
+	struct dma_iommu_mapping *mapping = dev->mapping;
+	dma_addr_t iova = handle & PAGE_MASK;
+	int offset = handle & ~PAGE_MASK;
+	int len = PAGE_ALIGN(size + offset);
+
+	if (!iova)
+		return;
+
+	iommu_unmap(mapping->domain, iova, len);
+	__free_iova(mapping, iova, len);
+}
+
 /**
  * __iommu_init_mapping
  * @bus: pointer to the bus holding the client device (for IOMMU calls)
-- 
1.8.1.3


* [PATCH 0/4] arm:dma-mapping Refactoring iommu dma-mapping code
  2014-06-02 10:19 [PATCH 0/4] arm:dma-mapping Refactoring iommu dma-mapping code ritesh.harjani at gmail.com
  2014-06-02 10:19 ` [PATCH 1/4] arm: dma-iommu: Move out dma_iommu_mapping struct ritesh.harjani at gmail.com
@ 2014-06-03 13:01 ` Will Deacon
  1 sibling, 0 replies; 6+ messages in thread
From: Will Deacon @ 2014-06-03 13:01 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jun 02, 2014 at 11:19:19AM +0100, ritesh.harjani at gmail.com wrote:
> From: Ritesh Harjani <ritesh.harjani@gmail.com>
> Hi All,

Hi Ritesh,

Thanks for the new patches. I have a few comments on the series as a whole.

> This patch series refactors IOMMU-related common code from
> arch/arm/mm/dma-mapping.c to lib/iommu-helper.c, based on various
> discussions with the maintainers/experts [1].
> 
> Currently the only users of the common lib/iommu-helper code will
> be ARM and ARM64, but later other architectures might use these
> iommu helper functions.
> 
> The major change of this refactoring is moving the struct
> dma_iommu_mapping *mapping member out of arch/arm/include/asm/device.h
> into include/linux/device.h, and moving the complete structure
> definition of dma_iommu_mapping into include/linux/iommu-helper.h.
> Link [2] gives more details on why this was done; this change was
> also approved by Will Deacon [2].

Well, I can't approve changes to include/linux/device.h, so that probably
needs to be acked by Grant and/or Greg. Could you split that patch out into
a separate change please, so that it can go in independently?

Also, I think you could merge patches 2 and 3, no?

> There are one or two more function definitions which I can think of
> moving out, but those can be done once this patch series is approved,
> as they are not very big changes.

This certainly looks good to start with, although I think you should
consider renaming the functions in the helper library so that they aren't
prefixed with double underscores. Maybe iommu_helper_* instead?
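
For example (just to illustrate the naming idea):

	int iommu_helper_attach_device(struct device *dev,
				       struct dma_iommu_mapping *mapping);
	void iommu_helper_detach_device(struct device *dev);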

Finally, please drop the Change-Id entries from your commit messages (and
you've consistently misspelled refactor as refractor).

Will

