devicetree.vger.kernel.org archive mirror
* [PATCH v2 RESEND 0/4] CMA & device tree, once again
@ 2014-07-14  8:28 Marek Szyprowski
       [not found] ` <1405326487-15346-1-git-send-email-m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
  0 siblings, 1 reply; 14+ messages in thread
From: Marek Szyprowski @ 2014-07-14  8:28 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Marek Szyprowski, linaro-mm-sig-cunTk1MwBs8s++Sfvej+rw,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Arnd Bergmann,
	Michal Nazarewicz, Grant Likely, Tomasz Figa, Sascha Hauer,
	Laura Abbott, Nishanth Peethambaran, Marc, Josh Cartwright,
	Catalin Marinas, Will Deacon, Paul Mackerras, Jon Medhurst,
	Joonsoo Kim, Aneesh Kumar K.V., Andrew Morton

Hello,

(please ignore the previous patchset; I attached the wrong version of the patches)

This is one more respin of the patches that add support for creating
reserved memory regions defined in the device tree. The last attempt
(http://lists.linaro.org/pipermail/linaro-mm-sig/2014-February/003738.html)
ended with only half of the code being merged: the documentation is
complete, but the merged code implements only part of what it describes.
Although the merged patches allow memory to be reserved, there is no way
to use it from devices and drivers.

This makes CMA rather useless, as its main user, the ARM architecture,
has been converted from board-file-based system initialization to device
tree. There is thus no longer any place for direct calls to
dma_declare_contiguous(), and a new solution based on device tree is
urgently needed.

This patch series fixes the issue. It provides two drivers for reserved
memory, both already widely discussed and already present in the kernel:
the first based on the DMA-coherent allocator, the second using the
Contiguous Memory Allocator. The first neatly implements the typical
'carved out' approach, allocating contiguous buffers in a kernel-style
way from memory used exclusively by the devices assigned to the given
region. The second allows reserved memory to be reused for movable
kernel pages (such as disk buffers and anonymous memory) and migrates
them out when a device allocates a contiguous memory buffer. Both
drivers provide memory buffers via the standard dma-mapping API.

The patches have been rebased on top of the latest CMA and mm changes
merged into the akpm kernel tree.

To define a 64 MiB CMA region, the following node is needed:

multimedia_reserved: multimedia_mem_region {
	compatible = "shared-dma-pool";
	reusable;
	size = <0x4000000>;
	alignment = <0x400000>;
};

Similarly, one can define a 64 MiB region of DMA-coherent memory:

multimedia_reserved: multimedia_mem_region {
	compatible = "shared-dma-pool";
	no-map;
	size = <0x4000000>;
	alignment = <0x400000>;
};

Then the defined region can be assigned to devices:

scaler: scaler@12500000 {
	memory-region = <&multimedia_reserved>;
	/* ... */
};
codec: codec@12600000 {
	memory-region = <&multimedia_reserved>;
	/* ... */
};

Best regards
Marek Szyprowski
Samsung R&D Institute Poland

Changes since v1:
(http://www.spinics.net/lists/arm-kernel/msg343702.html)
- fixed possible memory leak in case of reserved memory allocation failure
  (thanks to Joonsoo Kim)

Changes since the version posted in '[PATCH v6 00/11] reserved-memory 
regions/CMA in devicetree, again' thread 
http://lists.linaro.org/pipermail/linaro-mm-sig/2014-February/003738.html:
- rebased on top of '[PATCH v3 -next 0/9] CMA: generalize CMA reserved
area management code' patch series on v3.16-rc3
- improved the dma-coherent driver; it now correctly handles assigning more
than one device to a given memory region


Patch summary:

Marek Szyprowski (4):
  drivers: of: add automated assignment of reserved regions to client
    devices
  drivers: of: initialize and assign reserved memory to newly created
    devices
  drivers: dma-coherent: add initialization from device tree
  drivers: dma-contiguous: add initialization from device tree

 drivers/base/dma-coherent.c     | 137 ++++++++++++++++++++++++++++++++++------
 drivers/base/dma-contiguous.c   |  67 ++++++++++++++++++++
 drivers/of/of_reserved_mem.c    |  70 ++++++++++++++++++++
 drivers/of/platform.c           |   7 ++
 include/linux/cma.h             |   3 +
 include/linux/of_reserved_mem.h |   7 ++
 mm/cma.c                        |  62 ++++++++++++++----
 7 files changed, 323 insertions(+), 30 deletions(-)

-- 
1.9.2

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* [PATCH v2 RESEND 1/4] drivers: of: add automated assignment of reserved regions to client devices
       [not found] ` <1405326487-15346-1-git-send-email-m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
@ 2014-07-14  8:28   ` Marek Szyprowski
  2014-07-28 14:15     ` Grant Likely
  2014-07-14  8:28   ` [PATCH v2 RESEND 2/4] drivers: of: initialize and assign reserved memory to newly created devices Marek Szyprowski
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 14+ messages in thread
From: Marek Szyprowski @ 2014-07-14  8:28 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Marek Szyprowski, linaro-mm-sig-cunTk1MwBs8s++Sfvej+rw,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Arnd Bergmann,
	Michal Nazarewicz, Grant Likely, Tomasz Figa, Sascha Hauer,
	Laura Abbott, Nishanth Peethambaran, Marc, Josh Cartwright,
	Catalin Marinas, Will Deacon, Paul Mackerras, Jon Medhurst,
	Joonsoo Kim, Aneesh Kumar K.V., Andrew Morton

This patch adds code for the automated assignment of reserved memory
regions to struct device. The reserved_mem->ops->device_init()/
device_release() callbacks are called to perform reserved-memory driver
specific initialization and cleanup.

Based on previous code provided by Josh Cartwright <joshc-sgV2jX0FEOL9JmXXK+q4OQ@public.gmane.org>

Signed-off-by: Marek Szyprowski <m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
---
 drivers/of/of_reserved_mem.c    | 70 +++++++++++++++++++++++++++++++++++++++++
 include/linux/of_reserved_mem.h |  7 +++++
 2 files changed, 77 insertions(+)

diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
index 632aae861375..59fb12e84e6b 100644
--- a/drivers/of/of_reserved_mem.c
+++ b/drivers/of/of_reserved_mem.c
@@ -206,8 +206,16 @@ void __init fdt_init_reserved_mem(void)
 	for (i = 0; i < reserved_mem_count; i++) {
 		struct reserved_mem *rmem = &reserved_mem[i];
 		unsigned long node = rmem->fdt_node;
+		int len;
+		const __be32 *prop;
 		int err = 0;
 
+		prop = of_get_flat_dt_prop(node, "phandle", &len);
+		if (!prop)
+			prop = of_get_flat_dt_prop(node, "linux,phandle", &len);
+		if (prop)
+			rmem->phandle = of_read_number(prop, len/4);
+
 		if (rmem->size == 0)
 			err = __reserved_mem_alloc_size(node, rmem->name,
 						 &rmem->base, &rmem->size);
@@ -215,3 +223,65 @@ void __init fdt_init_reserved_mem(void)
 			__reserved_mem_init_node(rmem);
 	}
 }
+
+static inline struct reserved_mem *__find_rmem(struct device_node *node)
+{
+	unsigned int i;
+
+	if (!node->phandle)
+		return NULL;
+
+	for (i = 0; i < reserved_mem_count; i++)
+		if (reserved_mem[i].phandle == node->phandle)
+			return &reserved_mem[i];
+	return NULL;
+}
+
+/**
+ * of_reserved_mem_device_init() - assign reserved memory region to given device
+ *
+ * This function assigns the memory region pointed to by the "memory-region"
+ * device tree property to the given device.
+ */
+void of_reserved_mem_device_init(struct device *dev)
+{
+	struct reserved_mem *rmem;
+	struct device_node *np;
+
+	np = of_parse_phandle(dev->of_node, "memory-region", 0);
+	if (!np)
+		return;
+
+	rmem = __find_rmem(np);
+	of_node_put(np);
+
+	if (!rmem || !rmem->ops || !rmem->ops->device_init)
+		return;
+
+	rmem->ops->device_init(rmem, dev);
+	dev_info(dev, "assigned reserved memory node %s\n", rmem->name);
+}
+
+/**
+ * of_reserved_mem_device_release() - release reserved memory device structures
+ *
+ * This function releases structures allocated for memory region handling for
+ * the given device.
+ */
+void of_reserved_mem_device_release(struct device *dev)
+{
+	struct reserved_mem *rmem;
+	struct device_node *np;
+
+	np = of_parse_phandle(dev->of_node, "memory-region", 0);
+	if (!np)
+		return;
+
+	rmem = __find_rmem(np);
+	of_node_put(np);
+
+	if (!rmem || !rmem->ops || !rmem->ops->device_release)
+		return;
+
+	rmem->ops->device_release(rmem, dev);
+}
diff --git a/include/linux/of_reserved_mem.h b/include/linux/of_reserved_mem.h
index 4669ddfdd5af..5b5efae09135 100644
--- a/include/linux/of_reserved_mem.h
+++ b/include/linux/of_reserved_mem.h
@@ -8,6 +8,7 @@ struct reserved_mem_ops;
 struct reserved_mem {
 	const char			*name;
 	unsigned long			fdt_node;
+	unsigned long			phandle;
 	const struct reserved_mem_ops	*ops;
 	phys_addr_t			base;
 	phys_addr_t			size;
@@ -27,10 +28,16 @@ typedef int (*reservedmem_of_init_fn)(struct reserved_mem *rmem);
 	_OF_DECLARE(reservedmem, name, compat, init, reservedmem_of_init_fn)
 
 #ifdef CONFIG_OF_RESERVED_MEM
+void of_reserved_mem_device_init(struct device *dev);
+void of_reserved_mem_device_release(struct device *dev);
+
 void fdt_init_reserved_mem(void);
 void fdt_reserved_mem_save_node(unsigned long node, const char *uname,
 			       phys_addr_t base, phys_addr_t size);
 #else
+static inline void of_reserved_mem_device_init(struct device *dev) { }
+static inline void of_reserved_mem_device_release(struct device *pdev) { }
+
 static inline void fdt_init_reserved_mem(void) { }
 static inline void fdt_reserved_mem_save_node(unsigned long node,
 		const char *uname, phys_addr_t base, phys_addr_t size) { }
-- 
1.9.2



* [PATCH v2 RESEND 2/4] drivers: of: initialize and assign reserved memory to newly created devices
       [not found] ` <1405326487-15346-1-git-send-email-m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
  2014-07-14  8:28   ` [PATCH v2 RESEND 1/4] drivers: of: add automated assignment of reserved regions to client devices Marek Szyprowski
@ 2014-07-14  8:28   ` Marek Szyprowski
       [not found]     ` <1405326487-15346-3-git-send-email-m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
  2014-07-14  8:28   ` [PATCH v2 RESEND 3/4] drivers: dma-coherent: add initialization from device tree Marek Szyprowski
  2014-07-14  8:28   ` [PATCH v2 RESEND 4/4] drivers: dma-contiguous: " Marek Szyprowski
  3 siblings, 1 reply; 14+ messages in thread
From: Marek Szyprowski @ 2014-07-14  8:28 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Marek Szyprowski, linaro-mm-sig-cunTk1MwBs8s++Sfvej+rw,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Arnd Bergmann,
	Michal Nazarewicz, Grant Likely, Tomasz Figa, Sascha Hauer,
	Laura Abbott, Nishanth Peethambaran, Marc, Josh Cartwright,
	Catalin Marinas, Will Deacon, Paul Mackerras, Jon Medhurst,
	Joonsoo Kim, Aneesh Kumar K.V., Andrew Morton

Use the recently introduced of_reserved_mem_device_init() function to
automatically assign the respective reserved memory region to newly
created platform and AMBA devices.

Signed-off-by: Marek Szyprowski <m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
---
 drivers/of/platform.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/of/platform.c b/drivers/of/platform.c
index 500436f9be7f..a9055d3da5c2 100644
--- a/drivers/of/platform.c
+++ b/drivers/of/platform.c
@@ -21,6 +21,7 @@
 #include <linux/of_device.h>
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/platform_device.h>
 
 const struct of_device_id of_default_bus_match_table[] = {
@@ -233,12 +234,15 @@ static struct platform_device *of_platform_device_create_pdata(
 	dev->dev.bus = &platform_bus_type;
 	dev->dev.platform_data = platform_data;
 
+	of_reserved_mem_device_init(&dev->dev);
+
 	/* We do not fill the DMA ops for platform devices by default.
 	 * This is currently the responsibility of the platform code
 	 * to do such, possibly using a device notifier
 	 */
 
 	if (of_device_add(dev) != 0) {
+		of_reserved_mem_device_release(&dev->dev);
 		platform_device_put(dev);
 		goto err_clear_flag;
 	}
@@ -300,6 +304,8 @@ static struct amba_device *of_amba_device_create(struct device_node *node,
 	else
 		of_device_make_bus_id(&dev->dev);
 
+	of_reserved_mem_device_init(&dev->dev);
+
 	/* Allow the HW Peripheral ID to be overridden */
 	prop = of_get_property(node, "arm,primecell-periphid", NULL);
 	if (prop)
@@ -326,6 +332,7 @@ static struct amba_device *of_amba_device_create(struct device_node *node,
 	return dev;
 
 err_free:
+	of_reserved_mem_device_release(&dev->dev);
 	amba_device_put(dev);
 err_clear_flag:
 	of_node_clear_flag(node, OF_POPULATED);
-- 
1.9.2



* [PATCH v2 RESEND 3/4] drivers: dma-coherent: add initialization from device tree
       [not found] ` <1405326487-15346-1-git-send-email-m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
  2014-07-14  8:28   ` [PATCH v2 RESEND 1/4] drivers: of: add automated assignment of reserved regions to client devices Marek Szyprowski
  2014-07-14  8:28   ` [PATCH v2 RESEND 2/4] drivers: of: initialize and assign reserved memory to newly created devices Marek Szyprowski
@ 2014-07-14  8:28   ` Marek Szyprowski
  2014-07-29 21:54     ` Grant Likely
  2014-07-14  8:28   ` [PATCH v2 RESEND 4/4] drivers: dma-contiguous: " Marek Szyprowski
  3 siblings, 1 reply; 14+ messages in thread
From: Marek Szyprowski @ 2014-07-14  8:28 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Marek Szyprowski, linaro-mm-sig-cunTk1MwBs8s++Sfvej+rw,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Arnd Bergmann,
	Michal Nazarewicz, Grant Likely, Tomasz Figa, Sascha Hauer,
	Laura Abbott, Nishanth Peethambaran, Marc, Josh Cartwright,
	Catalin Marinas, Will Deacon, Paul Mackerras, Jon Medhurst,
	Joonsoo Kim, Aneesh Kumar K.V., Andrew Morton

The initialization procedure of the dma-coherent pool has been split
into two parts, so a memory pool can now be initialized without being
assigned to a particular struct device, and an initialized region can
then be assigned to more than one struct device. To protect against
concurrent allocations from different devices, a spinlock has been added
to the dma_coherent_mem structure. The last part of this patch adds
support for handling 'shared-dma-pool' reserved-memory device tree
nodes.

Signed-off-by: Marek Szyprowski <m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
---
 drivers/base/dma-coherent.c | 137 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 118 insertions(+), 19 deletions(-)

diff --git a/drivers/base/dma-coherent.c b/drivers/base/dma-coherent.c
index 7d6e84a51424..7185a4f247e1 100644
--- a/drivers/base/dma-coherent.c
+++ b/drivers/base/dma-coherent.c
@@ -14,11 +14,14 @@ struct dma_coherent_mem {
 	int		size;
 	int		flags;
 	unsigned long	*bitmap;
+	spinlock_t	spinlock;
 };
 
-int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
-				dma_addr_t device_addr, size_t size, int flags)
+static int dma_init_coherent_memory(phys_addr_t phys_addr, dma_addr_t device_addr,
+			     size_t size, int flags,
+			     struct dma_coherent_mem **mem)
 {
+	struct dma_coherent_mem *dma_mem = NULL;
 	void __iomem *mem_base = NULL;
 	int pages = size >> PAGE_SHIFT;
 	int bitmap_size = BITS_TO_LONGS(pages) * sizeof(long);
@@ -27,27 +30,26 @@ int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
 		goto out;
 	if (!size)
 		goto out;
-	if (dev->dma_mem)
-		goto out;
-
-	/* FIXME: this routine just ignores DMA_MEMORY_INCLUDES_CHILDREN */
 
 	mem_base = ioremap(phys_addr, size);
 	if (!mem_base)
 		goto out;
 
-	dev->dma_mem = kzalloc(sizeof(struct dma_coherent_mem), GFP_KERNEL);
-	if (!dev->dma_mem)
+	dma_mem = kzalloc(sizeof(struct dma_coherent_mem), GFP_KERNEL);
+	if (!dma_mem)
 		goto out;
-	dev->dma_mem->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-	if (!dev->dma_mem->bitmap)
+	dma_mem->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
+	if (!dma_mem->bitmap)
 		goto free1_out;
 
-	dev->dma_mem->virt_base = mem_base;
-	dev->dma_mem->device_base = device_addr;
-	dev->dma_mem->pfn_base = PFN_DOWN(phys_addr);
-	dev->dma_mem->size = pages;
-	dev->dma_mem->flags = flags;
+	dma_mem->virt_base = mem_base;
+	dma_mem->device_base = device_addr;
+	dma_mem->pfn_base = PFN_DOWN(phys_addr);
+	dma_mem->size = pages;
+	dma_mem->flags = flags;
+	spin_lock_init(&dma_mem->spinlock);
+
+	*mem = dma_mem;
 
 	if (flags & DMA_MEMORY_MAP)
 		return DMA_MEMORY_MAP;
@@ -55,12 +57,51 @@ int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
 	return DMA_MEMORY_IO;
 
  free1_out:
-	kfree(dev->dma_mem);
+	kfree(dma_mem);
  out:
 	if (mem_base)
 		iounmap(mem_base);
 	return 0;
 }
+
+static void dma_release_coherent_memory(struct dma_coherent_mem *mem)
+{
+	if (!mem)
+		return;
+	iounmap(mem->virt_base);
+	kfree(mem->bitmap);
+	kfree(mem);
+}
+
+static int dma_assign_coherent_memory(struct device *dev,
+				      struct dma_coherent_mem *mem)
+{
+	if (dev->dma_mem)
+		return -EBUSY;
+
+	dev->dma_mem = mem;
+	/* FIXME: this routine just ignores DMA_MEMORY_INCLUDES_CHILDREN */
+
+	return 0;
+}
+
+int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
+				dma_addr_t device_addr, size_t size, int flags)
+{
+	struct dma_coherent_mem *mem;
+	int ret;
+
+	ret = dma_init_coherent_memory(phys_addr, device_addr, size, flags,
+				       &mem);
+	if (ret == 0)
+		return 0;
+
+	if (dma_assign_coherent_memory(dev, mem) == 0)
+		return ret;
+
+	dma_release_coherent_memory(mem);
+	return 0;
+}
 EXPORT_SYMBOL(dma_declare_coherent_memory);
 
 void dma_release_declared_memory(struct device *dev)
@@ -69,10 +110,8 @@ void dma_release_declared_memory(struct device *dev)
 
 	if (!mem)
 		return;
+	dma_release_coherent_memory(mem);
 	dev->dma_mem = NULL;
-	iounmap(mem->virt_base);
-	kfree(mem->bitmap);
-	kfree(mem);
 }
 EXPORT_SYMBOL(dma_release_declared_memory);
 
@@ -80,6 +119,7 @@ void *dma_mark_declared_memory_occupied(struct device *dev,
 					dma_addr_t device_addr, size_t size)
 {
 	struct dma_coherent_mem *mem = dev->dma_mem;
+	unsigned long flags;
 	int pos, err;
 
 	size += device_addr & ~PAGE_MASK;
@@ -87,8 +127,11 @@ void *dma_mark_declared_memory_occupied(struct device *dev,
 	if (!mem)
 		return ERR_PTR(-EINVAL);
 
+	spin_lock_irqsave(&mem->spinlock, flags);
 	pos = (device_addr - mem->device_base) >> PAGE_SHIFT;
 	err = bitmap_allocate_region(mem->bitmap, pos, get_order(size));
+	spin_unlock_irqrestore(&mem->spinlock, flags);
+
 	if (err != 0)
 		return ERR_PTR(err);
 	return mem->virt_base + (pos << PAGE_SHIFT);
@@ -115,6 +158,7 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
 {
 	struct dma_coherent_mem *mem;
 	int order = get_order(size);
+	unsigned long flags;
 	int pageno;
 
 	if (!dev)
@@ -124,6 +168,7 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
 		return 0;
 
 	*ret = NULL;
+	spin_lock_irqsave(&mem->spinlock, flags);
 
 	if (unlikely(size > (mem->size << PAGE_SHIFT)))
 		goto err;
@@ -138,10 +183,12 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
 	*dma_handle = mem->device_base + (pageno << PAGE_SHIFT);
 	*ret = mem->virt_base + (pageno << PAGE_SHIFT);
 	memset(*ret, 0, size);
+	spin_unlock_irqrestore(&mem->spinlock, flags);
 
 	return 1;
 
 err:
+	spin_unlock_irqrestore(&mem->spinlock, flags);
 	/*
 	 * In the case where the allocation can not be satisfied from the
 	 * per-device area, try to fall back to generic memory if the
@@ -171,8 +218,11 @@ int dma_release_from_coherent(struct device *dev, int order, void *vaddr)
 	if (mem && vaddr >= mem->virt_base && vaddr <
 		   (mem->virt_base + (mem->size << PAGE_SHIFT))) {
 		int page = (vaddr - mem->virt_base) >> PAGE_SHIFT;
+		unsigned long flags;
 
+		spin_lock_irqsave(&mem->spinlock, flags);
 		bitmap_release_region(mem->bitmap, page, order);
+		spin_unlock_irqrestore(&mem->spinlock, flags);
 		return 1;
 	}
 	return 0;
@@ -218,3 +268,52 @@ int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
 	return 0;
 }
 EXPORT_SYMBOL(dma_mmap_from_coherent);
+
+/*
+ * Support for reserved memory regions defined in device tree
+ */
+#ifdef CONFIG_OF_RESERVED_MEM
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+
+static void rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
+{
+	struct dma_coherent_mem *mem = rmem->priv;
+	if (!mem &&
+	    dma_init_coherent_memory(rmem->base, rmem->base, rmem->size,
+				     DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE,
+				     &mem) != DMA_MEMORY_MAP) {
+		pr_info("Reserved memory: failed to init DMA memory pool at %pa, size %ld MiB\n",
+			&rmem->base, (unsigned long)rmem->size / SZ_1M);
+		return;
+	}
+	rmem->priv = mem;
+	dma_assign_coherent_memory(dev, mem);
+}
+
+static void rmem_dma_device_release(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	dev->dma_mem = NULL;
+}
+
+static const struct reserved_mem_ops rmem_dma_ops = {
+	.device_init	= rmem_dma_device_init,
+	.device_release	= rmem_dma_device_release,
+};
+
+static int __init rmem_dma_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL))
+		return -EINVAL;
+
+	rmem->ops = &rmem_dma_ops;
+	pr_info("Reserved memory: created DMA memory pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+RESERVEDMEM_OF_DECLARE(dma, "shared-dma-pool", rmem_dma_setup);
+#endif
-- 
1.9.2



* [PATCH v2 RESEND 4/4] drivers: dma-contiguous: add initialization from device tree
       [not found] ` <1405326487-15346-1-git-send-email-m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
                     ` (2 preceding siblings ...)
  2014-07-14  8:28   ` [PATCH v2 RESEND 3/4] drivers: dma-coherent: add initialization from device tree Marek Szyprowski
@ 2014-07-14  8:28   ` Marek Szyprowski
  2014-07-29 21:58     ` Grant Likely
  3 siblings, 1 reply; 14+ messages in thread
From: Marek Szyprowski @ 2014-07-14  8:28 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Marek Szyprowski, linaro-mm-sig-cunTk1MwBs8s++Sfvej+rw,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Arnd Bergmann,
	Michal Nazarewicz, Grant Likely, Tomasz Figa, Sascha Hauer,
	Laura Abbott, Nishanth Peethambaran, Marc, Josh Cartwright,
	Catalin Marinas, Will Deacon, Paul Mackerras, Jon Medhurst,
	Joonsoo Kim, Aneesh Kumar K.V., Andrew Morton

Add a function to create a CMA region from previously reserved memory,
and add support for handling 'shared-dma-pool' reserved-memory device
tree nodes.

Based on previous code provided by Josh Cartwright <joshc-sgV2jX0FEOL9JmXXK+q4OQ@public.gmane.org>

Signed-off-by: Marek Szyprowski <m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
---
 drivers/base/dma-contiguous.c | 67 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/cma.h           |  3 ++
 mm/cma.c                      | 62 ++++++++++++++++++++++++++++++++-------
 3 files changed, 121 insertions(+), 11 deletions(-)

diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 6606abdf880c..b77ea8bac176 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -211,3 +211,70 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 {
 	return cma_release(dev_get_cma_area(dev), pages, count);
 }
+
+/*
+ * Support for reserved memory regions defined in device tree
+ */
+#ifdef CONFIG_OF_RESERVED_MEM
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+
+#undef pr_fmt
+#define pr_fmt(fmt) fmt
+
+static void rmem_cma_device_init(struct reserved_mem *rmem, struct device *dev)
+{
+	struct cma *cma = rmem->priv;
+	dev_set_cma_area(dev, cma);
+}
+
+static void rmem_cma_device_release(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	dev_set_cma_area(dev, NULL);
+}
+
+static const struct reserved_mem_ops rmem_cma_ops = {
+	.device_init	= rmem_cma_device_init,
+	.device_release = rmem_cma_device_release,
+};
+
+static int __init rmem_cma_setup(struct reserved_mem *rmem)
+{
+	phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
+	phys_addr_t mask = align - 1;
+	unsigned long node = rmem->fdt_node;
+	struct cma *cma;
+	int err;
+
+	if (!of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	if ((rmem->base & mask) || (rmem->size & mask)) {
+		pr_err("Reserved memory: incorrect alignment of CMA region\n");
+		return -EINVAL;
+	}
+
+	err = cma_init_reserved_mem(rmem->base, rmem->size, 0, &cma);
+	if (err) {
+		pr_err("Reserved memory: unable to setup CMA region\n");
+		return err;
+	}
+	/* Architecture specific contiguous memory fixup. */
+	dma_contiguous_early_fixup(rmem->base, rmem->size);
+
+	if (of_get_flat_dt_prop(node, "linux,cma-default", NULL))
+		dma_contiguous_set_default(cma);
+
+	rmem->ops = &rmem_cma_ops;
+	rmem->priv = cma;
+
+	pr_info("Reserved memory: created CMA memory pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+
+	return 0;
+}
+RESERVEDMEM_OF_DECLARE(cma, "shared-dma-pool", rmem_cma_setup);
+#endif
diff --git a/include/linux/cma.h b/include/linux/cma.h
index 32cab7a425f9..9a18a2b1934c 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -16,6 +16,9 @@ extern int __init cma_declare_contiguous(phys_addr_t size,
 			phys_addr_t base, phys_addr_t limit,
 			phys_addr_t alignment, unsigned int order_per_bit,
 			bool fixed, struct cma **res_cma);
+extern int cma_init_reserved_mem(phys_addr_t base,
+					phys_addr_t size, int order_per_bit,
+					struct cma **res_cma);
 extern struct page *cma_alloc(struct cma *cma, int count, unsigned int align);
 extern bool cma_release(struct cma *cma, struct page *pages, int count);
 #endif
diff --git a/mm/cma.c b/mm/cma.c
index 4b251b037e1b..b3d8b925ad34 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -140,6 +140,54 @@ static int __init cma_init_reserved_areas(void)
 core_initcall(cma_init_reserved_areas);
 
 /**
+ * cma_init_reserved_mem() - create custom contiguous area from reserved memory
+ * @base: Base address of the reserved area
+ * @size: Size of the reserved area (in bytes).
+ * @order_per_bit: Order of pages represented by one bit on bitmap.
+ * @res_cma: Pointer to store the created cma region.
+ *
+ * This function creates custom contiguous area from already reserved memory.
+ */
+int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
+				 int order_per_bit, struct cma **res_cma)
+{
+	struct cma *cma;
+	phys_addr_t alignment;
+
+	/* Sanity checks */
+	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
+		pr_err("Not enough slots for CMA reserved regions!\n");
+		return -ENOSPC;
+	}
+
+	if (!size || !memblock_is_region_reserved(base, size))
+		return -EINVAL;
+
+	/* ensure minimal alignment required by mm core */
+	alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
+
+	/* alignment should be aligned with order_per_bit */
+	if (!IS_ALIGNED(alignment >> PAGE_SHIFT, 1 << order_per_bit))
+		return -EINVAL;
+
+	if (ALIGN(base, alignment) != base || ALIGN(size, alignment) != size)
+		return -EINVAL;
+
+	/*
+	 * Each reserved area must be initialised later, when more kernel
+	 * subsystems (like slab allocator) are available.
+	 */
+	cma = &cma_areas[cma_area_count];
+	cma->base_pfn = PFN_DOWN(base);
+	cma->count = size >> PAGE_SHIFT;
+	cma->order_per_bit = order_per_bit;
+	*res_cma = cma;
+	cma_area_count++;
+
+	return 0;
+}
+
+/**
  * cma_declare_contiguous() - reserve custom contiguous area
  * @base: Base address of the reserved area optional, use 0 for any
  * @size: Size of the reserved area (in bytes),
@@ -162,7 +210,6 @@ int __init cma_declare_contiguous(phys_addr_t base,
 			phys_addr_t alignment, unsigned int order_per_bit,
 			bool fixed, struct cma **res_cma)
 {
-	struct cma *cma;
 	int ret = 0;
 
 	pr_debug("%s(size %lx, base %08lx, limit %08lx alignment %08lx)\n",
@@ -214,16 +261,9 @@ int __init cma_declare_contiguous(phys_addr_t base,
 		}
 	}
 
-	/*
-	 * Each reserved area must be initialised later, when more kernel
-	 * subsystems (like slab allocator) are available.
-	 */
-	cma = &cma_areas[cma_area_count];
-	cma->base_pfn = PFN_DOWN(base);
-	cma->count = size >> PAGE_SHIFT;
-	cma->order_per_bit = order_per_bit;
-	*res_cma = cma;
-	cma_area_count++;
+	ret = cma_init_reserved_mem(base, size, order_per_bit, res_cma);
+	if (ret)
+		goto err;
 
 	pr_info("Reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
 		(unsigned long)base);
-- 
1.9.2



* Re: [PATCH v2 RESEND 1/4] drivers: of: add automated assignment of reserved regions to client devices
  2014-07-14  8:28   ` [PATCH v2 RESEND 1/4] drivers: of: add automated assignment of reserved regions to client devices Marek Szyprowski
@ 2014-07-28 14:15     ` Grant Likely
  0 siblings, 0 replies; 14+ messages in thread
From: Grant Likely @ 2014-07-28 14:15 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Marek Szyprowski, linaro-mm-sig, devicetree, Arnd Bergmann,
	Michal Nazarewicz, Tomasz Figa, Sascha Hauer, Laura Abbott,
	Nishanth Peethambaran, Marc, Josh Cartwright, Catalin Marinas,
	Will Deacon, Paul Mackerras, Jon Medhurst, Joonsoo Kim,
	Aneesh Kumar K.V., Andrew Morton

On Mon, 14 Jul 2014 10:28:04 +0200, Marek Szyprowski <m.szyprowski@samsung.com> wrote:
> This patch adds code for automated assignment of reserved memory regions
> to struct device. reserved_mem->ops->device_init()/device_cleanup()
> callbacks are called to perform reserved memory driver specific
> initialization and cleanup
> 
> Based on previous code provided by Josh Cartwright <joshc@codeaurora.org>
> 
> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

Applied, thanks

g.

> ---
>  drivers/of/of_reserved_mem.c    | 70 +++++++++++++++++++++++++++++++++++++++++
>  include/linux/of_reserved_mem.h |  7 +++++
>  2 files changed, 77 insertions(+)
> 
> diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
> index 632aae861375..59fb12e84e6b 100644
> --- a/drivers/of/of_reserved_mem.c
> +++ b/drivers/of/of_reserved_mem.c
> @@ -206,8 +206,16 @@ void __init fdt_init_reserved_mem(void)
>  	for (i = 0; i < reserved_mem_count; i++) {
>  		struct reserved_mem *rmem = &reserved_mem[i];
>  		unsigned long node = rmem->fdt_node;
> +		int len;
> +		const __be32 *prop;
>  		int err = 0;
>  
> +		prop = of_get_flat_dt_prop(node, "phandle", &len);
> +		if (!prop)
> +			prop = of_get_flat_dt_prop(node, "linux,phandle", &len);
> +		if (prop)
> +			rmem->phandle = of_read_number(prop, len/4);
> +
>  		if (rmem->size == 0)
>  			err = __reserved_mem_alloc_size(node, rmem->name,
>  						 &rmem->base, &rmem->size);
> @@ -215,3 +223,65 @@ void __init fdt_init_reserved_mem(void)
>  			__reserved_mem_init_node(rmem);
>  	}
>  }
> +
> +static inline struct reserved_mem *__find_rmem(struct device_node *node)
> +{
> +	unsigned int i;
> +
> +	if (!node->phandle)
> +		return NULL;
> +
> +	for (i = 0; i < reserved_mem_count; i++)
> +		if (reserved_mem[i].phandle == node->phandle)
> +			return &reserved_mem[i];
> +	return NULL;
> +}
> +
> +/**
> + * of_reserved_mem_device_init() - assign reserved memory region to given device
> + *
> + * This function assigns the memory region pointed to by the "memory-region"
> + * device tree property to the given device.
> + */
> +void of_reserved_mem_device_init(struct device *dev)
> +{
> +	struct reserved_mem *rmem;
> +	struct device_node *np;
> +
> +	np = of_parse_phandle(dev->of_node, "memory-region", 0);
> +	if (!np)
> +		return;
> +
> +	rmem = __find_rmem(np);
> +	of_node_put(np);
> +
> +	if (!rmem || !rmem->ops || !rmem->ops->device_init)
> +		return;
> +
> +	rmem->ops->device_init(rmem, dev);
> +	dev_info(dev, "assigned reserved memory node %s\n", rmem->name);
> +}
> +
> +/**
> + * of_reserved_mem_device_release() - release reserved memory device structures
> + *
> + * This function releases structures allocated for memory region handling for
> + * the given device.
> + */
> +void of_reserved_mem_device_release(struct device *dev)
> +{
> +	struct reserved_mem *rmem;
> +	struct device_node *np;
> +
> +	np = of_parse_phandle(dev->of_node, "memory-region", 0);
> +	if (!np)
> +		return;
> +
> +	rmem = __find_rmem(np);
> +	of_node_put(np);
> +
> +	if (!rmem || !rmem->ops || !rmem->ops->device_release)
> +		return;
> +
> +	rmem->ops->device_release(rmem, dev);
> +}
> diff --git a/include/linux/of_reserved_mem.h b/include/linux/of_reserved_mem.h
> index 4669ddfdd5af..5b5efae09135 100644
> --- a/include/linux/of_reserved_mem.h
> +++ b/include/linux/of_reserved_mem.h
> @@ -8,6 +8,7 @@ struct reserved_mem_ops;
>  struct reserved_mem {
>  	const char			*name;
>  	unsigned long			fdt_node;
> +	unsigned long			phandle;
>  	const struct reserved_mem_ops	*ops;
>  	phys_addr_t			base;
>  	phys_addr_t			size;
> @@ -27,10 +28,16 @@ typedef int (*reservedmem_of_init_fn)(struct reserved_mem *rmem);
>  	_OF_DECLARE(reservedmem, name, compat, init, reservedmem_of_init_fn)
>  
>  #ifdef CONFIG_OF_RESERVED_MEM
> +void of_reserved_mem_device_init(struct device *dev);
> +void of_reserved_mem_device_release(struct device *dev);
> +
>  void fdt_init_reserved_mem(void);
>  void fdt_reserved_mem_save_node(unsigned long node, const char *uname,
>  			       phys_addr_t base, phys_addr_t size);
>  #else
> +static inline void of_reserved_mem_device_init(struct device *dev) { }
> +static inline void of_reserved_mem_device_release(struct device *pdev) { }
> +
>  static inline void fdt_init_reserved_mem(void) { }
>  static inline void fdt_reserved_mem_save_node(unsigned long node,
>  		const char *uname, phys_addr_t base, phys_addr_t size) { }
> -- 
> 1.9.2
> 


* Re: [PATCH v2 RESEND 2/4] drivers: of: initialize and assign reserved memory to newly created devices
       [not found]     ` <1405326487-15346-3-git-send-email-m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
@ 2014-07-28 14:17       ` Grant Likely
  2014-07-29 14:47         ` Marek Szyprowski
  0 siblings, 1 reply; 14+ messages in thread
From: Grant Likely @ 2014-07-28 14:17 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Marek Szyprowski, linaro-mm-sig-cunTk1MwBs8s++Sfvej+rw,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Arnd Bergmann,
	Michal Nazarewicz, Tomasz Figa, Sascha Hauer, Laura Abbott,
	Nishanth Peethambaran, Marc, Josh Cartwright, Catalin Marinas,
	Will Deacon, Paul Mackerras, Jon Medhurst, Joonsoo Kim,
	Aneesh Kumar K.V., Andrew Morton

On Mon, 14 Jul 2014 10:28:05 +0200, Marek Szyprowski <m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org> wrote:
> Use the recently introduced of_reserved_mem_device_init() function to
> automatically assign the respective reserved memory region to newly
> created platform and AMBA devices.
> 
> Signed-off-by: Marek Szyprowski <m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>

I'm still not okay with this patch. I don't think it is appropriate to
add the hook into the generic path that gets used for every platform
device. The devices that need reserved memory should explicitly request
it, and any setup be done at that time.

g.

> ---
>  drivers/of/platform.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/of/platform.c b/drivers/of/platform.c
> index 500436f9be7f..a9055d3da5c2 100644
> --- a/drivers/of/platform.c
> +++ b/drivers/of/platform.c
> @@ -21,6 +21,7 @@
>  #include <linux/of_device.h>
>  #include <linux/of_irq.h>
>  #include <linux/of_platform.h>
> +#include <linux/of_reserved_mem.h>
>  #include <linux/platform_device.h>
>  
>  const struct of_device_id of_default_bus_match_table[] = {
> @@ -233,12 +234,15 @@ static struct platform_device *of_platform_device_create_pdata(
>  	dev->dev.bus = &platform_bus_type;
>  	dev->dev.platform_data = platform_data;
>  
> +	of_reserved_mem_device_init(&dev->dev);
> +
>  	/* We do not fill the DMA ops for platform devices by default.
>  	 * This is currently the responsibility of the platform code
>  	 * to do such, possibly using a device notifier
>  	 */
>  
>  	if (of_device_add(dev) != 0) {
> +		of_reserved_mem_device_release(&dev->dev);
>  		platform_device_put(dev);
>  		goto err_clear_flag;
>  	}
> @@ -300,6 +304,8 @@ static struct amba_device *of_amba_device_create(struct device_node *node,
>  	else
>  		of_device_make_bus_id(&dev->dev);
>  
> +	of_reserved_mem_device_init(&dev->dev);
> +
>  	/* Allow the HW Peripheral ID to be overridden */
>  	prop = of_get_property(node, "arm,primecell-periphid", NULL);
>  	if (prop)
> @@ -326,6 +332,7 @@ static struct amba_device *of_amba_device_create(struct device_node *node,
>  	return dev;
>  
>  err_free:
> +	of_reserved_mem_device_release(&dev->dev);
>  	amba_device_put(dev);
>  err_clear_flag:
>  	of_node_clear_flag(node, OF_POPULATED);
> -- 
> 1.9.2
> 



* Re: [PATCH v2 RESEND 2/4] drivers: of: initialize and assign reserved memory to newly created devices
  2014-07-28 14:17       ` Grant Likely
@ 2014-07-29 14:47         ` Marek Szyprowski
  2014-07-29 19:46           ` Grant Likely
  0 siblings, 1 reply; 14+ messages in thread
From: Marek Szyprowski @ 2014-07-29 14:47 UTC (permalink / raw)
  To: Grant Likely, linux-kernel, linux-arm-kernel
  Cc: linaro-mm-sig, devicetree, Arnd Bergmann, Michal Nazarewicz,
	Tomasz Figa, Sascha Hauer, Laura Abbott, Nishanth Peethambaran,
	Marc, Josh Cartwright, Catalin Marinas, Will Deacon,
	Paul Mackerras, Jon Medhurst, Joonsoo Kim, Aneesh Kumar K.V.,
	Andrew Morton

Hello,

On 2014-07-28 16:17, Grant Likely wrote:
> On Mon, 14 Jul 2014 10:28:05 +0200, Marek Szyprowski <m.szyprowski@samsung.com> wrote:
>> Use the recently introduced of_reserved_mem_device_init() function to
>> automatically assign the respective reserved memory region to newly
>> created platform and AMBA devices.
>>
>> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
> I'm still not okay with this patch. I don't think it is appropriate to
> add the hook into the generic path that gets used for every platform
> device. The devices that need reserved memory should explicitly request
> it, and any setup be done at that time.

Okay... I thought it would be easier to have this done in generic code.
If you don't think so, then I give up and we will add it to all drivers
requiring such memory regions.

What about patches 3/4 and 4/4? Would it be possible to have your ack so
they can get merged? Right now patch 4/4 depends on changes from the akpm
tree, so it would be best to merge them through that tree.

Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland


* Re: [PATCH v2 RESEND 2/4] drivers: of: initialize and assign reserved memory to newly created devices
  2014-07-29 14:47         ` Marek Szyprowski
@ 2014-07-29 19:46           ` Grant Likely
  0 siblings, 0 replies; 14+ messages in thread
From: Grant Likely @ 2014-07-29 19:46 UTC (permalink / raw)
  To: Marek Szyprowski, linux-kernel, linux-arm-kernel
  Cc: linaro-mm-sig, devicetree, Arnd Bergmann, Michal Nazarewicz,
	Tomasz Figa, Sascha Hauer, Laura Abbott, Nishanth Peethambaran,
	Marc, Josh Cartwright, Catalin Marinas, Will Deacon,
	Paul Mackerras, Jon Medhurst, Joonsoo Kim, Aneesh Kumar K.V.,
	Andrew Morton

On Tue, 29 Jul 2014 16:47:04 +0200, Marek Szyprowski <m.szyprowski@samsung.com> wrote:
> Hello,
> 
> On 2014-07-28 16:17, Grant Likely wrote:
> > On Mon, 14 Jul 2014 10:28:05 +0200, Marek Szyprowski <m.szyprowski@samsung.com> wrote:
> >> Use the recently introduced of_reserved_mem_device_init() function to
> >> automatically assign the respective reserved memory region to newly
> >> created platform and AMBA devices.
> >>
> >> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
> > I'm still not okay with this patch. I don't think it is appropriate to
> > add the hook into the generic path that gets used for every platform
> > device. The devices that need reserved memory should explicitly request
> > it, and any setup be done at that time.
> 
> Okay... I thought it would be easier to have this done in generic code.
> If you don't think so, then I give up and we will add it to all drivers
> requiring such memory regions.

The problem is that it makes the big assumption that every device is
going to conform to the pattern without any recourse if something
special needs to be done (such as to handle quirks). It also puts this
code into the probe path of every device, when the vast majority of
device drivers just don't care. When code like this is run
unconditionally from generic code, it becomes really difficult to
override when needed.

I've made the same comment on the PM clock domain patches.

> What about patches 3/4 and 4/4? Would it be possible to have your ack so
> they can get merged? Right now patch 4/4 depends on changes from the akpm
> tree, so it would be best to merge them through that tree.

I'll take a look.

g.


* Re: [PATCH v2 RESEND 3/4] drivers: dma-coherent: add initialization from device tree
  2014-07-14  8:28   ` [PATCH v2 RESEND 3/4] drivers: dma-coherent: add initialization from device tree Marek Szyprowski
@ 2014-07-29 21:54     ` Grant Likely
  2014-07-30  5:33       ` Marek Szyprowski
  0 siblings, 1 reply; 14+ messages in thread
From: Grant Likely @ 2014-07-29 21:54 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Marek Szyprowski, linaro-mm-sig, devicetree, Arnd Bergmann,
	Michal Nazarewicz, Tomasz Figa, Sascha Hauer, Laura Abbott,
	Nishanth Peethambaran, Marc, Josh Cartwright, Catalin Marinas,
	Will Deacon, Paul Mackerras, Jon Medhurst, Joonsoo Kim,
	Aneesh Kumar K.V., Andrew Morton

On Mon, 14 Jul 2014 10:28:06 +0200, Marek Szyprowski <m.szyprowski@samsung.com> wrote:
> The initialization procedure of the DMA coherent pool has been split into
> two parts, so a memory pool can now be initialized without being assigned
> to a particular struct device. An initialized region can then be assigned
> to more than one struct device. To protect against concurrent allocations
> from different devices, a spinlock has been added to the dma_coherent_mem
> structure. The last part of this patch adds support for handling
> 'shared-dma-pool' reserved-memory device tree nodes.
> 
> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

I think this looks okay. It isn't in my area of expertise though.
Comments below.

> ---
>  drivers/base/dma-coherent.c | 137 ++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 118 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/base/dma-coherent.c b/drivers/base/dma-coherent.c
> index 7d6e84a51424..7185a4f247e1 100644
> --- a/drivers/base/dma-coherent.c
> +++ b/drivers/base/dma-coherent.c
> @@ -14,11 +14,14 @@ struct dma_coherent_mem {
>  	int		size;
>  	int		flags;
>  	unsigned long	*bitmap;
> +	spinlock_t	spinlock;
>  };
>  
> -int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
> -				dma_addr_t device_addr, size_t size, int flags)
> +static int dma_init_coherent_memory(phys_addr_t phys_addr, dma_addr_t device_addr,
> +			     size_t size, int flags,
> +			     struct dma_coherent_mem **mem)

This is a bit odd. Why wouldn't you return the dma_mem pointer directly
instead of passing in a **mem argument?

>  {
> +	struct dma_coherent_mem *dma_mem = NULL;
>  	void __iomem *mem_base = NULL;
>  	int pages = size >> PAGE_SHIFT;
>  	int bitmap_size = BITS_TO_LONGS(pages) * sizeof(long);
> @@ -27,27 +30,26 @@ int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
>  		goto out;
>  	if (!size)
>  		goto out;
> -	if (dev->dma_mem)
> -		goto out;
> -
> -	/* FIXME: this routine just ignores DMA_MEMORY_INCLUDES_CHILDREN */
>  
>  	mem_base = ioremap(phys_addr, size);
>  	if (!mem_base)
>  		goto out;
>  
> -	dev->dma_mem = kzalloc(sizeof(struct dma_coherent_mem), GFP_KERNEL);
> -	if (!dev->dma_mem)
> +	dma_mem = kzalloc(sizeof(struct dma_coherent_mem), GFP_KERNEL);
> +	if (!dma_mem)
>  		goto out;
> -	dev->dma_mem->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
> -	if (!dev->dma_mem->bitmap)
> +	dma_mem->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
> +	if (!dma_mem->bitmap)
>  		goto free1_out;
>  
> -	dev->dma_mem->virt_base = mem_base;
> -	dev->dma_mem->device_base = device_addr;
> -	dev->dma_mem->pfn_base = PFN_DOWN(phys_addr);
> -	dev->dma_mem->size = pages;
> -	dev->dma_mem->flags = flags;
> +	dma_mem->virt_base = mem_base;
> +	dma_mem->device_base = device_addr;
> +	dma_mem->pfn_base = PFN_DOWN(phys_addr);
> +	dma_mem->size = pages;
> +	dma_mem->flags = flags;
> +	spin_lock_init(&dma_mem->spinlock);
> +
> +	*mem = dma_mem;
>  
>  	if (flags & DMA_MEMORY_MAP)
>  		return DMA_MEMORY_MAP;
> @@ -55,12 +57,51 @@ int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
>  	return DMA_MEMORY_IO;
>  
>   free1_out:
> -	kfree(dev->dma_mem);
> +	kfree(dma_mem);
>   out:
>  	if (mem_base)
>  		iounmap(mem_base);
>  	return 0;
>  }
> +
> +static void dma_release_coherent_memory(struct dma_coherent_mem *mem)
> +{
> +	if (!mem)
> +		return;
> +	iounmap(mem->virt_base);
> +	kfree(mem->bitmap);
> +	kfree(mem);
> +}
> +
> +static int dma_assign_coherent_memory(struct device *dev,
> +				      struct dma_coherent_mem *mem)
> +{
> +	if (dev->dma_mem)
> +		return -EBUSY;
> +
> +	dev->dma_mem = mem;
> +	/* FIXME: this routine just ignores DMA_MEMORY_INCLUDES_CHILDREN */
> +
> +	return 0;
> +}
> +
> +int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
> +				dma_addr_t device_addr, size_t size, int flags)
> +{
> +	struct dma_coherent_mem *mem;
> +	int ret;
> +
> +	ret = dma_init_coherent_memory(phys_addr, device_addr, size, flags,
> +				       &mem);
> +	if (ret == 0)
> +		return 0;
> +
> +	if (dma_assign_coherent_memory(dev, mem) == 0)
> +		return ret;
> +
> +	dma_release_coherent_memory(mem);
> +	return 0;
> +}
>  EXPORT_SYMBOL(dma_declare_coherent_memory);
>  
>  void dma_release_declared_memory(struct device *dev)
> @@ -69,10 +110,8 @@ void dma_release_declared_memory(struct device *dev)
>  
>  	if (!mem)
>  		return;
> +	dma_release_coherent_memory(mem);
>  	dev->dma_mem = NULL;
> -	iounmap(mem->virt_base);
> -	kfree(mem->bitmap);
> -	kfree(mem);
>  }
>  EXPORT_SYMBOL(dma_release_declared_memory);
>  
> @@ -80,6 +119,7 @@ void *dma_mark_declared_memory_occupied(struct device *dev,
>  					dma_addr_t device_addr, size_t size)
>  {
>  	struct dma_coherent_mem *mem = dev->dma_mem;
> +	unsigned long flags;
>  	int pos, err;
>  
>  	size += device_addr & ~PAGE_MASK;
> @@ -87,8 +127,11 @@ void *dma_mark_declared_memory_occupied(struct device *dev,
>  	if (!mem)
>  		return ERR_PTR(-EINVAL);
>  
> +	spin_lock_irqsave(&mem->spinlock, flags);
>  	pos = (device_addr - mem->device_base) >> PAGE_SHIFT;
>  	err = bitmap_allocate_region(mem->bitmap, pos, get_order(size));
> +	spin_unlock_irqrestore(&mem->spinlock, flags);
> +
>  	if (err != 0)
>  		return ERR_PTR(err);
>  	return mem->virt_base + (pos << PAGE_SHIFT);
> @@ -115,6 +158,7 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
>  {
>  	struct dma_coherent_mem *mem;
>  	int order = get_order(size);
> +	unsigned long flags;
>  	int pageno;
>  
>  	if (!dev)
> @@ -124,6 +168,7 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
>  		return 0;
>  
>  	*ret = NULL;
> +	spin_lock_irqsave(&mem->spinlock, flags);
>  
>  	if (unlikely(size > (mem->size << PAGE_SHIFT)))
>  		goto err;
> @@ -138,10 +183,12 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
>  	*dma_handle = mem->device_base + (pageno << PAGE_SHIFT);
>  	*ret = mem->virt_base + (pageno << PAGE_SHIFT);
>  	memset(*ret, 0, size);
> +	spin_unlock_irqrestore(&mem->spinlock, flags);
>  
>  	return 1;
>  
>  err:
> +	spin_unlock_irqrestore(&mem->spinlock, flags);
>  	/*
>  	 * In the case where the allocation can not be satisfied from the
>  	 * per-device area, try to fall back to generic memory if the
> @@ -171,8 +218,11 @@ int dma_release_from_coherent(struct device *dev, int order, void *vaddr)
>  	if (mem && vaddr >= mem->virt_base && vaddr <
>  		   (mem->virt_base + (mem->size << PAGE_SHIFT))) {
>  		int page = (vaddr - mem->virt_base) >> PAGE_SHIFT;
> +		unsigned long flags;
>  
> +		spin_lock_irqsave(&mem->spinlock, flags);
>  		bitmap_release_region(mem->bitmap, page, order);
> +		spin_unlock_irqrestore(&mem->spinlock, flags);
>  		return 1;
>  	}
>  	return 0;
> @@ -218,3 +268,52 @@ int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
>  	return 0;
>  }
>  EXPORT_SYMBOL(dma_mmap_from_coherent);
> +
> +/*
> + * Support for reserved memory regions defined in device tree
> + */
> +#ifdef CONFIG_OF_RESERVED_MEM
> +#include <linux/of.h>
> +#include <linux/of_fdt.h>
> +#include <linux/of_reserved_mem.h>
> +
> +static void rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
> +{
> +	struct dma_coherent_mem *mem = rmem->priv;

Will the reserved_mem->priv pointer ever point to some other kind of
structure? How do we know that the pointer here is always a
dma_coherent_mem struct? (If there are other uses of priv, what is the
guarantee against another user assigning something to it?) Is it the
reserved_mem_ops below that provides the guarantee?

If it is a risk, then the alternative would be to put an explicit
dma_coherent_mem pointer into the reserved_mem structure.

> +	if (!mem &&
> +	    dma_init_coherent_memory(rmem->base, rmem->base, rmem->size,
> +				     DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE,
> +				     &mem) != DMA_MEMORY_MAP) {
> +		pr_info("Reserved memory: failed to init DMA memory pool at %pa, size %ld MiB\n",
> +			&rmem->base, (unsigned long)rmem->size / SZ_1M);
> +		return;
> +	}
> +	rmem->priv = mem;
> +	dma_assign_coherent_memory(dev, mem);
> +}
> +
> +static void rmem_dma_device_release(struct reserved_mem *rmem,
> +				    struct device *dev)
> +{
> +	dev->dma_mem = NULL;
> +}
> +
> +static const struct reserved_mem_ops rmem_dma_ops = {
> +	.device_init	= rmem_dma_device_init,
> +	.device_release	= rmem_dma_device_release,
> +};
> +
> +static int __init rmem_dma_setup(struct reserved_mem *rmem)
> +{
> +	unsigned long node = rmem->fdt_node;
> +
> +	if (of_get_flat_dt_prop(node, "reusable", NULL))
> +		return -EINVAL;
> +
> +	rmem->ops = &rmem_dma_ops;
> +	pr_info("Reserved memory: created DMA memory pool at %pa, size %ld MiB\n",
> +		&rmem->base, (unsigned long)rmem->size / SZ_1M);
> +	return 0;
> +}
> +RESERVEDMEM_OF_DECLARE(dma, "shared-dma-pool", rmem_dma_setup);
> +#endif
> -- 
> 1.9.2
> 


* Re: [PATCH v2 RESEND 4/4] drivers: dma-contiguous: add initialization from device tree
  2014-07-14  8:28   ` [PATCH v2 RESEND 4/4] drivers: dma-contiguous: " Marek Szyprowski
@ 2014-07-29 21:58     ` Grant Likely
  0 siblings, 0 replies; 14+ messages in thread
From: Grant Likely @ 2014-07-29 21:58 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Marek Szyprowski, linaro-mm-sig, devicetree, Arnd Bergmann,
	Michal Nazarewicz, Tomasz Figa, Sascha Hauer, Laura Abbott,
	Nishanth Peethambaran, Marc, Josh Cartwright, Catalin Marinas,
	Will Deacon, Paul Mackerras, Jon Medhurst, Joonsoo Kim,
	Aneesh Kumar K.V., Andrew Morton

On Mon, 14 Jul 2014 10:28:07 +0200, Marek Szyprowski <m.szyprowski@samsung.com> wrote:
> Add a function to create CMA region from previously reserved memory
> and add support for handling 'shared-dma-pool' reserved-memory device
> tree nodes.
> 
> Based on previous code provided by Josh Cartwright <joshc@codeaurora.org>

I think this is okay too. Same questions apply as I sent on patch 3.

g.

> 
> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
> ---
>  drivers/base/dma-contiguous.c | 67 +++++++++++++++++++++++++++++++++++++++++++
>  include/linux/cma.h           |  3 ++
>  mm/cma.c                      | 62 ++++++++++++++++++++++++++++++++-------
>  3 files changed, 121 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> index 6606abdf880c..b77ea8bac176 100644
> --- a/drivers/base/dma-contiguous.c
> +++ b/drivers/base/dma-contiguous.c
> @@ -211,3 +211,70 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
>  {
>  	return cma_release(dev_get_cma_area(dev), pages, count);
>  }
> +
> +/*
> + * Support for reserved memory regions defined in device tree
> + */
> +#ifdef CONFIG_OF_RESERVED_MEM
> +#include <linux/of.h>
> +#include <linux/of_fdt.h>
> +#include <linux/of_reserved_mem.h>
> +
> +#undef pr_fmt
> +#define pr_fmt(fmt) fmt
> +
> +static void rmem_cma_device_init(struct reserved_mem *rmem, struct device *dev)
> +{
> +	struct cma *cma = rmem->priv;
> +	dev_set_cma_area(dev, cma);
> +}
> +
> +static void rmem_cma_device_release(struct reserved_mem *rmem,
> +				    struct device *dev)
> +{
> +	dev_set_cma_area(dev, NULL);
> +}
> +
> +static const struct reserved_mem_ops rmem_cma_ops = {
> +	.device_init	= rmem_cma_device_init,
> +	.device_release = rmem_cma_device_release,
> +};
> +
> +static int __init rmem_cma_setup(struct reserved_mem *rmem)
> +{
> +	phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
> +	phys_addr_t mask = align - 1;
> +	unsigned long node = rmem->fdt_node;
> +	struct cma *cma;
> +	int err;
> +
> +	if (!of_get_flat_dt_prop(node, "reusable", NULL) ||
> +	    of_get_flat_dt_prop(node, "no-map", NULL))
> +		return -EINVAL;
> +
> +	if ((rmem->base & mask) || (rmem->size & mask)) {
> +		pr_err("Reserved memory: incorrect alignment of CMA region\n");
> +		return -EINVAL;
> +	}
> +
> +	err = cma_init_reserved_mem(rmem->base, rmem->size, 0, &cma);
> +	if (err) {
> +		pr_err("Reserved memory: unable to setup CMA region\n");
> +		return err;
> +	}
> +	/* Architecture specific contiguous memory fixup. */
> +	dma_contiguous_early_fixup(rmem->base, rmem->size);
> +
> +	if (of_get_flat_dt_prop(node, "linux,cma-default", NULL))
> +		dma_contiguous_set_default(cma);
> +
> +	rmem->ops = &rmem_cma_ops;
> +	rmem->priv = cma;
> +
> +	pr_info("Reserved memory: created CMA memory pool at %pa, size %ld MiB\n",
> +		&rmem->base, (unsigned long)rmem->size / SZ_1M);
> +
> +	return 0;
> +}
> +RESERVEDMEM_OF_DECLARE(cma, "shared-dma-pool", rmem_cma_setup);
> +#endif
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> index 32cab7a425f9..9a18a2b1934c 100644
> --- a/include/linux/cma.h
> +++ b/include/linux/cma.h
> @@ -16,6 +16,9 @@ extern int __init cma_declare_contiguous(phys_addr_t size,
>  			phys_addr_t base, phys_addr_t limit,
>  			phys_addr_t alignment, unsigned int order_per_bit,
>  			bool fixed, struct cma **res_cma);
> +extern int cma_init_reserved_mem(phys_addr_t base,
> +					phys_addr_t size, int order_per_bit,
> +					struct cma **res_cma);
>  extern struct page *cma_alloc(struct cma *cma, int count, unsigned int align);
>  extern bool cma_release(struct cma *cma, struct page *pages, int count);
>  #endif
> diff --git a/mm/cma.c b/mm/cma.c
> index 4b251b037e1b..b3d8b925ad34 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -140,6 +140,54 @@ static int __init cma_init_reserved_areas(void)
>  core_initcall(cma_init_reserved_areas);
>  
>  /**
> + * cma_init_reserved_mem() - create custom contiguous area from reserved memory
> + * @base: Base address of the reserved area
> + * @size: Size of the reserved area (in bytes),
> + * @order_per_bit: Order of pages represented by one bit on bitmap.
> + * @res_cma: Pointer to store the created cma region.
> + *
> + * This function creates custom contiguous area from already reserved memory.
> + */
> +int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
> +				 int order_per_bit, struct cma **res_cma)
> +{
> +	struct cma *cma;
> +	phys_addr_t alignment;
> +
> +	/* Sanity checks */
> +	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
> +		pr_err("Not enough slots for CMA reserved regions!\n");
> +		return -ENOSPC;
> +	}
> +
> +	if (!size || !memblock_is_region_reserved(base, size))
> +		return -EINVAL;
> +
> +	/* ensure minimal alignment required by mm core */
> +	alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
> +
> +	/* alignment should be aligned with order_per_bit */
> +	if (!IS_ALIGNED(alignment >> PAGE_SHIFT, 1 << order_per_bit))
> +		return -EINVAL;
> +
> +	if (ALIGN(base, alignment) != base || ALIGN(size, alignment) != size)
> +		return -EINVAL;
> +
> +	/*
> +	 * Each reserved area must be initialised later, when more kernel
> +	 * subsystems (like slab allocator) are available.
> +	 */
> +	cma = &cma_areas[cma_area_count];
> +	cma->base_pfn = PFN_DOWN(base);
> +	cma->count = size >> PAGE_SHIFT;
> +	cma->order_per_bit = order_per_bit;
> +	*res_cma = cma;
> +	cma_area_count++;
> +
> +	return 0;
> +}
> +
> +/**
>   * cma_declare_contiguous() - reserve custom contiguous area
>   * @base: Base address of the reserved area optional, use 0 for any
>   * @size: Size of the reserved area (in bytes),
> @@ -162,7 +210,6 @@ int __init cma_declare_contiguous(phys_addr_t base,
>  			phys_addr_t alignment, unsigned int order_per_bit,
>  			bool fixed, struct cma **res_cma)
>  {
> -	struct cma *cma;
>  	int ret = 0;
>  
>  	pr_debug("%s(size %lx, base %08lx, limit %08lx alignment %08lx)\n",
> @@ -214,16 +261,9 @@ int __init cma_declare_contiguous(phys_addr_t base,
>  		}
>  	}
>  
> -	/*
> -	 * Each reserved area must be initialised later, when more kernel
> -	 * subsystems (like slab allocator) are available.
> -	 */
> -	cma = &cma_areas[cma_area_count];
> -	cma->base_pfn = PFN_DOWN(base);
> -	cma->count = size >> PAGE_SHIFT;
> -	cma->order_per_bit = order_per_bit;
> -	*res_cma = cma;
> -	cma_area_count++;
> +	ret = cma_init_reserved_mem(base, size, order_per_bit, res_cma);
> +	if (ret)
> +		goto err;
>  
>  	pr_info("Reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
>  		(unsigned long)base);
> -- 
> 1.9.2
> 
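For reference, the 'shared-dma-pool' compatible that both patch 3 and patch 4 bind to lives under the /reserved-memory node. A sketch following the already-merged reserved-memory binding (node names, addresses, and sizes here are illustrative): the presence of 'reusable' routes a node to the CMA backend (patch 4), its absence to the coherent-pool backend (patch 3), and a device picks up its pool through the 'memory-region' phandle handled by patch 1.

```dts
/ {
	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* coherent pool: handled by rmem_dma_setup() (patch 3) */
		display_reserved: framebuffer@78000000 {
			compatible = "shared-dma-pool";
			reg = <0x78000000 0x800000>;
		};

		/* CMA pool: "reusable" routes it to rmem_cma_setup() (patch 4) */
		linux,cma {
			compatible = "shared-dma-pool";
			reusable;
			size = <0x4000000>;
			linux,cma-default;
		};
	};

	fb0: video@12300000 {
		/* assigned to this device by of_reserved_mem_device_init() */
		memory-region = <&display_reserved>;
		/* ... */
	};
};
```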


* Re: [PATCH v2 RESEND 3/4] drivers: dma-coherent: add initialization from device tree
  2014-07-29 21:54     ` Grant Likely
@ 2014-07-30  5:33       ` Marek Szyprowski
       [not found]         ` <53D883AE.6030209-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
  0 siblings, 1 reply; 14+ messages in thread
From: Marek Szyprowski @ 2014-07-30  5:33 UTC (permalink / raw)
  To: Grant Likely, linux-kernel, linux-arm-kernel
  Cc: linaro-mm-sig, devicetree, Arnd Bergmann, Michal Nazarewicz,
	Tomasz Figa, Sascha Hauer, Laura Abbott, Nishanth Peethambaran,
	Marc, Josh Cartwright, Catalin Marinas, Will Deacon,
	Paul Mackerras, Jon Medhurst, Joonsoo Kim, Aneesh Kumar K.V.,
	Andrew Morton

Hello,

On 2014-07-29 23:54, Grant Likely wrote:
> On Mon, 14 Jul 2014 10:28:06 +0200, Marek Szyprowski <m.szyprowski@samsung.com> wrote:
>> The initialization procedure of the DMA coherent pool has been split into
>> two parts, so a memory pool can now be initialized without being assigned
>> to a particular struct device. An initialized region can then be assigned
>> to more than one struct device. To protect against concurrent allocations
>> from different devices, a spinlock has been added to the dma_coherent_mem
>> structure. The last part of this patch adds support for handling
>> 'shared-dma-pool' reserved-memory device tree nodes.
>>
>> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
> I think this looks okay. It isn't in my area of expertise though.
> Comments below.
>
>> ---
>>   drivers/base/dma-coherent.c | 137 ++++++++++++++++++++++++++++++++++++++------
>>   1 file changed, 118 insertions(+), 19 deletions(-)
>>
>> diff --git a/drivers/base/dma-coherent.c b/drivers/base/dma-coherent.c
>> index 7d6e84a51424..7185a4f247e1 100644
>> --- a/drivers/base/dma-coherent.c
>> +++ b/drivers/base/dma-coherent.c
>> @@ -14,11 +14,14 @@ struct dma_coherent_mem {
>>   	int		size;
>>   	int		flags;
>>   	unsigned long	*bitmap;
>> +	spinlock_t	spinlock;
>>   };
>>   
>> -int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
>> -				dma_addr_t device_addr, size_t size, int flags)
>> +static int dma_init_coherent_memory(phys_addr_t phys_addr, dma_addr_t device_addr,
>> +			     size_t size, int flags,
>> +			     struct dma_coherent_mem **mem)
> This is a bit odd. Why wouldn't you return the dma_mem pointer directly
> instead of passing in a **mem argument?

Because this function (as a direct successor of dma_declare_coherent_memory)
doesn't return typical error codes, but custom values like DMA_MEMORY_MAP,
DMA_MEMORY_IO, or zero (which means failure). I wanted to avoid confusion
with the typical error-handling path and the IS_ERR/ERR_PTR usage widely
used in other functions. This should probably be unified with the rest of
the kernel some day, but for now I wanted to keep the patch simple and
easy to review.

>>   {
>> +	struct dma_coherent_mem *dma_mem = NULL;
>>   	void __iomem *mem_base = NULL;
>>   	int pages = size >> PAGE_SHIFT;
>>   	int bitmap_size = BITS_TO_LONGS(pages) * sizeof(long);
>> @@ -27,27 +30,26 @@ int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
>>   		goto out;
>>   	if (!size)
>>   		goto out;
>> -	if (dev->dma_mem)
>> -		goto out;
>> -
>> -	/* FIXME: this routine just ignores DMA_MEMORY_INCLUDES_CHILDREN */
>>   
>>   	mem_base = ioremap(phys_addr, size);
>>   	if (!mem_base)
>>   		goto out;
>>   
>> -	dev->dma_mem = kzalloc(sizeof(struct dma_coherent_mem), GFP_KERNEL);
>> -	if (!dev->dma_mem)
>> +	dma_mem = kzalloc(sizeof(struct dma_coherent_mem), GFP_KERNEL);
>> +	if (!dma_mem)
>>   		goto out;
>> -	dev->dma_mem->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
>> -	if (!dev->dma_mem->bitmap)
>> +	dma_mem->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
>> +	if (!dma_mem->bitmap)
>>   		goto free1_out;
>>   
>> -	dev->dma_mem->virt_base = mem_base;
>> -	dev->dma_mem->device_base = device_addr;
>> -	dev->dma_mem->pfn_base = PFN_DOWN(phys_addr);
>> -	dev->dma_mem->size = pages;
>> -	dev->dma_mem->flags = flags;
>> +	dma_mem->virt_base = mem_base;
>> +	dma_mem->device_base = device_addr;
>> +	dma_mem->pfn_base = PFN_DOWN(phys_addr);
>> +	dma_mem->size = pages;
>> +	dma_mem->flags = flags;
>> +	spin_lock_init(&dma_mem->spinlock);
>> +
>> +	*mem = dma_mem;
>>   
>>   	if (flags & DMA_MEMORY_MAP)
>>   		return DMA_MEMORY_MAP;
>> @@ -55,12 +57,51 @@ int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
>>   	return DMA_MEMORY_IO;
>>   
>>    free1_out:
>> -	kfree(dev->dma_mem);
>> +	kfree(dma_mem);
>>    out:
>>   	if (mem_base)
>>   		iounmap(mem_base);
>>   	return 0;
>>   }
>> +
>> +static void dma_release_coherent_memory(struct dma_coherent_mem *mem)
>> +{
>> +	if (!mem)
>> +		return;
>> +	iounmap(mem->virt_base);
>> +	kfree(mem->bitmap);
>> +	kfree(mem);
>> +}
>> +
>> +static int dma_assign_coherent_memory(struct device *dev,
>> +				      struct dma_coherent_mem *mem)
>> +{
>> +	if (dev->dma_mem)
>> +		return -EBUSY;
>> +
>> +	dev->dma_mem = mem;
>> +	/* FIXME: this routine just ignores DMA_MEMORY_INCLUDES_CHILDREN */
>> +
>> +	return 0;
>> +}
>> +
>> +int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
>> +				dma_addr_t device_addr, size_t size, int flags)
>> +{
>> +	struct dma_coherent_mem *mem;
>> +	int ret;
>> +
>> +	ret = dma_init_coherent_memory(phys_addr, device_addr, size, flags,
>> +				       &mem);
>> +	if (ret == 0)
>> +		return 0;
>> +
>> +	if (dma_assign_coherent_memory(dev, mem) == 0)
>> +		return ret;
>> +
>> +	dma_release_coherent_memory(mem);
>> +	return 0;
>> +}
>>   EXPORT_SYMBOL(dma_declare_coherent_memory);
>>   
>>   void dma_release_declared_memory(struct device *dev)
>> @@ -69,10 +110,8 @@ void dma_release_declared_memory(struct device *dev)
>>   
>>   	if (!mem)
>>   		return;
>> +	dma_release_coherent_memory(mem);
>>   	dev->dma_mem = NULL;
>> -	iounmap(mem->virt_base);
>> -	kfree(mem->bitmap);
>> -	kfree(mem);
>>   }
>>   EXPORT_SYMBOL(dma_release_declared_memory);
>>   
>> @@ -80,6 +119,7 @@ void *dma_mark_declared_memory_occupied(struct device *dev,
>>   					dma_addr_t device_addr, size_t size)
>>   {
>>   	struct dma_coherent_mem *mem = dev->dma_mem;
>> +	unsigned long flags;
>>   	int pos, err;
>>   
>>   	size += device_addr & ~PAGE_MASK;
>> @@ -87,8 +127,11 @@ void *dma_mark_declared_memory_occupied(struct device *dev,
>>   	if (!mem)
>>   		return ERR_PTR(-EINVAL);
>>   
>> +	spin_lock_irqsave(&mem->spinlock, flags);
>>   	pos = (device_addr - mem->device_base) >> PAGE_SHIFT;
>>   	err = bitmap_allocate_region(mem->bitmap, pos, get_order(size));
>> +	spin_unlock_irqrestore(&mem->spinlock, flags);
>> +
>>   	if (err != 0)
>>   		return ERR_PTR(err);
>>   	return mem->virt_base + (pos << PAGE_SHIFT);
>> @@ -115,6 +158,7 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
>>   {
>>   	struct dma_coherent_mem *mem;
>>   	int order = get_order(size);
>> +	unsigned long flags;
>>   	int pageno;
>>   
>>   	if (!dev)
>> @@ -124,6 +168,7 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
>>   		return 0;
>>   
>>   	*ret = NULL;
>> +	spin_lock_irqsave(&mem->spinlock, flags);
>>   
>>   	if (unlikely(size > (mem->size << PAGE_SHIFT)))
>>   		goto err;
>> @@ -138,10 +183,12 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
>>   	*dma_handle = mem->device_base + (pageno << PAGE_SHIFT);
>>   	*ret = mem->virt_base + (pageno << PAGE_SHIFT);
>>   	memset(*ret, 0, size);
>> +	spin_unlock_irqrestore(&mem->spinlock, flags);
>>   
>>   	return 1;
>>   
>>   err:
>> +	spin_unlock_irqrestore(&mem->spinlock, flags);
>>   	/*
>>   	 * In the case where the allocation can not be satisfied from the
>>   	 * per-device area, try to fall back to generic memory if the
>> @@ -171,8 +218,11 @@ int dma_release_from_coherent(struct device *dev, int order, void *vaddr)
>>   	if (mem && vaddr >= mem->virt_base && vaddr <
>>   		   (mem->virt_base + (mem->size << PAGE_SHIFT))) {
>>   		int page = (vaddr - mem->virt_base) >> PAGE_SHIFT;
>> +		unsigned long flags;
>>   
>> +		spin_lock_irqsave(&mem->spinlock, flags);
>>   		bitmap_release_region(mem->bitmap, page, order);
>> +		spin_unlock_irqrestore(&mem->spinlock, flags);
>>   		return 1;
>>   	}
>>   	return 0;
>> @@ -218,3 +268,52 @@ int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
>>   	return 0;
>>   }
>>   EXPORT_SYMBOL(dma_mmap_from_coherent);
>> +
>> +/*
>> + * Support for reserved memory regions defined in device tree
>> + */
>> +#ifdef CONFIG_OF_RESERVED_MEM
>> +#include <linux/of.h>
>> +#include <linux/of_fdt.h>
>> +#include <linux/of_reserved_mem.h>
>> +
>> +static void rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
>> +{
>> +	struct dma_coherent_mem *mem = rmem->priv;
> Will the reserved_mem->priv pointer ever point to some other kind of
> structure? How do we know that the pointer here is always a
> dma_coherent_mem struct (if there are other uses of priv, what is the
> guarantee against another user assigning something to it?) Is it the
> reserved_mem_ops below that provide the guarantee?

reserved_mem_ops are set by the given reserved memory driver, and access to
the priv pointer is limited to that driver. This pattern is used widely
across the whole kernel, so I don't think a separate pointer to a particular
structure type is needed.

> If it is a risk, then the alternative would be to put an explicit
> dma_coherent_mem pointer into the reserved_mem structure.

If one messes with priv pointers, they should expect serious problems, and
we really cannot prevent that anyway.

>> +	if (!mem &&
>> +	    dma_init_coherent_memory(rmem->base, rmem->base, rmem->size,
>> +				     DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE,
>> +				     &mem) != DMA_MEMORY_MAP) {
>> +		pr_info("Reserved memory: failed to init DMA memory pool at %pa, size %ld MiB\n",
>> +			&rmem->base, (unsigned long)rmem->size / SZ_1M);
>> +		return;
>> +	}
>> +	rmem->priv = mem;
>> +	dma_assign_coherent_memory(dev, mem);
>> +}
>> +
>> +static void rmem_dma_device_release(struct reserved_mem *rmem,
>> +				    struct device *dev)
>> +{
>> +	dev->dma_mem = NULL;
>> +}
>> +
>> +static const struct reserved_mem_ops rmem_dma_ops = {
>> +	.device_init	= rmem_dma_device_init,
>> +	.device_release	= rmem_dma_device_release,
>> +};
>> +
>> +static int __init rmem_dma_setup(struct reserved_mem *rmem)
>> +{
>> +	unsigned long node = rmem->fdt_node;
>> +
>> +	if (of_get_flat_dt_prop(node, "reusable", NULL))
>> +		return -EINVAL;
>> +
>> +	rmem->ops = &rmem_dma_ops;
>> +	pr_info("Reserved memory: created DMA memory pool at %pa, size %ld MiB\n",
>> +		&rmem->base, (unsigned long)rmem->size / SZ_1M);
>> +	return 0;
>> +}
>> +RESERVEDMEM_OF_DECLARE(dma, "shared-dma-pool", rmem_dma_setup);
>> +#endif
>> -- 
>> 1.9.2
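For reference, a reserved-memory node that the rmem_dma_setup() hook quoted
above would match might look like the following sketch, based on the
already-merged reserved-memory binding documentation (addresses, sizes and
node names are illustrative, not taken from the patch):

```dts
reserved-memory {
	#address-cells = <1>;
	#size-cells = <1>;
	ranges;

	/* fixed carveout handled by the "shared-dma-pool" driver */
	multimedia_reserved: multimedia@77000000 {
		compatible = "shared-dma-pool";
		reg = <0x77000000 0x4000000>;	/* 64 MiB */
		/* no "reusable" property: rmem_dma_setup() rejects it */
	};
};

video@12300000 {
	/* hypothetical client device using the pool */
	memory-region = <&multimedia_reserved>;
};
```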

Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland


* Re: [PATCH v2 RESEND 3/4] drivers: dma-coherent: add initialization from device tree
       [not found]         ` <53D883AE.6030209-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
@ 2014-07-30 23:49           ` Grant Likely
       [not found]             ` <CACxGe6s2G_vEnmo5LwK79bwkEWSNsegfiojDAdQZV01Wapo0eg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 14+ messages in thread
From: Grant Likely @ 2014-07-30 23:49 UTC (permalink / raw)
  To: Marek Szyprowski
  Cc: Linux Kernel Mailing List,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org,
	linaro-mm-sig-cunTk1MwBs8s++Sfvej+rw@public.gmane.org,
	devicetree-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Arnd Bergmann,
	Michal Nazarewicz, Tomasz Figa, Sascha Hauer, Laura Abbott,
	Nishanth Peethambaran, Marc, Josh Cartwright, Catalin Marinas,
	Will Deacon, Paul Mackerras, Jon Medhurst, Joonsoo Kim,
	Aneesh Kumar K.V., Andrew Morton

On Tue, Jul 29, 2014 at 11:33 PM, Marek Szyprowski
<m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org> wrote:
> Hello,
>
>
> On 2014-07-29 23:54, Grant Likely wrote:
>>
>> On Mon, 14 Jul 2014 10:28:06 +0200, Marek Szyprowski
>> <m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org> wrote:
>>>
>>> Initialization procedure of dma coherent pool has been split into two
>>> parts, so memory pool can now be initialized without assigning to
>>> particular struct device. Then initialized region can be assigned to
>>> more than one struct device. To protect from concurrent allocations from
>>> different devices, a spinlock has been added to dma_coherent_mem
>>> structure. The last part of this patch adds support for handling
>>> 'shared-dma-pool' reserved-memory device tree nodes.
>>>
>>> Signed-off-by: Marek Szyprowski <m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
>>
>> I think this looks okay. It isn't in my area of expertise though.
>> Comments below.
>>
>>> ---
>>>   drivers/base/dma-coherent.c | 137
>>> ++++++++++++++++++++++++++++++++++++++------
>>>   1 file changed, 118 insertions(+), 19 deletions(-)
>>>
>>> diff --git a/drivers/base/dma-coherent.c b/drivers/base/dma-coherent.c
>>> index 7d6e84a51424..7185a4f247e1 100644
>>> --- a/drivers/base/dma-coherent.c
>>> +++ b/drivers/base/dma-coherent.c
>>> @@ -14,11 +14,14 @@ struct dma_coherent_mem {
>>>         int             size;
>>>         int             flags;
>>>         unsigned long   *bitmap;
>>> +       spinlock_t      spinlock;
>>>   };
>>>   -int dma_declare_coherent_memory(struct device *dev, phys_addr_t
>>> phys_addr,
>>> -                               dma_addr_t device_addr, size_t size, int
>>> flags)
>>> +static int dma_init_coherent_memory(phys_addr_t phys_addr, dma_addr_t
>>> device_addr,
>>> +                            size_t size, int flags,
>>> +                            struct dma_coherent_mem **mem)
[snip]
>> Will the reserved_mem->priv pointer ever point to some other kind of
>> structure? How do we know that the pointer here is always a
>> dma_coherent_mem struct (if there are other uses of priv, what is the
>> guarantee against another user assigning something to it?) Is it the
>> reserved_mem_ops below that provide the guarantee?
>
>
> reserved_mem_ops are set by the given reserved memory driver and access to
> priv
> pointer is limited only to that driver. This pattern is used widely across
> the
> whole kernel, so I don't think that a separate pointer to particular
> structure
> type is needed.

Yup, that's fine. I wanted to make sure.

Do I need to be taking these patches through the DT tree? Do patches 3
& 4 make sense without patch 2?

g.
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: [PATCH v2 RESEND 3/4] drivers: dma-coherent: add initialization from device tree
       [not found]             ` <CACxGe6s2G_vEnmo5LwK79bwkEWSNsegfiojDAdQZV01Wapo0eg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-07-31  5:15               ` Marek Szyprowski
  0 siblings, 0 replies; 14+ messages in thread
From: Marek Szyprowski @ 2014-07-31  5:15 UTC (permalink / raw)
  To: Grant Likely
  Cc: Linux Kernel Mailing List,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org,
	linaro-mm-sig-cunTk1MwBs8s++Sfvej+rw@public.gmane.org,
	devicetree-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Arnd Bergmann,
	Michal Nazarewicz, Tomasz Figa, Sascha Hauer, Laura Abbott,
	Nishanth Peethambaran, Marc, Josh Cartwright, Catalin Marinas,
	Will Deacon, Paul Mackerras, Jon Medhurst, Joonsoo Kim,
	Aneesh Kumar K.V., Andrew Morton

Hello,

On 2014-07-31 01:49, Grant Likely wrote:
> On Tue, Jul 29, 2014 at 11:33 PM, Marek Szyprowski
> <m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org> wrote:
>> Hello,
>>
>>
[snip]
> Yup, that's fine. I wanted to make sure.
>
> Do I need to be taking these patches through the DT tree? Do patches 3
> & 4 make sense without patch 2?

Patches 3 and 4 are independent from patches 1 & 2. Patch 4 depends on the
other CMA patches, which have been merged to the akpm tree. I think the
easiest solution would be to get your Ack for both patches, and I will ask
Andrew Morton to take them together with the other mm/CMA changes.

Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland



end of thread, other threads:[~2014-07-31  5:15 UTC | newest]

Thread overview: 14+ messages
2014-07-14  8:28 [PATCH v2 RESEND 0/4] CMA & device tree, once again Marek Szyprowski
     [not found] ` <1405326487-15346-1-git-send-email-m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
2014-07-14  8:28   ` [PATCH v2 RESEND 1/4] drivers: of: add automated assignment of reserved regions to client devices Marek Szyprowski
2014-07-28 14:15     ` Grant Likely
2014-07-14  8:28   ` [PATCH v2 RESEND 2/4] drivers: of: initialize and assign reserved memory to newly created devices Marek Szyprowski
     [not found]     ` <1405326487-15346-3-git-send-email-m.szyprowski-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
2014-07-28 14:17       ` Grant Likely
2014-07-29 14:47         ` Marek Szyprowski
2014-07-29 19:46           ` Grant Likely
2014-07-14  8:28   ` [PATCH v2 RESEND 3/4] drivers: dma-coherent: add initialization from device tree Marek Szyprowski
2014-07-29 21:54     ` Grant Likely
2014-07-30  5:33       ` Marek Szyprowski
     [not found]         ` <53D883AE.6030209-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org>
2014-07-30 23:49           ` Grant Likely
     [not found]             ` <CACxGe6s2G_vEnmo5LwK79bwkEWSNsegfiojDAdQZV01Wapo0eg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-07-31  5:15               ` Marek Szyprowski
2014-07-14  8:28   ` [PATCH v2 RESEND 4/4] drivers: dma-contiguous: " Marek Szyprowski
2014-07-29 21:58     ` Grant Likely
