* [PATCHv7 0/5] DMA Atomic pool for arm64
@ 2014-08-11 23:40 Laura Abbott
2014-08-11 23:40 ` [PATCHv7 1/5] lib/genalloc.c: Add power aligned algorithm Laura Abbott
` (4 more replies)
0 siblings, 5 replies; 10+ messages in thread
From: Laura Abbott @ 2014-08-11 23:40 UTC (permalink / raw)
To: linux-arm-kernel
Hi,
This is v7 of the series to add an atomic pool for arm64 and refactor some
of the dma atomic code. You know the drill.
Thanks,
Laura
v7: Added correct power aligned algorithm patch. Addressed comments from
Andrew.
Laura Abbott (5):
lib/genalloc.c: Add power aligned algorithm
lib/genalloc.c: Add genpool range check function
common: dma-mapping: Introduce common remapping functions
arm: use genalloc for the atomic pool
arm64: Add atomic pool for non-coherent and CMA allocations.
arch/arm/Kconfig | 1 +
arch/arm/mm/dma-mapping.c | 210 +++++++++----------------------
arch/arm64/Kconfig | 1 +
arch/arm64/mm/dma-mapping.c | 164 +++++++++++++++++++++---
drivers/base/dma-mapping.c | 68 ++++++++++
include/asm-generic/dma-mapping-common.h | 9 ++
include/linux/genalloc.h | 7 ++
lib/genalloc.c | 49 ++++++++
8 files changed, 338 insertions(+), 171 deletions(-)
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation
* [PATCHv7 1/5] lib/genalloc.c: Add power aligned algorithm
2014-08-11 23:40 [PATCHv7 0/5] DMA Atomic pool for arm64 Laura Abbott
@ 2014-08-11 23:40 ` Laura Abbott
2014-08-11 23:40 ` [PATCHv7 2/5] lib/genalloc.c: Add genpool range check function Laura Abbott
` (3 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Laura Abbott @ 2014-08-11 23:40 UTC (permalink / raw)
To: linux-arm-kernel
One of the more common algorithms used for allocation
is to align the start address of the allocation to
the power-of-two order of the requested size. Add this
as an algorithm option for genalloc.
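For illustration only (not part of the patch), a minimal sketch of how a
caller might pick the new algorithm; the helper name and region parameters
here are made up:

#include <linux/genalloc.h>
#include <linux/mm.h>

/*
 * Hypothetical setup: carve a pool out of an already-mapped region and
 * make every allocation aligned to the power-of-two order of its size.
 */
static struct gen_pool *example_pool_init(unsigned long vaddr,
                                          phys_addr_t phys, size_t size)
{
        struct gen_pool *pool = gen_pool_create(PAGE_SHIFT, -1);

        if (!pool)
                return NULL;

        if (gen_pool_add_virt(pool, vaddr, phys, size, -1)) {
                gen_pool_destroy(pool);
                return NULL;
        }

        gen_pool_set_algo(pool, gen_pool_first_fit_order_align,
                          (void *)PAGE_SHIFT);
        return pool;
}

With PAGE_SHIFT as the minimum order, an 8 KiB request from such a pool comes
back 8 KiB aligned, which is what the atomic pools later in this series rely
on.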
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Olof Johansson <olof@lixom.net>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
---
include/linux/genalloc.h | 4 ++++
lib/genalloc.c | 20 ++++++++++++++++++++
2 files changed, 24 insertions(+)
diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h
index 1c2fdaa..3cd0934 100644
--- a/include/linux/genalloc.h
+++ b/include/linux/genalloc.h
@@ -110,6 +110,10 @@ extern void gen_pool_set_algo(struct gen_pool *pool, genpool_algo_t algo,
extern unsigned long gen_pool_first_fit(unsigned long *map, unsigned long size,
unsigned long start, unsigned int nr, void *data);
+extern unsigned long gen_pool_first_fit_order_align(unsigned long *map,
+ unsigned long size, unsigned long start, unsigned int nr,
+ void *data);
+
extern unsigned long gen_pool_best_fit(unsigned long *map, unsigned long size,
unsigned long start, unsigned int nr, void *data);
diff --git a/lib/genalloc.c b/lib/genalloc.c
index bdb9a45..c2b3ad7 100644
--- a/lib/genalloc.c
+++ b/lib/genalloc.c
@@ -481,6 +481,26 @@ unsigned long gen_pool_first_fit(unsigned long *map, unsigned long size,
EXPORT_SYMBOL(gen_pool_first_fit);
/**
+ * gen_pool_first_fit_order_align - find the first available region
+ * of memory matching the size requirement. The region will be aligned
+ * to the order of the size specified.
+ * @map: The address to base the search on
+ * @size: The bitmap size in bits
+ * @start: The bitnumber to start searching at
+ * @nr: The number of zeroed bits we're looking for
+ * @data: additional data - unused
+ */
+unsigned long gen_pool_first_fit_order_align(unsigned long *map,
+ unsigned long size, unsigned long start,
+ unsigned int nr, void *data)
+{
+ unsigned long align_mask = roundup_pow_of_two(nr) - 1;
+
+ return bitmap_find_next_zero_area(map, size, start, nr, align_mask);
+}
+EXPORT_SYMBOL(gen_pool_first_fit_order_align);
+
+/**
* gen_pool_best_fit - find the best fitting region of memory
* macthing the size requirement (no alignment constraint)
* @map: The address to base the search on
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation
* [PATCHv7 2/5] lib/genalloc.c: Add genpool range check function
2014-08-11 23:40 [PATCHv7 0/5] DMA Atomic pool for arm64 Laura Abbott
2014-08-11 23:40 ` [PATCHv7 1/5] lib/genalloc.c: Add power aligned algorithm Laura Abbott
@ 2014-08-11 23:40 ` Laura Abbott
2014-08-11 23:40 ` [PATCHv7 3/5] common: dma-mapping: Introduce common remapping functions Laura Abbott
` (2 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Laura Abbott @ 2014-08-11 23:40 UTC (permalink / raw)
To: linux-arm-kernel
After allocating an address from a particular genpool,
there is no good way to verify whether that address
actually belongs to the genpool. Introduce addr_in_gen_pool,
which returns true only if the given address plus size
falls completely within the genpool range.
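Purely as an illustration (the helper below is hypothetical, not part of the
patch), a free path built on the new check might look like:

#include <linux/genalloc.h>

/*
 * Hypothetical free helper: only hand the buffer back to the pool when
 * the whole range really lies inside it; otherwise the caller falls back
 * to whatever allocator the memory actually came from.
 */
static bool example_free_from_pool(struct gen_pool *pool, void *vaddr,
                                   size_t size)
{
        if (!addr_in_gen_pool(pool, (unsigned long)vaddr, size))
                return false;

        gen_pool_free(pool, (unsigned long)vaddr, size);
        return true;
}

This is essentially the shape __free_from_pool() takes in the arm and arm64
patches later in the series.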
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Olof Johansson <olof@lixom.net>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
---
include/linux/genalloc.h | 3 +++
lib/genalloc.c | 29 +++++++++++++++++++++++++++++
2 files changed, 32 insertions(+)
diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h
index 3cd0934..1ccaab4 100644
--- a/include/linux/genalloc.h
+++ b/include/linux/genalloc.h
@@ -121,6 +121,9 @@ extern struct gen_pool *devm_gen_pool_create(struct device *dev,
int min_alloc_order, int nid);
extern struct gen_pool *dev_get_gen_pool(struct device *dev);
+bool addr_in_gen_pool(struct gen_pool *pool, unsigned long start,
+ size_t size);
+
#ifdef CONFIG_OF
extern struct gen_pool *of_get_named_gen_pool(struct device_node *np,
const char *propname, int index);
diff --git a/lib/genalloc.c b/lib/genalloc.c
index c2b3ad7..c7a91cf 100644
--- a/lib/genalloc.c
+++ b/lib/genalloc.c
@@ -403,6 +403,35 @@ void gen_pool_for_each_chunk(struct gen_pool *pool,
EXPORT_SYMBOL(gen_pool_for_each_chunk);
/**
+ * addr_in_gen_pool - checks if an address falls within the range of a pool
+ * @pool: the generic memory pool
+ * @start: start address
+ * @size: size of the region
+ *
+ * Check if the range of addresses falls within the specified pool. Returns
+ * true if the entire range is contained in the pool and false otherwise.
+ */
+bool addr_in_gen_pool(struct gen_pool *pool, unsigned long start,
+ size_t size)
+{
+ bool found = false;
+ unsigned long end = start + size;
+ struct gen_pool_chunk *chunk;
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(chunk, &(pool)->chunks, next_chunk) {
+ if (start >= chunk->start_addr && start <= chunk->end_addr) {
+ if (end <= chunk->end_addr) {
+ found = true;
+ break;
+ }
+ }
+ }
+ rcu_read_unlock();
+ return found;
+}
+
+/**
* gen_pool_avail - get available free space of the pool
* @pool: pool to get available free space
*
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation
* [PATCHv7 3/5] common: dma-mapping: Introduce common remapping functions
2014-08-11 23:40 [PATCHv7 0/5] DMA Atomic pool for arm64 Laura Abbott
2014-08-11 23:40 ` [PATCHv7 1/5] lib/genalloc.c: Add power aligned algorithm Laura Abbott
2014-08-11 23:40 ` [PATCHv7 2/5] lib/genalloc.c: Add genpool range check function Laura Abbott
@ 2014-08-11 23:40 ` Laura Abbott
2014-08-26 10:05 ` James Hogan
2014-08-11 23:40 ` [PATCHv7 4/5] arm: use genalloc for the atomic pool Laura Abbott
2014-08-11 23:40 ` [PATCHv7 5/5] arm64: Add atomic pool for non-coherent and CMA allocations Laura Abbott
4 siblings, 1 reply; 10+ messages in thread
From: Laura Abbott @ 2014-08-11 23:40 UTC (permalink / raw)
To: linux-arm-kernel
For architectures without coherent DMA, memory for DMA may
need to be remapped with coherent attributes. Factor out
the remapping code from arm and put it in a
common location to reduce code duplication.
As part of this, the arm APIs are now migrated away from
ioremap_page_range to the common APIs which use map_vm_area for remapping.
This should be an equivalent change, and using map_vm_area is more
correct since ioremap_page_range is intended to bring I/O addresses
into the CPU space, not regular kernel-managed memory.
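For illustration only, a sketch of how an architecture might use the new
helpers; the wrapper names, the VM_USERMAP-only flags and the caller token
are assumptions rather than part of this patch:

#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <asm-generic/dma-mapping-common.h>

/* Hypothetical arch-side wrappers around the common remap helpers. */
static void *example_dma_remap(struct page *page, size_t size, pgprot_t prot)
{
        /* Remap 'size' bytes starting at 'page' with the given attributes. */
        return dma_common_contiguous_remap(page, size, VM_USERMAP, prot,
                                           __builtin_return_address(0));
}

static void example_dma_unremap(void *cpu_addr, size_t size)
{
        /* Tear down a mapping created by example_dma_remap(). */
        dma_common_free_remap(cpu_addr, size, VM_USERMAP);
}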
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
---
arch/arm/mm/dma-mapping.c | 57 +++++---------------------
drivers/base/dma-mapping.c | 68 ++++++++++++++++++++++++++++++++
include/asm-generic/dma-mapping-common.h | 9 +++++
3 files changed, 86 insertions(+), 48 deletions(-)
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 4c88935..f5190ac 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -297,37 +297,19 @@ static void *
__dma_alloc_remap(struct page *page, size_t size, gfp_t gfp, pgprot_t prot,
const void *caller)
{
- struct vm_struct *area;
- unsigned long addr;
-
/*
* DMA allocation can be mapped to user space, so lets
* set VM_USERMAP flags too.
*/
- area = get_vm_area_caller(size, VM_ARM_DMA_CONSISTENT | VM_USERMAP,
- caller);
- if (!area)
- return NULL;
- addr = (unsigned long)area->addr;
- area->phys_addr = __pfn_to_phys(page_to_pfn(page));
-
- if (ioremap_page_range(addr, addr + size, area->phys_addr, prot)) {
- vunmap((void *)addr);
- return NULL;
- }
- return (void *)addr;
+ return dma_common_contiguous_remap(page, size,
+ VM_ARM_DMA_CONSISTENT | VM_USERMAP,
+ prot, caller);
}
static void __dma_free_remap(void *cpu_addr, size_t size)
{
- unsigned int flags = VM_ARM_DMA_CONSISTENT | VM_USERMAP;
- struct vm_struct *area = find_vm_area(cpu_addr);
- if (!area || (area->flags & flags) != flags) {
- WARN(1, "trying to free invalid coherent area: %p\n", cpu_addr);
- return;
- }
- unmap_kernel_range((unsigned long)cpu_addr, size);
- vunmap(cpu_addr);
+ dma_common_free_remap(cpu_addr, size,
+ VM_ARM_DMA_CONSISTENT | VM_USERMAP);
}
#define DEFAULT_DMA_COHERENT_POOL_SIZE SZ_256K
@@ -1261,29 +1243,8 @@ static void *
__iommu_alloc_remap(struct page **pages, size_t size, gfp_t gfp, pgprot_t prot,
const void *caller)
{
- unsigned int i, nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
- struct vm_struct *area;
- unsigned long p;
-
- area = get_vm_area_caller(size, VM_ARM_DMA_CONSISTENT | VM_USERMAP,
- caller);
- if (!area)
- return NULL;
-
- area->pages = pages;
- area->nr_pages = nr_pages;
- p = (unsigned long)area->addr;
-
- for (i = 0; i < nr_pages; i++) {
- phys_addr_t phys = __pfn_to_phys(page_to_pfn(pages[i]));
- if (ioremap_page_range(p, p + PAGE_SIZE, phys, prot))
- goto err;
- p += PAGE_SIZE;
- }
- return area->addr;
-err:
- unmap_kernel_range((unsigned long)area->addr, size);
- vunmap(area->addr);
+ return dma_common_pages_remap(pages, size,
+ VM_ARM_DMA_CONSISTENT | VM_USERMAP, prot, caller);
return NULL;
}
@@ -1491,8 +1452,8 @@ void arm_iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
}
if (!dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs)) {
- unmap_kernel_range((unsigned long)cpu_addr, size);
- vunmap(cpu_addr);
+ dma_common_free_remap(cpu_addr, size,
+ VM_ARM_DMA_CONSISTENT | VM_USERMAP);
}
__iommu_remove_mapping(dev, handle, size);
diff --git a/drivers/base/dma-mapping.c b/drivers/base/dma-mapping.c
index 6cd08e1..1bc46df 100644
--- a/drivers/base/dma-mapping.c
+++ b/drivers/base/dma-mapping.c
@@ -10,6 +10,8 @@
#include <linux/dma-mapping.h>
#include <linux/export.h>
#include <linux/gfp.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
#include <asm-generic/dma-coherent.h>
/*
@@ -267,3 +269,69 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
return ret;
}
EXPORT_SYMBOL(dma_common_mmap);
+
+/*
+ * remaps an allocated contiguous region into another vm_area.
+ * Cannot be used in non-sleeping contexts
+ */
+
+void *dma_common_contiguous_remap(struct page *page, size_t size,
+ unsigned long vm_flags,
+ pgprot_t prot, const void *caller)
+{
+ int i;
+ struct page **pages;
+ void *ptr;
+ unsigned long pfn;
+
+ pages = kmalloc(sizeof(struct page *) << get_order(size), GFP_KERNEL);
+ if (!pages)
+ return NULL;
+
+ for (i = 0, pfn = page_to_pfn(page); i < (size >> PAGE_SHIFT); i++)
+ pages[i] = pfn_to_page(pfn + i);
+
+ ptr = dma_common_pages_remap(pages, size, vm_flags, prot, caller);
+
+ kfree(pages);
+
+ return ptr;
+}
+
+/*
+ * remaps an array of PAGE_SIZE pages into another vm_area
+ * Cannot be used in non-sleeping contexts
+ */
+void *dma_common_pages_remap(struct page **pages, size_t size,
+ unsigned long vm_flags, pgprot_t prot,
+ const void *caller)
+{
+ struct vm_struct *area;
+
+ area = get_vm_area_caller(size, vm_flags, caller);
+ if (!area)
+ return NULL;
+
+ if (map_vm_area(area, prot, pages)) {
+ vunmap(area->addr);
+ return NULL;
+ }
+
+ return area->addr;
+}
+
+/*
+ * unmaps a range previously mapped by dma_common_*_remap
+ */
+void dma_common_free_remap(void *cpu_addr, size_t size, unsigned long vm_flags)
+{
+ struct vm_struct *area = find_vm_area(cpu_addr);
+
+ if (!area || (area->flags & vm_flags) != vm_flags) {
+ WARN(1, "trying to free invalid coherent area: %p\n", cpu_addr);
+ return;
+ }
+
+ unmap_kernel_range((unsigned long)cpu_addr, size);
+ vunmap(cpu_addr);
+}
diff --git a/include/asm-generic/dma-mapping-common.h b/include/asm-generic/dma-mapping-common.h
index de8bf89..a9fd248 100644
--- a/include/asm-generic/dma-mapping-common.h
+++ b/include/asm-generic/dma-mapping-common.h
@@ -179,6 +179,15 @@ dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
+void *dma_common_contiguous_remap(struct page *page, size_t size,
+ unsigned long vm_flags,
+ pgprot_t prot, const void *caller);
+
+void *dma_common_pages_remap(struct page **pages, size_t size,
+ unsigned long vm_flags, pgprot_t prot,
+ const void *caller);
+void dma_common_free_remap(void *cpu_addr, size_t size, unsigned long vm_flags);
+
/**
* dma_mmap_attrs - map a coherent DMA allocation into user space
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation
* [PATCHv7 4/5] arm: use genalloc for the atomic pool
2014-08-11 23:40 [PATCHv7 0/5] DMA Atomic pool for arm64 Laura Abbott
` (2 preceding siblings ...)
2014-08-11 23:40 ` [PATCHv7 3/5] common: dma-mapping: Introduce common remapping functions Laura Abbott
@ 2014-08-11 23:40 ` Laura Abbott
2014-08-11 23:40 ` [PATCHv7 5/5] arm64: Add atomic pool for non-coherent and CMA allocations Laura Abbott
4 siblings, 0 replies; 10+ messages in thread
From: Laura Abbott @ 2014-08-11 23:40 UTC (permalink / raw)
To: linux-arm-kernel
ARM currently uses a bitmap for tracking atomic allocations.
genalloc already handles this type of memory pool allocation,
so switch to using that instead.
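Condensed from the diff below, the allocation side after the conversion is
roughly (a simplified sketch, not the literal code):

/* Simplified form of __alloc_from_pool() once genalloc does the tracking. */
static void *alloc_from_pool_sketch(struct gen_pool *pool, size_t size,
                                    struct page **ret_page)
{
        unsigned long vaddr = gen_pool_alloc(pool, size);

        if (!vaddr)
                return NULL;

        *ret_page = phys_to_page(gen_pool_virt_to_phys(pool, vaddr));
        return (void *)vaddr;
}

The private bitmap, page array and spinlock all go away; genalloc keeps that
bookkeeping internally.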
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
---
arch/arm/Kconfig | 1 +
arch/arm/mm/dma-mapping.c | 153 +++++++++++++++-------------------------------
2 files changed, 50 insertions(+), 104 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 88acf8b..98776f5 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -14,6 +14,7 @@ config ARM
select CLONE_BACKWARDS
select CPU_PM if (SUSPEND || CPU_IDLE)
select DCACHE_WORD_ACCESS if HAVE_EFFICIENT_UNALIGNED_ACCESS
+ select GENERIC_ALLOCATOR
select GENERIC_ATOMIC64 if (CPU_V7M || CPU_V6 || !CPU_32v6K || !AEABI)
select GENERIC_CLOCKEVENTS_BROADCAST if SMP
select GENERIC_IDLE_POLL_SETUP
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index f5190ac..c6633c0 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -12,6 +12,7 @@
#include <linux/bootmem.h>
#include <linux/module.h>
#include <linux/mm.h>
+#include <linux/genalloc.h>
#include <linux/gfp.h>
#include <linux/errno.h>
#include <linux/list.h>
@@ -313,23 +314,13 @@ static void __dma_free_remap(void *cpu_addr, size_t size)
}
#define DEFAULT_DMA_COHERENT_POOL_SIZE SZ_256K
+static struct gen_pool *atomic_pool;
-struct dma_pool {
- size_t size;
- spinlock_t lock;
- unsigned long *bitmap;
- unsigned long nr_pages;
- void *vaddr;
- struct page **pages;
-};
-
-static struct dma_pool atomic_pool = {
- .size = DEFAULT_DMA_COHERENT_POOL_SIZE,
-};
+static size_t atomic_pool_size = DEFAULT_DMA_COHERENT_POOL_SIZE;
static int __init early_coherent_pool(char *p)
{
- atomic_pool.size = memparse(p, &p);
+ atomic_pool_size = memparse(p, &p);
return 0;
}
early_param("coherent_pool", early_coherent_pool);
@@ -339,14 +330,14 @@ void __init init_dma_coherent_pool_size(unsigned long size)
/*
* Catch any attempt to set the pool size too late.
*/
- BUG_ON(atomic_pool.vaddr);
+ BUG_ON(atomic_pool);
/*
* Set architecture specific coherent pool size only if
* it has not been changed by kernel command line parameter.
*/
- if (atomic_pool.size == DEFAULT_DMA_COHERENT_POOL_SIZE)
- atomic_pool.size = size;
+ if (atomic_pool_size == DEFAULT_DMA_COHERENT_POOL_SIZE)
+ atomic_pool_size = size;
}
/*
@@ -354,52 +345,44 @@ void __init init_dma_coherent_pool_size(unsigned long size)
*/
static int __init atomic_pool_init(void)
{
- struct dma_pool *pool = &atomic_pool;
pgprot_t prot = pgprot_dmacoherent(PAGE_KERNEL);
gfp_t gfp = GFP_KERNEL | GFP_DMA;
- unsigned long nr_pages = pool->size >> PAGE_SHIFT;
- unsigned long *bitmap;
struct page *page;
- struct page **pages;
void *ptr;
- int bitmap_size = BITS_TO_LONGS(nr_pages) * sizeof(long);
-
- bitmap = kzalloc(bitmap_size, GFP_KERNEL);
- if (!bitmap)
- goto no_bitmap;
- pages = kzalloc(nr_pages * sizeof(struct page *), GFP_KERNEL);
- if (!pages)
- goto no_pages;
+ atomic_pool = gen_pool_create(PAGE_SHIFT, -1);
+ if (!atomic_pool)
+ goto out;
if (dev_get_cma_area(NULL))
- ptr = __alloc_from_contiguous(NULL, pool->size, prot, &page,
- atomic_pool_init);
+ ptr = __alloc_from_contiguous(NULL, atomic_pool_size, prot,
+ &page, atomic_pool_init);
else
- ptr = __alloc_remap_buffer(NULL, pool->size, gfp, prot, &page,
- atomic_pool_init);
+ ptr = __alloc_remap_buffer(NULL, atomic_pool_size, gfp, prot,
+ &page, atomic_pool_init);
if (ptr) {
- int i;
-
- for (i = 0; i < nr_pages; i++)
- pages[i] = page + i;
-
- spin_lock_init(&pool->lock);
- pool->vaddr = ptr;
- pool->pages = pages;
- pool->bitmap = bitmap;
- pool->nr_pages = nr_pages;
- pr_info("DMA: preallocated %u KiB pool for atomic coherent allocations\n",
- (unsigned)pool->size / 1024);
+ int ret;
+
+ ret = gen_pool_add_virt(atomic_pool, (unsigned long)ptr,
+ page_to_phys(page),
+ atomic_pool_size, -1);
+ if (ret)
+ goto destroy_genpool;
+
+ gen_pool_set_algo(atomic_pool,
+ gen_pool_first_fit_order_align,
+ (void *)PAGE_SHIFT);
+ pr_info("DMA: preallocated %zd KiB pool for atomic coherent allocations\n",
+ atomic_pool_size / 1024);
return 0;
}
- kfree(pages);
-no_pages:
- kfree(bitmap);
-no_bitmap:
- pr_err("DMA: failed to allocate %u KiB pool for atomic coherent allocation\n",
- (unsigned)pool->size / 1024);
+destroy_genpool:
+ gen_pool_destroy(atomic_pool);
+ atomic_pool = NULL;
+out:
+ pr_err("DMA: failed to allocate %zx KiB pool for atomic coherent allocation\n",
+ atomic_pool_size / 1024);
return -ENOMEM;
}
/*
@@ -494,76 +477,36 @@ static void *__alloc_remap_buffer(struct device *dev, size_t size, gfp_t gfp,
static void *__alloc_from_pool(size_t size, struct page **ret_page)
{
- struct dma_pool *pool = &atomic_pool;
- unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
- unsigned int pageno;
- unsigned long flags;
+ unsigned long val;
void *ptr = NULL;
- unsigned long align_mask;
- if (!pool->vaddr) {
+ if (!atomic_pool) {
WARN(1, "coherent pool not initialised!\n");
return NULL;
}
- /*
- * Align the region allocation - allocations from pool are rather
- * small, so align them to their order in pages, minimum is a page
- * size. This helps reduce fragmentation of the DMA space.
- */
- align_mask = (1 << get_order(size)) - 1;
-
- spin_lock_irqsave(&pool->lock, flags);
- pageno = bitmap_find_next_zero_area(pool->bitmap, pool->nr_pages,
- 0, count, align_mask);
- if (pageno < pool->nr_pages) {
- bitmap_set(pool->bitmap, pageno, count);
- ptr = pool->vaddr + PAGE_SIZE * pageno;
- *ret_page = pool->pages[pageno];
- } else {
- pr_err_once("ERROR: %u KiB atomic DMA coherent pool is too small!\n"
- "Please increase it with coherent_pool= kernel parameter!\n",
- (unsigned)pool->size / 1024);
+ val = gen_pool_alloc(atomic_pool, size);
+ if (val) {
+ phys_addr_t phys = gen_pool_virt_to_phys(atomic_pool, val);
+
+ *ret_page = phys_to_page(phys);
+ ptr = (void *)val;
}
- spin_unlock_irqrestore(&pool->lock, flags);
return ptr;
}
static bool __in_atomic_pool(void *start, size_t size)
{
- struct dma_pool *pool = &atomic_pool;
- void *end = start + size;
- void *pool_start = pool->vaddr;
- void *pool_end = pool->vaddr + pool->size;
-
- if (start < pool_start || start >= pool_end)
- return false;
-
- if (end <= pool_end)
- return true;
-
- WARN(1, "Wrong coherent size(%p-%p) from atomic pool(%p-%p)\n",
- start, end - 1, pool_start, pool_end - 1);
-
- return false;
+ return addr_in_gen_pool(atomic_pool, (unsigned long)start, size);
}
static int __free_from_pool(void *start, size_t size)
{
- struct dma_pool *pool = &atomic_pool;
- unsigned long pageno, count;
- unsigned long flags;
-
if (!__in_atomic_pool(start, size))
return 0;
- pageno = (start - pool->vaddr) >> PAGE_SHIFT;
- count = size >> PAGE_SHIFT;
-
- spin_lock_irqsave(&pool->lock, flags);
- bitmap_clear(pool->bitmap, pageno, count);
- spin_unlock_irqrestore(&pool->lock, flags);
+ gen_pool_free(atomic_pool, (unsigned long)start, size);
return 1;
}
@@ -1306,11 +1249,13 @@ static int __iommu_remove_mapping(struct device *dev, dma_addr_t iova, size_t si
static struct page **__atomic_get_pages(void *addr)
{
- struct dma_pool *pool = &atomic_pool;
- struct page **pages = pool->pages;
- int offs = (addr - pool->vaddr) >> PAGE_SHIFT;
+ struct page *page;
+ phys_addr_t phys;
+
+ phys = gen_pool_virt_to_phys(atomic_pool, (unsigned long)addr);
+ page = phys_to_page(phys);
- return pages + offs;
+ return (struct page **)page;
}
static struct page **__iommu_get_pages(void *cpu_addr, struct dma_attrs *attrs)
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation
* [PATCHv7 5/5] arm64: Add atomic pool for non-coherent and CMA allocations.
2014-08-11 23:40 [PATCHv7 0/5] DMA Atomic pool for arm64 Laura Abbott
` (3 preceding siblings ...)
2014-08-11 23:40 ` [PATCHv7 4/5] arm: use genalloc for the atomic pool Laura Abbott
@ 2014-08-11 23:40 ` Laura Abbott
4 siblings, 0 replies; 10+ messages in thread
From: Laura Abbott @ 2014-08-11 23:40 UTC (permalink / raw)
To: linux-arm-kernel
Neither CMA nor the non-coherent remapping path can satisfy
allocations made from atomic context. Add a dedicated atomic
pool to support them.
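As a hypothetical driver-side illustration (not part of the patch): with the
pool in place, an allocation from atomic context is carved out of the
preallocated region instead of going through CMA or vmap(), both of which can
sleep:

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/*
 * Hypothetical caller: GFP_ATOMIC (no __GFP_WAIT) tells the arm64
 * allocation path below that it must not sleep, so the buffer comes
 * from the preallocated atomic pool.
 */
static void *example_atomic_alloc(struct device *dev, size_t size,
                                  dma_addr_t *handle)
{
        return dma_alloc_coherent(dev, size, handle, GFP_ATOMIC);
}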
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
---
arch/arm64/Kconfig | 1 +
arch/arm64/mm/dma-mapping.c | 164 +++++++++++++++++++++++++++++++++++++++-----
2 files changed, 146 insertions(+), 19 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 839f48c..335374b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -16,6 +16,7 @@ config ARM64
select COMMON_CLK
select CPU_PM if (SUSPEND || CPU_IDLE)
select DCACHE_WORD_ACCESS
+ select GENERIC_ALLOCATOR
select GENERIC_CLOCKEVENTS
select GENERIC_CLOCKEVENTS_BROADCAST if SMP
select GENERIC_CPU_AUTOPROBE
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 4164c5a..90bb7b3 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -27,6 +27,7 @@
#include <linux/vmalloc.h>
#include <linux/swiotlb.h>
#include <linux/amba/bus.h>
+#include <linux/genalloc.h>
#include <asm/cacheflush.h>
@@ -41,6 +42,54 @@ static pgprot_t __get_dma_pgprot(struct dma_attrs *attrs, pgprot_t prot,
return prot;
}
+static struct gen_pool *atomic_pool;
+
+#define DEFAULT_DMA_COHERENT_POOL_SIZE SZ_256K
+static size_t atomic_pool_size = DEFAULT_DMA_COHERENT_POOL_SIZE;
+
+static int __init early_coherent_pool(char *p)
+{
+ atomic_pool_size = memparse(p, &p);
+ return 0;
+}
+early_param("coherent_pool", early_coherent_pool);
+
+static void *__alloc_from_pool(size_t size, struct page **ret_page)
+{
+ unsigned long val;
+ void *ptr = NULL;
+
+ if (!atomic_pool) {
+ WARN(1, "coherent pool not initialised!\n");
+ return NULL;
+ }
+
+ val = gen_pool_alloc(atomic_pool, size);
+ if (val) {
+ phys_addr_t phys = gen_pool_virt_to_phys(atomic_pool, val);
+
+ *ret_page = phys_to_page(phys);
+ ptr = (void *)val;
+ }
+
+ return ptr;
+}
+
+static bool __in_atomic_pool(void *start, size_t size)
+{
+ return addr_in_gen_pool(atomic_pool, (unsigned long)start, size);
+}
+
+static int __free_from_pool(void *start, size_t size)
+{
+ if (!__in_atomic_pool(start, size))
+ return 0;
+
+ gen_pool_free(atomic_pool, (unsigned long)start, size);
+
+ return 1;
+}
+
static void *__dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flags,
struct dma_attrs *attrs)
@@ -53,7 +102,7 @@ static void *__dma_alloc_coherent(struct device *dev, size_t size,
if (IS_ENABLED(CONFIG_ZONE_DMA) &&
dev->coherent_dma_mask <= DMA_BIT_MASK(32))
flags |= GFP_DMA;
- if (IS_ENABLED(CONFIG_DMA_CMA)) {
+ if (IS_ENABLED(CONFIG_DMA_CMA) && (flags & __GFP_WAIT)) {
struct page *page;
size = PAGE_ALIGN(size);
@@ -73,50 +122,54 @@ static void __dma_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle,
struct dma_attrs *attrs)
{
+ bool freed;
+ phys_addr_t paddr = dma_to_phys(dev, dma_handle);
+
if (dev == NULL) {
WARN_ONCE(1, "Use an actual device structure for DMA allocation\n");
return;
}
- if (IS_ENABLED(CONFIG_DMA_CMA)) {
- phys_addr_t paddr = dma_to_phys(dev, dma_handle);
-
- dma_release_from_contiguous(dev,
+ freed = dma_release_from_contiguous(dev,
phys_to_page(paddr),
size >> PAGE_SHIFT);
- } else {
+ if (!freed)
swiotlb_free_coherent(dev, size, vaddr, dma_handle);
- }
}
static void *__dma_alloc_noncoherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flags,
struct dma_attrs *attrs)
{
- struct page *page, **map;
+ struct page *page;
void *ptr, *coherent_ptr;
- int order, i;
size = PAGE_ALIGN(size);
- order = get_order(size);
+
+ if (!(flags & __GFP_WAIT)) {
+ struct page *page = NULL;
+ void *addr = __alloc_from_pool(size, &page);
+
+ if (addr)
+ *dma_handle = phys_to_dma(dev, page_to_phys(page));
+
+ return addr;
+
+ }
ptr = __dma_alloc_coherent(dev, size, dma_handle, flags, attrs);
if (!ptr)
goto no_mem;
- map = kmalloc(sizeof(struct page *) << order, flags & ~GFP_DMA);
- if (!map)
- goto no_map;
/* remove any dirty cache lines on the kernel alias */
__dma_flush_range(ptr, ptr + size);
/* create a coherent mapping */
page = virt_to_page(ptr);
- for (i = 0; i < (size >> PAGE_SHIFT); i++)
- map[i] = page + i;
- coherent_ptr = vmap(map, size >> PAGE_SHIFT, VM_MAP,
- __get_dma_pgprot(attrs, __pgprot(PROT_NORMAL_NC), false));
- kfree(map);
+ coherent_ptr = dma_common_contiguous_remap(page, size, VM_USERMAP,
+ __get_dma_pgprot(attrs,
+ __pgprot(PROT_NORMAL_NC), false),
+ NULL);
if (!coherent_ptr)
goto no_map;
@@ -135,6 +188,8 @@ static void __dma_free_noncoherent(struct device *dev, size_t size,
{
void *swiotlb_addr = phys_to_virt(dma_to_phys(dev, dma_handle));
+ if (__free_from_pool(vaddr, size))
+ return;
vunmap(vaddr);
__dma_free_coherent(dev, size, swiotlb_addr, dma_handle, attrs);
}
@@ -332,6 +387,67 @@ static struct notifier_block amba_bus_nb = {
extern int swiotlb_late_init_with_default_size(size_t default_size);
+static int __init atomic_pool_init(void)
+{
+ pgprot_t prot = __pgprot(PROT_NORMAL_NC);
+ unsigned long nr_pages = atomic_pool_size >> PAGE_SHIFT;
+ struct page *page;
+ void *addr;
+ unsigned int pool_size_order = get_order(atomic_pool_size);
+
+ if (dev_get_cma_area(NULL))
+ page = dma_alloc_from_contiguous(NULL, nr_pages,
+ pool_size_order);
+ else
+ page = alloc_pages(GFP_DMA, pool_size_order);
+
+ if (page) {
+ int ret;
+ void *page_addr = page_address(page);
+
+ memset(page_addr, 0, atomic_pool_size);
+ __dma_flush_range(page_addr, page_addr + atomic_pool_size);
+
+ atomic_pool = gen_pool_create(PAGE_SHIFT, -1);
+ if (!atomic_pool)
+ goto free_page;
+
+ addr = dma_common_contiguous_remap(page, atomic_pool_size,
+ VM_USERMAP, prot, atomic_pool_init);
+
+ if (!addr)
+ goto destroy_genpool;
+
+ ret = gen_pool_add_virt(atomic_pool, (unsigned long)addr,
+ page_to_phys(page),
+ atomic_pool_size, -1);
+ if (ret)
+ goto remove_mapping;
+
+ gen_pool_set_algo(atomic_pool,
+ gen_pool_first_fit_order_align,
+ (void *)PAGE_SHIFT);
+
+ pr_info("DMA: preallocated %zu KiB pool for atomic allocations\n",
+ atomic_pool_size / 1024);
+ return 0;
+ }
+ goto out;
+
+remove_mapping:
+ dma_common_free_remap(addr, atomic_pool_size, VM_USERMAP);
+destroy_genpool:
+ gen_pool_destroy(atomic_pool);
+ atomic_pool = NULL;
+free_page:
+ if (!dma_release_from_contiguous(NULL, page, nr_pages))
+ __free_pages(page, pool_size_order);
+out:
+ pr_err("DMA: failed to allocate %zu KiB pool for atomic coherent allocation\n",
+ atomic_pool_size / 1024);
+ return -ENOMEM;
+}
+
static int __init swiotlb_late_init(void)
{
size_t swiotlb_size = min(SZ_64M, MAX_ORDER_NR_PAGES << PAGE_SHIFT);
@@ -346,7 +462,17 @@ static int __init swiotlb_late_init(void)
return swiotlb_late_init_with_default_size(swiotlb_size);
}
-arch_initcall(swiotlb_late_init);
+
+static int __init arm64_dma_init(void)
+{
+ int ret = 0;
+
+ ret |= swiotlb_late_init();
+ ret |= atomic_pool_init();
+
+ return ret;
+}
+arch_initcall(arm64_dma_init);
#define PREALLOC_DMA_DEBUG_ENTRIES 4096
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation
* [PATCHv7 3/5] common: dma-mapping: Introduce common remapping functions
2014-08-11 23:40 ` [PATCHv7 3/5] common: dma-mapping: Introduce common remapping functions Laura Abbott
@ 2014-08-26 10:05 ` James Hogan
2014-08-26 16:58 ` Laura Abbott
0 siblings, 1 reply; 10+ messages in thread
From: James Hogan @ 2014-08-26 10:05 UTC (permalink / raw)
To: linux-arm-kernel
On 12 August 2014 00:40, Laura Abbott <lauraa@codeaurora.org> wrote:
>
> For architectures without coherent DMA, memory for DMA may
> need to be remapped with coherent attributes. Factor out
> the remapping code from arm and put it in a
> common location to reduce code duplication.
>
> As part of this, the arm APIs are now migrated away from
> ioremap_page_range to the common APIs which use map_vm_area for remapping.
> This should be an equivalent change, and using map_vm_area is more
> correct since ioremap_page_range is intended to bring I/O addresses
> into the CPU space, not regular kernel-managed memory.
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
This commit in linux-next () breaks the build for metag:
drivers/base/dma-mapping.c: In function 'dma_common_contiguous_remap':
drivers/base/dma-mapping.c:294: error: implicit declaration of
function 'dma_common_pages_remap'
drivers/base/dma-mapping.c:294: warning: assignment makes pointer from
integer without a cast
drivers/base/dma-mapping.c: At top level:
drivers/base/dma-mapping.c:308: error: conflicting types for
'dma_common_pages_remap'
drivers/base/dma-mapping.c:294: error: previous implicit declaration
of 'dma_common_pages_remap' was here
Looks like metag isn't alone either:
$ git grep -L dma-mapping-common arch/*/include/asm/dma-mapping.h
arch/arc/include/asm/dma-mapping.h
arch/avr32/include/asm/dma-mapping.h
arch/blackfin/include/asm/dma-mapping.h
arch/c6x/include/asm/dma-mapping.h
arch/cris/include/asm/dma-mapping.h
arch/frv/include/asm/dma-mapping.h
arch/m68k/include/asm/dma-mapping.h
arch/metag/include/asm/dma-mapping.h
arch/mn10300/include/asm/dma-mapping.h
arch/parisc/include/asm/dma-mapping.h
arch/xtensa/include/asm/dma-mapping.h
I've checked a couple of these arches (blackfin, xtensa) which don't
include dma-mapping-common.h and their builds seem to be broken too.
Cheers
James
> ---
> arch/arm/mm/dma-mapping.c | 57 +++++---------------------
> drivers/base/dma-mapping.c | 68 ++++++++++++++++++++++++++++++++
> include/asm-generic/dma-mapping-common.h | 9 +++++
> 3 files changed, 86 insertions(+), 48 deletions(-)
>
> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> index 4c88935..f5190ac 100644
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -297,37 +297,19 @@ static void *
> __dma_alloc_remap(struct page *page, size_t size, gfp_t gfp, pgprot_t prot,
> const void *caller)
> {
> - struct vm_struct *area;
> - unsigned long addr;
> -
> /*
> * DMA allocation can be mapped to user space, so lets
> * set VM_USERMAP flags too.
> */
> - area = get_vm_area_caller(size, VM_ARM_DMA_CONSISTENT | VM_USERMAP,
> - caller);
> - if (!area)
> - return NULL;
> - addr = (unsigned long)area->addr;
> - area->phys_addr = __pfn_to_phys(page_to_pfn(page));
> -
> - if (ioremap_page_range(addr, addr + size, area->phys_addr, prot)) {
> - vunmap((void *)addr);
> - return NULL;
> - }
> - return (void *)addr;
> + return dma_common_contiguous_remap(page, size,
> + VM_ARM_DMA_CONSISTENT | VM_USERMAP,
> + prot, caller);
> }
>
> static void __dma_free_remap(void *cpu_addr, size_t size)
> {
> - unsigned int flags = VM_ARM_DMA_CONSISTENT | VM_USERMAP;
> - struct vm_struct *area = find_vm_area(cpu_addr);
> - if (!area || (area->flags & flags) != flags) {
> - WARN(1, "trying to free invalid coherent area: %p\n", cpu_addr);
> - return;
> - }
> - unmap_kernel_range((unsigned long)cpu_addr, size);
> - vunmap(cpu_addr);
> + dma_common_free_remap(cpu_addr, size,
> + VM_ARM_DMA_CONSISTENT | VM_USERMAP);
> }
>
> #define DEFAULT_DMA_COHERENT_POOL_SIZE SZ_256K
> @@ -1261,29 +1243,8 @@ static void *
> __iommu_alloc_remap(struct page **pages, size_t size, gfp_t gfp, pgprot_t prot,
> const void *caller)
> {
> - unsigned int i, nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
> - struct vm_struct *area;
> - unsigned long p;
> -
> - area = get_vm_area_caller(size, VM_ARM_DMA_CONSISTENT | VM_USERMAP,
> - caller);
> - if (!area)
> - return NULL;
> -
> - area->pages = pages;
> - area->nr_pages = nr_pages;
> - p = (unsigned long)area->addr;
> -
> - for (i = 0; i < nr_pages; i++) {
> - phys_addr_t phys = __pfn_to_phys(page_to_pfn(pages[i]));
> - if (ioremap_page_range(p, p + PAGE_SIZE, phys, prot))
> - goto err;
> - p += PAGE_SIZE;
> - }
> - return area->addr;
> -err:
> - unmap_kernel_range((unsigned long)area->addr, size);
> - vunmap(area->addr);
> + return dma_common_pages_remap(pages, size,
> + VM_ARM_DMA_CONSISTENT | VM_USERMAP, prot, caller);
> return NULL;
> }
>
> @@ -1491,8 +1452,8 @@ void arm_iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
> }
>
> if (!dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs)) {
> - unmap_kernel_range((unsigned long)cpu_addr, size);
> - vunmap(cpu_addr);
> + dma_common_free_remap(cpu_addr, size,
> + VM_ARM_DMA_CONSISTENT | VM_USERMAP);
> }
>
> __iommu_remove_mapping(dev, handle, size);
> diff --git a/drivers/base/dma-mapping.c b/drivers/base/dma-mapping.c
> index 6cd08e1..1bc46df 100644
> --- a/drivers/base/dma-mapping.c
> +++ b/drivers/base/dma-mapping.c
> @@ -10,6 +10,8 @@
> #include <linux/dma-mapping.h>
> #include <linux/export.h>
> #include <linux/gfp.h>
> +#include <linux/slab.h>
> +#include <linux/vmalloc.h>
> #include <asm-generic/dma-coherent.h>
>
> /*
> @@ -267,3 +269,69 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> return ret;
> }
> EXPORT_SYMBOL(dma_common_mmap);
> +
> +/*
> + * remaps an allocated contiguous region into another vm_area.
> + * Cannot be used in non-sleeping contexts
> + */
> +
> +void *dma_common_contiguous_remap(struct page *page, size_t size,
> + unsigned long vm_flags,
> + pgprot_t prot, const void *caller)
> +{
> + int i;
> + struct page **pages;
> + void *ptr;
> + unsigned long pfn;
> +
> + pages = kmalloc(sizeof(struct page *) << get_order(size), GFP_KERNEL);
> + if (!pages)
> + return NULL;
> +
> + for (i = 0, pfn = page_to_pfn(page); i < (size >> PAGE_SHIFT); i++)
> + pages[i] = pfn_to_page(pfn + i);
> +
> + ptr = dma_common_pages_remap(pages, size, vm_flags, prot, caller);
> +
> + kfree(pages);
> +
> + return ptr;
> +}
> +
> +/*
> + * remaps an array of PAGE_SIZE pages into another vm_area
> + * Cannot be used in non-sleeping contexts
> + */
> +void *dma_common_pages_remap(struct page **pages, size_t size,
> + unsigned long vm_flags, pgprot_t prot,
> + const void *caller)
> +{
> + struct vm_struct *area;
> +
> + area = get_vm_area_caller(size, vm_flags, caller);
> + if (!area)
> + return NULL;
> +
> + if (map_vm_area(area, prot, pages)) {
> + vunmap(area->addr);
> + return NULL;
> + }
> +
> + return area->addr;
> +}
> +
> +/*
> + * unmaps a range previously mapped by dma_common_*_remap
> + */
> +void dma_common_free_remap(void *cpu_addr, size_t size, unsigned long vm_flags)
> +{
> + struct vm_struct *area = find_vm_area(cpu_addr);
> +
> + if (!area || (area->flags & vm_flags) != vm_flags) {
> + WARN(1, "trying to free invalid coherent area: %p\n", cpu_addr);
> + return;
> + }
> +
> + unmap_kernel_range((unsigned long)cpu_addr, size);
> + vunmap(cpu_addr);
> +}
> diff --git a/include/asm-generic/dma-mapping-common.h b/include/asm-generic/dma-mapping-common.h
> index de8bf89..a9fd248 100644
> --- a/include/asm-generic/dma-mapping-common.h
> +++ b/include/asm-generic/dma-mapping-common.h
> @@ -179,6 +179,15 @@ dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
> extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> void *cpu_addr, dma_addr_t dma_addr, size_t size);
>
> +void *dma_common_contiguous_remap(struct page *page, size_t size,
> + unsigned long vm_flags,
> + pgprot_t prot, const void *caller);
> +
> +void *dma_common_pages_remap(struct page **pages, size_t size,
> + unsigned long vm_flags, pgprot_t prot,
> + const void *caller);
> +void dma_common_free_remap(void *cpu_addr, size_t size, unsigned long vm_flags);
> +
> /**
> * dma_mmap_attrs - map a coherent DMA allocation into user space
> * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
> --
> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> hosted by The Linux Foundation
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo at vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
* [PATCHv7 3/5] common: dma-mapping: Introduce common remapping functions
2014-08-26 10:05 ` James Hogan
@ 2014-08-26 16:58 ` Laura Abbott
2014-08-27 9:30 ` James Hogan
2014-08-27 21:01 ` Mark Salter
0 siblings, 2 replies; 10+ messages in thread
From: Laura Abbott @ 2014-08-26 16:58 UTC (permalink / raw)
To: linux-arm-kernel
On 8/26/2014 3:05 AM, James Hogan wrote:
> On 12 August 2014 00:40, Laura Abbott <lauraa@codeaurora.org> wrote:
>>
>> For architectures without coherent DMA, memory for DMA may
>> need to be remapped with coherent attributes. Factor out
>> the remapping code from arm and put it in a
>> common location to reduce code duplication.
>>
>> As part of this, the arm APIs are now migrated away from
>> ioremap_page_range to the common APIs which use map_vm_area for remapping.
>> This should be an equivalent change, and using map_vm_area is more
>> correct since ioremap_page_range is intended to bring I/O addresses
>> into the CPU space, not regular kernel-managed memory.
>>
>> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
>> Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
>
> This commit in linux-next () breaks the build for metag:
>
> drivers/base/dma-mapping.c: In function 'dma_common_contiguous_remap':
> drivers/base/dma-mapping.c:294: error: implicit declaration of
> function 'dma_common_pages_remap'
> drivers/base/dma-mapping.c:294: warning: assignment makes pointer from
> integer without a cast
> drivers/base/dma-mapping.c: At top level:
> drivers/base/dma-mapping.c:308: error: conflicting types for
> 'dma_common_pages_remap'
> drivers/base/dma-mapping.c:294: error: previous implicit declaration
> of 'dma_common_pages_remap' was here
>
> Looks like metag isn't alone either:
>
> $ git grep -L dma-mapping-common arch/*/include/asm/dma-mapping.h
> arch/arc/include/asm/dma-mapping.h
> arch/avr32/include/asm/dma-mapping.h
> arch/blackfin/include/asm/dma-mapping.h
> arch/c6x/include/asm/dma-mapping.h
> arch/cris/include/asm/dma-mapping.h
> arch/frv/include/asm/dma-mapping.h
> arch/m68k/include/asm/dma-mapping.h
> arch/metag/include/asm/dma-mapping.h
> arch/mn10300/include/asm/dma-mapping.h
> arch/parisc/include/asm/dma-mapping.h
> arch/xtensa/include/asm/dma-mapping.h
>
> I've checked a couple of these arches (blackfin, xtensa) which don't
> include dma-mapping-common.h and their builds seem to be broken too.
>
> Cheers
> James
>
Thanks for the report. Would you mind giving the following patch
a test (this is theoretical only but I think it should work)
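For anyone not wanting to decode the error messages: the root cause is plain
C declaration ordering. dma_common_contiguous_remap() calls
dma_common_pages_remap() before it is defined, and architectures whose
asm/dma-mapping.h does not pull in asm-generic/dma-mapping-common.h have no
prototype in scope either. A standalone sketch of the same failure mode
(hypothetical names, not kernel code):

/* b() is called before any declaration is visible, so the compiler
 * implicitly assumes "int b()" ... */
void *a(void)
{
        return b();     /* error: implicit declaration of function 'b' */
                        /* warning: return makes pointer from integer  */
}

/* ... and the real definition then clashes with that guess. */
void *b(void)           /* error: conflicting types for 'b' */
{
        return 0;
}

Defining b() before a(), or providing a prototype, resolves both errors,
which is exactly what the reordering below does.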
-----8<------
From 81c9a5504cbc1d72ff1df084d48502b248cd79d0 Mon Sep 17 00:00:00 2001
From: Laura Abbott <lauraa@codeaurora.org>
Date: Tue, 26 Aug 2014 09:50:49 -0700
Subject: [PATCH] common: dma-mapping: Swap function order
Fix the order of dma_common_contiguous_remap and
dma_common_pages_remap to avoid function declaration errors:
drivers/base/dma-mapping.c: In function 'dma_common_contiguous_remap':
drivers/base/dma-mapping.c:294: error: implicit declaration of
function 'dma_common_pages_remap'
drivers/base/dma-mapping.c:294: warning: assignment makes pointer from
integer without a cast
drivers/base/dma-mapping.c: At top level:
drivers/base/dma-mapping.c:308: error: conflicting types for
'dma_common_pages_remap'
drivers/base/dma-mapping.c:294: error: previous implicit declaration
of 'dma_common_pages_remap' was here
Change-Id: I65db739114e8f5816a24a279a2ff1a6dc92e2b83
Reported-by: James Hogan <james.hogan@imgtec.com>
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
---
drivers/base/dma-mapping.c | 44 ++++++++++++++++++++++----------------------
1 file changed, 22 insertions(+), 22 deletions(-)
diff --git a/drivers/base/dma-mapping.c b/drivers/base/dma-mapping.c
index 1bc46df..056fd46 100644
--- a/drivers/base/dma-mapping.c
+++ b/drivers/base/dma-mapping.c
@@ -271,6 +271,28 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
EXPORT_SYMBOL(dma_common_mmap);
/*
+ * remaps an array of PAGE_SIZE pages into another vm_area
+ * Cannot be used in non-sleeping contexts
+ */
+void *dma_common_pages_remap(struct page **pages, size_t size,
+ unsigned long vm_flags, pgprot_t prot,
+ const void *caller)
+{
+ struct vm_struct *area;
+
+ area = get_vm_area_caller(size, vm_flags, caller);
+ if (!area)
+ return NULL;
+
+ if (map_vm_area(area, prot, pages)) {
+ vunmap(area->addr);
+ return NULL;
+ }
+
+ return area->addr;
+}
+
+/*
* remaps an allocated contiguous region into another vm_area.
* Cannot be used in non-sleeping contexts
*/
@@ -299,28 +321,6 @@ void *dma_common_contiguous_remap(struct page *page, size_t size,
}
/*
- * remaps an array of PAGE_SIZE pages into another vm_area
- * Cannot be used in non-sleeping contexts
- */
-void *dma_common_pages_remap(struct page **pages, size_t size,
- unsigned long vm_flags, pgprot_t prot,
- const void *caller)
-{
- struct vm_struct *area;
-
- area = get_vm_area_caller(size, vm_flags, caller);
- if (!area)
- return NULL;
-
- if (map_vm_area(area, prot, pages)) {
- vunmap(area->addr);
- return NULL;
- }
-
- return area->addr;
-}
-
-/*
* unmaps a range previously mapped by dma_common_*_remap
*/
void dma_common_free_remap(void *cpu_addr, size_t size, unsigned long vm_flags)
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation
* [PATCHv7 3/5] common: dma-mapping: Introduce common remapping functions
2014-08-26 16:58 ` Laura Abbott
@ 2014-08-27 9:30 ` James Hogan
2014-08-27 21:01 ` Mark Salter
1 sibling, 0 replies; 10+ messages in thread
From: James Hogan @ 2014-08-27 9:30 UTC (permalink / raw)
To: linux-arm-kernel
On 26/08/14 17:58, Laura Abbott wrote:
> On 8/26/2014 3:05 AM, James Hogan wrote:
>> On 12 August 2014 00:40, Laura Abbott <lauraa@codeaurora.org> wrote:
>>>
>>> For architectures without coherent DMA, memory for DMA may
>>> need to be remapped with coherent attributes. Factor out
>>> the remapping code from arm and put it in a
>>> common location to reduce code duplication.
>>>
>>> As part of this, the arm APIs are now migrated away from
>>> ioremap_page_range to the common APIs which use map_vm_area for remapping.
>>> This should be an equivalent change, and using map_vm_area is more
>>> correct since ioremap_page_range is intended to bring I/O addresses
>>> into the CPU space, not regular kernel-managed memory.
>>>
>>> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
>>> Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
>>
>> This commit in linux-next () breaks the build for metag:
>>
>> drivers/base/dma-mapping.c: In function 'dma_common_contiguous_remap':
>> drivers/base/dma-mapping.c:294: error: implicit declaration of
>> function 'dma_common_pages_remap'
>> drivers/base/dma-mapping.c:294: warning: assignment makes pointer from
>> integer without a cast
>> drivers/base/dma-mapping.c: At top level:
>> drivers/base/dma-mapping.c:308: error: conflicting types for
>> 'dma_common_pages_remap'
>> drivers/base/dma-mapping.c:294: error: previous implicit declaration
>> of 'dma_common_pages_remap' was here
>>
>> Looks like metag isn't alone either:
>>
>> $ git grep -L dma-mapping-common arch/*/include/asm/dma-mapping.h
>> arch/arc/include/asm/dma-mapping.h
>> arch/avr32/include/asm/dma-mapping.h
>> arch/blackfin/include/asm/dma-mapping.h
>> arch/c6x/include/asm/dma-mapping.h
>> arch/cris/include/asm/dma-mapping.h
>> arch/frv/include/asm/dma-mapping.h
>> arch/m68k/include/asm/dma-mapping.h
>> arch/metag/include/asm/dma-mapping.h
>> arch/mn10300/include/asm/dma-mapping.h
>> arch/parisc/include/asm/dma-mapping.h
>> arch/xtensa/include/asm/dma-mapping.h
>>
>> I've checked a couple of these arches (blackfin, xtensa) which don't
>> include dma-mapping-common.h and their builds seem to be broken too.
>>
>> Cheers
>> James
>>
>
> Thanks for the report. Would you mind giving the following patch
> a test (this is theoretical only but I think it should work)
It certainly fixes the build for metag.
Thanks
James
>
> -----8<------
>
> From 81c9a5504cbc1d72ff1df084d48502b248cd79d0 Mon Sep 17 00:00:00 2001
> From: Laura Abbott <lauraa@codeaurora.org>
> Date: Tue, 26 Aug 2014 09:50:49 -0700
> Subject: [PATCH] common: dma-mapping: Swap function order
>
> Fix the order of dma_common_contiguous_remap and
> dma_common_pages_remap to avoid function declaration errors:
>
> drivers/base/dma-mapping.c: In function 'dma_common_contiguous_remap':
> drivers/base/dma-mapping.c:294: error: implicit declaration of
> function 'dma_common_pages_remap'
> drivers/base/dma-mapping.c:294: warning: assignment makes pointer from
> integer without a cast
> drivers/base/dma-mapping.c: At top level:
> drivers/base/dma-mapping.c:308: error: conflicting types for
> 'dma_common_pages_remap'
> drivers/base/dma-mapping.c:294: error: previous implicit declaration
> of 'dma_common_pages_remap' was here
>
> Change-Id: I65db739114e8f5816a24a279a2ff1a6dc92e2b83
> Reported-by: James Hogan <james.hogan@imgtec.com>
> Reported-by: kbuild test robot <fengguang.wu@intel.com>
> Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
> ---
> drivers/base/dma-mapping.c | 44 ++++++++++++++++++++++----------------------
> 1 file changed, 22 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/base/dma-mapping.c b/drivers/base/dma-mapping.c
> index 1bc46df..056fd46 100644
> --- a/drivers/base/dma-mapping.c
> +++ b/drivers/base/dma-mapping.c
> @@ -271,6 +271,28 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> EXPORT_SYMBOL(dma_common_mmap);
>
> /*
> + * remaps an array of PAGE_SIZE pages into another vm_area
> + * Cannot be used in non-sleeping contexts
> + */
> +void *dma_common_pages_remap(struct page **pages, size_t size,
> + unsigned long vm_flags, pgprot_t prot,
> + const void *caller)
> +{
> + struct vm_struct *area;
> +
> + area = get_vm_area_caller(size, vm_flags, caller);
> + if (!area)
> + return NULL;
> +
> + if (map_vm_area(area, prot, pages)) {
> + vunmap(area->addr);
> + return NULL;
> + }
> +
> + return area->addr;
> +}
> +
> +/*
> * remaps an allocated contiguous region into another vm_area.
> * Cannot be used in non-sleeping contexts
> */
> @@ -299,28 +321,6 @@ void *dma_common_contiguous_remap(struct page *page, size_t size,
> }
>
> /*
> - * remaps an array of PAGE_SIZE pages into another vm_area
> - * Cannot be used in non-sleeping contexts
> - */
> -void *dma_common_pages_remap(struct page **pages, size_t size,
> - unsigned long vm_flags, pgprot_t prot,
> - const void *caller)
> -{
> - struct vm_struct *area;
> -
> - area = get_vm_area_caller(size, vm_flags, caller);
> - if (!area)
> - return NULL;
> -
> - if (map_vm_area(area, prot, pages)) {
> - vunmap(area->addr);
> - return NULL;
> - }
> -
> - return area->addr;
> -}
> -
> -/*
> * unmaps a range previously mapped by dma_common_*_remap
> */
> void dma_common_free_remap(void *cpu_addr, size_t size, unsigned long vm_flags)
>
* [PATCHv7 3/5] common: dma-mapping: Introduce common remapping functions
2014-08-26 16:58 ` Laura Abbott
2014-08-27 9:30 ` James Hogan
@ 2014-08-27 21:01 ` Mark Salter
1 sibling, 0 replies; 10+ messages in thread
From: Mark Salter @ 2014-08-27 21:01 UTC (permalink / raw)
To: linux-arm-kernel
On Tue, 2014-08-26 at 09:58 -0700, Laura Abbott wrote:
> On 8/26/2014 3:05 AM, James Hogan wrote:
> > On 12 August 2014 00:40, Laura Abbott <lauraa@codeaurora.org> wrote:
> >>
> >> For architectures without coherent DMA, memory for DMA may
> >> need to be remapped with coherent attributes. Factor out
> >> the remapping code from arm and put it in a
> >> common location to reduce code duplication.
> >>
> >> As part of this, the arm APIs are now migrated away from
> >> ioremap_page_range to the common APIs which use map_vm_area for remapping.
> >> This should be an equivalent change, and using map_vm_area is more
> >> correct since ioremap_page_range is intended to bring I/O addresses
> >> into the CPU space, not regular kernel-managed memory.
> >>
> >> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> >> Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
> >
> > This commit in linux-next () breaks the build for metag:
> >
> > drivers/base/dma-mapping.c: In function 'dma_common_contiguous_remap':
> > drivers/base/dma-mapping.c:294: error: implicit declaration of
> > function 'dma_common_pages_remap'
> > drivers/base/dma-mapping.c:294: warning: assignment makes pointer from
> > integer without a cast
> > drivers/base/dma-mapping.c: At top level:
> > drivers/base/dma-mapping.c:308: error: conflicting types for
> > 'dma_common_pages_remap'
> > drivers/base/dma-mapping.c:294: error: previous implicit declaration
> > of 'dma_common_pages_remap' was here
> >
> > Looks like metag isn't alone either:
> >
> > $ git grep -L dma-mapping-common arch/*/include/asm/dma-mapping.h
> > arch/arc/include/asm/dma-mapping.h
> > arch/avr32/include/asm/dma-mapping.h
> > arch/blackfin/include/asm/dma-mapping.h
> > arch/c6x/include/asm/dma-mapping.h
> > arch/cris/include/asm/dma-mapping.h
> > arch/frv/include/asm/dma-mapping.h
> > arch/m68k/include/asm/dma-mapping.h
> > arch/metag/include/asm/dma-mapping.h
> > arch/mn10300/include/asm/dma-mapping.h
> > arch/parisc/include/asm/dma-mapping.h
> > arch/xtensa/include/asm/dma-mapping.h
> >
> > I've checked a couple of these arches (blackfin, xtensa) which don't
> > include dma-mapping-common.h and their builds seem to be broken too.
> >
> > Cheers
> > James
> >
>
> Thanks for the report. Would you mind giving the following patch
> a test (this is theoretical only but I think it should work)
There's a further problem with c6x (no MMU):
drivers/built-in.o: In function `dma_common_pages_remap':
(.text+0x220c4): undefined reference to `get_vm_area_caller'
drivers/built-in.o: In function `dma_common_pages_remap':
(.text+0x22108): undefined reference to `map_vm_area'
drivers/built-in.o: In function `dma_common_free_remap':
(.text+0x22278): undefined reference to `find_vm_area'
Thread overview: 10+ messages:
2014-08-11 23:40 [PATCHv7 0/5] DMA Atomic pool for arm64 Laura Abbott
2014-08-11 23:40 ` [PATCHv7 1/5] lib/genalloc.c: Add power aligned algorithm Laura Abbott
2014-08-11 23:40 ` [PATCHv7 2/5] lib/genalloc.c: Add genpool range check function Laura Abbott
2014-08-11 23:40 ` [PATCHv7 3/5] common: dma-mapping: Introduce common remapping functions Laura Abbott
2014-08-26 10:05 ` James Hogan
2014-08-26 16:58 ` Laura Abbott
2014-08-27 9:30 ` James Hogan
2014-08-27 21:01 ` Mark Salter
2014-08-11 23:40 ` [PATCHv7 4/5] arm: use genalloc for the atomic pool Laura Abbott
2014-08-11 23:40 ` [PATCHv7 5/5] arm64: Add atomic pool for non-coherent and CMA allocations Laura Abbott