* [v2 1/4] ARM: dma-mapping: Refactor out to introduce __in_atomic_pool
2012-08-23 6:10 [v2 0/4] ARM: dma-mapping: IOMMU atomic allocation Hiroshi Doyu
@ 2012-08-23 6:10 ` Hiroshi Doyu
2012-08-23 6:10 ` [v2 2/4] ARM: dma-mapping: Use kzalloc() with GFP_ATOMIC Hiroshi Doyu
` (3 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Hiroshi Doyu @ 2012-08-23 6:10 UTC (permalink / raw)
To: m.szyprowski
Cc: Hiroshi Doyu, linux-arm-kernel, linaro-mm-sig, linux-mm,
linux-kernel, kyungmin.park, arnd, linux, chunsang.jeong, vdumpa,
konrad.wilk, subashrp, minchan, pullip.cho
Check whether the given range ("start", "size") is included in the "atomic_pool".
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
---
arch/arm/mm/dma-mapping.c | 25 +++++++++++++++++++------
1 files changed, 19 insertions(+), 6 deletions(-)
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 601da7a..aca2fd0 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -492,19 +492,32 @@ static void *__alloc_from_pool(size_t size, struct page **ret_page)
return ptr;
}
+static bool __in_atomic_pool(void *start, size_t size)
+{
+ struct dma_pool *pool = &atomic_pool;
+ void *end = start + size;
+ void *pool_start = pool->vaddr;
+ void *pool_end = pool->vaddr + pool->size;
+
+ if (start < pool_start || start > pool_end)
+ return false;
+
+ if (end > pool_end) {
+ WARN(1, "freeing wrong coherent size from pool\n");
+ return false;
+ }
+
+ return true;
+}
+
static int __free_from_pool(void *start, size_t size)
{
struct dma_pool *pool = &atomic_pool;
unsigned long pageno, count;
unsigned long flags;
- if (start < pool->vaddr || start > pool->vaddr + pool->size)
- return 0;
-
- if (start + size > pool->vaddr + pool->size) {
- WARN(1, "freeing wrong coherent size from pool\n");
+ if (!__in_atomic_pool(start, size))
return 0;
- }
pageno = (start - pool->vaddr) >> PAGE_SHIFT;
count = size >> PAGE_SHIFT;
--
1.7.5.4
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
* [v2 2/4] ARM: dma-mapping: Use kzalloc() with GFP_ATOMIC
2012-08-23 6:10 [v2 0/4] ARM: dma-mapping: IOMMU atomic allocation Hiroshi Doyu
2012-08-23 6:10 ` [v2 1/4] ARM: dma-mapping: Refactor out to introduce __in_atomic_pool Hiroshi Doyu
@ 2012-08-23 6:10 ` Hiroshi Doyu
2012-08-23 6:10 ` [v2 3/4] ARM: dma-mapping: Refactor out to introduce __alloc_fill_pages Hiroshi Doyu
` (2 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Hiroshi Doyu @ 2012-08-23 6:10 UTC (permalink / raw)
To: m.szyprowski
Cc: Hiroshi Doyu, linux-arm-kernel, linaro-mm-sig, linux-mm,
linux-kernel, kyungmin.park, arnd, linux, chunsang.jeong, vdumpa,
konrad.wilk, subashrp, minchan, pullip.cho
Use kzalloc() with GFP_ATOMIC instead of vzalloc(), which may sleep.
At freeing time, __in_atomic_pool() checks whether the buffer came
from the atomic_pool.
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
---
arch/arm/mm/dma-mapping.c | 7 ++++---
1 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index aca2fd0..b64475a 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1029,7 +1029,7 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size, gfp_t
int array_size = count * sizeof(struct page *);
int i = 0;
- if (array_size <= PAGE_SIZE)
+ if ((array_size <= PAGE_SIZE) || (gfp & GFP_ATOMIC))
pages = kzalloc(array_size, gfp);
else
pages = vzalloc(array_size);
@@ -1061,7 +1061,7 @@ error:
while (i--)
if (pages[i])
__free_pages(pages[i], 0);
- if (array_size <= PAGE_SIZE)
+ if ((array_size <= PAGE_SIZE) || (gfp & GFP_ATOMIC))
kfree(pages);
else
vfree(pages);
@@ -1076,7 +1076,8 @@ static int __iommu_free_buffer(struct device *dev, struct page **pages, size_t s
for (i = 0; i < count; i++)
if (pages[i])
__free_pages(pages[i], 0);
- if (array_size <= PAGE_SIZE)
+ if ((array_size <= PAGE_SIZE) ||
+ __in_atomic_pool(page_address(pages[0]), size))
kfree(pages);
else
vfree(pages);
--
1.7.5.4
* [v2 3/4] ARM: dma-mapping: Refactor out to introduce __alloc_fill_pages
2012-08-23 6:10 [v2 0/4] ARM: dma-mapping: IOMMU atomic allocation Hiroshi Doyu
2012-08-23 6:10 ` [v2 1/4] ARM: dma-mapping: Refactor out to introduce __in_atomic_pool Hiroshi Doyu
2012-08-23 6:10 ` [v2 2/4] ARM: dma-mapping: Use kzalloc() with GFP_ATOMIC Hiroshi Doyu
@ 2012-08-23 6:10 ` Hiroshi Doyu
2012-08-23 6:10 ` [v2 4/4] ARM: dma-mapping: IOMMU allocates pages from atomic_pool with GFP_ATOMIC Hiroshi Doyu
2012-08-23 6:43 ` [v2 0/4] ARM: dma-mapping: IOMMU atomic allocation Minchan Kim
4 siblings, 0 replies; 7+ messages in thread
From: Hiroshi Doyu @ 2012-08-23 6:10 UTC (permalink / raw)
To: m.szyprowski
Cc: Hiroshi Doyu, linux-arm-kernel, linaro-mm-sig, linux-mm,
linux-kernel, kyungmin.park, arnd, linux, chunsang.jeong, vdumpa,
konrad.wilk, subashrp, minchan, pullip.cho
__alloc_fill_pages() repeatedly allocates pages in chunks of the
largest possible power-of-2 size until the requested number of pages
has been allocated.
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
---
arch/arm/mm/dma-mapping.c | 50 ++++++++++++++++++++++++++++++--------------
1 files changed, 34 insertions(+), 16 deletions(-)
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index b64475a..7ab016b 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1022,20 +1022,11 @@ static inline void __free_iova(struct dma_iommu_mapping *mapping,
spin_unlock_irqrestore(&mapping->lock, flags);
}
-static struct page **__iommu_alloc_buffer(struct device *dev, size_t size, gfp_t gfp)
+static int __alloc_fill_pages(struct page ***ppages, int count, gfp_t gfp)
{
- struct page **pages;
- int count = size >> PAGE_SHIFT;
- int array_size = count * sizeof(struct page *);
+ struct page **pages = *ppages;
int i = 0;
- if ((array_size <= PAGE_SIZE) || (gfp & GFP_ATOMIC))
- pages = kzalloc(array_size, gfp);
- else
- pages = vzalloc(array_size);
- if (!pages)
- return NULL;
-
while (count) {
int j, order = __fls(count);
@@ -1045,22 +1036,49 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size, gfp_t
if (!pages[i])
goto error;
- if (order)
+ if (order) {
split_page(pages[i], order);
- j = 1 << order;
- while (--j)
- pages[i + j] = pages[i] + j;
+ j = 1 << order;
+ while (--j)
+ pages[i + j] = pages[i] + j;
+ }
__dma_clear_buffer(pages[i], PAGE_SIZE << order);
i += 1 << order;
count -= 1 << order;
}
- return pages;
+ return 0;
+
error:
while (i--)
if (pages[i])
__free_pages(pages[i], 0);
+ return -ENOMEM;
+}
+
+static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
+ gfp_t gfp)
+{
+ struct page **pages;
+ int count = size >> PAGE_SHIFT;
+ int array_size = count * sizeof(struct page *);
+ int err;
+
+ if ((array_size <= PAGE_SIZE) || (gfp & GFP_ATOMIC))
+ pages = kzalloc(array_size, gfp);
+ else
+ pages = vzalloc(array_size);
+ if (!pages)
+ return NULL;
+
+ err = __alloc_fill_pages(&pages, count, gfp);
+ if (err)
+ goto error
+
+ return pages;
+
+error:
if ((array_size <= PAGE_SIZE) || (gfp & GFP_ATOMIC))
kfree(pages);
else
--
1.7.5.4
* [v2 4/4] ARM: dma-mapping: IOMMU allocates pages from atomic_pool with GFP_ATOMIC
2012-08-23 6:10 [v2 0/4] ARM: dma-mapping: IOMMU atomic allocation Hiroshi Doyu
` (2 preceding siblings ...)
2012-08-23 6:10 ` [v2 3/4] ARM: dma-mapping: Refactor out to introduce __alloc_fill_pages Hiroshi Doyu
@ 2012-08-23 6:10 ` Hiroshi Doyu
2012-08-23 7:33 ` Marek Szyprowski
2012-08-23 6:43 ` [v2 0/4] ARM: dma-mapping: IOMMU atomic allocation Minchan Kim
4 siblings, 1 reply; 7+ messages in thread
From: Hiroshi Doyu @ 2012-08-23 6:10 UTC (permalink / raw)
To: m.szyprowski
Cc: Hiroshi Doyu, linux-arm-kernel, linaro-mm-sig, linux-mm,
linux-kernel, kyungmin.park, arnd, linux, chunsang.jeong, vdumpa,
konrad.wilk, subashrp, minchan, pullip.cho
Use the same atomic pool as the normal DMA path, and skip the kernel
page mapping, which can involve sleepable operations when allocating a
kernel page table.
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
---
arch/arm/mm/dma-mapping.c | 30 +++++++++++++++++++++++++-----
1 files changed, 25 insertions(+), 5 deletions(-)
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 7ab016b..433312a 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1063,7 +1063,6 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
struct page **pages;
int count = size >> PAGE_SHIFT;
int array_size = count * sizeof(struct page *);
- int err;
if ((array_size <= PAGE_SIZE) || (gfp & GFP_ATOMIC))
pages = kzalloc(array_size, gfp);
@@ -1072,9 +1071,20 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
if (!pages)
return NULL;
- err = __alloc_fill_pages(&pages, count, gfp);
- if (err)
- goto error
+ if (gfp & GFP_ATOMIC) {
+ struct page *page;
+ int i;
+ void *addr = __alloc_from_pool(size, &page);
+ if (!addr)
+ goto error;
+
+ for (i = 0; i < count; i++)
+ pages[i] = page + i;
+ } else {
+ int err = __alloc_fill_pages(&pages, count, gfp);
+ if (err)
+ goto error;
+ }
return pages;
@@ -1091,9 +1101,15 @@ static int __iommu_free_buffer(struct device *dev, struct page **pages, size_t s
int count = size >> PAGE_SHIFT;
int array_size = count * sizeof(struct page *);
int i;
+
+ if (__free_from_pool(page_address(pages[0]), size))
+ goto out;
+
for (i = 0; i < count; i++)
if (pages[i])
__free_pages(pages[i], 0);
+
+out:
if ((array_size <= PAGE_SIZE) ||
__in_atomic_pool(page_address(pages[0]), size))
kfree(pages);
@@ -1221,6 +1237,9 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
if (*handle == DMA_ERROR_CODE)
goto err_buffer;
+ if (gfp & GFP_ATOMIC)
+ return page_address(pages[0]);
+
if (dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs))
return pages;
@@ -1279,7 +1298,8 @@ void arm_iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
return;
}
- if (!dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs)) {
+ if (!dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs) ||
+ !__in_atomic_pool(cpu_addr, size)) {
unmap_kernel_range((unsigned long)cpu_addr, size);
vunmap(cpu_addr);
}
--
1.7.5.4
* RE: [v2 4/4] ARM: dma-mapping: IOMMU allocates pages from atomic_pool with GFP_ATOMIC
2012-08-23 6:10 ` [v2 4/4] ARM: dma-mapping: IOMMU allocates pages from atomic_pool with GFP_ATOMIC Hiroshi Doyu
@ 2012-08-23 7:33 ` Marek Szyprowski
0 siblings, 0 replies; 7+ messages in thread
From: Marek Szyprowski @ 2012-08-23 7:33 UTC (permalink / raw)
To: 'Hiroshi Doyu'
Cc: linux-arm-kernel, linaro-mm-sig, linux-mm, linux-kernel,
kyungmin.park, arnd, linux, chunsang.jeong, vdumpa, konrad.wilk,
subashrp, minchan, pullip.cho
Hi Hiroshi,
On Thursday, August 23, 2012 8:10 AM Hiroshi Doyu wrote:
> Use the same atomic pool as the normal DMA path, and skip the kernel
> page mapping, which can involve sleepable operations when allocating
> a kernel page table.
>
> Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
> ---
> arch/arm/mm/dma-mapping.c | 30 +++++++++++++++++++++++++-----
> 1 files changed, 25 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> index 7ab016b..433312a 100644
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -1063,7 +1063,6 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t
> size,
> struct page **pages;
> int count = size >> PAGE_SHIFT;
> int array_size = count * sizeof(struct page *);
> - int err;
>
> if ((array_size <= PAGE_SIZE) || (gfp & GFP_ATOMIC))
> pages = kzalloc(array_size, gfp);
> @@ -1072,9 +1071,20 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t
> size,
> if (!pages)
> return NULL;
>
> - err = __alloc_fill_pages(&pages, count, gfp);
> - if (err)
> - goto error
> + if (gfp & GFP_ATOMIC) {
> + struct page *page;
> + int i;
> + void *addr = __alloc_from_pool(size, &page);
> + if (!addr)
> + goto error;
> +
> + for (i = 0; i < count; i++)
> + pages[i] = page + i;
> + } else {
> + int err = __alloc_fill_pages(&pages, count, gfp);
> + if (err)
> + goto error;
> + }
>
> return pages;
>
> @@ -1091,9 +1101,15 @@ static int __iommu_free_buffer(struct device *dev, struct page **pages,
> size_t s
> int count = size >> PAGE_SHIFT;
> int array_size = count * sizeof(struct page *);
> int i;
> +
> + if (__free_from_pool(page_address(pages[0]), size))
> + goto out;
> +
> for (i = 0; i < count; i++)
> if (pages[i])
> __free_pages(pages[i], 0);
> +
> +out:
> if ((array_size <= PAGE_SIZE) ||
> __in_atomic_pool(page_address(pages[0]), size))
> kfree(pages);
> @@ -1221,6 +1237,9 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
> if (*handle == DMA_ERROR_CODE)
> goto err_buffer;
>
> + if (gfp & GFP_ATOMIC)
> + return page_address(pages[0]);
> +
> if (dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs))
> return pages;
>
I've read the whole code and it looks like it is starting to become a little spaghetti - there are too
many places altered by these atomic changes, and it is hard to follow the logic at first sight.
IMHO it would be better to add the following function:
static void *__iommu_alloc_atomic(struct device *dev, size_t size,
				  dma_addr_t *handle)
{
	struct page *page, **pages;
	int count = size >> PAGE_SHIFT;
	int array_size = count * sizeof(struct page *);
	void *addr;
	int i;

	pages = kzalloc(array_size, GFP_ATOMIC);
	if (!pages)
		return NULL;

	addr = __alloc_from_pool(size, &page);
	if (!addr)
		goto err_pool;

	for (i = 0; i < count; i++)
		pages[i] = page + i;

	*handle = __iommu_create_mapping(dev, pages, size);
	if (*handle == DMA_ERROR_CODE)
		goto err_mapping;

	return addr;

err_mapping:
	__free_from_pool(addr, size);
err_pool:
	kfree(pages);
	return NULL;
}
and then call it at the beginning of arm_iommu_alloc_attrs():

	if (gfp & GFP_ATOMIC)
		return __iommu_alloc_atomic(dev, size, handle);
You should also add support for allocations from the atomic_pool to the __iommu_get_pages()
function, so that dma_mmap() and dma_get_sgtable() work correctly.
> @@ -1279,7 +1298,8 @@ void arm_iommu_free_attrs(struct device *dev, size_t size, void
> *cpu_addr,
> return;
> }
>
> - if (!dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs)) {
> + if (!dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs) ||
> + !__in_atomic_pool(cpu_addr, size)) {
> unmap_kernel_range((unsigned long)cpu_addr, size);
> vunmap(cpu_addr);
> }
Are you sure this one works? __iommu_get_pages() won't find the pages array, because atomic
allocations don't get their own vm_struct area, so this check will never be reached. Maybe
a __iommu_free_atomic() function would also make it safer and easier to understand.
Best regards
--
Marek Szyprowski
Samsung Poland R&D Center
* Re: [v2 0/4] ARM: dma-mapping: IOMMU atomic allocation
2012-08-23 6:10 [v2 0/4] ARM: dma-mapping: IOMMU atomic allocation Hiroshi Doyu
` (3 preceding siblings ...)
2012-08-23 6:10 ` [v2 4/4] ARM: dma-mapping: IOMMU allocates pages from atomic_pool with GFP_ATOMIC Hiroshi Doyu
@ 2012-08-23 6:43 ` Minchan Kim
4 siblings, 0 replies; 7+ messages in thread
From: Minchan Kim @ 2012-08-23 6:43 UTC (permalink / raw)
To: Hiroshi Doyu
Cc: m.szyprowski, linux-arm-kernel, linaro-mm-sig, linux-mm,
linux-kernel, kyungmin.park, arnd, linux, chunsang.jeong, vdumpa,
konrad.wilk, subashrp, pullip.cho
On Thu, Aug 23, 2012 at 09:10:25AM +0300, Hiroshi Doyu wrote:
> Hi,
>
> The commit e9da6e9 "ARM: dma-mapping: remove custom consistent dma
> region" breaks compatibility with existing drivers and causes the
> following kernel oops(*1). The driver in question calls
> dma_pool_alloc() to allocate memory from interrupt context, and it
> hits BUG_ON(in_interrupt()) in "get_vm_area_caller()". This patch
> series fixes the problem by making use of the pre-allocated atomic
> memory pool in the same way the DMA code does now.
>
> Any comment would be really appreciated.
>
> v2:
> Don't modify attrs(DMA_ATTR_NO_KERNEL_MAPPING) for atomic allocation. (Marek)
> Skip vzalloc (KyongHo, Minchan)
Huh? I would like to set the record straight: I didn't say that unifying on kzalloc is okay.
As KyongHo said, there are other use cases for allocating big buffers, but
I can't agree with his opinion that an allocation failure indicates a system
memory shortage, because there are lots of freeable pages (cached pages, anon
pages backed by swap, shrinkable slabs, and so on). But as I said earlier, in
atomic context the VM can't do anything except rely on kswapd, even while
there is a ton of freeable pages in the system, and we can't guarantee kswapd
will do something for us before the page allocation failure is seen in
process context. UP is even more of a problem. So a failure never indicates
that the system needs more memory. Atomic high-order allocation is very
fragile from the VM's point of view, so the caller should have a fallback
mechanism. Otherwise, don't do it.
--
Kind regards,
Minchan Kim