* [PATCH 1/8] mm: use vm_unmapped_area() on parisc architecture
From: Michel Lespinasse @ 2013-01-09 1:28 UTC
To: Rik van Riel, Benjamin Herrenschmidt, James E.J. Bottomley,
Matt Turner, David Howells, Tony Luck
Cc: linux-ia64, linux-parisc, linux-kernel, linux-mm, linux-alpha,
Andrew Morton, linuxppc-dev
Update the parisc arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse <walken@google.com>
---
arch/parisc/kernel/sys_parisc.c | 46 ++++++++++++++------------------------
1 files changed, 17 insertions(+), 29 deletions(-)
diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
index f76c10863c62..6ab138088076 100644
--- a/arch/parisc/kernel/sys_parisc.c
+++ b/arch/parisc/kernel/sys_parisc.c
@@ -35,18 +35,15 @@
static unsigned long get_unshared_area(unsigned long addr, unsigned long len)
{
- struct vm_area_struct *vma;
+ struct vm_unmapped_area_info info;
- addr = PAGE_ALIGN(addr);
-
- for (vma = find_vma(current->mm, addr); ; vma = vma->vm_next) {
- /* At this point: (!vma || addr < vma->vm_end). */
- if (TASK_SIZE - len < addr)
- return -ENOMEM;
- if (!vma || addr + len <= vma->vm_start)
- return addr;
- addr = vma->vm_end;
- }
+ info.flags = 0;
+ info.length = len;
+ info.low_limit = PAGE_ALIGN(addr);
+ info.high_limit = TASK_SIZE;
+ info.align_mask = 0;
+ info.align_offset = 0;
+ return vm_unmapped_area(&info);
}
#define DCACHE_ALIGN(addr) (((addr) + (SHMLBA - 1)) &~ (SHMLBA - 1))
@@ -63,30 +60,21 @@ static unsigned long get_unshared_area(unsigned long addr, unsigned long len)
*/
static int get_offset(struct address_space *mapping)
{
- int offset = (unsigned long) mapping << (PAGE_SHIFT - 8);
- return offset & 0x3FF000;
+ return (unsigned long) mapping >> 8;
}
static unsigned long get_shared_area(struct address_space *mapping,
unsigned long addr, unsigned long len, unsigned long pgoff)
{
- struct vm_area_struct *vma;
- int offset = mapping ? get_offset(mapping) : 0;
-
- offset = (offset + (pgoff << PAGE_SHIFT)) & 0x3FF000;
+ struct vm_unmapped_area_info info;
- addr = DCACHE_ALIGN(addr - offset) + offset;
-
- for (vma = find_vma(current->mm, addr); ; vma = vma->vm_next) {
- /* At this point: (!vma || addr < vma->vm_end). */
- if (TASK_SIZE - len < addr)
- return -ENOMEM;
- if (!vma || addr + len <= vma->vm_start)
- return addr;
- addr = DCACHE_ALIGN(vma->vm_end - offset) + offset;
- if (addr < vma->vm_end) /* handle wraparound */
- return -ENOMEM;
- }
+ info.flags = 0;
+ info.length = len;
+ info.low_limit = PAGE_ALIGN(addr);
+ info.high_limit = TASK_SIZE;
+ info.align_mask = PAGE_MASK & (SHMLBA - 1);
+ info.align_offset = (get_offset(mapping) + pgoff) << PAGE_SHIFT;
+ return vm_unmapped_area(&info);
}
unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
--
1.7.7.3
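For readers new to the interface: vm_unmapped_area() searches
[low_limit, high_limit) for a free range of info.length bytes whose
address agrees with align_offset on the bits selected by align_mask,
and returns that address or a negative errno. A commented sketch of
the shared-mapping case this patch converts (illustrative, not a
literal copy of the patched get_shared_area()):

    struct vm_unmapped_area_info info;

    info.flags = 0;                     /* bottom-up search */
    info.length = len;                  /* bytes of free space needed */
    info.low_limit = PAGE_ALIGN(addr);  /* never below the hint */
    info.high_limit = TASK_SIZE;
    /* Cache coloring: force the color bits (within SHMLBA) of the
     * result to match those of the file offset, so every shared
     * mapping of the same page lands on the same cache color. */
    info.align_mask = PAGE_MASK & (SHMLBA - 1);
    info.align_offset = (get_offset(mapping) + pgoff) << PAGE_SHIFT;
    return vm_unmapped_area(&info);     /* colored address or -ENOMEM */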
* Re: [PATCH 1/8] mm: use vm_unmapped_area() on parisc architecture
From: Rik van Riel @ 2013-01-09 16:56 UTC
To: Michel Lespinasse
Cc: Tony Luck, linux-ia64, linux-parisc, James E.J. Bottomley,
linux-kernel, David Howells, linux-mm, linux-alpha, Matt Turner,
linuxppc-dev, Andrew Morton
On 01/08/2013 08:28 PM, Michel Lespinasse wrote:
> Update the parisc arch_get_unmapped_area function to make use of
> vm_unmapped_area() instead of implementing a brute force search.
>
> Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
* [PATCH 2/8] mm: use vm_unmapped_area() on alpha architecture
From: Michel Lespinasse @ 2013-01-09 1:28 UTC
To: Rik van Riel, Benjamin Herrenschmidt, James E.J. Bottomley,
Matt Turner, David Howells, Tony Luck
Cc: linux-ia64, linux-parisc, linux-kernel, linux-mm, linux-alpha,
Andrew Morton, linuxppc-dev
Update the alpha arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse <walken@google.com>
---
arch/alpha/kernel/osf_sys.c | 20 +++++++++-----------
1 files changed, 9 insertions(+), 11 deletions(-)
diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
index 14db93e4c8a8..ba707e23ef37 100644
--- a/arch/alpha/kernel/osf_sys.c
+++ b/arch/alpha/kernel/osf_sys.c
@@ -1298,17 +1298,15 @@ static unsigned long
arch_get_unmapped_area_1(unsigned long addr, unsigned long len,
unsigned long limit)
{
- struct vm_area_struct *vma = find_vma(current->mm, addr);
-
- while (1) {
- /* At this point: (!vma || addr < vma->vm_end). */
- if (limit - len < addr)
- return -ENOMEM;
- if (!vma || addr + len <= vma->vm_start)
- return addr;
- addr = vma->vm_end;
- vma = vma->vm_next;
- }
+ struct vm_unmapped_area_info info;
+
+ info.flags = 0;
+ info.length = len;
+ info.low_limit = addr;
+ info.high_limit = limit;
+ info.align_mask = 0;
+ info.align_offset = 0;
+ return vm_unmapped_area(&info);
}
unsigned long
--
1.7.7.3
* Re: [PATCH 2/8] mm: use vm_unmapped_area() on alpha architecture
From: Rik van Riel @ 2013-01-09 17:01 UTC
To: Michel Lespinasse
Cc: Tony Luck, linux-ia64, linux-parisc, James E.J. Bottomley,
linux-kernel, David Howells, linux-mm, linux-alpha, Matt Turner,
linuxppc-dev, Andrew Morton
On 01/08/2013 08:28 PM, Michel Lespinasse wrote:
> Update the alpha arch_get_unmapped_area function to make use of
> vm_unmapped_area() instead of implementing a brute force search.
>
> Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
* Re: [PATCH 2/8] mm: use vm_unmapped_area() on alpha architecture
From: Michael Cree @ 2013-01-25 3:49 UTC
To: Michel Lespinasse
Cc: Rik van Riel, Tony Luck, linux-ia64, linux-parisc,
James E.J. Bottomley, linux-kernel, David Howells, linux-mm,
linux-alpha, Matt Turner, linuxppc-dev, Andrew Morton
On 9/01/2013, at 2:28 PM, Michel Lespinasse wrote:
> Update the alpha arch_get_unmapped_area function to make use of
> vm_unmapped_area() instead of implementing a brute force search.
>
> Signed-off-by: Michel Lespinasse <walken@google.com>
'Tis running fine on my alpha.
Tested-by: Michael Cree <mcree@orcon.net.nz>
Cheers
Michael.
* [PATCH 3/8] mm: use vm_unmapped_area() on frv architecture
From: Michel Lespinasse @ 2013-01-09 1:28 UTC
To: Rik van Riel, Benjamin Herrenschmidt, James E.J. Bottomley,
Matt Turner, David Howells, Tony Luck
Cc: linux-ia64, linux-parisc, linux-kernel, linux-mm, linux-alpha,
Andrew Morton, linuxppc-dev
Update the frv arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse <walken@google.com>
---
arch/frv/mm/elf-fdpic.c | 49 ++++++++++++++++------------------------------
1 files changed, 17 insertions(+), 32 deletions(-)
diff --git a/arch/frv/mm/elf-fdpic.c b/arch/frv/mm/elf-fdpic.c
index 385fd30b142f..836f14707a62 100644
--- a/arch/frv/mm/elf-fdpic.c
+++ b/arch/frv/mm/elf-fdpic.c
@@ -60,7 +60,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
unsigned long pgoff, unsigned long flags)
{
struct vm_area_struct *vma;
- unsigned long limit;
+ struct vm_unmapped_area_info info;
if (len > TASK_SIZE)
return -ENOMEM;
@@ -79,39 +79,24 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
}
/* search between the bottom of user VM and the stack grow area */
- addr = PAGE_SIZE;
- limit = (current->mm->start_stack - 0x00200000);
- if (addr + len <= limit) {
- limit -= len;
-
- if (addr <= limit) {
- vma = find_vma(current->mm, PAGE_SIZE);
- for (; vma; vma = vma->vm_next) {
- if (addr > limit)
- break;
- if (addr + len <= vma->vm_start)
- goto success;
- addr = vma->vm_end;
- }
- }
- }
+ info.flags = 0;
+ info.length = len;
+ info.low_limit = PAGE_SIZE;
+ info.high_limit = (current->mm->start_stack - 0x00200000);
+ info.align_mask = 0;
+ info.align_offset = 0;
+ addr = vm_unmapped_area(&info);
+ if (!(addr & ~PAGE_MASK))
+ goto success;
+ VM_BUG_ON(addr != -ENOMEM);
/* search from just above the WorkRAM area to the top of memory */
- addr = PAGE_ALIGN(0x80000000);
- limit = TASK_SIZE - len;
- if (addr <= limit) {
- vma = find_vma(current->mm, addr);
- for (; vma; vma = vma->vm_next) {
- if (addr > limit)
- break;
- if (addr + len <= vma->vm_start)
- goto success;
- addr = vma->vm_end;
- }
-
- if (!vma && addr <= limit)
- goto success;
- }
+ info.low_limit = PAGE_ALIGN(0x80000000);
+ info.high_limit = TASK_SIZE;
+ addr = vm_unmapped_area(&info);
+ if (!(addr & ~PAGE_MASK))
+ goto success;
+ VM_BUG_ON(addr != -ENOMEM);
#if 0
printk("[area] l=%lx (ENOMEM) f='%s'\n",
--
1.7.7.3
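The "!(addr & ~PAGE_MASK)" checks above rely on vm_unmapped_area()
returning either a page-aligned address or a negative errno cast to
unsigned long; since no errno value is page-aligned, a single bit test
tells the two apart. Spelled out as a hypothetical helper (uva_failed()
is not a kernel function, just a sketch of the idiom):

    /* Hypothetical helper; -ENOMEM is 0xff...f4 as unsigned long, so
     * its low bits are set and it never looks page-aligned. */
    static inline bool uva_failed(unsigned long addr)
    {
        return addr & ~PAGE_MASK;
    }

    addr = vm_unmapped_area(&info);
    if (uva_failed(addr))
        return addr;    /* propagate the errno to the caller */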
* Re: [PATCH 3/8] mm: use vm_unmapped_area() on frv architecture
From: Rik van Riel @ 2013-01-09 18:25 UTC
To: Michel Lespinasse
Cc: Tony Luck, linux-ia64, linux-parisc, James E.J. Bottomley,
linux-kernel, David Howells, linux-mm, linux-alpha, Matt Turner,
linuxppc-dev, Andrew Morton
On 01/08/2013 08:28 PM, Michel Lespinasse wrote:
> Update the frv arch_get_unmapped_area function to make use of
> vm_unmapped_area() instead of implementing a brute force search.
>
> Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
* [PATCH 4/8] mm: use vm_unmapped_area() on ia64 architecture
From: Michel Lespinasse @ 2013-01-09 1:28 UTC
To: Rik van Riel, Benjamin Herrenschmidt, James E.J. Bottomley,
Matt Turner, David Howells, Tony Luck
Cc: linux-ia64, linux-parisc, linux-kernel, linux-mm, linux-alpha,
Andrew Morton, linuxppc-dev
Update the ia64 arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse <walken@google.com>
---
arch/ia64/kernel/sys_ia64.c | 37 ++++++++++++-------------------------
1 files changed, 12 insertions(+), 25 deletions(-)
diff --git a/arch/ia64/kernel/sys_ia64.c b/arch/ia64/kernel/sys_ia64.c
index d9439ef2f661..41e33f84c185 100644
--- a/arch/ia64/kernel/sys_ia64.c
+++ b/arch/ia64/kernel/sys_ia64.c
@@ -25,9 +25,9 @@ arch_get_unmapped_area (struct file *filp, unsigned long addr, unsigned long len
unsigned long pgoff, unsigned long flags)
{
long map_shared = (flags & MAP_SHARED);
- unsigned long start_addr, align_mask = PAGE_SIZE - 1;
+ unsigned long align_mask = 0;
struct mm_struct *mm = current->mm;
- struct vm_area_struct *vma;
+ struct vm_unmapped_area_info info;
if (len > RGN_MAP_LIMIT)
return -ENOMEM;
@@ -44,7 +44,7 @@ arch_get_unmapped_area (struct file *filp, unsigned long addr, unsigned long len
addr = 0;
#endif
if (!addr)
- addr = mm->free_area_cache;
+ addr = TASK_UNMAPPED_BASE;
if (map_shared && (TASK_SIZE > 0xfffffffful))
/*
@@ -53,28 +53,15 @@ arch_get_unmapped_area (struct file *filp, unsigned long addr, unsigned long len
* tasks, we prefer to avoid exhausting the address space too quickly by
* limiting alignment to a single page.
*/
- align_mask = SHMLBA - 1;
-
- full_search:
- start_addr = addr = (addr + align_mask) & ~align_mask;
-
- for (vma = find_vma(mm, addr); ; vma = vma->vm_next) {
- /* At this point: (!vma || addr < vma->vm_end). */
- if (TASK_SIZE - len < addr || RGN_MAP_LIMIT - len < REGION_OFFSET(addr)) {
- if (start_addr != TASK_UNMAPPED_BASE) {
- /* Start a new search --- just in case we missed some holes. */
- addr = TASK_UNMAPPED_BASE;
- goto full_search;
- }
- return -ENOMEM;
- }
- if (!vma || addr + len <= vma->vm_start) {
- /* Remember the address where we stopped this search: */
- mm->free_area_cache = addr + len;
- return addr;
- }
- addr = (vma->vm_end + align_mask) & ~align_mask;
- }
+ align_mask = PAGE_MASK & (SHMLBA - 1);
+
+ info.flags = 0;
+ info.length = len;
+ info.low_limit = addr;
+ info.high_limit = TASK_SIZE;
+ info.align_mask = align_mask;
+ info.align_offset = 0;
+ return vm_unmapped_area(&info);
}
asmlinkage long
--
1.7.7.3
* Re: [PATCH 4/8] mm: use vm_unmapped_area() on ia64 architecture
From: Rik van Riel @ 2013-01-09 18:29 UTC
To: Michel Lespinasse
Cc: Tony Luck, linux-ia64, linux-parisc, James E.J. Bottomley,
linux-kernel, David Howells, linux-mm, linux-alpha, Matt Turner,
linuxppc-dev, Andrew Morton
On 01/08/2013 08:28 PM, Michel Lespinasse wrote:
> Update the ia64 arch_get_unmapped_area function to make use of
> vm_unmapped_area() instead of implementing a brute force search.
>
> Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
* [PATCH 5/8] mm: use vm_unmapped_area() in hugetlbfs on ia64 architecture
From: Michel Lespinasse @ 2013-01-09 1:28 UTC
To: Rik van Riel, Benjamin Herrenschmidt, James E.J. Bottomley,
Matt Turner, David Howells, Tony Luck
Cc: linux-ia64, linux-parisc, linux-kernel, linux-mm, linux-alpha,
Andrew Morton, linuxppc-dev
Update the ia64 hugetlb_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse <walken@google.com>
---
arch/ia64/mm/hugetlbpage.c | 20 +++++++++-----------
1 files changed, 9 insertions(+), 11 deletions(-)
diff --git a/arch/ia64/mm/hugetlbpage.c b/arch/ia64/mm/hugetlbpage.c
index 5ca674b74737..76069c18ee42 100644
--- a/arch/ia64/mm/hugetlbpage.c
+++ b/arch/ia64/mm/hugetlbpage.c
@@ -148,7 +148,7 @@ void hugetlb_free_pgd_range(struct mmu_gather *tlb,
unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
unsigned long pgoff, unsigned long flags)
{
- struct vm_area_struct *vmm;
+ struct vm_unmapped_area_info info;
if (len > RGN_MAP_LIMIT)
return -ENOMEM;
@@ -165,16 +165,14 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr, u
/* This code assumes that RGN_HPAGE != 0. */
if ((REGION_NUMBER(addr) != RGN_HPAGE) || (addr & (HPAGE_SIZE - 1)))
addr = HPAGE_REGION_BASE;
- else
- addr = ALIGN(addr, HPAGE_SIZE);
- for (vmm = find_vma(current->mm, addr); ; vmm = vmm->vm_next) {
- /* At this point: (!vmm || addr < vmm->vm_end). */
- if (REGION_OFFSET(addr) + len > RGN_MAP_LIMIT)
- return -ENOMEM;
- if (!vmm || (addr + len) <= vmm->vm_start)
- return addr;
- addr = ALIGN(vmm->vm_end, HPAGE_SIZE);
- }
+
+ info.flags = 0;
+ info.length = len;
+ info.low_limit = addr;
+ info.high_limit = HPAGE_REGION_BASE + RGN_MAP_LIMIT;
+ info.align_mask = PAGE_MASK & (HPAGE_SIZE - 1);
+ info.align_offset = 0;
+ return vm_unmapped_area(&info);
}
static int __init hugetlb_setup_sz(char *str)
--
1.7.7.3
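To see what the align_mask in this patch selects, take hypothetical
values of PAGE_SIZE = 16K and HPAGE_SIZE = 4M (chosen for easy
arithmetic, not asserted as ia64 defaults):

    /* PAGE_MASK                    = ~0x3fff  (clears page-offset bits)
     * HPAGE_SIZE - 1               =  0x3fffff
     * PAGE_MASK & (HPAGE_SIZE - 1) =  0x3fc000
     *
     * Only the bits between the base page size and the huge page size
     * are constrained; with align_offset = 0 they must all be zero, so
     * vm_unmapped_area() hands back HPAGE_SIZE-aligned addresses while
     * the gap search itself still works in base pages. */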
* Re: [PATCH 5/8] mm: use vm_unmapped_area() in hugetlbfs on ia64 architecture
From: Rik van Riel @ 2013-01-09 18:32 UTC
To: Michel Lespinasse
Cc: Tony Luck, linux-ia64, linux-parisc, James E.J. Bottomley,
linux-kernel, David Howells, linux-mm, linux-alpha, Matt Turner,
linuxppc-dev, Andrew Morton
On 01/08/2013 08:28 PM, Michel Lespinasse wrote:
> Update the ia64 hugetlb_get_unmapped_area function to make use of
> vm_unmapped_area() instead of implementing a brute force search.
>
> Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
* [PATCH 6/8] mm: remove free_area_cache use in powerpc architecture
From: Michel Lespinasse @ 2013-01-09 1:28 UTC
To: Rik van Riel, Benjamin Herrenschmidt, James E.J. Bottomley,
Matt Turner, David Howells, Tony Luck
Cc: linux-ia64, linux-parisc, linux-kernel, linux-mm, linux-alpha,
Andrew Morton, linuxppc-dev
As all other architectures have been converted to use vm_unmapped_area(),
we are about to retire the free_area_cache.
This change simply removes the use of that cache in
slice_get_unmapped_area(), which will most certainly have a
performance cost. The next patch will convert that function to use the
vm_unmapped_area() infrastructure and regain the performance.
Signed-off-by: Michel Lespinasse <walken@google.com>
---
arch/powerpc/include/asm/page_64.h | 3 +-
arch/powerpc/mm/hugetlbpage.c | 2 +-
arch/powerpc/mm/slice.c | 108 +++++------------------------
arch/powerpc/platforms/cell/spufs/file.c | 2 +-
4 files changed, 22 insertions(+), 93 deletions(-)
diff --git a/arch/powerpc/include/asm/page_64.h b/arch/powerpc/include/asm/page_64.h
index cd915d6b093d..88693cef4f3d 100644
--- a/arch/powerpc/include/asm/page_64.h
+++ b/arch/powerpc/include/asm/page_64.h
@@ -99,8 +99,7 @@ extern unsigned long slice_get_unmapped_area(unsigned long addr,
unsigned long len,
unsigned long flags,
unsigned int psize,
- int topdown,
- int use_cache);
+ int topdown);
extern unsigned int get_slice_psize(struct mm_struct *mm,
unsigned long addr);
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 1a6de0a7d8eb..5dc52d803ed8 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -742,7 +742,7 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
struct hstate *hstate = hstate_file(file);
int mmu_psize = shift_to_mmu_psize(huge_page_shift(hstate));
- return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1, 0);
+ return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1);
}
#endif
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index cf9dada734b6..999a74f25ebe 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -240,23 +240,15 @@ static void slice_convert(struct mm_struct *mm, struct slice_mask mask, int psiz
static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
unsigned long len,
struct slice_mask available,
- int psize, int use_cache)
+ int psize)
{
struct vm_area_struct *vma;
- unsigned long start_addr, addr;
+ unsigned long addr;
struct slice_mask mask;
int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
- if (use_cache) {
- if (len <= mm->cached_hole_size) {
- start_addr = addr = TASK_UNMAPPED_BASE;
- mm->cached_hole_size = 0;
- } else
- start_addr = addr = mm->free_area_cache;
- } else
- start_addr = addr = TASK_UNMAPPED_BASE;
+ addr = TASK_UNMAPPED_BASE;
-full_search:
for (;;) {
addr = _ALIGN_UP(addr, 1ul << pshift);
if ((TASK_SIZE - len) < addr)
@@ -272,63 +264,24 @@ full_search:
addr = _ALIGN_UP(addr + 1, 1ul << SLICE_HIGH_SHIFT);
continue;
}
- if (!vma || addr + len <= vma->vm_start) {
- /*
- * Remember the place where we stopped the search:
- */
- if (use_cache)
- mm->free_area_cache = addr + len;
+ if (!vma || addr + len <= vma->vm_start)
return addr;
- }
- if (use_cache && (addr + mm->cached_hole_size) < vma->vm_start)
- mm->cached_hole_size = vma->vm_start - addr;
addr = vma->vm_end;
}
- /* Make sure we didn't miss any holes */
- if (use_cache && start_addr != TASK_UNMAPPED_BASE) {
- start_addr = addr = TASK_UNMAPPED_BASE;
- mm->cached_hole_size = 0;
- goto full_search;
- }
return -ENOMEM;
}
static unsigned long slice_find_area_topdown(struct mm_struct *mm,
unsigned long len,
struct slice_mask available,
- int psize, int use_cache)
+ int psize)
{
struct vm_area_struct *vma;
unsigned long addr;
struct slice_mask mask;
int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
- /* check if free_area_cache is useful for us */
- if (use_cache) {
- if (len <= mm->cached_hole_size) {
- mm->cached_hole_size = 0;
- mm->free_area_cache = mm->mmap_base;
- }
-
- /* either no address requested or can't fit in requested
- * address hole
- */
- addr = mm->free_area_cache;
-
- /* make sure it can fit in the remaining address space */
- if (addr > len) {
- addr = _ALIGN_DOWN(addr - len, 1ul << pshift);
- mask = slice_range_to_mask(addr, len);
- if (slice_check_fit(mask, available) &&
- slice_area_is_free(mm, addr, len))
- /* remember the address as a hint for
- * next time
- */
- return (mm->free_area_cache = addr);
- }
- }
-
addr = mm->mmap_base;
while (addr > len) {
/* Go down by chunk size */
@@ -352,16 +305,8 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
* return with success:
*/
vma = find_vma(mm, addr);
- if (!vma || (addr + len) <= vma->vm_start) {
- /* remember the address as a hint for next time */
- if (use_cache)
- mm->free_area_cache = addr;
+ if (!vma || (addr + len) <= vma->vm_start)
return addr;
- }
-
- /* remember the largest hole we saw so far */
- if (use_cache && (addr + mm->cached_hole_size) < vma->vm_start)
- mm->cached_hole_size = vma->vm_start - addr;
/* try just below the current vma->vm_start */
addr = vma->vm_start;
@@ -373,28 +318,18 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
* can happen with large stack limits and large mmap()
* allocations.
*/
- addr = slice_find_area_bottomup(mm, len, available, psize, 0);
-
- /*
- * Restore the topdown base:
- */
- if (use_cache) {
- mm->free_area_cache = mm->mmap_base;
- mm->cached_hole_size = ~0UL;
- }
-
- return addr;
+ return slice_find_area_bottomup(mm, len, available, psize);
}
static unsigned long slice_find_area(struct mm_struct *mm, unsigned long len,
struct slice_mask mask, int psize,
- int topdown, int use_cache)
+ int topdown)
{
if (topdown)
- return slice_find_area_topdown(mm, len, mask, psize, use_cache);
+ return slice_find_area_topdown(mm, len, mask, psize);
else
- return slice_find_area_bottomup(mm, len, mask, psize, use_cache);
+ return slice_find_area_bottomup(mm, len, mask, psize);
}
#define or_mask(dst, src) do { \
@@ -415,7 +350,7 @@ static unsigned long slice_find_area(struct mm_struct *mm, unsigned long len,
unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
unsigned long flags, unsigned int psize,
- int topdown, int use_cache)
+ int topdown)
{
struct slice_mask mask = {0, 0};
struct slice_mask good_mask;
@@ -430,8 +365,8 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
BUG_ON(mm->task_size == 0);
slice_dbg("slice_get_unmapped_area(mm=%p, psize=%d...\n", mm, psize);
- slice_dbg(" addr=%lx, len=%lx, flags=%lx, topdown=%d, use_cache=%d\n",
- addr, len, flags, topdown, use_cache);
+ slice_dbg(" addr=%lx, len=%lx, flags=%lx, topdown=%d\n",
+ addr, len, flags, topdown);
if (len > mm->task_size)
return -ENOMEM;
@@ -503,8 +438,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
/* Now let's see if we can find something in the existing
* slices for that size
*/
- newaddr = slice_find_area(mm, len, good_mask, psize, topdown,
- use_cache);
+ newaddr = slice_find_area(mm, len, good_mask, psize, topdown);
if (newaddr != -ENOMEM) {
/* Found within the good mask, we don't have to setup,
* we thus return directly
@@ -536,8 +470,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
* anywhere in the good area.
*/
if (addr) {
- addr = slice_find_area(mm, len, good_mask, psize, topdown,
- use_cache);
+ addr = slice_find_area(mm, len, good_mask, psize, topdown);
if (addr != -ENOMEM) {
slice_dbg(" found area at 0x%lx\n", addr);
return addr;
@@ -547,15 +480,14 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
/* Now let's see if we can find something in the existing slices
* for that size plus free slices
*/
- addr = slice_find_area(mm, len, potential_mask, psize, topdown,
- use_cache);
+ addr = slice_find_area(mm, len, potential_mask, psize, topdown);
#ifdef CONFIG_PPC_64K_PAGES
if (addr == -ENOMEM && psize == MMU_PAGE_64K) {
/* retry the search with 4k-page slices included */
or_mask(potential_mask, compat_mask);
addr = slice_find_area(mm, len, potential_mask, psize,
- topdown, use_cache);
+ topdown);
}
#endif
@@ -586,8 +518,7 @@ unsigned long arch_get_unmapped_area(struct file *filp,
unsigned long flags)
{
return slice_get_unmapped_area(addr, len, flags,
- current->mm->context.user_psize,
- 0, 1);
+ current->mm->context.user_psize, 0);
}
unsigned long arch_get_unmapped_area_topdown(struct file *filp,
@@ -597,8 +528,7 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp,
const unsigned long flags)
{
return slice_get_unmapped_area(addr0, len, flags,
- current->mm->context.user_psize,
- 1, 1);
+ current->mm->context.user_psize, 1);
}
unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
diff --git a/arch/powerpc/platforms/cell/spufs/file.c b/arch/powerpc/platforms/cell/spufs/file.c
index 0cfece4cf6ef..2eb4df2a9388 100644
--- a/arch/powerpc/platforms/cell/spufs/file.c
+++ b/arch/powerpc/platforms/cell/spufs/file.c
@@ -352,7 +352,7 @@ static unsigned long spufs_get_unmapped_area(struct file *file,
/* Else, try to obtain a 64K pages slice */
return slice_get_unmapped_area(addr, len, flags,
- MMU_PAGE_64K, 1, 0);
+ MMU_PAGE_64K, 1);
}
#endif /* CONFIG_SPU_FS_64K_LS */
--
1.7.7.3
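For context, the cache being retired worked roughly like this (a
schematic of the logic deleted above, not the literal slice.c code):

    /* First-fit hint kept per mm: */
    if (len <= mm->cached_hole_size) {
        /* a hole this large may exist below the hint: rescan all */
        addr = TASK_UNMAPPED_BASE;
        mm->cached_hole_size = 0;
    } else {
        /* otherwise resume where the previous search left off */
        addr = mm->free_area_cache;
    }
    /* ... linear vma walk; on success the end of the new mapping is
     * stored back into mm->free_area_cache, and a failed search that
     * did not start at TASK_UNMAPPED_BASE is restarted once from there
     * in case a hole below the hint was skipped. */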
* Re: [PATCH 6/8] mm: remove free_area_cache use in powerpc architecture
From: Rik van Riel @ 2013-01-09 20:57 UTC
To: Michel Lespinasse
Cc: Tony Luck, linux-ia64, linux-parisc, James E.J. Bottomley,
linux-kernel, David Howells, linux-mm, linux-alpha, Matt Turner,
linuxppc-dev, Andrew Morton
On 01/08/2013 08:28 PM, Michel Lespinasse wrote:
> As all other architectures have been converted to use vm_unmapped_area(),
> we are about to retire the free_area_cache.
>
> This change simply removes the use of that cache in
> slice_get_unmapped_area(), which will most certainly have a
> performance cost. The next patch will convert that function to use the
> vm_unmapped_area() infrastructure and regain the performance.
>
> Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
* [PATCH 7/8] mm: use vm_unmapped_area() on powerpc architecture
From: Michel Lespinasse @ 2013-01-09 1:28 UTC
To: Rik van Riel, Benjamin Herrenschmidt, James E.J. Bottomley,
Matt Turner, David Howells, Tony Luck
Cc: linux-ia64, linux-parisc, linux-kernel, linux-mm, linux-alpha,
Andrew Morton, linuxppc-dev
Update the powerpc slice_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse <walken@google.com>
---
arch/powerpc/mm/slice.c | 128 +++++++++++++++++++++++++++++-----------------
1 files changed, 81 insertions(+), 47 deletions(-)
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 999a74f25ebe..048346b7eed5 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -242,31 +242,51 @@ static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
struct slice_mask available,
int psize)
{
- struct vm_area_struct *vma;
- unsigned long addr;
- struct slice_mask mask;
int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
+ unsigned long addr, found, slice;
+ struct vm_unmapped_area_info info;
- addr = TASK_UNMAPPED_BASE;
+ info.flags = 0;
+ info.length = len;
+ info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
+ info.align_offset = 0;
- for (;;) {
- addr = _ALIGN_UP(addr, 1ul << pshift);
- if ((TASK_SIZE - len) < addr)
- break;
- vma = find_vma(mm, addr);
- BUG_ON(vma && (addr >= vma->vm_end));
+ addr = TASK_UNMAPPED_BASE;
+ while (addr < TASK_SIZE) {
+ info.low_limit = addr;
+ if (addr < SLICE_LOW_TOP) {
+ slice = GET_LOW_SLICE_INDEX(addr);
+ addr = (slice + 1) << SLICE_LOW_SHIFT;
+ if (!(available.low_slices & (1u << slice)))
+ continue;
+ } else {
+ slice = GET_HIGH_SLICE_INDEX(addr);
+ addr = (slice + 1) << SLICE_HIGH_SHIFT;
+ if (!(available.high_slices & (1u << slice)))
+ continue;
+ }
- mask = slice_range_to_mask(addr, len);
- if (!slice_check_fit(mask, available)) {
- if (addr < SLICE_LOW_TOP)
- addr = _ALIGN_UP(addr + 1, 1ul << SLICE_LOW_SHIFT);
- else
- addr = _ALIGN_UP(addr + 1, 1ul << SLICE_HIGH_SHIFT);
- continue;
+ next_slice:
+ if (addr >= TASK_SIZE)
+ addr = TASK_SIZE;
+ else if (addr < SLICE_LOW_TOP) {
+ slice = GET_LOW_SLICE_INDEX(addr);
+ if (available.low_slices & (1u << slice)) {
+ addr = (slice + 1) << SLICE_LOW_SHIFT;
+ goto next_slice;
+ }
+ } else {
+ slice = GET_HIGH_SLICE_INDEX(addr);
+ if (available.high_slices & (1u << slice)) {
+ addr = (slice + 1) << SLICE_HIGH_SHIFT;
+ goto next_slice;
+ }
}
- if (!vma || addr + len <= vma->vm_start)
- return addr;
- addr = vma->vm_end;
+ info.high_limit = addr;
+
+ found = vm_unmapped_area(&info);
+ if (!(found & ~PAGE_MASK))
+ return found;
}
return -ENOMEM;
@@ -277,39 +297,53 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
struct slice_mask available,
int psize)
{
- struct vm_area_struct *vma;
- unsigned long addr;
- struct slice_mask mask;
int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
+ unsigned long addr, found, slice;
+ struct vm_unmapped_area_info info;
- addr = mm->mmap_base;
- while (addr > len) {
- /* Go down by chunk size */
- addr = _ALIGN_DOWN(addr - len, 1ul << pshift);
+ info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+ info.length = len;
+ info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
+ info.align_offset = 0;
- /* Check for hit with different page size */
- mask = slice_range_to_mask(addr, len);
- if (!slice_check_fit(mask, available)) {
- if (addr < SLICE_LOW_TOP)
- addr = _ALIGN_DOWN(addr, 1ul << SLICE_LOW_SHIFT);
- else if (addr < (1ul << SLICE_HIGH_SHIFT))
- addr = SLICE_LOW_TOP;
- else
- addr = _ALIGN_DOWN(addr, 1ul << SLICE_HIGH_SHIFT);
- continue;
+ addr = mm->mmap_base;
+ while (addr > PAGE_SIZE) {
+ info.high_limit = addr;
+ if (addr < SLICE_LOW_TOP) {
+ slice = GET_LOW_SLICE_INDEX(addr - 1);
+ addr = slice << SLICE_LOW_SHIFT;
+ if (!(available.low_slices & (1u << slice)))
+ continue;
+ } else {
+ slice = GET_HIGH_SLICE_INDEX(addr - 1);
+ addr = slice ? (slice << SLICE_HIGH_SHIFT) :
+ SLICE_LOW_TOP;
+ if (!(available.high_slices & (1u << slice)))
+ continue;
}
- /*
- * Lookup failure means no vma is above this address,
- * else if new region fits below vma->vm_start,
- * return with success:
- */
- vma = find_vma(mm, addr);
- if (!vma || (addr + len) <= vma->vm_start)
- return addr;
+ next_slice:
+ if (addr < PAGE_SIZE)
+ addr = PAGE_SIZE;
+ else if (addr < SLICE_LOW_TOP) {
+ slice = GET_LOW_SLICE_INDEX(addr - 1);
+ if (available.low_slices & (1u << slice)) {
+ addr = slice << SLICE_LOW_SHIFT;
+ goto next_slice;
+ }
+ } else {
+ slice = GET_HIGH_SLICE_INDEX(addr - 1);
+ if (available.high_slices & (1u << slice)) {
+ addr = slice ? (slice << SLICE_HIGH_SHIFT) :
+ SLICE_LOW_TOP;
+ goto next_slice;
+ }
+ }
+ info.low_limit = addr;
- /* try just below the current vma->vm_start */
- addr = vma->vm_start;
+ found = vm_unmapped_area(&info);
+ if (!(found & ~PAGE_MASK))
+ return found;
}
/*
--
1.7.7.3
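Relative to the earlier patches in the series, the only new piece of
the vm_unmapped_area() interface used here is VM_UNMAPPED_AREA_TOPDOWN.
A minimal sketch of the two directions (length, limits and alignment
fields mean the same thing in both cases):

    info.flags = 0;
    /* bottom-up: lowest fitting gap at or above info.low_limit */
    found = vm_unmapped_area(&info);

    info.flags = VM_UNMAPPED_AREA_TOPDOWN;
    /* top-down: highest fitting gap ending at or below info.high_limit */
    found = vm_unmapped_area(&info);

The slice loops above mirror that choice: the bottomup variant walks
runs of available slices upward from TASK_UNMAPPED_BASE, while the
topdown variant walks them downward from mm->mmap_base.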
* Re: [PATCH 7/8] mm: use vm_unmapped_area() on powerpc architecture
From: Benjamin Herrenschmidt @ 2013-01-09 2:15 UTC
To: Michel Lespinasse
Cc: Rik van Riel, Tony Luck, linux-ia64, linux-parisc,
James E.J. Bottomley, linux-kernel, David Howells, linux-mm,
linux-alpha, Matt Turner, linuxppc-dev, Andrew Morton
On Tue, 2013-01-08 at 17:28 -0800, Michel Lespinasse wrote:
> Update the powerpc slice_get_unmapped_area function to make use of
> vm_unmapped_area() instead of implementing a brute force search.
>
> Signed-off-by: Michel Lespinasse <walken@google.com>
>
> ---
> arch/powerpc/mm/slice.c | 128 +++++++++++++++++++++++++++++-----------------
> 1 files changed, 81 insertions(+), 47 deletions(-)
That doesn't look good ... the resulting code is longer than the
original, which makes me wonder how it is an improvement...
Now it could just be a matter of how the code is factored; I see
quite a bit of duplication of the whole slice mask test...
Cheers,
Ben.
> diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
> index 999a74f25ebe..048346b7eed5 100644
> --- a/arch/powerpc/mm/slice.c
> +++ b/arch/powerpc/mm/slice.c
> @@ -242,31 +242,51 @@ static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
> struct slice_mask available,
> int psize)
> {
> - struct vm_area_struct *vma;
> - unsigned long addr;
> - struct slice_mask mask;
> int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
> + unsigned long addr, found, slice;
> + struct vm_unmapped_area_info info;
>
> - addr = TASK_UNMAPPED_BASE;
> + info.flags = 0;
> + info.length = len;
> + info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
> + info.align_offset = 0;
>
> - for (;;) {
> - addr = _ALIGN_UP(addr, 1ul << pshift);
> - if ((TASK_SIZE - len) < addr)
> - break;
> - vma = find_vma(mm, addr);
> - BUG_ON(vma && (addr >= vma->vm_end));
> + addr = TASK_UNMAPPED_BASE;
> + while (addr < TASK_SIZE) {
> + info.low_limit = addr;
> + if (addr < SLICE_LOW_TOP) {
> + slice = GET_LOW_SLICE_INDEX(addr);
> + addr = (slice + 1) << SLICE_LOW_SHIFT;
> + if (!(available.low_slices & (1u << slice)))
> + continue;
> + } else {
> + slice = GET_HIGH_SLICE_INDEX(addr);
> + addr = (slice + 1) << SLICE_HIGH_SHIFT;
> + if (!(available.high_slices & (1u << slice)))
> + continue;
> + }
>
> - mask = slice_range_to_mask(addr, len);
> - if (!slice_check_fit(mask, available)) {
> - if (addr < SLICE_LOW_TOP)
> - addr = _ALIGN_UP(addr + 1, 1ul << SLICE_LOW_SHIFT);
> - else
> - addr = _ALIGN_UP(addr + 1, 1ul << SLICE_HIGH_SHIFT);
> - continue;
> + next_slice:
> + if (addr >= TASK_SIZE)
> + addr = TASK_SIZE;
> + else if (addr < SLICE_LOW_TOP) {
> + slice = GET_LOW_SLICE_INDEX(addr);
> + if (available.low_slices & (1u << slice)) {
> + addr = (slice + 1) << SLICE_LOW_SHIFT;
> + goto next_slice;
> + }
> + } else {
> + slice = GET_HIGH_SLICE_INDEX(addr);
> + if (available.high_slices & (1u << slice)) {
> + addr = (slice + 1) << SLICE_HIGH_SHIFT;
> + goto next_slice;
> + }
> }
> - if (!vma || addr + len <= vma->vm_start)
> - return addr;
> - addr = vma->vm_end;
> + info.high_limit = addr;
> +
> + found = vm_unmapped_area(&info);
> + if (!(found & ~PAGE_MASK))
> + return found;
> }
>
> return -ENOMEM;
> @@ -277,39 +297,53 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
> struct slice_mask available,
> int psize)
> {
> - struct vm_area_struct *vma;
> - unsigned long addr;
> - struct slice_mask mask;
> int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
> + unsigned long addr, found, slice;
> + struct vm_unmapped_area_info info;
>
> - addr = mm->mmap_base;
> - while (addr > len) {
> - /* Go down by chunk size */
> - addr = _ALIGN_DOWN(addr - len, 1ul << pshift);
> + info.flags = VM_UNMAPPED_AREA_TOPDOWN;
> + info.length = len;
> + info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
> + info.align_offset = 0;
>
> - /* Check for hit with different page size */
> - mask = slice_range_to_mask(addr, len);
> - if (!slice_check_fit(mask, available)) {
> - if (addr < SLICE_LOW_TOP)
> - addr = _ALIGN_DOWN(addr, 1ul << SLICE_LOW_SHIFT);
> - else if (addr < (1ul << SLICE_HIGH_SHIFT))
> - addr = SLICE_LOW_TOP;
> - else
> - addr = _ALIGN_DOWN(addr, 1ul << SLICE_HIGH_SHIFT);
> - continue;
> + addr = mm->mmap_base;
> + while (addr > PAGE_SIZE) {
> + info.high_limit = addr;
> + if (addr < SLICE_LOW_TOP) {
> + slice = GET_LOW_SLICE_INDEX(addr - 1);
> + addr = slice << SLICE_LOW_SHIFT;
> + if (!(available.low_slices & (1u << slice)))
> + continue;
> + } else {
> + slice = GET_HIGH_SLICE_INDEX(addr - 1);
> + addr = slice ? (slice << SLICE_HIGH_SHIFT) :
> + SLICE_LOW_TOP;
> + if (!(available.high_slices & (1u << slice)))
> + continue;
> }
>
> - /*
> - * Lookup failure means no vma is above this address,
> - * else if new region fits below vma->vm_start,
> - * return with success:
> - */
> - vma = find_vma(mm, addr);
> - if (!vma || (addr + len) <= vma->vm_start)
> - return addr;
> + next_slice:
> + if (addr < PAGE_SIZE)
> + addr = PAGE_SIZE;
> + else if (addr < SLICE_LOW_TOP) {
> + slice = GET_LOW_SLICE_INDEX(addr - 1);
> + if (available.low_slices & (1u << slice)) {
> + addr = slice << SLICE_LOW_SHIFT;
> + goto next_slice;
> + }
> + } else {
> + slice = GET_HIGH_SLICE_INDEX(addr - 1);
> + if (available.high_slices & (1u << slice)) {
> + addr = slice ? (slice << SLICE_HIGH_SHIFT) :
> + SLICE_LOW_TOP;
> + goto next_slice;
> + }
> + }
> + info.low_limit = addr;
>
> - /* try just below the current vma->vm_start */
> - addr = vma->vm_start;
> + found = vm_unmapped_area(&info);
> + if (!(found & ~PAGE_MASK))
> + return found;
> }
>
> /*
* Re: [PATCH 7/8] mm: use vm_unmapped_area() on powerpc architecture
From: Michel Lespinasse @ 2013-01-09 2:38 UTC
To: Benjamin Herrenschmidt
Cc: Rik van Riel, Tony Luck, linux-ia64, linux-parisc,
James E.J. Bottomley, linux-kernel, David Howells, linux-mm,
linux-alpha, Matt Turner, linuxppc-dev, Andrew Morton
On Tue, Jan 8, 2013 at 6:15 PM, Benjamin Herrenschmidt
<benh@kernel.crashing.org> wrote:
> On Tue, 2013-01-08 at 17:28 -0800, Michel Lespinasse wrote:
>> Update the powerpc slice_get_unmapped_area function to make use of
>> vm_unmapped_area() instead of implementing a brute force search.
>>
>> Signed-off-by: Michel Lespinasse <walken@google.com>
>>
>> ---
>> arch/powerpc/mm/slice.c | 128 +++++++++++++++++++++++++++++-----------------
>> 1 files changed, 81 insertions(+), 47 deletions(-)
>
> That doesn't look good ... the resulting code is longer than the
> original, which makes me wonder how it is an improvement...
Well no fair, the previous patch (for powerpc as well) has 22
insertions and 93 deletions :)
The benefit is that the new code has lower algorithmic complexity: it
replaces an O(N) per-vma loop with an outer loop that finds contiguous
slice blocks and passes them to vm_unmapped_area(), which is only
O(log N). So the new code will be faster for workloads which use lots
of vmas.
That said, I do agree that the code that looks for contiguous
available slices looks kinda ugly - just not sure how to make it look
nicer though.
--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.
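For reference, the O(log N) figure comes from the augmented rbtree
behind vm_unmapped_area(): every vma node caches rb_subtree_gap, the
largest free gap anywhere in its subtree, so the search can discard
whole subtrees that cannot fit the request. A much-simplified sketch
(root_vma(), left_of(), right_of(), gap_before() and gap_start() are
hypothetical helpers standing in for the real rb_entry() pointer
chasing in mm/mmap.c, and limit handling is omitted):

    /* Pad the request so any gap found can absorb the alignment. */
    length = info->length + info->align_mask;

    vma = root_vma(mm);
    while (vma) {
        if (left_of(vma) && left_of(vma)->rb_subtree_gap >= length)
            vma = left_of(vma);      /* a fitting gap lies lower down */
        else if (gap_before(vma) >= length)
            return gap_start(vma);   /* lowest fitting gap */
        else
            vma = right_of(vma);     /* nothing fits below; go higher */
    }
    return -ENOMEM;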
* Re: [PATCH 7/8] mm: use vm_unmapped_area() on powerpc architecture
From: Benjamin Herrenschmidt @ 2013-01-09 3:32 UTC
To: Michel Lespinasse
Cc: Rik van Riel, Tony Luck, linux-ia64, linux-parisc,
James E.J. Bottomley, linux-kernel, David Howells, linux-mm,
linux-alpha, Matt Turner, linuxppc-dev, Andrew Morton
On Tue, 2013-01-08 at 18:38 -0800, Michel Lespinasse wrote:
>
> Well no fair, the previous patch (for powerpc as well) has 22
> insertions and 93 deletions :)
>
> The benefit is that the new code has lower algorithmic complexity, it
> replaces a per-vma loop with O(N) complexity with an outer loop that
> finds contiguous slice blocks and passes them to vm_unmapped_area()
> which is only O(log N) complexity. So the new code will be faster for
> workloads which use lots of vmas.
>
> That said, I do agree that the code that looks for contiguous
> available slices looks kinda ugly - just not sure how to make it look
> nicer though.
Ok. I think at least you can move that construct:
+ if (addr < SLICE_LOW_TOP) {
+ slice = GET_LOW_SLICE_INDEX(addr);
+ addr = (slice + 1) << SLICE_LOW_SHIFT;
+ if (!(available.low_slices & (1u << slice)))
+ continue;
+ } else {
+ slice = GET_HIGH_SLICE_INDEX(addr);
+ addr = (slice + 1) << SLICE_HIGH_SHIFT;
+ if (!(available.high_slices & (1u << slice)))
+ continue;
+ }
Into some kind of helper. It will probably compile to the same thing but
at least it's more readable and it will avoid a fuckup in the future if
somebody changes the algorithm and forgets to update one of the
copies :-)
Cheers,
Ben.
* Re: [PATCH 7/8] mm: use vm_unmapped_area() on powerpc architecture
From: Michel Lespinasse @ 2013-01-09 11:23 UTC
To: Benjamin Herrenschmidt
Cc: Rik van Riel, Tony Luck, linux-ia64, linux-parisc,
James E.J. Bottomley, linux-kernel, David Howells, linux-mm,
linux-alpha, Matt Turner, linuxppc-dev, Andrew Morton
On Wed, Jan 09, 2013 at 02:32:56PM +1100, Benjamin Herrenschmidt wrote:
> Ok. I think at least you can move that construct:
>
> + if (addr < SLICE_LOW_TOP) {
> + slice = GET_LOW_SLICE_INDEX(addr);
> + addr = (slice + 1) << SLICE_LOW_SHIFT;
> + if (!(available.low_slices & (1u << slice)))
> + continue;
> + } else {
> + slice = GET_HIGH_SLICE_INDEX(addr);
> + addr = (slice + 1) << SLICE_HIGH_SHIFT;
> + if (!(available.high_slices & (1u << slice)))
> + continue;
> + }
>
> Into some kind of helper. It will probably compile to the same thing but
> at least it's more readable and it will avoid a fuckup in the future if
> somebody changes the algorithm and forgets to update one of the
> copies :-)
All right, does the following look more palatable then?
(didn't re-test it, though)
Signed-off-by: Michel Lespinasse <walken@google.com>
---
arch/powerpc/mm/slice.c | 123 ++++++++++++++++++++++++++++++-----------------
1 files changed, 78 insertions(+), 45 deletions(-)
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 999a74f25ebe..3e99c149271a 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -237,36 +237,69 @@ static void slice_convert(struct mm_struct *mm, struct slice_mask mask, int psiz
#endif
}
+/*
+ * Compute which slice addr is part of;
+ * set *boundary_addr to the start or end boundary of that slice
+ * (depending on 'end' parameter);
+ * return boolean indicating if the slice is marked as available in the
+ * 'available' slice_mask.
+ */
+static bool slice_scan_available(unsigned long addr,
+ struct slice_mask available,
+ int end,
+ unsigned long *boundary_addr)
+{
+ unsigned long slice;
+ if (addr < SLICE_LOW_TOP) {
+ slice = GET_LOW_SLICE_INDEX(addr);
+ *boundary_addr = (slice + end) << SLICE_LOW_SHIFT;
+ return !!(available.low_slices & (1u << slice));
+ } else {
+ slice = GET_HIGH_SLICE_INDEX(addr);
+ *boundary_addr = (slice + end) ?
+ ((slice + end) << SLICE_HIGH_SHIFT) : SLICE_LOW_TOP;
+ return !!(available.high_slices & (1u << slice));
+ }
+}
+
static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
unsigned long len,
struct slice_mask available,
int psize)
{
- struct vm_area_struct *vma;
- unsigned long addr;
- struct slice_mask mask;
int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
+ unsigned long addr, found, next_end;
+ struct vm_unmapped_area_info info;
- addr = TASK_UNMAPPED_BASE;
-
- for (;;) {
- addr = _ALIGN_UP(addr, 1ul << pshift);
- if ((TASK_SIZE - len) < addr)
- break;
- vma = find_vma(mm, addr);
- BUG_ON(vma && (addr >= vma->vm_end));
+ info.flags = 0;
+ info.length = len;
+ info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
+ info.align_offset = 0;
- mask = slice_range_to_mask(addr, len);
- if (!slice_check_fit(mask, available)) {
- if (addr < SLICE_LOW_TOP)
- addr = _ALIGN_UP(addr + 1, 1ul << SLICE_LOW_SHIFT);
- else
- addr = _ALIGN_UP(addr + 1, 1ul << SLICE_HIGH_SHIFT);
+ addr = TASK_UNMAPPED_BASE;
+ while (addr < TASK_SIZE) {
+ info.low_limit = addr;
+ if (!slice_scan_available(addr, available, 1, &addr))
continue;
+
+ next_slice:
+ /*
+ * At this point [info.low_limit; addr) covers
+ * available slices only and ends at a slice boundary.
+ * Check if we need to reduce the range, or if we can
+ * extend it to cover the next available slice.
+ */
+ if (addr >= TASK_SIZE)
+ addr = TASK_SIZE;
+ else if (slice_scan_available(addr, available, 1, &next_end)) {
+ addr = next_end;
+ goto next_slice;
}
- if (!vma || addr + len <= vma->vm_start)
- return addr;
- addr = vma->vm_end;
+ info.high_limit = addr;
+
+ found = vm_unmapped_area(&info);
+ if (!(found & ~PAGE_MASK))
+ return found;
}
return -ENOMEM;
@@ -277,39 +310,39 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
struct slice_mask available,
int psize)
{
- struct vm_area_struct *vma;
- unsigned long addr;
- struct slice_mask mask;
int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
+ unsigned long addr, found, prev;
+ struct vm_unmapped_area_info info;
- addr = mm->mmap_base;
- while (addr > len) {
- /* Go down by chunk size */
- addr = _ALIGN_DOWN(addr - len, 1ul << pshift);
+ info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+ info.length = len;
+ info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
+ info.align_offset = 0;
- /* Check for hit with different page size */
- mask = slice_range_to_mask(addr, len);
- if (!slice_check_fit(mask, available)) {
- if (addr < SLICE_LOW_TOP)
- addr = _ALIGN_DOWN(addr, 1ul << SLICE_LOW_SHIFT);
- else if (addr < (1ul << SLICE_HIGH_SHIFT))
- addr = SLICE_LOW_TOP;
- else
- addr = _ALIGN_DOWN(addr, 1ul << SLICE_HIGH_SHIFT);
+ addr = mm->mmap_base;
+ while (addr > PAGE_SIZE) {
+ info.high_limit = addr;
+ if (!slice_scan_available(addr - 1, available, 0, &addr))
continue;
- }
+ prev_slice:
/*
- * Lookup failure means no vma is above this address,
- * else if new region fits below vma->vm_start,
- * return with success:
+ * At this point [addr; info.high_limit) covers
+ * available slices only and starts at a slice boundary.
+ * Check if we need to reduce the range, or if we can
+ * extend it to cover the previous available slice.
*/
- vma = find_vma(mm, addr);
- if (!vma || (addr + len) <= vma->vm_start)
- return addr;
+ if (addr < PAGE_SIZE)
+ addr = PAGE_SIZE;
+ else if (slice_scan_available(addr - 1, available, 0, &prev)) {
+ addr = prev;
+ goto prev_slice;
+ }
+ info.low_limit = addr;
- /* try just below the current vma->vm_start */
- addr = vma->vm_start;
+ found = vm_unmapped_area(&info);
+ if (!(found & ~PAGE_MASK))
+ return found;
}
/*
--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.
* Re: [PATCH 7/8] mm: use vm_unmapped_area() on powerpc architecture
From: Rik van Riel @ 2013-01-09 21:41 UTC
To: Michel Lespinasse
Cc: Tony Luck, linux-ia64, linux-parisc, James E.J. Bottomley,
linux-kernel, David Howells, linux-mm, linux-alpha, Matt Turner,
linuxppc-dev, Andrew Morton
On 01/09/2013 06:23 AM, Michel Lespinasse wrote:
> On Wed, Jan 09, 2013 at 02:32:56PM +1100, Benjamin Herrenschmidt wrote:
>> Ok. I think at least you can move that construct:
>>
>> + if (addr < SLICE_LOW_TOP) {
>> + slice = GET_LOW_SLICE_INDEX(addr);
>> + addr = (slice + 1) << SLICE_LOW_SHIFT;
>> + if (!(available.low_slices & (1u << slice)))
>> + continue;
>> + } else {
>> + slice = GET_HIGH_SLICE_INDEX(addr);
>> + addr = (slice + 1) << SLICE_HIGH_SHIFT;
>> + if (!(available.high_slices & (1u << slice)))
>> + continue;
>> + }
>>
>> Into some kind of helper. It will probably compile to the same thing but
>> at least it's more readable and it will avoid a fuckup in the future if
>> somebody changes the algorithm and forgets to update one of the
>> copies :-)
>
> All right, does the following look more palatable then ?
> (didn't re-test it, though)
Looks equivalent. I have also not tested :)
> Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
* Re: [PATCH 7/8] mm: use vm_unmapped_area() on powerpc architecture
From: Rik van Riel @ 2013-01-09 21:24 UTC
To: Michel Lespinasse
Cc: Tony Luck, linux-ia64, linux-parisc, James E.J. Bottomley,
linux-kernel, David Howells, linux-mm, linux-alpha, Matt Turner,
linuxppc-dev, Andrew Morton
On 01/08/2013 08:28 PM, Michel Lespinasse wrote:
> Update the powerpc slice_get_unmapped_area function to make use of
> vm_unmapped_area() instead of implementing a brute force search.
>
> Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
* [PATCH 8/8] mm: remove free_area_cache
From: Michel Lespinasse @ 2013-01-09 1:28 UTC
To: Rik van Riel, Benjamin Herrenschmidt, James E.J. Bottomley,
Matt Turner, David Howells, Tony Luck
Cc: linux-ia64, linux-parisc, linux-kernel, linux-mm, linux-alpha,
Andrew Morton, linuxppc-dev
Since all architectures have been converted to use vm_unmapped_area(),
there is no remaining use for the free_area_cache.
Signed-off-by: Michel Lespinasse <walken@google.com>
---
arch/arm/mm/mmap.c | 2 --
arch/arm64/mm/mmap.c | 2 --
arch/mips/mm/mmap.c | 2 --
arch/powerpc/mm/mmap_64.c | 2 --
arch/s390/mm/mmap.c | 4 ----
arch/sparc/kernel/sys_sparc_64.c | 2 --
arch/tile/mm/mmap.c | 2 --
arch/x86/ia32/ia32_aout.c | 2 --
arch/x86/mm/mmap.c | 2 --
fs/binfmt_aout.c | 2 --
fs/binfmt_elf.c | 2 --
include/linux/mm_types.h | 3 ---
include/linux/sched.h | 2 --
kernel/fork.c | 4 ----
mm/mmap.c | 28 ----------------------------
mm/nommu.c | 4 ----
mm/util.c | 1 -
17 files changed, 0 insertions(+), 66 deletions(-)
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index 10062ceadd1c..0c6356255fe3 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -181,11 +181,9 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (mmap_is_legacy()) {
mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
mm->get_unmapped_area = arch_get_unmapped_area;
- mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base(random_factor);
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
- mm->unmap_area = arch_unmap_area_topdown;
}
}
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 7c7be7855638..8ed6cb1a900f 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -90,11 +90,9 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (mmap_is_legacy()) {
mm->mmap_base = TASK_UNMAPPED_BASE;
mm->get_unmapped_area = arch_get_unmapped_area;
- mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base();
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
- mm->unmap_area = arch_unmap_area_topdown;
}
}
EXPORT_SYMBOL_GPL(arch_pick_mmap_layout);
diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
index d9be7540a6be..f4e63c29d044 100644
--- a/arch/mips/mm/mmap.c
+++ b/arch/mips/mm/mmap.c
@@ -158,11 +158,9 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (mmap_is_legacy()) {
mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
mm->get_unmapped_area = arch_get_unmapped_area;
- mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base(random_factor);
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
- mm->unmap_area = arch_unmap_area_topdown;
}
}
diff --git a/arch/powerpc/mm/mmap_64.c b/arch/powerpc/mm/mmap_64.c
index 67a42ed0d2fc..cb8bdbe4972f 100644
--- a/arch/powerpc/mm/mmap_64.c
+++ b/arch/powerpc/mm/mmap_64.c
@@ -92,10 +92,8 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (mmap_is_legacy()) {
mm->mmap_base = TASK_UNMAPPED_BASE;
mm->get_unmapped_area = arch_get_unmapped_area;
- mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base();
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
- mm->unmap_area = arch_unmap_area_topdown;
}
}
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index c59a5efa58b1..f2a462625c9e 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -91,11 +91,9 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (mmap_is_legacy()) {
mm->mmap_base = TASK_UNMAPPED_BASE;
mm->get_unmapped_area = arch_get_unmapped_area;
- mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base();
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
- mm->unmap_area = arch_unmap_area_topdown;
}
}
@@ -173,11 +171,9 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (mmap_is_legacy()) {
mm->mmap_base = TASK_UNMAPPED_BASE;
mm->get_unmapped_area = s390_get_unmapped_area;
- mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base();
mm->get_unmapped_area = s390_get_unmapped_area_topdown;
- mm->unmap_area = arch_unmap_area_topdown;
}
}
diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index 708bc29d36a8..f3c169f9d3a1 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -290,7 +290,6 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
sysctl_legacy_va_layout) {
mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
mm->get_unmapped_area = arch_get_unmapped_area;
- mm->unmap_area = arch_unmap_area;
} else {
/* We know it's 32-bit */
unsigned long task_size = STACK_TOP32;
@@ -302,7 +301,6 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
mm->mmap_base = PAGE_ALIGN(task_size - gap - random_factor);
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
- mm->unmap_area = arch_unmap_area_topdown;
}
}
diff --git a/arch/tile/mm/mmap.c b/arch/tile/mm/mmap.c
index f96f4cec602a..d67d91ebf63e 100644
--- a/arch/tile/mm/mmap.c
+++ b/arch/tile/mm/mmap.c
@@ -66,10 +66,8 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (!is_32bit || rlimit(RLIMIT_STACK) == RLIM_INFINITY) {
mm->mmap_base = TASK_UNMAPPED_BASE;
mm->get_unmapped_area = arch_get_unmapped_area;
- mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base(mm);
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
- mm->unmap_area = arch_unmap_area_topdown;
}
}
diff --git a/arch/x86/ia32/ia32_aout.c b/arch/x86/ia32/ia32_aout.c
index a703af19c281..3b3558577642 100644
--- a/arch/x86/ia32/ia32_aout.c
+++ b/arch/x86/ia32/ia32_aout.c
@@ -309,8 +309,6 @@ static int load_aout_binary(struct linux_binprm *bprm)
(current->mm->start_data = N_DATADDR(ex));
current->mm->brk = ex.a_bss +
(current->mm->start_brk = N_BSSADDR(ex));
- current->mm->free_area_cache = TASK_UNMAPPED_BASE;
- current->mm->cached_hole_size = 0;
retval = setup_arg_pages(bprm, IA32_STACK_TOP, EXSTACK_DEFAULT);
if (retval < 0) {
diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
index 845df6835f9f..62c29a5bfe26 100644
--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -115,10 +115,8 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (mmap_is_legacy()) {
mm->mmap_base = mmap_legacy_base();
mm->get_unmapped_area = arch_get_unmapped_area;
- mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base();
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
- mm->unmap_area = arch_unmap_area_topdown;
}
}
diff --git a/fs/binfmt_aout.c b/fs/binfmt_aout.c
index 6043567b95c2..692e75ca6415 100644
--- a/fs/binfmt_aout.c
+++ b/fs/binfmt_aout.c
@@ -256,8 +256,6 @@ static int load_aout_binary(struct linux_binprm * bprm)
(current->mm->start_data = N_DATADDR(ex));
current->mm->brk = ex.a_bss +
(current->mm->start_brk = N_BSSADDR(ex));
- current->mm->free_area_cache = current->mm->mmap_base;
- current->mm->cached_hole_size = 0;
retval = setup_arg_pages(bprm, STACK_TOP, EXSTACK_DEFAULT);
if (retval < 0) {
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 0c42cdbabecf..e2087dea9c1e 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -730,8 +730,6 @@ static int load_elf_binary(struct linux_binprm *bprm)
/* Do this so that we can load the interpreter, if need be. We will
change some of these later */
- current->mm->free_area_cache = current->mm->mmap_base;
- current->mm->cached_hole_size = 0;
retval = setup_arg_pages(bprm, randomize_stack_top(STACK_TOP),
executable_stack);
if (retval < 0) {
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index f8f5162a3571..e50eb047ea8a 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -329,12 +329,9 @@ struct mm_struct {
unsigned long (*get_unmapped_area) (struct file *filp,
unsigned long addr, unsigned long len,
unsigned long pgoff, unsigned long flags);
- void (*unmap_area) (struct mm_struct *mm, unsigned long addr);
#endif
unsigned long mmap_base; /* base of mmap area */
unsigned long task_size; /* size of task vm space */
- unsigned long cached_hole_size; /* if non-zero, the largest hole below free_area_cache */
- unsigned long free_area_cache; /* first hole of size cached_hole_size or larger */
unsigned long highest_vm_end; /* highest vma end address */
pgd_t * pgd;
atomic_t mm_users; /* How many users with user space? */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 206bb089c06b..fa7e0a60ebe9 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -366,8 +366,6 @@ extern unsigned long
arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff,
unsigned long flags);
-extern void arch_unmap_area(struct mm_struct *, unsigned long);
-extern void arch_unmap_area_topdown(struct mm_struct *, unsigned long);
#else
static inline void arch_pick_mmap_layout(struct mm_struct *mm) {}
#endif
diff --git a/kernel/fork.c b/kernel/fork.c
index a31b823b3c2d..bdf61755ef4a 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -364,8 +364,6 @@ static int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
mm->locked_vm = 0;
mm->mmap = NULL;
mm->mmap_cache = NULL;
- mm->free_area_cache = oldmm->mmap_base;
- mm->cached_hole_size = ~0UL;
mm->map_count = 0;
cpumask_clear(mm_cpumask(mm));
mm->mm_rb = RB_ROOT;
@@ -539,8 +537,6 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p)
mm->nr_ptes = 0;
memset(&mm->rss_stat, 0, sizeof(mm->rss_stat));
spin_lock_init(&mm->page_table_lock);
- mm->free_area_cache = TASK_UNMAPPED_BASE;
- mm->cached_hole_size = ~0UL;
mm_init_aio(mm);
mm_init_owner(mm, p);
diff --git a/mm/mmap.c b/mm/mmap.c
index f54b235f29a9..532f447879d4 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1800,15 +1800,6 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
}
#endif
-void arch_unmap_area(struct mm_struct *mm, unsigned long addr)
-{
- /*
- * Is this a new hole at the lowest possible address?
- */
- if (addr >= TASK_UNMAPPED_BASE && addr < mm->free_area_cache)
- mm->free_area_cache = addr;
-}
-
/*
* This mmap-allocator allocates new areas top-down from below the
* stack's low limit (the base):
@@ -1865,19 +1856,6 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
}
#endif
-void arch_unmap_area_topdown(struct mm_struct *mm, unsigned long addr)
-{
- /*
- * Is this a new hole at the highest possible address?
- */
- if (addr > mm->free_area_cache)
- mm->free_area_cache = addr;
-
- /* dont allow allocations above current base */
- if (mm->free_area_cache > mm->mmap_base)
- mm->free_area_cache = mm->mmap_base;
-}
-
unsigned long
get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
unsigned long pgoff, unsigned long flags)
@@ -2276,7 +2254,6 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
{
struct vm_area_struct **insertion_point;
struct vm_area_struct *tail_vma = NULL;
- unsigned long addr;
insertion_point = (prev ? &prev->vm_next : &mm->mmap);
vma->vm_prev = NULL;
@@ -2293,11 +2270,6 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
} else
mm->highest_vm_end = prev ? prev->vm_end : 0;
tail_vma->vm_next = NULL;
- if (mm->unmap_area == arch_unmap_area)
- addr = prev ? prev->vm_end : mm->mmap_base;
- else
- addr = vma ? vma->vm_start : mm->mmap_base;
- mm->unmap_area(mm, addr);
mm->mmap_cache = NULL; /* Kill the cache. */
}
diff --git a/mm/nommu.c b/mm/nommu.c
index 79c3cac87afa..b5535ff2f9d1 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -1852,10 +1852,6 @@ unsigned long arch_get_unmapped_area(struct file *file, unsigned long addr,
return -ENOMEM;
}
-void arch_unmap_area(struct mm_struct *mm, unsigned long addr)
-{
-}
-
void unmap_mapping_range(struct address_space *mapping,
loff_t const holebegin, loff_t const holelen,
int even_cows)
diff --git a/mm/util.c b/mm/util.c
index c55e26b17d93..4c19aa6a1b43 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -293,7 +293,6 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
{
mm->mmap_base = TASK_UNMAPPED_BASE;
mm->get_unmapped_area = arch_get_unmapped_area;
- mm->unmap_area = arch_unmap_area;
}
#endif
--
1.7.7.3
* Re: [PATCH 8/8] mm: remove free_area_cache
2013-01-09 1:28 ` [PATCH 8/8] mm: remove free_area_cache Michel Lespinasse
@ 2013-01-09 21:25 ` Rik van Riel
0 siblings, 0 replies; 25+ messages in thread
From: Rik van Riel @ 2013-01-09 21:25 UTC (permalink / raw)
To: Michel Lespinasse
Cc: Tony Luck, linux-ia64, linux-parisc, James E.J. Bottomley,
linux-kernel, David Howells, linux-mm, linux-alpha, Matt Turner,
linuxppc-dev, Andrew Morton
On 01/08/2013 08:28 PM, Michel Lespinasse wrote:
> Since all architectures have been converted to use vm_unmapped_area(),
> there is no remaining use for the free_area_cache.
>
> Signed-off-by: Michel Lespinasse <walken@google.com>
Yay
Acked-by: Rik van Riel <riel@redhat.com>
* Re: [PATCH 0/8] vm_unmapped_area: finish the mission
2013-01-09 1:28 [PATCH 0/8] vm_unmapped_area: finish the mission Michel Lespinasse
` (7 preceding siblings ...)
2013-01-09 1:28 ` [PATCH 8/8] mm: remove free_area_cache Michel Lespinasse
@ 2013-01-09 1:32 ` Michel Lespinasse
8 siblings, 0 replies; 25+ messages in thread
From: Michel Lespinasse @ 2013-01-09 1:32 UTC (permalink / raw)
To: Rik van Riel, Benjamin Herrenschmidt, James E.J. Bottomley,
Matt Turner, David Howells, Tony Luck
Cc: linux-ia64, linux-parisc, linux-kernel, linux-mm, linux-alpha,
Andrew Morton, linuxppc-dev
Whoops, I was supposed to find a more appropriate subject line before
sending this :]
On Tue, Jan 8, 2013 at 5:28 PM, Michel Lespinasse <walken@google.com> wrote:
> These patches, which apply on top of v3.8-rc kernels, complete the
> VMA gap finding code I introduced (following Rik's initial proposal)
> in v3.8-rc1.
--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.