linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v2 1/2] ARM: use cache type functions for arch_get_unmapped_area
@ 2011-11-17 21:47 Rob Herring
  2011-11-17 21:47 ` [PATCH v2 2/2] ARM: topdown mmap support Rob Herring
  2011-11-17 22:37 ` [PATCH v2 1/2] ARM: use cache type functions for arch_get_unmapped_area Will Deacon
  0 siblings, 2 replies; 7+ messages in thread
From: Rob Herring @ 2011-11-17 21:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Rob Herring <rob.herring@calxeda.com>

There are already cache type decoding functions, so use those instead
of custom decode code which only works for ARMv6.

This change also correctly enables cache colour alignment on Cortex-A9
whose I-cache is aliasing VIPT.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
---
v2:
- remove icache_is_vipt_aliasing check

Nico, can you pick up these 2 patches into the Linaro kernel to get some
testing?

Rob

 arch/arm/mm/mmap.c |   23 ++++++-----------------
 1 files changed, 6 insertions(+), 17 deletions(-)

diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index 74be05f..44b628e 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -9,8 +9,7 @@
 #include <linux/io.h>
 #include <linux/personality.h>
 #include <linux/random.h>
-#include <asm/cputype.h>
-#include <asm/system.h>
+#include <asm/cachetype.h>
 
 #define COLOUR_ALIGN(addr,pgoff)		\
 	((((addr)+SHMLBA-1)&~(SHMLBA-1)) +	\
@@ -32,25 +31,15 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	unsigned long start_addr;
-#if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_V6K)
-	unsigned int cache_type;
-	int do_align = 0, aliasing = 0;
+	int do_align = 0;
+	int aliasing = cache_is_vipt_aliasing();
 
 	/*
 	 * We only need to do colour alignment if either the I or D
-	 * caches alias.  This is indicated by bits 9 and 21 of the
-	 * cache type register.
+	 * caches alias.
 	 */
-	cache_type = read_cpuid_cachetype();
-	if (cache_type != read_cpuid_id()) {
-		aliasing = (cache_type | cache_type >> 12) & (1 << 11);
-		if (aliasing)
-			do_align = filp || flags & MAP_SHARED;
-	}
-#else
-#define do_align 0
-#define aliasing 0
-#endif
+	if (aliasing)
+		do_align = filp || (flags & MAP_SHARED);
 
 	/*
 	 * We enforce the MAP_FIXED case.
-- 
1.7.5.4
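[Editor's note: the COLOUR_ALIGN() arithmetic this patch keeps can be sketched outside the kernel. This is a minimal illustration, not kernel code; the SHMLBA and PAGE_SHIFT values are assumptions (a common ARM configuration), not taken from the patch.]

```python
# Sketch (not kernel code): COLOUR_ALIGN() returns the lowest address at or
# above `addr` that falls on the same cache colour as file offset `pgoff`,
# so aliasing VIPT caches see shared mappings at consistent colours.
SHMLBA = 4 * 4096   # assumed: 4 pages of 4 KiB, a common ARM value
PAGE_SHIFT = 12     # assumed: 4 KiB pages

def colour_align(addr, pgoff):
    base = (addr + SHMLBA - 1) & ~(SHMLBA - 1)   # round addr up to SHMLBA
    off = (pgoff << PAGE_SHIFT) & (SHMLBA - 1)   # colour implied by pgoff
    return base + off
```

For example, colour_align(0x1000, 0) rounds up to the next 16 KiB boundary (0x4000), while a nonzero pgoff shifts the result by that offset's colour within the SHMLBA window.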

^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v2 2/2] ARM: topdown mmap support
  2011-11-17 21:47 [PATCH v2 1/2] ARM: use cache type functions for arch_get_unmapped_area Rob Herring
@ 2011-11-17 21:47 ` Rob Herring
  2011-11-22  3:05   ` Rob Herring
  2012-10-24 12:22   ` zhangfei gao
  2011-11-17 22:37 ` [PATCH v2 1/2] ARM: use cache type functions for arch_get_unmapped_area Will Deacon
  1 sibling, 2 replies; 7+ messages in thread
From: Rob Herring @ 2011-11-17 21:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Rob Herring <rob.herring@calxeda.com>

Similar to other architectures, this adds topdown mmap support in user
process address space allocation policy. This allows mmap sizes greater
than 2GB. This support is largely copied from MIPS and the generic
implementations.

The address space randomization is moved into arch_pick_mmap_layout.

Tested on V-Express with ubuntu and a mmap test from here:
https://bugs.launchpad.net/bugs/861296

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
---
v2: 
- remove icache_is_vipt_aliasing check

 arch/arm/include/asm/pgtable.h   |    1 +
 arch/arm/include/asm/processor.h |    2 +
 arch/arm/mm/mmap.c               |  173 ++++++++++++++++++++++++++++++++++++-
 3 files changed, 171 insertions(+), 5 deletions(-)

diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 9451dce..2f659e2 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -336,6 +336,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
  * We provide our own arch_get_unmapped_area to cope with VIPT caches.
  */
 #define HAVE_ARCH_UNMAPPED_AREA
+#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
 
 /*
  * remap a physical page `pfn' of size `size' with page protection `prot'
diff --git a/arch/arm/include/asm/processor.h b/arch/arm/include/asm/processor.h
index b2d9df5..ce280b8 100644
--- a/arch/arm/include/asm/processor.h
+++ b/arch/arm/include/asm/processor.h
@@ -123,6 +123,8 @@ static inline void prefetch(const void *ptr)
 
 #endif
 
+#define HAVE_ARCH_PICK_MMAP_LAYOUT
+
 #endif
 
 #endif /* __ASM_ARM_PROCESSOR_H */
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index 44b628e..ce8cb19 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -11,10 +11,49 @@
 #include <linux/random.h>
 #include <asm/cachetype.h>
 
+static inline unsigned long COLOUR_ALIGN_DOWN(unsigned long addr,
+					      unsigned long pgoff)
+{
+	unsigned long base = addr & ~(SHMLBA-1);
+	unsigned long off = (pgoff << PAGE_SHIFT) & (SHMLBA-1);
+
+	if (base + off <= addr)
+		return base + off;
+
+	return base - off;
+}
+
 #define COLOUR_ALIGN(addr,pgoff)		\
 	((((addr)+SHMLBA-1)&~(SHMLBA-1)) +	\
 	 (((pgoff)<<PAGE_SHIFT) & (SHMLBA-1)))
 
+/* gap between mmap and stack */
+#define MIN_GAP (128*1024*1024UL)
+#define MAX_GAP ((TASK_SIZE)/6*5)
+
+static int mmap_is_legacy(void)
+{
+	if (current->personality & ADDR_COMPAT_LAYOUT)
+		return 1;
+
+	if (rlimit(RLIMIT_STACK) == RLIM_INFINITY)
+		return 1;
+
+	return sysctl_legacy_va_layout;
+}
+
+static unsigned long mmap_base(unsigned long rnd)
+{
+	unsigned long gap = rlimit(RLIMIT_STACK);
+
+	if (gap < MIN_GAP)
+		gap = MIN_GAP;
+	else if (gap > MAX_GAP)
+		gap = MAX_GAP;
+
+	return PAGE_ALIGN(TASK_SIZE - gap - rnd);
+}
+
 /*
  * We need to ensure that shared mappings are correctly aligned to
  * avoid aliasing issues with VIPT caches.  We need to ensure that
@@ -68,13 +107,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	if (len > mm->cached_hole_size) {
 	        start_addr = addr = mm->free_area_cache;
 	} else {
-	        start_addr = addr = TASK_UNMAPPED_BASE;
+	        start_addr = addr = mm->mmap_base;
 	        mm->cached_hole_size = 0;
 	}
-	/* 8 bits of randomness in 20 address space bits */
-	if ((current->flags & PF_RANDOMIZE) &&
-	    !(current->personality & ADDR_NO_RANDOMIZE))
-		addr += (get_random_int() % (1 << 8)) << PAGE_SHIFT;
 
 full_search:
 	if (do_align)
@@ -111,6 +146,134 @@ full_search:
 	}
 }
 
+unsigned long
+arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
+			const unsigned long len, const unsigned long pgoff,
+			const unsigned long flags)
+{
+	struct vm_area_struct *vma;
+	struct mm_struct *mm = current->mm;
+	unsigned long addr = addr0;
+	int do_align = 0;
+	int aliasing = cache_is_vipt_aliasing();
+
+	/*
+	 * We only need to do colour alignment if either the I or D
+	 * caches alias.
+	 */
+	if (aliasing)
+		do_align = filp || (flags & MAP_SHARED);
+
+	/* requested length too big for entire address space */
+	if (len > TASK_SIZE)
+		return -ENOMEM;
+
+	if (flags & MAP_FIXED) {
+		if (aliasing && flags & MAP_SHARED &&
+		    (addr - (pgoff << PAGE_SHIFT)) & (SHMLBA - 1))
+			return -EINVAL;
+		return addr;
+	}
+
+	/* requesting a specific address */
+	if (addr) {
+		if (do_align)
+			addr = COLOUR_ALIGN(addr, pgoff);
+		else
+			addr = PAGE_ALIGN(addr);
+		vma = find_vma(mm, addr);
+		if (TASK_SIZE - len >= addr &&
+				(!vma || addr + len <= vma->vm_start))
+			return addr;
+	}
+
+	/* check if free_area_cache is useful for us */
+	if (len <= mm->cached_hole_size) {
+		mm->cached_hole_size = 0;
+		mm->free_area_cache = mm->mmap_base;
+	}
+
+	/* either no address requested or can't fit in requested address hole */
+	addr = mm->free_area_cache;
+	if (do_align) {
+		unsigned long base = COLOUR_ALIGN_DOWN(addr - len, pgoff);
+		addr = base + len;
+	}
+
+	/* make sure it can fit in the remaining address space */
+	if (addr > len) {
+		vma = find_vma(mm, addr-len);
+		if (!vma || addr <= vma->vm_start)
+			/* remember the address as a hint for next time */
+			return (mm->free_area_cache = addr-len);
+	}
+
+	if (mm->mmap_base < len)
+		goto bottomup;
+
+	addr = mm->mmap_base - len;
+	if (do_align)
+		addr = COLOUR_ALIGN_DOWN(addr, pgoff);
+
+	do {
+		/*
+		 * Lookup failure means no vma is above this address,
+		 * else if new region fits below vma->vm_start,
+		 * return with success:
+		 */
+		vma = find_vma(mm, addr);
+		if (!vma || addr+len <= vma->vm_start)
+			/* remember the address as a hint for next time */
+			return (mm->free_area_cache = addr);
+
+		/* remember the largest hole we saw so far */
+		if (addr + mm->cached_hole_size < vma->vm_start)
+			mm->cached_hole_size = vma->vm_start - addr;
+
+		/* try just below the current vma->vm_start */
+		addr = vma->vm_start - len;
+		if (do_align)
+			addr = COLOUR_ALIGN_DOWN(addr, pgoff);
+	} while (len < vma->vm_start);
+
+bottomup:
+	/*
+	 * A failed mmap() very likely causes application failure,
+	 * so fall back to the bottom-up function here. This scenario
+	 * can happen with large stack limits and large mmap()
+	 * allocations.
+	 */
+	mm->cached_hole_size = ~0UL;
+	mm->free_area_cache = TASK_UNMAPPED_BASE;
+	addr = arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
+	/*
+	 * Restore the topdown base:
+	 */
+	mm->free_area_cache = mm->mmap_base;
+	mm->cached_hole_size = ~0UL;
+
+	return addr;
+}
+
+void arch_pick_mmap_layout(struct mm_struct *mm)
+{
+	unsigned long random_factor = 0UL;
+
+	/* 8 bits of randomness in 20 address space bits */
+	if ((current->flags & PF_RANDOMIZE) &&
+	    !(current->personality & ADDR_NO_RANDOMIZE))
+		random_factor = (get_random_int() % (1 << 8)) << PAGE_SHIFT;
+
+	if (mmap_is_legacy()) {
+		mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
+		mm->get_unmapped_area = arch_get_unmapped_area;
+		mm->unmap_area = arch_unmap_area;
+	} else {
+		mm->mmap_base = mmap_base(random_factor);
+		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+		mm->unmap_area = arch_unmap_area_topdown;
+	}
+}
 
 /*
  * You really shouldn't be using read() or write() on /dev/mem.  This
-- 
1.7.5.4
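[Editor's note: the gap clamping in the patch's mmap_base() can be sketched in plain Python. This is illustrative only; TASK_SIZE and the page size are assumptions (a typical 3G/1G ARM split), while MIN_GAP and MAX_GAP follow the definitions in the patch.]

```python
# Sketch (not kernel code): mmap_base() places the topdown mmap base below
# the stack, clamping the stack gap to [MIN_GAP, MAX_GAP] and subtracting
# the per-process random offset.
TASK_SIZE = 0xC0000000          # assumed: typical 3G/1G ARM user split
PAGE_SIZE = 4096                # assumed: 4 KiB pages
MIN_GAP = 128 * 1024 * 1024     # 128 MiB, as in the patch
MAX_GAP = TASK_SIZE // 6 * 5    # 5/6 of TASK_SIZE, as in the patch

def page_align(x):
    return (x + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1)   # round up, like PAGE_ALIGN

def mmap_base(rnd, stack_rlimit):
    gap = min(max(stack_rlimit, MIN_GAP), MAX_GAP)
    return page_align(TASK_SIZE - gap - rnd)
```

With an 8 MiB stack rlimit and no randomization this yields 0xb8000000; an unlimited stack rlimit is clamped to MAX_GAP rather than pushing the mmap base down to nothing (and note the patch routes the unlimited-rlimit case to the legacy layout anyway, via mmap_is_legacy()).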


* [PATCH v2 1/2] ARM: use cache type functions for arch_get_unmapped_area
  2011-11-17 21:47 [PATCH v2 1/2] ARM: use cache type functions for arch_get_unmapped_area Rob Herring
  2011-11-17 21:47 ` [PATCH v2 2/2] ARM: topdown mmap support Rob Herring
@ 2011-11-17 22:37 ` Will Deacon
  1 sibling, 0 replies; 7+ messages in thread
From: Will Deacon @ 2011-11-17 22:37 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Rob,

On Thu, Nov 17, 2011 at 09:47:05PM +0000, Rob Herring wrote:
> From: Rob Herring <rob.herring@calxeda.com>
> 
> There are already cache type decoding functions, so use those instead
> of custom decode code which only works for ARMv6.
> 
> This change also correctly enables cache colour alignment on Cortex-A9
> whose I-cache is aliasing VIPT.

You can probably drop this paragraph.

With that,

Acked-by: Will Deacon <will.deacon@arm.com>

but yes, it would be nice to see this in -next for a bit.

Cheers,

Will


* [PATCH v2 2/2] ARM: topdown mmap support
  2011-11-17 21:47 ` [PATCH v2 2/2] ARM: topdown mmap support Rob Herring
@ 2011-11-22  3:05   ` Rob Herring
  2011-11-22  3:44     ` Nicolas Pitre
  2012-10-24 12:22   ` zhangfei gao
  1 sibling, 1 reply; 7+ messages in thread
From: Rob Herring @ 2011-11-22  3:05 UTC (permalink / raw)
  To: linux-arm-kernel

On 11/17/2011 03:47 PM, Rob Herring wrote:
> From: Rob Herring <rob.herring@calxeda.com>
> 
> Similar to other architectures, this adds topdown mmap support in user
> process address space allocation policy. This allows mmap sizes greater
> than 2GB. This support is largely copied from MIPS and the generic
> implementations.
> 
> The address space randomization is moved into arch_pick_mmap_layout.
> 
> Tested on V-Express with ubuntu and a mmap test from here:
> https://bugs.launchpad.net/bugs/861296
> 
> Signed-off-by: Rob Herring <rob.herring@calxeda.com>
> Acked-by: Nicolas Pitre <nico@linaro.org>
> ---
> v2: 
> - remove icache_is_vipt_aliasing check
> 
>  arch/arm/include/asm/pgtable.h   |    1 +
>  arch/arm/include/asm/processor.h |    2 +
>  arch/arm/mm/mmap.c               |  173 ++++++++++++++++++++++++++++++++++++-
>  3 files changed, 171 insertions(+), 5 deletions(-)
> 

Russell,

I submitted these 2 patches to the patch system. Can you please pull
them into your next branch so they can get some more testing?

Cheers,
Rob


* [PATCH v2 2/2] ARM: topdown mmap support
  2011-11-22  3:05   ` Rob Herring
@ 2011-11-22  3:44     ` Nicolas Pitre
  0 siblings, 0 replies; 7+ messages in thread
From: Nicolas Pitre @ 2011-11-22  3:44 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 21 Nov 2011, Rob Herring wrote:

> On 11/17/2011 03:47 PM, Rob Herring wrote:
> > From: Rob Herring <rob.herring@calxeda.com>
> > 
> > Similar to other architectures, this adds topdown mmap support in user
> > process address space allocation policy. This allows mmap sizes greater
> > than 2GB. This support is largely copied from MIPS and the generic
> > implementations.
> > 
> > The address space randomization is moved into arch_pick_mmap_layout.
> > 
> > Tested on V-Express with ubuntu and a mmap test from here:
> > https://bugs.launchpad.net/bugs/861296
> > 
> > Signed-off-by: Rob Herring <rob.herring@calxeda.com>
> > Acked-by: Nicolas Pitre <nico@linaro.org>
> > ---
> > v2: 
> > - remove icache_is_vipt_aliasing check
> > 
> >  arch/arm/include/asm/pgtable.h   |    1 +
> >  arch/arm/include/asm/processor.h |    2 +
> >  arch/arm/mm/mmap.c               |  173 ++++++++++++++++++++++++++++++++++++-
> >  3 files changed, 171 insertions(+), 5 deletions(-)
> > 
> 
> Russell,
> 
> I submitted these 2 patches to the patch system. Can you please pull
> them into your next branch so they can get some more testing?

For the record, I included an earlier version of those patches in the 
Linaro kernel a week ago, and no breakage has been reported so far.


Nicolas


* [PATCH v2 2/2] ARM: topdown mmap support
  2011-11-17 21:47 ` [PATCH v2 2/2] ARM: topdown mmap support Rob Herring
  2011-11-22  3:05   ` Rob Herring
@ 2012-10-24 12:22   ` zhangfei gao
  2012-10-24 12:39     ` Rob Herring
  1 sibling, 1 reply; 7+ messages in thread
From: zhangfei gao @ 2012-10-24 12:22 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Nov 18, 2011 at 5:47 AM, Rob Herring <robherring2@gmail.com> wrote:
> From: Rob Herring <rob.herring@calxeda.com>
>
> Similar to other architectures, this adds topdown mmap support in user
> process address space allocation policy. This allows mmap sizes greater
> than 2GB. This support is largely copied from MIPS and the generic
> implementations.
>
> The address space randomization is moved into arch_pick_mmap_layout.
>
> Tested on V-Express with ubuntu and a mmap test from here:
> https://bugs.launchpad.net/bugs/861296
>
> Signed-off-by: Rob Herring <rob.herring@calxeda.com>
> Acked-by: Nicolas Pitre <nico@linaro.org>

Unfortunately, we hit a "no vspace available" error while loading
libmono.so when using the default arch_get_unmapped_area_topdown method,
tested on Jelly Bean with a 3.4 kernel.
There is no problem when using arch_get_unmapped_area from this patch,
or when simply reverting this patch.
The result is that some applications cannot run.

A quick Google search finds the same issue reported on Jelly Bean with a
3.4 kernel.

Any suggestions?

Thanks


* [PATCH v2 2/2] ARM: topdown mmap support
  2012-10-24 12:22   ` zhangfei gao
@ 2012-10-24 12:39     ` Rob Herring
  0 siblings, 0 replies; 7+ messages in thread
From: Rob Herring @ 2012-10-24 12:39 UTC (permalink / raw)
  To: linux-arm-kernel

On 10/24/2012 07:22 AM, zhangfei gao wrote:
> On Fri, Nov 18, 2011 at 5:47 AM, Rob Herring <robherring2@gmail.com> wrote:
>> From: Rob Herring <rob.herring@calxeda.com>
>>
>> Similar to other architectures, this adds topdown mmap support in user
>> process address space allocation policy. This allows mmap sizes greater
>> than 2GB. This support is largely copied from MIPS and the generic
>> implementations.
>>
>> The address space randomization is moved into arch_pick_mmap_layout.
>>
>> Tested on V-Express with ubuntu and a mmap test from here:
>> https://bugs.launchpad.net/bugs/861296
>>
>> Signed-off-by: Rob Herring <rob.herring@calxeda.com>
>> Acked-by: Nicolas Pitre <nico@linaro.org>
> 
> Unfortunately, we hit a "no vspace available" error while loading
> libmono.so when using the default arch_get_unmapped_area_topdown method,
> tested on Jelly Bean with a 3.4 kernel.
> There is no problem when using arch_get_unmapped_area from this patch,
> or when simply reverting this patch.
> The result is that some applications cannot run.
> 
> A quick Google search finds the same issue reported on Jelly Bean with a
> 3.4 kernel.
> 
> Any suggestions?

Perhaps Android has hardcoded expectations about the virtual memory
layout? I was worried about that at the time and asked the Linaro
Android folks to test it.

I believe either layout can be selected at runtime per process. I don't
recall the exact /proc file to control this.

Rob
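
[Editor's note: the per-process decision Rob refers to is mmap_is_legacy() in the patch itself; the sysctl it consults, sysctl_legacy_va_layout, is exposed as /proc/sys/vm/legacy_va_layout, and an unlimited stack rlimit or the ADDR_COMPAT_LAYOUT personality flag also select the legacy layout. A sketch of that decision follows; the ADDR_COMPAT_LAYOUT value is the one from <linux/personality.h>, and RLIM_INFINITY here is an illustrative sentinel.]

```python
# Sketch (not kernel code) of the mmap_is_legacy() check from the patch:
# any one of the three conditions selects the legacy bottom-up layout.
ADDR_COMPAT_LAYOUT = 0x0200000   # personality flag, per <linux/personality.h>
RLIM_INFINITY = 2**64 - 1        # illustrative "unlimited" sentinel

def mmap_is_legacy(personality, stack_rlimit, legacy_va_layout_sysctl):
    if personality & ADDR_COMPAT_LAYOUT:
        return True
    if stack_rlimit == RLIM_INFINITY:
        return True
    return bool(legacy_va_layout_sysctl)
```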

