* [PATCH 2/4] Pass vma argument to copy_user_highpage().
2006-12-12 17:14 [PATCH 0/4] Fix COW D-cache aliasing on fork Ralf Baechle
2006-12-12 17:14 ` [PATCH 1/4] " Ralf Baechle
@ 2006-12-12 17:14 ` Ralf Baechle
2006-12-12 17:14 ` [PATCH 3/4] MIPS: Fix COW D-cache aliasing on fork Ralf Baechle
2006-12-12 17:14 ` [PATCH 4/4] Optimize D-cache alias handling " Ralf Baechle
3 siblings, 0 replies; 5+ messages in thread
From: Ralf Baechle @ 2006-12-12 17:14 UTC (permalink / raw)
To: Linus Torvalds, Andrew Morton
Cc: linux-arch, linux-kernel, Atsushi Nemoto, Ralf Baechle
From: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
To allow a more efficient copy_user_highpage() on certain architectures,
a vma argument is added to copy_user_highpage() and cow_user_page(),
allowing their implementations to check for the VM_EXEC bit.
The main part of this patch was originally written by Ralf Baechle;
Atsushi Nemoto did the debugging.
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
include/linux/highmem.h | 3 ++-
mm/hugetlb.c | 6 +++---
mm/memory.c | 10 +++++-----
3 files changed, 10 insertions(+), 9 deletions(-)
Index: upstream-alias/mm/memory.c
===================================================================
--- upstream-alias.orig/mm/memory.c
+++ upstream-alias/mm/memory.c
@@ -1441,7 +1441,7 @@ static inline pte_t maybe_mkwrite(pte_t
return pte;
}
-static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va)
+static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
{
/*
* If the source page was a PFN mapping, we don't have
@@ -1464,9 +1464,9 @@ static inline void cow_user_page(struct
kunmap_atomic(kaddr, KM_USER0);
flush_dcache_page(dst);
return;
-
+
}
- copy_user_highpage(dst, src, va);
+ copy_user_highpage(dst, src, va, vma);
}
/*
@@ -1577,7 +1577,7 @@ gotten:
new_page = alloc_page_vma(GFP_HIGHUSER, vma, address);
if (!new_page)
goto oom;
- cow_user_page(new_page, old_page, address);
+ cow_user_page(new_page, old_page, address, vma);
}
/*
@@ -2200,7 +2200,7 @@ retry:
page = alloc_page_vma(GFP_HIGHUSER, vma, address);
if (!page)
goto oom;
- copy_user_highpage(page, new_page, address);
+ copy_user_highpage(page, new_page, address, vma);
page_cache_release(new_page);
new_page = page;
anon = 1;
Index: upstream-alias/include/linux/highmem.h
===================================================================
--- upstream-alias.orig/include/linux/highmem.h
+++ upstream-alias/include/linux/highmem.h
@@ -98,7 +98,8 @@ static inline void memclear_highpage_flu
#ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE
-static inline void copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr)
+static inline void copy_user_highpage(struct page *to, struct page *from,
+ unsigned long vaddr, struct vm_area_struct *vma)
{
char *vfrom, *vto;
Index: upstream-alias/mm/hugetlb.c
===================================================================
--- upstream-alias.orig/mm/hugetlb.c
+++ upstream-alias/mm/hugetlb.c
@@ -44,14 +44,14 @@ static void clear_huge_page(struct page
}
static void copy_huge_page(struct page *dst, struct page *src,
- unsigned long addr)
+ unsigned long addr, struct vm_area_struct *vma)
{
int i;
might_sleep();
for (i = 0; i < HPAGE_SIZE/PAGE_SIZE; i++) {
cond_resched();
- copy_user_highpage(dst + i, src + i, addr + i*PAGE_SIZE);
+ copy_user_highpage(dst + i, src + i, addr + i*PAGE_SIZE, vma);
}
}
@@ -442,7 +442,7 @@ static int hugetlb_cow(struct mm_struct
}
spin_unlock(&mm->page_table_lock);
- copy_huge_page(new_page, old_page, address);
+ copy_huge_page(new_page, old_page, address, vma);
spin_lock(&mm->page_table_lock);
ptep = huge_pte_offset(mm, address & HPAGE_MASK);
* [PATCH 4/4] Optimize D-cache alias handling on fork
2006-12-12 17:14 [PATCH 0/4] Fix COW D-cache aliasing on fork Ralf Baechle
` (2 preceding siblings ...)
2006-12-12 17:14 ` [PATCH 3/4] MIPS: Fix COW D-cache aliasing on fork Ralf Baechle
@ 2006-12-12 17:14 ` Ralf Baechle
3 siblings, 0 replies; 5+ messages in thread
From: Ralf Baechle @ 2006-12-12 17:14 UTC (permalink / raw)
To: Linus Torvalds, Andrew Morton; +Cc: linux-arch, linux-kernel, Ralf Baechle
Virtually indexed, physically tagged cache architectures can get away
without cache flushing when forking. This patch adds a new cache
flushing function, flush_cache_dup_mm(struct mm_struct *), which for the
moment is implemented to do the same thing as flush_cache_mm() on all
architectures except MIPS, where it is a no-op.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Documentation/cachetlb.txt | 23 +++++++++++++++++------
include/asm-alpha/cacheflush.h | 1 +
include/asm-arm/cacheflush.h | 2 ++
include/asm-arm26/cacheflush.h | 1 +
include/asm-avr32/cacheflush.h | 1 +
include/asm-cris/cacheflush.h | 1 +
include/asm-frv/cacheflush.h | 1 +
include/asm-h8300/cacheflush.h | 1 +
include/asm-i386/cacheflush.h | 1 +
include/asm-ia64/cacheflush.h | 1 +
include/asm-m32r/cacheflush.h | 3 +++
include/asm-m68k/cacheflush.h | 2 ++
include/asm-m68knommu/cacheflush.h | 1 +
include/asm-mips/cacheflush.h | 2 ++
include/asm-parisc/cacheflush.h | 2 ++
include/asm-powerpc/cacheflush.h | 1 +
include/asm-s390/cacheflush.h | 1 +
include/asm-sh/cpu-sh2/cacheflush.h | 2 ++
include/asm-sh/cpu-sh3/cacheflush.h | 3 +++
include/asm-sh/cpu-sh4/cacheflush.h | 1 +
include/asm-sh64/cacheflush.h | 2 ++
include/asm-sparc/cacheflush.h | 1 +
include/asm-sparc64/cacheflush.h | 1 +
include/asm-v850/cacheflush.h | 1 +
include/asm-x86_64/cacheflush.h | 1 +
include/asm-xtensa/cacheflush.h | 2 ++
kernel/fork.c | 2 +-
27 files changed, 54 insertions(+), 7 deletions(-)
Index: upstream-alias/include/asm-alpha/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-alpha/cacheflush.h
+++ upstream-alias/include/asm-alpha/cacheflush.h
@@ -6,6 +6,7 @@
/* Caches aren't brain-dead on the Alpha. */
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
Index: upstream-alias/include/asm-arm/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-arm/cacheflush.h
+++ upstream-alias/include/asm-arm/cacheflush.h
@@ -319,6 +319,8 @@ extern void flush_ptrace_access(struct v
unsigned long len, int write);
#endif
+#define flush_cache_dup_mm(mm) flush_cache_mm(mm)
+
/*
* flush_cache_user_range is used when we want to ensure that the
* Harvard caches are synchronised for the user space address range.
Index: upstream-alias/include/asm-arm26/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-arm26/cacheflush.h
+++ upstream-alias/include/asm-arm26/cacheflush.h
@@ -22,6 +22,7 @@
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma,start,end) do { } while (0)
#define flush_cache_page(vma,vmaddr,pfn) do { } while (0)
#define flush_cache_vmap(start, end) do { } while (0)
Index: upstream-alias/include/asm-avr32/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-avr32/cacheflush.h
+++ upstream-alias/include/asm-avr32/cacheflush.h
@@ -87,6 +87,7 @@ void invalidate_icache_region(void *star
*/
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define flush_cache_vmap(start, end) do { } while (0)
Index: upstream-alias/include/asm-cris/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-cris/cacheflush.h
+++ upstream-alias/include/asm-cris/cacheflush.h
@@ -9,6 +9,7 @@
*/
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
Index: upstream-alias/include/asm-frv/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-frv/cacheflush.h
+++ upstream-alias/include/asm-frv/cacheflush.h
@@ -20,6 +20,7 @@
*/
#define flush_cache_all() do {} while(0)
#define flush_cache_mm(mm) do {} while(0)
+#define flush_cache_dup_mm(mm) do {} while(0)
#define flush_cache_range(mm, start, end) do {} while(0)
#define flush_cache_page(vma, vmaddr, pfn) do {} while(0)
#define flush_cache_vmap(start, end) do {} while(0)
Index: upstream-alias/include/asm-h8300/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-h8300/cacheflush.h
+++ upstream-alias/include/asm-h8300/cacheflush.h
@@ -12,6 +12,7 @@
#define flush_cache_all()
#define flush_cache_mm(mm)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma,a,b)
#define flush_cache_page(vma,p,pfn)
#define flush_dcache_page(page)
Index: upstream-alias/include/asm-i386/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-i386/cacheflush.h
+++ upstream-alias/include/asm-i386/cacheflush.h
@@ -7,6 +7,7 @@
/* Caches aren't brain-dead on the intel. */
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
Index: upstream-alias/include/asm-ia64/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-ia64/cacheflush.h
+++ upstream-alias/include/asm-ia64/cacheflush.h
@@ -18,6 +18,7 @@
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define flush_icache_page(vma,page) do { } while (0)
Index: upstream-alias/include/asm-m32r/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-m32r/cacheflush.h
+++ upstream-alias/include/asm-m32r/cacheflush.h
@@ -9,6 +9,7 @@ extern void _flush_cache_copyback_all(vo
#if defined(CONFIG_CHIP_M32700) || defined(CONFIG_CHIP_OPSP) || defined(CONFIG_CHIP_M32104)
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
@@ -29,6 +30,7 @@ extern void smp_flush_cache_all(void);
#elif defined(CONFIG_CHIP_M32102)
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
@@ -41,6 +43,7 @@ extern void smp_flush_cache_all(void);
#else
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
Index: upstream-alias/include/asm-m68k/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-m68k/cacheflush.h
+++ upstream-alias/include/asm-m68k/cacheflush.h
@@ -89,6 +89,8 @@ static inline void flush_cache_mm(struct
__flush_cache_030();
}
+#define flush_cache_dup_mm(mm) flush_cache_mm(mm)
+
/* flush_cache_range/flush_cache_page must be macros to avoid
a dependency on linux/mm.h, which includes this file... */
static inline void flush_cache_range(struct vm_area_struct *vma,
Index: upstream-alias/include/asm-m68knommu/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-m68knommu/cacheflush.h
+++ upstream-alias/include/asm-m68knommu/cacheflush.h
@@ -8,6 +8,7 @@
#define flush_cache_all() __flush_cache_all()
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) __flush_cache_all()
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_dcache_range(start,len) __flush_cache_all()
Index: upstream-alias/include/asm-mips/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-mips/cacheflush.h
+++ upstream-alias/include/asm-mips/cacheflush.h
@@ -17,6 +17,7 @@
*
* - flush_cache_all() flushes entire cache
* - flush_cache_mm(mm) flushes the specified mm context's cache lines
+ * - flush_cache_dup_mm(mm) handles cache flushing when forking
* - flush_cache_page(mm, vmaddr, pfn) flushes a single page
* - flush_cache_range(vma, start, end) flushes a range of pages
* - flush_icache_range(start, end) flush a range of instructions
@@ -31,6 +32,7 @@
extern void (*flush_cache_all)(void);
extern void (*__flush_cache_all)(void);
extern void (*flush_cache_mm)(struct mm_struct *mm);
+#define flush_cache_dup_mm(mm) do { (void) (mm); } while (0)
extern void (*flush_cache_range)(struct vm_area_struct *vma,
unsigned long start, unsigned long end);
extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn);
Index: upstream-alias/include/asm-parisc/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-parisc/cacheflush.h
+++ upstream-alias/include/asm-parisc/cacheflush.h
@@ -15,6 +15,8 @@
#define flush_cache_mm(mm) flush_cache_all_local()
#endif
+#define flush_cache_dup_mm(mm) flush_cache_mm(mm)
+
#define flush_kernel_dcache_range(start,size) \
flush_kernel_dcache_range_asm((start), (start)+(size));
Index: upstream-alias/include/asm-powerpc/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-powerpc/cacheflush.h
+++ upstream-alias/include/asm-powerpc/cacheflush.h
@@ -18,6 +18,7 @@
*/
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define flush_icache_page(vma, page) do { } while (0)
Index: upstream-alias/include/asm-s390/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-s390/cacheflush.h
+++ upstream-alias/include/asm-s390/cacheflush.h
@@ -7,6 +7,7 @@
/* Caches aren't brain-dead on the s390. */
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
Index: upstream-alias/include/asm-sh/cpu-sh2/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-sh/cpu-sh2/cacheflush.h
+++ upstream-alias/include/asm-sh/cpu-sh2/cacheflush.h
@@ -15,6 +15,7 @@
*
* - flush_cache_all() flushes entire cache
* - flush_cache_mm(mm) flushes the specified mm context's cache lines
+ * - flush_cache_dup_mm(mm) handles cache flushing when forking
* - flush_cache_page(mm, vmaddr, pfn) flushes a single page
* - flush_cache_range(vma, start, end) flushes a range of pages
*
@@ -27,6 +28,7 @@
*/
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
Index: upstream-alias/include/asm-sh/cpu-sh3/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-sh/cpu-sh3/cacheflush.h
+++ upstream-alias/include/asm-sh/cpu-sh3/cacheflush.h
@@ -15,6 +15,7 @@
*
* - flush_cache_all() flushes entire cache
* - flush_cache_mm(mm) flushes the specified mm context's cache lines
+ * - flush_cache_dup_mm(mm) handles cache flushing when forking
* - flush_cache_page(mm, vmaddr, pfn) flushes a single page
* - flush_cache_range(vma, start, end) flushes a range of pages
*
@@ -39,6 +40,7 @@
void flush_cache_all(void);
void flush_cache_mm(struct mm_struct *mm);
+#define flush_cache_dup_mm(mm) flush_cache_mm(mm)
void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
unsigned long end);
void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn);
@@ -48,6 +50,7 @@ void flush_icache_page(struct vm_area_st
#else
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
Index: upstream-alias/include/asm-sh/cpu-sh4/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-sh/cpu-sh4/cacheflush.h
+++ upstream-alias/include/asm-sh/cpu-sh4/cacheflush.h
@@ -18,6 +18,7 @@
*/
void flush_cache_all(void);
void flush_cache_mm(struct mm_struct *mm);
+#define flush_cache_dup_mm(mm) flush_cache_mm(mm)
void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
unsigned long end);
void flush_cache_page(struct vm_area_struct *vma, unsigned long addr,
Index: upstream-alias/include/asm-sh64/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-sh64/cacheflush.h
+++ upstream-alias/include/asm-sh64/cacheflush.h
@@ -21,6 +21,8 @@ extern void flush_icache_user_range(stru
struct page *page, unsigned long addr,
int len);
+#define flush_cache_dup_mm(mm) flush_cache_mm(mm)
+
#define flush_dcache_mmap_lock(mapping) do { } while (0)
#define flush_dcache_mmap_unlock(mapping) do { } while (0)
Index: upstream-alias/include/asm-sparc/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-sparc/cacheflush.h
+++ upstream-alias/include/asm-sparc/cacheflush.h
@@ -48,6 +48,7 @@ BTFIXUPDEF_CALL(void, flush_cache_page,
#define flush_cache_all() BTFIXUP_CALL(flush_cache_all)()
#define flush_cache_mm(mm) BTFIXUP_CALL(flush_cache_mm)(mm)
+#define flush_cache_dup_mm(mm) BTFIXUP_CALL(flush_cache_mm)(mm)
#define flush_cache_range(vma,start,end) BTFIXUP_CALL(flush_cache_range)(vma,start,end)
#define flush_cache_page(vma,addr,pfn) BTFIXUP_CALL(flush_cache_page)(vma,addr)
#define flush_icache_range(start, end) do { } while (0)
Index: upstream-alias/include/asm-sparc64/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-sparc64/cacheflush.h
+++ upstream-alias/include/asm-sparc64/cacheflush.h
@@ -12,6 +12,7 @@
/* These are the same regardless of whether this is an SMP kernel or not. */
#define flush_cache_mm(__mm) \
do { if ((__mm) == current->mm) flushw_user(); } while(0)
+#define flush_cache_dup_mm(mm) flush_cache_mm(mm)
#define flush_cache_range(vma, start, end) \
flush_cache_mm((vma)->vm_mm)
#define flush_cache_page(vma, page, pfn) \
Index: upstream-alias/include/asm-v850/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-v850/cacheflush.h
+++ upstream-alias/include/asm-v850/cacheflush.h
@@ -24,6 +24,7 @@
systems with MMUs, so we don't need them. */
#define flush_cache_all() ((void)0)
#define flush_cache_mm(mm) ((void)0)
+#define flush_cache_dup_mm(mm) ((void)0)
#define flush_cache_range(vma, start, end) ((void)0)
#define flush_cache_page(vma, vmaddr, pfn) ((void)0)
#define flush_dcache_page(page) ((void)0)
Index: upstream-alias/include/asm-x86_64/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-x86_64/cacheflush.h
+++ upstream-alias/include/asm-x86_64/cacheflush.h
@@ -7,6 +7,7 @@
/* Caches aren't brain-dead on the intel. */
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
Index: upstream-alias/include/asm-xtensa/cacheflush.h
===================================================================
--- upstream-alias.orig/include/asm-xtensa/cacheflush.h
+++ upstream-alias/include/asm-xtensa/cacheflush.h
@@ -75,6 +75,7 @@ extern void __flush_invalidate_dcache_ra
#define flush_cache_all() __flush_invalidate_cache_all();
#define flush_cache_mm(mm) __flush_invalidate_cache_all();
+#define flush_cache_dup_mm(mm) __flush_invalidate_cache_all();
#define flush_cache_vmap(start,end) __flush_invalidate_cache_all();
#define flush_cache_vunmap(start,end) __flush_invalidate_cache_all();
@@ -88,6 +89,7 @@ extern void flush_cache_page(struct vm_a
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_vmap(start,end) do { } while (0)
#define flush_cache_vunmap(start,end) do { } while (0)
Index: upstream-alias/kernel/fork.c
===================================================================
--- upstream-alias.orig/kernel/fork.c
+++ upstream-alias/kernel/fork.c
@@ -203,7 +203,7 @@ static inline int dup_mmap(struct mm_str
struct mempolicy *pol;
down_write(&oldmm->mmap_sem);
- flush_cache_mm(oldmm);
+ flush_cache_dup_mm(oldmm);
/*
* Not linked in yet - no deadlock potential:
*/
Index: upstream-alias/Documentation/cachetlb.txt
===================================================================
--- upstream-alias.orig/Documentation/cachetlb.txt
+++ upstream-alias/Documentation/cachetlb.txt
@@ -179,10 +179,21 @@ Here are the routines, one by one:
lines associated with 'mm'.
This interface is used to handle whole address space
- page table operations such as what happens during
- fork, exit, and exec.
+ page table operations such as what happens during exit and exec.
-2) void flush_cache_range(struct vm_area_struct *vma,
+2) void flush_cache_dup_mm(struct mm_struct *mm)
+
+ This interface flushes an entire user address space from
+ the caches. That is, after running, there will be no cache
+ lines associated with 'mm'.
+
+ This interface is used to handle whole address space
+ page table operations such as what happens during fork.
+
+ This interface is separate from flush_cache_mm to allow some
+ optimizations for VIPT caches.
+
+3) void flush_cache_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end)
Here we are flushing a specific range of (user) virtual
@@ -199,7 +210,7 @@ Here are the routines, one by one:
call flush_cache_page (see below) for each entry which may be
modified.
-3) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
+4) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
This time we need to remove a PAGE_SIZE sized range
from the cache. The 'vma' is the backing structure used by
@@ -220,7 +231,7 @@ Here are the routines, one by one:
This is used primarily during fault processing.
-4) void flush_cache_kmaps(void)
+5) void flush_cache_kmaps(void)
This routine need only be implemented if the platform utilizes
highmem. It will be called right before all of the kmaps
@@ -232,7 +243,7 @@ Here are the routines, one by one:
This routing should be implemented in asm/highmem.h
-5) void flush_cache_vmap(unsigned long start, unsigned long end)
+6) void flush_cache_vmap(unsigned long start, unsigned long end)
void flush_cache_vunmap(unsigned long start, unsigned long end)
Here in these two interfaces we are flushing a specific range