* [RFC][PATCH 00/15] Unify TLB gather implementations -v2
@ 2011-03-07 17:13 Peter Zijlstra
  2011-03-07 17:13 ` [RFC][PATCH 01/15] mm, powerpc: Dont use tlb_flush for external tlb flushes Peter Zijlstra
                   ` (14 more replies)
  0 siblings, 15 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:13 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky

This is a series that attempts to unify and fix the current tlb gather
implementations. Only s390 is left unconverted and most aren't even compiled,
but other than that it's mostly complete (ia64, arm, sh are compile tested).

This second installment doesn't try to change flush_tlb_range() for all
architectures but simply uses a fake vma and fills in VM_EXEC and VM_HUGETLB.
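
For reference, the fake-vma approach (introduced in patch 6) boils down to
roughly the following; this is a simplified excerpt of that patch, not the
complete implementation:

  static inline void tlb_flush(struct mmu_gather *tlb)
  {
          /*
           * Fake VMA: only vm_mm and the VM_EXEC/VM_HUGETLB bits gathered
           * while unmapping are filled in, then the whole tracked range
           * is flushed in one go.
           */
          struct vm_area_struct vma = {
                  .vm_mm    = tlb->mm,
                  .vm_flags = tlb->vm_flags & (VM_EXEC | VM_HUGETLB),
          };

          flush_tlb_range(&vma, tlb->start, tlb->end);
  }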

This series depends on the mmu_gather rework -v2 series sent last week:
  https://lkml.org/lkml/2011/3/2/323

which is also available (including the anon_vma refcount simplification,
mm preemptibility and davem's sparc64 gup_fast implementation) as a git tree
based on next-20110307:
  git://git.kernel.org/pub/scm/linux/kernel/git/peterz/linux-2.6-mmu_gather.git mmu_gather

The whole series, including the dependent patches, is available through:
  git://git.kernel.org/pub/scm/linux/kernel/git/peterz/linux-2.6-mmu_gather.git mmu_unify



* [RFC][PATCH 01/15] mm, powerpc: Dont use tlb_flush for external tlb flushes
  2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
@ 2011-03-07 17:13 ` Peter Zijlstra
  2011-03-07 17:13 ` [RFC][PATCH 02/15] mm, sparc64: " Peter Zijlstra
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:13 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky

[-- Attachment #1: powerpc64-tlb_flush.patch --]
[-- Type: text/plain, Size: 1662 bytes --]

Both sparc64 and powerpc64 use tlb_flush() to flush their respective
hash-tables, which is entirely different from what
flush_tlb_range()/flush_tlb_mm() would do.

Powerpc64 already uses arch_*_lazy_mmu_mode() to batch and flush these,
so any tlb_flush() caller should already find an empty batch. So remove
this functionality from tlb_flush().
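
For clarity, the ordering in the generic unmap path is roughly the
following (illustrative call chain, not part of this patch):

  zap_pte_range()
    arch_enter_lazy_mmu_mode()    -> start batching hash invalidates
    ptep_get_and_clear_full()     -> invalidates collect in ppc64_tlb_batch
    arch_leave_lazy_mmu_mode()    -> __flush_tlb_pending() drains the batch
  ...
  tlb_finish_mmu()
    tlb_flush()                   -> should find an empty batch, nothing to do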

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 arch/powerpc/mm/tlb_hash64.c         |   10 ----------
 1 file changed, 10 deletions(-)

Index: linux-2.6/arch/powerpc/mm/tlb_hash64.c
===================================================================
--- linux-2.6.orig/arch/powerpc/mm/tlb_hash64.c
+++ linux-2.6/arch/powerpc/mm/tlb_hash64.c
@@ -155,16 +155,6 @@ void __flush_tlb_pending(struct ppc64_tl
 
 void tlb_flush(struct mmu_gather *tlb)
 {
-	struct ppc64_tlb_batch *tlbbatch = &get_cpu_var(ppc64_tlb_batch);
-
-	/* If there's a TLB batch pending, then we must flush it because the
-	 * pages are going to be freed and we really don't want to have a CPU
-	 * access a freed page because it has a stale TLB
-	 */
-	if (tlbbatch->index)
-		__flush_tlb_pending(tlbbatch);
-
-	put_cpu_var(ppc64_tlb_batch);
 }
 
 /**




* [RFC][PATCH 02/15] mm, sparc64: Dont use tlb_flush for external tlb flushes
  2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
  2011-03-07 17:13 ` [RFC][PATCH 01/15] mm, powerpc: Dont use tlb_flush for external tlb flushes Peter Zijlstra
@ 2011-03-07 17:13 ` Peter Zijlstra
  2011-03-07 17:13 ` [RFC][PATCH 03/15] mm, arch: Remove tlb_flush() Peter Zijlstra
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:13 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky

[-- Attachment #1: sparc64-tlb_flush.patch --]
[-- Type: text/plain, Size: 2308 bytes --]

Both sparc64 and powerpc64 use tlb_flush() to flush their respective
hash-tables, which is entirely different from what
flush_tlb_range()/flush_tlb_mm() would do.

Powerpc64 already uses arch_*_lazy_mmu_mode() to batch and flush these,
so any tlb_flush() caller should already find an empty batch; make
sparc64 do the same.

This ensures all platforms now have a tlb_flush() implementation that
is either flush_tlb_mm() or flush_tlb_range().

Cc: David Miller <davem@davemloft.net>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 arch/sparc/include/asm/tlb_64.h      |    2 +-
 arch/sparc/include/asm/tlbflush_64.h |   13 +++++++++++++
 2 files changed, 14 insertions(+), 1 deletion(-)

Index: linux-2.6/arch/sparc/include/asm/tlbflush_64.h
===================================================================
--- linux-2.6.orig/arch/sparc/include/asm/tlbflush_64.h
+++ linux-2.6/arch/sparc/include/asm/tlbflush_64.h
@@ -26,6 +26,19 @@ extern void flush_tlb_pending(void);
 #define flush_tlb_page(vma,addr)	flush_tlb_pending()
 #define flush_tlb_mm(mm)		flush_tlb_pending()
 
+#define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
+
+static inline void arch_enter_lazy_mmu_mode(void)
+{
+}
+
+static inline void arch_leave_lazy_mmu_mode(void)
+{
+	flush_tlb_pending();
+}
+
+#define arch_flush_lazy_mmu_mode()      do {} while (0)
+
 /* Local cpu only.  */
 extern void __flush_tlb_all(void);
 
Index: linux-2.6/arch/sparc/include/asm/tlb_64.h
===================================================================
--- linux-2.6.orig/arch/sparc/include/asm/tlb_64.h
+++ linux-2.6/arch/sparc/include/asm/tlb_64.h
@@ -25,7 +25,7 @@ extern void flush_tlb_pending(void);
 #define tlb_start_vma(tlb, vma) do { } while (0)
 #define tlb_end_vma(tlb, vma)	do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
-#define tlb_flush(tlb)	flush_tlb_pending()
+#define tlb_flush(tlb)	do { } while (0)
 
 #include <asm-generic/tlb.h>
 




* [RFC][PATCH 03/15] mm, arch: Remove tlb_flush()
  2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
  2011-03-07 17:13 ` [RFC][PATCH 01/15] mm, powerpc: Dont use tlb_flush for external tlb flushes Peter Zijlstra
  2011-03-07 17:13 ` [RFC][PATCH 02/15] mm, sparc64: " Peter Zijlstra
@ 2011-03-07 17:13 ` Peter Zijlstra
  2011-03-07 17:13 ` [RFC][PATCH 04/15] mm: Optimize fullmm TLB flushing Peter Zijlstra
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:13 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky

[-- Attachment #1: generic_tlb_flush.patch --]
[-- Type: text/plain, Size: 13835 bytes --]

Since the tlb_flush() implementation of every asm-generic/tlb.h user is
now either a nop or flush_tlb_mm(), remove the per-architecture hook and
make the generic code call flush_tlb_mm() directly.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 arch/alpha/include/asm/tlb.h      |    2 --
 arch/arm/include/asm/tlb.h        |    2 --
 arch/avr32/include/asm/tlb.h      |    5 -----
 arch/blackfin/include/asm/tlb.h   |    6 ------
 arch/cris/include/asm/tlb.h       |    1 -
 arch/frv/include/asm/tlb.h        |    5 -----
 arch/h8300/include/asm/tlb.h      |   13 -------------
 arch/m32r/include/asm/tlb.h       |    6 ------
 arch/m68k/include/asm/tlb.h       |    6 ------
 arch/microblaze/include/asm/tlb.h |    2 --
 arch/mips/include/asm/tlb.h       |    5 -----
 arch/mn10300/include/asm/tlb.h    |    5 -----
 arch/parisc/include/asm/tlb.h     |    5 -----
 arch/powerpc/include/asm/tlb.h    |    2 --
 arch/powerpc/mm/tlb_hash32.c      |   15 ---------------
 arch/powerpc/mm/tlb_hash64.c      |    4 ----
 arch/powerpc/mm/tlb_nohash.c      |    5 -----
 arch/score/include/asm/tlb.h      |    1 -
 arch/sh/include/asm/tlb.h         |    1 -
 arch/sparc/include/asm/tlb_32.h   |    5 -----
 arch/sparc/include/asm/tlb_64.h   |    1 -
 arch/tile/include/asm/tlb.h       |    1 -
 arch/x86/include/asm/tlb.h        |    1 -
 arch/xtensa/include/asm/tlb.h     |    1 -
 include/asm-generic/tlb.h         |    2 +-
 25 files changed, 1 insertion(+), 101 deletions(-)

Index: linux-2.6/arch/alpha/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/alpha/include/asm/tlb.h
+++ linux-2.6/arch/alpha/include/asm/tlb.h
@@ -5,8 +5,6 @@
 #define tlb_end_vma(tlb, vma)			do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, pte, addr)	do { } while (0)
 
-#define tlb_flush(tlb)				flush_tlb_mm((tlb)->mm)
-
 #include <asm-generic/tlb.h>
 
 #define __pte_free_tlb(tlb, pte, address)		pte_free((tlb)->mm, pte)
Index: linux-2.6/arch/arm/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/arm/include/asm/tlb.h
+++ linux-2.6/arch/arm/include/asm/tlb.h
@@ -23,8 +23,6 @@
 
 #include <linux/pagemap.h>
 
-#define tlb_flush(tlb)	((void) tlb)
-
 #include <asm-generic/tlb.h>
 
 #else /* !CONFIG_MMU */
Index: linux-2.6/arch/avr32/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/avr32/include/asm/tlb.h
+++ linux-2.6/arch/avr32/include/asm/tlb.h
@@ -16,11 +16,6 @@
 
 #define __tlb_remove_tlb_entry(tlb, pte, address) do { } while(0)
 
-/*
- * Flush whole TLB for MM
- */
-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
-
 #include <asm-generic/tlb.h>
 
 /*
Index: linux-2.6/arch/blackfin/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/blackfin/include/asm/tlb.h
+++ linux-2.6/arch/blackfin/include/asm/tlb.h
@@ -11,12 +11,6 @@
 #define tlb_end_vma(tlb, vma)	do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address)	do { } while (0)
 
-/*
- * .. because we flush the whole mm when it
- * fills up.
- */
-#define tlb_flush(tlb)		flush_tlb_mm((tlb)->mm)
-
 #include <asm-generic/tlb.h>
 
 #endif				/* _BLACKFIN_TLB_H */
Index: linux-2.6/arch/cris/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/cris/include/asm/tlb.h
+++ linux-2.6/arch/cris/include/asm/tlb.h
@@ -13,7 +13,6 @@
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
 
-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
 #include <asm-generic/tlb.h>
 
 #endif
Index: linux-2.6/arch/frv/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/frv/include/asm/tlb.h
+++ linux-2.6/arch/frv/include/asm/tlb.h
@@ -16,11 +16,6 @@ extern void check_pgt_cache(void);
 #define tlb_end_vma(tlb, vma)				do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address)	do { } while (0)
 
-/*
- * .. because we flush the whole mm when it fills up
- */
-#define tlb_flush(tlb)		flush_tlb_mm((tlb)->mm)
-
 #include <asm-generic/tlb.h>
 
 #endif /* _ASM_TLB_H */
Index: linux-2.6/arch/h8300/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/h8300/include/asm/tlb.h
+++ linux-2.6/arch/h8300/include/asm/tlb.h
@@ -5,19 +5,6 @@
 #ifndef __H8300_TLB_H__
 #define __H8300_TLB_H__
 
-#define tlb_flush(tlb)	do { } while(0)
-
-/* 
-  include/asm-h8300/tlb.h 
-*/
-
-#ifndef __H8300_TLB_H__
-#define __H8300_TLB_H__
-
-#define tlb_flush(tlb)	do { } while(0)
-
 #include <asm-generic/tlb.h>
 
 #endif
-
-#endif
Index: linux-2.6/arch/m32r/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/m32r/include/asm/tlb.h
+++ linux-2.6/arch/m32r/include/asm/tlb.h
@@ -9,12 +9,6 @@
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, pte, address) do { } while (0)
 
-/*
- * .. because we flush the whole mm when it
- * fills up.
- */
-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
-
 #include <asm-generic/tlb.h>
 
 #endif /* _M32R_TLB_H */
Index: linux-2.6/arch/m68k/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/m68k/include/asm/tlb.h
+++ linux-2.6/arch/m68k/include/asm/tlb.h
@@ -9,12 +9,6 @@
 #define tlb_end_vma(tlb, vma)	do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address)	do { } while (0)
 
-/*
- * .. because we flush the whole mm when it
- * fills up.
- */
-#define tlb_flush(tlb)		flush_tlb_mm((tlb)->mm)
-
 #include <asm-generic/tlb.h>
 
 #endif /* _M68K_TLB_H */
Index: linux-2.6/arch/microblaze/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/microblaze/include/asm/tlb.h
+++ linux-2.6/arch/microblaze/include/asm/tlb.h
@@ -11,8 +11,6 @@
 #ifndef _ASM_MICROBLAZE_TLB_H
 #define _ASM_MICROBLAZE_TLB_H
 
-#define tlb_flush(tlb)	flush_tlb_mm((tlb)->mm)
-
 #include <linux/pagemap.h>
 #include <asm-generic/tlb.h>
 
Index: linux-2.6/arch/mips/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/mips/include/asm/tlb.h
+++ linux-2.6/arch/mips/include/asm/tlb.h
@@ -13,11 +13,6 @@
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
 
-/*
- * .. because we flush the whole mm when it fills up.
- */
-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
-
 #include <asm-generic/tlb.h>
 
 #endif /* __ASM_TLB_H */
Index: linux-2.6/arch/mn10300/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/mn10300/include/asm/tlb.h
+++ linux-2.6/arch/mn10300/include/asm/tlb.h
@@ -23,11 +23,6 @@ extern void check_pgt_cache(void);
 #define tlb_end_vma(tlb, vma)				do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address)	do { } while (0)
 
-/*
- * .. because we flush the whole mm when it fills up
- */
-#define tlb_flush(tlb)	flush_tlb_mm((tlb)->mm)
-
 /* for now, just use the generic stuff */
 #include <asm-generic/tlb.h>
 
Index: linux-2.6/arch/parisc/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/parisc/include/asm/tlb.h
+++ linux-2.6/arch/parisc/include/asm/tlb.h
@@ -1,11 +1,6 @@
 #ifndef _PARISC_TLB_H
 #define _PARISC_TLB_H
 
-#define tlb_flush(tlb)			\
-do {	if ((tlb)->fullmm)		\
-		flush_tlb_mm((tlb)->mm);\
-} while (0)
-
 #define tlb_start_vma(tlb, vma) \
 do {	if (!(tlb)->fullmm)	\
 		flush_cache_range(vma, vma->vm_start, vma->vm_end); \
Index: linux-2.6/arch/powerpc/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/powerpc/include/asm/tlb.h
+++ linux-2.6/arch/powerpc/include/asm/tlb.h
@@ -28,8 +28,6 @@
 #define tlb_start_vma(tlb, vma)	do { } while (0)
 #define tlb_end_vma(tlb, vma)	do { } while (0)
 
-extern void tlb_flush(struct mmu_gather *tlb);
-
 /* Get the generic bits... */
 #include <asm-generic/tlb.h>
 
Index: linux-2.6/arch/powerpc/mm/tlb_hash32.c
===================================================================
--- linux-2.6.orig/arch/powerpc/mm/tlb_hash32.c
+++ linux-2.6/arch/powerpc/mm/tlb_hash32.c
@@ -59,21 +59,6 @@ void flush_tlb_page_nohash(struct vm_are
 }
 
 /*
- * Called at the end of a mmu_gather operation to make sure the
- * TLB flush is completely done.
- */
-void tlb_flush(struct mmu_gather *tlb)
-{
-	if (Hash == 0) {
-		/*
-		 * 603 needs to flush the whole TLB here since
-		 * it doesn't use a hash table.
-		 */
-		_tlbia();
-	}
-}
-
-/*
  * TLB flushing:
  *
  *  - flush_tlb_mm(mm) flushes the specified mm context TLB's
Index: linux-2.6/arch/powerpc/mm/tlb_hash64.c
===================================================================
--- linux-2.6.orig/arch/powerpc/mm/tlb_hash64.c
+++ linux-2.6/arch/powerpc/mm/tlb_hash64.c
@@ -153,10 +153,6 @@ void __flush_tlb_pending(struct ppc64_tl
 	batch->index = 0;
 }
 
-void tlb_flush(struct mmu_gather *tlb)
-{
-}
-
 /**
  * __flush_hash_table_range - Flush all HPTEs for a given address range
  *                            from the hash table (and the TLB). But keeps
Index: linux-2.6/arch/powerpc/mm/tlb_nohash.c
===================================================================
--- linux-2.6.orig/arch/powerpc/mm/tlb_nohash.c
+++ linux-2.6/arch/powerpc/mm/tlb_nohash.c
@@ -296,11 +296,6 @@ void flush_tlb_range(struct vm_area_stru
 }
 EXPORT_SYMBOL(flush_tlb_range);
 
-void tlb_flush(struct mmu_gather *tlb)
-{
-	flush_tlb_mm(tlb->mm);
-}
-
 /*
  * Below are functions specific to the 64-bit variant of Book3E though that
  * may change in the future
Index: linux-2.6/arch/score/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/score/include/asm/tlb.h
+++ linux-2.6/arch/score/include/asm/tlb.h
@@ -8,7 +8,6 @@
 #define tlb_start_vma(tlb, vma)		do {} while (0)
 #define tlb_end_vma(tlb, vma)		do {} while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do {} while (0)
-#define tlb_flush(tlb)			flush_tlb_mm((tlb)->mm)
 
 extern void score7_FTLB_refill_Handler(void);
 
Index: linux-2.6/arch/sh/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/sh/include/asm/tlb.h
+++ linux-2.6/arch/sh/include/asm/tlb.h
@@ -125,7 +125,6 @@ static inline void tlb_unwire_entry(void
 #define tlb_start_vma(tlb, vma)				do { } while (0)
 #define tlb_end_vma(tlb, vma)				do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, pte, address)	do { } while (0)
-#define tlb_flush(tlb)					do { } while (0)
 
 #include <asm-generic/tlb.h>
 
Index: linux-2.6/arch/sparc/include/asm/tlb_32.h
===================================================================
--- linux-2.6.orig/arch/sparc/include/asm/tlb_32.h
+++ linux-2.6/arch/sparc/include/asm/tlb_32.h
@@ -14,11 +14,6 @@ do {								\
 #define __tlb_remove_tlb_entry(tlb, pte, address) \
 	do { } while (0)
 
-#define tlb_flush(tlb) \
-do {								\
-	flush_tlb_mm((tlb)->mm);				\
-} while (0)
-
 #include <asm-generic/tlb.h>
 
 #endif /* _SPARC_TLB_H */
Index: linux-2.6/arch/sparc/include/asm/tlb_64.h
===================================================================
--- linux-2.6.orig/arch/sparc/include/asm/tlb_64.h
+++ linux-2.6/arch/sparc/include/asm/tlb_64.h
@@ -25,7 +25,6 @@ extern void flush_tlb_pending(void);
 #define tlb_start_vma(tlb, vma) do { } while (0)
 #define tlb_end_vma(tlb, vma)	do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
-#define tlb_flush(tlb)	do { } while (0)
 
 #include <asm-generic/tlb.h>
 
Index: linux-2.6/arch/tile/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/tile/include/asm/tlb.h
+++ linux-2.6/arch/tile/include/asm/tlb.h
@@ -18,7 +18,6 @@
 #define tlb_start_vma(tlb, vma) do { } while (0)
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
 
 #include <asm-generic/tlb.h>
 
Index: linux-2.6/arch/x86/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/x86/include/asm/tlb.h
+++ linux-2.6/arch/x86/include/asm/tlb.h
@@ -4,7 +4,6 @@
 #define tlb_start_vma(tlb, vma) do { } while (0)
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
 
 #include <asm-generic/tlb.h>
 
Index: linux-2.6/arch/xtensa/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/xtensa/include/asm/tlb.h
+++ linux-2.6/arch/xtensa/include/asm/tlb.h
@@ -38,7 +38,6 @@
 #endif
 
 #define __tlb_remove_tlb_entry(tlb,pte,addr)	do { } while (0)
-#define tlb_flush(tlb)				flush_tlb_mm((tlb)->mm)
 
 #include <asm-generic/tlb.h>
 
Index: linux-2.6/include/asm-generic/tlb.h
===================================================================
--- linux-2.6.orig/include/asm-generic/tlb.h
+++ linux-2.6/include/asm-generic/tlb.h
@@ -159,7 +159,7 @@ tlb_flush_mmu(struct mmu_gather *tlb)
 	if (!tlb->need_flush)
 		return;
 	tlb->need_flush = 0;
-	tlb_flush(tlb);
+	flush_tlb_mm(tlb->mm);
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	tlb_table_flush(tlb);
 #endif




* [RFC][PATCH 04/15] mm: Optimize fullmm TLB flushing
  2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
                   ` (2 preceding siblings ...)
  2011-03-07 17:13 ` [RFC][PATCH 03/15] mm, arch: Remove tlb_flush() Peter Zijlstra
@ 2011-03-07 17:13 ` Peter Zijlstra
  2011-03-07 17:13 ` [RFC][PATCH 05/15] mm, tile: Change flush_tlb_range() VM_HUGETLB semantics Peter Zijlstra
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:13 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky

[-- Attachment #1: mmu_gather_fullmm.patch --]
[-- Type: text/plain, Size: 1742 bytes --]

This originated from s390, which does something similar, and would
allow s390 to use the generic TLB flushing code.

The idea is to flush the mm-wide cache and TLB a priori and not bother
with multiple flushes if the batching isn't large enough.

This can safely be done since there cannot be any concurrency on this
mm; it's either after the process died (exit) or in the middle of
execve where the thread switched to the new mm.
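
For example, the exit/execve teardown path roughly becomes (illustrative
call chain):

  exit_mmap()                     (also reached from execve via mmput())
    tlb_gather_mmu(&tlb, mm, 1)   -> fullmm: flush_cache_mm() + flush_tlb_mm() up front
    unmap_vmas()                  -> only gathers pages, no further TLB flushes
    free_pgtables()
    tlb_finish_mmu()              -> frees the gathered pages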

Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 include/asm-generic/tlb.h |   15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

Index: linux-2.6/include/asm-generic/tlb.h
===================================================================
--- linux-2.6.orig/include/asm-generic/tlb.h
+++ linux-2.6/include/asm-generic/tlb.h
@@ -149,6 +149,11 @@ tlb_gather_mmu(struct mmu_gather *tlb, s
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	tlb->batch = NULL;
 #endif
+
+	if (fullmm) {
+		flush_cache_mm(mm);
+		flush_tlb_mm(mm);
+	}
 }
 
 static inline void
@@ -156,13 +161,15 @@ tlb_flush_mmu(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
 
-	if (!tlb->need_flush)
-		return;
-	tlb->need_flush = 0;
-	flush_tlb_mm(tlb->mm);
+	if (!tlb->fullmm && tlb->need_flush) {
+		tlb->need_flush = 0;
+		flush_tlb_mm(tlb->mm);
+	}
+
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	tlb_table_flush(tlb);
 #endif
+
 	if (tlb_fast_mode(tlb))
 		return;
 




* [RFC][PATCH 05/15] mm, tile: Change flush_tlb_range() VM_HUGETLB semantics
  2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
                   ` (3 preceding siblings ...)
  2011-03-07 17:13 ` [RFC][PATCH 04/15] mm: Optimize fullmm TLB flushing Peter Zijlstra
@ 2011-03-07 17:13 ` Peter Zijlstra
  2011-03-07 17:13 ` [RFC][PATCH 06/15] mm: Provide generic range tracking and flushing Peter Zijlstra
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:13 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky

[-- Attachment #1: tile-flush_tlb_range-hugetlb.patch --]
[-- Type: text/plain, Size: 1511 bytes --]

Since we're going to provide a fake VMA covering a large range, we
need to change the VM_HUGETLB semantics to mean _also_ wipe HPAGE TLBs.

Cc: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 arch/tile/kernel/tlb.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

Index: linux-2.6/arch/tile/kernel/tlb.c
===================================================================
--- linux-2.6.orig/arch/tile/kernel/tlb.c
+++ linux-2.6/arch/tile/kernel/tlb.c
@@ -67,11 +67,14 @@ EXPORT_SYMBOL(flush_tlb_page);
 void flush_tlb_range(const struct vm_area_struct *vma,
 		     unsigned long start, unsigned long end)
 {
-	unsigned long size = hv_page_size(vma);
 	struct mm_struct *mm = vma->vm_mm;
 	int cache = (vma->vm_flags & VM_EXEC) ? HV_FLUSH_EVICT_L1I : 0;
-	flush_remote(0, cache, &mm->cpu_vm_mask, start, end - start, size,
-		     &mm->cpu_vm_mask, NULL, 0);
+	flush_remote(0, cache, &mm->cpu_vm_mask, start, end - start,
+			PAGE_SIZE, &mm->cpu_vm_mask, NULL, 0);
+	if (vma->vm_flags & VM_HUGETLB) {
+		flush_remote(0, 0, &mm->cpu_vm_mask, start, end - start,
+				HPAGE_SIZE, &mm->cpu_vm_mask, NULL, 0);
+	}
 }
 
 void flush_tlb_all(void)




* [RFC][PATCH 06/15] mm: Provide generic range tracking and flushing
  2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
                   ` (4 preceding siblings ...)
  2011-03-07 17:13 ` [RFC][PATCH 05/15] mm, tile: Change flush_tlb_range() VM_HUGETLB semantics Peter Zijlstra
@ 2011-03-07 17:13 ` Peter Zijlstra
  2011-03-07 17:13 ` [RFC][PATCH 07/15] mm, arm: Convert arm to generic tlb Peter Zijlstra
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:13 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky,
	Tony Luck, Paul Mundt, Jeff Dike, Hans-Christian Egtvedt,
	Ralf Baechle, Kyle McMartin, James Bottomley, Chris Zankel

[-- Attachment #1: mm-generic-tlb-range.patch --]
[-- Type: text/plain, Size: 7797 bytes --]

In order to convert various architectures to generic tlb we need to
provide some extra infrastructure to track the range of the flushed
page tables.

There are two mmu_gather cases to consider:

  unmap_region()
    tlb_gather_mmu()
    unmap_vmas()
      for (; vma; vma = vma->vm_next)
        unmap_page_range()
          tlb_start_vma() -> flush cache range/track vm_flags
          zap_*_range()
            arch_enter_lazy_mmu_mode()
            ptep_get_and_clear_full() -> batch/track external tlbs
            tlb_remove_tlb_entry() -> track range/external tlbs
            tlb_remove_page() -> batch page
            arch_leave_lazy_mmu_mode() -> flush external tlbs
          tlb_end_vma()
    free_pgtables()
      while (vma)
        unlink_*_vma()
        free_*_range()
          *_free_tlb() -> track range/batch page
    tlb_finish_mmu() -> flush TLBs and flush everything
  free vmas

and:

  shift_arg_pages()
    tlb_gather_mmu()
    free_*_range()
      *_free_tlb() -> track tlb range
    tlb_finish_mmu() -> flush things

There are various reasons that we need to flush TLBs _after_ tearing
down the page-tables themselves. For some architectures (x86 among
others) this serializes against (both hardware and software) page
table walkers like gup_fast().

For others (ARM) this is (also) needed to evict stale page-table
caches - ARM LPAE mode apparently caches page tables and concurrent
hardware walkers could re-populate these caches if the final tlb flush
were to be from tlb_end_vma() since an concurrent walk could still be
in progress.

So implement generic range tracking over both clearing the PTEs and
tearing down the page-tables. 
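
With this in place an architecture opts in by selecting the new Kconfig
symbol and relying on its existing flush_tlb_range(); roughly (sketch,
the actual conversions follow in the later patches):

  select HAVE_MMU_GATHER_RANGE if MMU

and in asm/tlb.h only the remaining per-arch hooks:

  #define __tlb_remove_tlb_entry(tlb, ptep, addr)  do { } while (0)

  #define __pte_free_tlb(tlb, ptep, addr)  pte_free((tlb)->mm, ptep)
  #define __pmd_free_tlb(tlb, pmdp, addr)  pmd_free((tlb)->mm, pmdp)
  #define __pud_free_tlb(tlb, pudp, addr)  pud_free((tlb)->mm, pudp)

  #include <asm-generic/tlb.h>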

Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: James Bottomley <jejb@parisc-linux.org>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 arch/Kconfig              |    3 +
 include/asm-generic/tlb.h |  122 ++++++++++++++++++++++++++++++++++++++--------
 2 files changed, 105 insertions(+), 20 deletions(-)

Index: linux-2.6/arch/Kconfig
===================================================================
--- linux-2.6.orig/arch/Kconfig
+++ linux-2.6/arch/Kconfig
@@ -187,4 +187,7 @@ config ARCH_HAVE_NMI_SAFE_CMPXCHG
 config HAVE_RCU_TABLE_FREE
 	bool
 
+config HAVE_MMU_GATHER_RANGE
+	bool
+
 source "kernel/gcov/Kconfig"
Index: linux-2.6/include/asm-generic/tlb.h
===================================================================
--- linux-2.6.orig/include/asm-generic/tlb.h
+++ linux-2.6/include/asm-generic/tlb.h
@@ -78,7 +78,8 @@ struct mmu_gather_batch {
 #define MAX_GATHER_BATCH	\
 	((PAGE_SIZE - sizeof(struct mmu_gather_batch)) / sizeof(void *))
 
-/* struct mmu_gather is an opaque type used by the mm code for passing around
+/*
+ * struct mmu_gather is an opaque type used by the mm code for passing around
  * any data needed by arch specific code for tlb_remove_page.
  */
 struct mmu_gather {
@@ -86,6 +87,10 @@ struct mmu_gather {
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	struct mmu_table_batch	*batch;
 #endif
+#ifdef CONFIG_HAVE_MMU_GATHER_RANGE
+	unsigned long		start, end;
+	unsigned long		vm_flags;
+#endif
 	unsigned int		need_flush : 1,	/* Did free PTEs */
 				fast_mode  : 1; /* No batching   */
 
@@ -106,6 +111,75 @@ struct mmu_gather {
   #define tlb_fast_mode(tlb) 1
 #endif
 
+#ifdef CONFIG_HAVE_MMU_GATHER_RANGE
+
+static inline void tlb_init_range(struct mmu_gather *tlb)
+{
+	tlb->start = TASK_SIZE;
+	tlb->end = 0;
+	tlb->vm_flags = 0;
+}
+
+static inline void
+tlb_track_range(struct mmu_gather *tlb, unsigned long addr, unsigned long end)
+{
+	if (!tlb->fullmm) {
+		tlb->start = min(tlb->start, addr);
+		tlb->end = max(tlb->end, end);
+	}
+}
+
+static inline void
+tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
+{
+	if (!tlb->fullmm) {
+		flush_cache_range(vma, vma->vm_start, vma->vm_end);
+		tlb->vm_flags |= vma->vm_flags;
+	}
+}
+
+static inline void
+tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
+{
+}
+
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+	/*
+	 * Fake VMA, some architectures use VM_EXEC to flush I-TLB/I$,
+	 * and some use VM_HUGETLB since they have separate HPAGE TLBs.
+	 *
+	 * Since its an artificial VMA, VM_HUGETLB means only part of
+	 * the range can be HUGE, so you always have to flush normal
+	 * TLBs.
+	 */
+	struct vm_area_struct vma = {
+		.vm_mm = tlb->mm,
+		.vm_flags = tlb->vm_flags & (VM_EXEC | VM_HUGETLB),
+	};
+
+	flush_tlb_range(&vma, tlb->start, tlb->end);
+	tlb_init_range(tlb);
+}
+
+#else /* CONFIG_HAVE_MMU_GATHER_RANGE */
+
+static inline void tlb_init_range(struct mmu_gather *tlb)
+{
+}
+
+/*
+ * Macro avoids argument evaluation.
+ */
+#define tlb_track_range(tlb, addr, end) do { } while (0)
+
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+	flush_tlb_mm(tlb->mm);
+}
+
+#endif /* CONFIG_HAVE_MMU_GATHER_RANGE */
+
 static inline int tlb_next_batch(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
@@ -146,6 +220,8 @@ tlb_gather_mmu(struct mmu_gather *tlb, s
 	tlb->local.max  = ARRAY_SIZE(tlb->__pages);
 	tlb->active     = &tlb->local;
 
+	tlb_init_range(tlb);
+
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	tlb->batch = NULL;
 #endif
@@ -163,7 +239,7 @@ tlb_flush_mmu(struct mmu_gather *tlb)
 
 	if (!tlb->fullmm && tlb->need_flush) {
 		tlb->need_flush = 0;
-		flush_tlb_mm(tlb->mm);
+		tlb_flush(tlb);
 	}
 
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
@@ -240,32 +316,38 @@ static inline void tlb_remove_page(struc
  * later optimise away the tlb invalidate.   This helps when userspace is
  * unmapping already-unmapped pages, which happens quite a lot.
  */
-#define tlb_remove_tlb_entry(tlb, ptep, address)		\
-	do {							\
-		tlb->need_flush = 1;				\
-		__tlb_remove_tlb_entry(tlb, ptep, address);	\
+#define tlb_remove_tlb_entry(tlb, ptep, addr)				\
+	do {								\
+		tlb->need_flush = 1;					\
+		tlb_track_range(tlb, addr, addr + PAGE_SIZE);		\
+		__tlb_remove_tlb_entry(tlb, ptep, addr);		\
 	} while (0)
 
-#define pte_free_tlb(tlb, ptep, address)			\
-	do {							\
-		tlb->need_flush = 1;				\
-		__pte_free_tlb(tlb, ptep, address);		\
+#define pte_free_tlb(tlb, ptep, addr)					\
+	do {								\
+		tlb->need_flush = 1;					\
+		tlb_track_range(tlb, addr, pmd_addr_end(addr, TASK_SIZE));\
+		__pte_free_tlb(tlb, ptep, addr);			\
 	} while (0)
 
-#ifndef __ARCH_HAS_4LEVEL_HACK
-#define pud_free_tlb(tlb, pudp, address)			\
-	do {							\
-		tlb->need_flush = 1;				\
-		__pud_free_tlb(tlb, pudp, address);		\
+#define pmd_free_tlb(tlb, pmdp, addr)					\
+	do {								\
+		tlb->need_flush = 1;					\
+		tlb_track_range(tlb, addr, pud_addr_end(addr, TASK_SIZE));\
+		__pmd_free_tlb(tlb, pmdp, addr);			\
 	} while (0)
-#endif
 
-#define pmd_free_tlb(tlb, pmdp, address)			\
-	do {							\
-		tlb->need_flush = 1;				\
-		__pmd_free_tlb(tlb, pmdp, address);		\
+#ifndef __ARCH_HAS_4LEVEL_HACK
+#define pud_free_tlb(tlb, pudp, addr)					\
+	do {								\
+		tlb->need_flush = 1;					\
+		tlb_track_range(tlb, addr, pgd_addr_end(addr, TASK_SIZE));\
+		__pud_free_tlb(tlb, pudp, addr);			\
 	} while (0)
+#endif
 
+#ifndef tlb_migrate_finish
 #define tlb_migrate_finish(mm) do {} while (0)
+#endif
 
 #endif /* _ASM_GENERIC__TLB_H */




* [RFC][PATCH 07/15] mm, arm: Convert arm to generic tlb
  2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
                   ` (5 preceding siblings ...)
  2011-03-07 17:13 ` [RFC][PATCH 06/15] mm: Provide generic range tracking and flushing Peter Zijlstra
@ 2011-03-07 17:13 ` Peter Zijlstra
  2011-03-07 17:13 ` [RFC][PATCH 08/15] mm, ia64: Convert ia64 " Peter Zijlstra
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:13 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky

[-- Attachment #1: mm-arm-tlb-range.patch --]
[-- Type: text/plain, Size: 6585 bytes --]

Might want to optimize the tlb_flush() function to do a full mm flush
when the range is 'large'; IA64 does this too.
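
Something along these lines, perhaps (untested sketch, threshold picked
arbitrarily):

  static inline void tlb_flush(struct mmu_gather *tlb)
  {
          if (tlb->end - tlb->start > (64UL << 20)) {     /* "large" */
                  flush_tlb_mm(tlb->mm);
          } else {
                  struct vm_area_struct vma = {
                          .vm_mm    = tlb->mm,
                          .vm_flags = tlb->vm_flags & (VM_EXEC | VM_HUGETLB),
                  };

                  flush_tlb_range(&vma, tlb->start, tlb->end);
          }
          tlb_init_range(tlb);
  }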

Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 arch/arm/Kconfig           |    1 
 arch/arm/include/asm/tlb.h |  174 +--------------------------------------------
 2 files changed, 6 insertions(+), 169 deletions(-)

Index: linux-2.6/arch/arm/Kconfig
===================================================================
--- linux-2.6.orig/arch/arm/Kconfig
+++ linux-2.6/arch/arm/Kconfig
@@ -28,6 +28,7 @@ config ARM
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_GENERIC_HARDIRQS
 	select HAVE_SPARSE_IRQ
+	select HAVE_MMU_GATHER_RANGE if MMU
 	help
 	  The ARM series is a line of low-power-consumption RISC chip designs
 	  licensed by ARM Ltd and targeted at embedded applications and
Index: linux-2.6/arch/arm/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/arm/include/asm/tlb.h
+++ linux-2.6/arch/arm/include/asm/tlb.h
@@ -27,184 +27,20 @@
 
 #else /* !CONFIG_MMU */
 
-#include <linux/swap.h>
-#include <asm/pgalloc.h>
-#include <asm/tlbflush.h>
-
-/*
- * We need to delay page freeing for SMP as other CPUs can access pages
- * which have been removed but not yet had their TLB entries invalidated.
- * Also, as ARMv7 speculative prefetch can drag new entries into the TLB,
- * we need to apply this same delaying tactic to ensure correct operation.
- */
-#if defined(CONFIG_SMP) || defined(CONFIG_CPU_32v7)
-#define tlb_fast_mode(tlb)	0
-#else
-#define tlb_fast_mode(tlb)	1
-#endif
-
-#define MMU_GATHER_BUNDLE	8
-
-/*
- * TLB handling.  This allows us to remove pages from the page
- * tables, and efficiently handle the TLB issues.
- */
-struct mmu_gather {
-	struct mm_struct	*mm;
-	unsigned int		fullmm;
-	struct vm_area_struct	*vma;
-	unsigned long		range_start;
-	unsigned long		range_end;
-	unsigned int		nr;
-	unsigned int		max;
-	struct page		**pages;
-	struct page		*local[MMU_GATHER_BUNDLE];
-};
-
-DECLARE_PER_CPU(struct mmu_gather, mmu_gathers);
-
-/*
- * This is unnecessarily complex.  There's three ways the TLB shootdown
- * code is used:
- *  1. Unmapping a range of vmas.  See zap_page_range(), unmap_region().
- *     tlb->fullmm = 0, and tlb_start_vma/tlb_end_vma will be called.
- *     tlb->vma will be non-NULL.
- *  2. Unmapping all vmas.  See exit_mmap().
- *     tlb->fullmm = 1, and tlb_start_vma/tlb_end_vma will be called.
- *     tlb->vma will be non-NULL.  Additionally, page tables will be freed.
- *  3. Unmapping argument pages.  See shift_arg_pages().
- *     tlb->fullmm = 0, but tlb_start_vma/tlb_end_vma will not be called.
- *     tlb->vma will be NULL.
- */
-static inline void tlb_flush(struct mmu_gather *tlb)
-{
-	if (tlb->fullmm || !tlb->vma)
-		flush_tlb_mm(tlb->mm);
-	else if (tlb->range_end > 0) {
-		flush_tlb_range(tlb->vma, tlb->range_start, tlb->range_end);
-		tlb->range_start = TASK_SIZE;
-		tlb->range_end = 0;
-	}
-}
-
-static inline void tlb_add_flush(struct mmu_gather *tlb, unsigned long addr)
-{
-	if (!tlb->fullmm) {
-		if (addr < tlb->range_start)
-			tlb->range_start = addr;
-		if (addr + PAGE_SIZE > tlb->range_end)
-			tlb->range_end = addr + PAGE_SIZE;
-	}
-}
-
-static inline void __tlb_alloc_page(struct mmu_gather *tlb)
-{
-	unsigned long addr = __get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
-
-	if (addr) {
-		tlb->pages = (void *)addr;
-		tlb->max = PAGE_SIZE / sizeof(struct page *);
-	}
-}
-
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
-{
-	tlb_flush(tlb);
-	if (!tlb_fast_mode(tlb)) {
-		free_pages_and_swap_cache(tlb->pages, tlb->nr);
-		tlb->nr = 0;
-		if (tlb->pages == tlb->local)
-			__tlb_alloc_page(tlb);
-	}
-}
-
-static inline void
-tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned int fullmm)
-{
-	tlb->mm = mm;
-	tlb->fullmm = fullmm;
-	tlb->vma = NULL;
-	tlb->max = ARRAY_SIZE(tlb->local);
-	tlb->pages = tlb->local;
-	tlb->nr = 0;
-	__tlb_alloc_page(tlb);
-}
+#define __tlb_remove_tlb_entry(tlb, ptep, addr) do { } while (0)
 
 static inline void
-tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
-{
-	tlb_flush_mmu(tlb);
+__pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr);
 
-	/* keep the page table cache within bounds */
-	check_pgt_cache();
+#define __pmd_free_tlb(tlb, pmdp, addr)	pmd_free((tlb)->mm, pmdp)
 
-	if (tlb->pages != tlb->local)
-		free_pages((unsigned long)tlb->pages, 0);
-}
+#include <asm-generic/tlb.h>
 
-/*
- * Memorize the range for the TLB flush.
- */
 static inline void
-tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long addr)
-{
-	tlb_add_flush(tlb, addr);
-}
-
-/*
- * In the case of tlb vma handling, we can optimise these away in the
- * case where we're doing a full MM flush.  When we're doing a munmap,
- * the vmas are adjusted to only cover the region to be torn down.
- */
-static inline void
-tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
-{
-	if (!tlb->fullmm) {
-		flush_cache_range(vma, vma->vm_start, vma->vm_end);
-		tlb->vma = vma;
-		tlb->range_start = TASK_SIZE;
-		tlb->range_end = 0;
-	}
-}
-
-static inline void
-tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
-{
-	if (!tlb->fullmm)
-		tlb_flush(tlb);
-}
-
-static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	if (tlb_fast_mode(tlb)) {
-		free_page_and_swap_cache(page);
-	} else {
-		tlb->pages[tlb->nr++] = page;
-		if (tlb->nr >= tlb->max)
-			return 1;
-	}
-	return 0;
-}
-
-static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	if (__tlb_remove_page(tlb, page))
-		tlb_flush_mmu(tlb);
-}
-
-static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
-	unsigned long addr)
+__pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr)
 {
 	pgtable_page_dtor(pte);
-	tlb_add_flush(tlb, addr);
 	tlb_remove_page(tlb, pte);
 }
-
-#define pte_free_tlb(tlb, ptep, addr)	__pte_free_tlb(tlb, ptep, addr)
-#define pmd_free_tlb(tlb, pmdp, addr)	pmd_free((tlb)->mm, pmdp)
-#define pud_free_tlb(tlb, pudp, addr)	pud_free((tlb)->mm, pudp)
-
-#define tlb_migrate_finish(mm)		do { } while (0)
-
 #endif /* CONFIG_MMU */
 #endif




* [RFC][PATCH 08/15] mm, ia64: Convert ia64 to generic tlb
  2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
                   ` (6 preceding siblings ...)
  2011-03-07 17:13 ` [RFC][PATCH 07/15] mm, arm: Convert arm to generic tlb Peter Zijlstra
@ 2011-03-07 17:13 ` Peter Zijlstra
  2011-03-07 17:13 ` [RFC][PATCH 09/15] mm, sh: Convert sh " Peter Zijlstra
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:13 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky,
	Tony Luck

[-- Attachment #1: mm-ia64-tlb-range.patch --]
[-- Type: text/plain, Size: 10858 bytes --]

Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 arch/ia64/Kconfig                |    1 
 arch/ia64/include/asm/tlb.h      |  234 ---------------------------------------
 arch/ia64/include/asm/tlbflush.h |   25 ++++
 arch/ia64/mm/tlb.c               |   24 +++-
 4 files changed, 49 insertions(+), 235 deletions(-)

Index: linux-2.6/arch/ia64/Kconfig
===================================================================
--- linux-2.6.orig/arch/ia64/Kconfig
+++ linux-2.6/arch/ia64/Kconfig
@@ -25,6 +25,7 @@ config IA64
 	select HAVE_GENERIC_HARDIRQS
 	select GENERIC_IRQ_PROBE
 	select GENERIC_PENDING_IRQ if SMP
+	select HAVE_MMU_GATHER_RANGE
 	select IRQ_PER_CPU
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	default y
Index: linux-2.6/arch/ia64/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/ia64/include/asm/tlb.h
+++ linux-2.6/arch/ia64/include/asm/tlb.h
@@ -46,239 +46,9 @@
 #include <asm/tlbflush.h>
 #include <asm/machvec.h>
 
-#ifdef CONFIG_SMP
-# define tlb_fast_mode(tlb)	((tlb)->nr == ~0U)
-#else
-# define tlb_fast_mode(tlb)	(1)
-#endif
-
-/*
- * If we can't allocate a page to make a big batch of page pointers
- * to work on, then just handle a few from the on-stack structure.
- */
-#define	IA64_GATHER_BUNDLE	8
-
-struct mmu_gather {
-	struct mm_struct	*mm;
-	unsigned int		nr;		/* == ~0U => fast mode */
-	unsigned int		max;
-	unsigned char		fullmm;		/* non-zero means full mm flush */
-	unsigned char		need_flush;	/* really unmapped some PTEs? */
-	unsigned long		start_addr;
-	unsigned long		end_addr;
-	struct page		**pages;
-	struct page		*local[IA64_GATHER_BUNDLE];
-};
-
-struct ia64_tr_entry {
-	u64 ifa;
-	u64 itir;
-	u64 pte;
-	u64 rr;
-}; /*Record for tr entry!*/
-
-extern int ia64_itr_entry(u64 target_mask, u64 va, u64 pte, u64 log_size);
-extern void ia64_ptr_entry(u64 target_mask, int slot);
-
-extern struct ia64_tr_entry *ia64_idtrs[NR_CPUS];
-
-/*
- region register macros
-*/
-#define RR_TO_VE(val)   (((val) >> 0) & 0x0000000000000001)
-#define RR_VE(val)	(((val) & 0x0000000000000001) << 0)
-#define RR_VE_MASK	0x0000000000000001L
-#define RR_VE_SHIFT	0
-#define RR_TO_PS(val)	(((val) >> 2) & 0x000000000000003f)
-#define RR_PS(val)	(((val) & 0x000000000000003f) << 2)
-#define RR_PS_MASK	0x00000000000000fcL
-#define RR_PS_SHIFT	2
-#define RR_RID_MASK	0x00000000ffffff00L
-#define RR_TO_RID(val) 	((val >> 8) & 0xffffff)
-
-/*
- * Flush the TLB for address range START to END and, if not in fast mode, release the
- * freed pages that where gathered up to this point.
- */
-static inline void
-ia64_tlb_flush_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long end)
-{
-	unsigned int nr;
-
-	if (!tlb->need_flush)
-		return;
-	tlb->need_flush = 0;
-
-	if (tlb->fullmm) {
-		/*
-		 * Tearing down the entire address space.  This happens both as a result
-		 * of exit() and execve().  The latter case necessitates the call to
-		 * flush_tlb_mm() here.
-		 */
-		flush_tlb_mm(tlb->mm);
-	} else if (unlikely (end - start >= 1024*1024*1024*1024UL
-			     || REGION_NUMBER(start) != REGION_NUMBER(end - 1)))
-	{
-		/*
-		 * If we flush more than a tera-byte or across regions, we're probably
-		 * better off just flushing the entire TLB(s).  This should be very rare
-		 * and is not worth optimizing for.
-		 */
-		flush_tlb_all();
-	} else {
-		/*
-		 * XXX fix me: flush_tlb_range() should take an mm pointer instead of a
-		 * vma pointer.
-		 */
-		struct vm_area_struct vma;
-
-		vma.vm_mm = tlb->mm;
-		/* flush the address range from the tlb: */
-		flush_tlb_range(&vma, start, end);
-		/* now flush the virt. page-table area mapping the address range: */
-		flush_tlb_range(&vma, ia64_thash(start), ia64_thash(end));
-	}
-
-	/* lastly, release the freed pages */
-	nr = tlb->nr;
-	if (!tlb_fast_mode(tlb)) {
-		unsigned long i;
-		tlb->nr = 0;
-		tlb->start_addr = ~0UL;
-		for (i = 0; i < nr; ++i)
-			free_page_and_swap_cache(tlb->pages[i]);
-	}
-}
-
-static inline void __tlb_alloc_page(struct mmu_gather *tlb)
-{
-	unsigned long addr = __get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
-
-	if (addr) {
-		tlb->pages = (void *)addr;
-		tlb->max = PAGE_SIZE / sizeof(void *);
-	}
-}
-
-
-static inline void
-tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned int full_mm_flush)
-{
-	tlb->mm = mm;
-	tlb->max = ARRAY_SIZE(tlb->local);
-	tlb->pages = tlb->local;
-	/*
-	 * Use fast mode if only 1 CPU is online.
-	 *
-	 * It would be tempting to turn on fast-mode for full_mm_flush as well.  But this
-	 * doesn't work because of speculative accesses and software prefetching: the page
-	 * table of "mm" may (and usually is) the currently active page table and even
-	 * though the kernel won't do any user-space accesses during the TLB shoot down, a
-	 * compiler might use speculation or lfetch.fault on what happens to be a valid
-	 * user-space address.  This in turn could trigger a TLB miss fault (or a VHPT
-	 * walk) and re-insert a TLB entry we just removed.  Slow mode avoids such
-	 * problems.  (We could make fast-mode work by switching the current task to a
-	 * different "mm" during the shootdown.) --davidm 08/02/2002
-	 */
-	tlb->nr = (num_online_cpus() == 1) ? ~0U : 0;
-	tlb->fullmm = full_mm_flush;
-	tlb->start_addr = ~0UL;
-}
-
-/*
- * Called at the end of the shootdown operation to free up any resources that were
- * collected.
- */
-static inline void
-tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
-{
-	/*
-	 * Note: tlb->nr may be 0 at this point, so we can't rely on tlb->start_addr and
-	 * tlb->end_addr.
-	 */
-	ia64_tlb_flush_mmu(tlb, start, end);
-
-	/* keep the page table cache within bounds */
-	check_pgt_cache();
-
-	if (tlb->pages != tlb->local)
-		free_pages((unsigned long)tlb->pages, 0);
-}
-
-/*
- * Logically, this routine frees PAGE.  On MP machines, the actual freeing of the page
- * must be delayed until after the TLB has been flushed (see comments at the beginning of
- * this file).
- */
-static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	tlb->need_flush = 1;
-
-	if (tlb_fast_mode(tlb)) {
-		free_page_and_swap_cache(page);
-		return 0;
-	}
-
-	if (!tlb->nr && tlb->pages == tlb->local)
-		__tlb_alloc_page(tlb);
-
-	tlb->pages[tlb->nr++] = page;
-	if (tlb->nr >= tlb->max)
-		return 1;
-
-	return 0;
-}
-
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
-{
-	ia64_tlb_flush_mmu(tlb, tlb->start_addr, tlb->end_addr);
-}
-
-static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	if (__tlb_remove_page(tlb, page))
-		tlb_flush_mmu(tlb);
-}
-
-/*
- * Remove TLB entry for PTE mapped at virtual address ADDRESS.  This is called for any
- * PTE, not just those pointing to (normal) physical memory.
- */
-static inline void
-__tlb_remove_tlb_entry (struct mmu_gather *tlb, pte_t *ptep, unsigned long address)
-{
-	if (tlb->start_addr == ~0UL)
-		tlb->start_addr = address;
-	tlb->end_addr = address + PAGE_SIZE;
-}
-
+#define __tlb_remove_tlb_entry(tlb, ptep, addr) do { } while (0)
 #define tlb_migrate_finish(mm)	platform_tlb_migrate_finish(mm)
 
-#define tlb_start_vma(tlb, vma)			do { } while (0)
-#define tlb_end_vma(tlb, vma)			do { } while (0)
-
-#define tlb_remove_tlb_entry(tlb, ptep, addr)		\
-do {							\
-	tlb->need_flush = 1;				\
-	__tlb_remove_tlb_entry(tlb, ptep, addr);	\
-} while (0)
-
-#define pte_free_tlb(tlb, ptep, address)		\
-do {							\
-	tlb->need_flush = 1;				\
-	__pte_free_tlb(tlb, ptep, address);		\
-} while (0)
-
-#define pmd_free_tlb(tlb, ptep, address)		\
-do {							\
-	tlb->need_flush = 1;				\
-	__pmd_free_tlb(tlb, ptep, address);		\
-} while (0)
-
-#define pud_free_tlb(tlb, pudp, address)		\
-do {							\
-	tlb->need_flush = 1;				\
-	__pud_free_tlb(tlb, pudp, address);		\
-} while (0)
+#include <asm-generic/tlb.h>
 
 #endif /* _ASM_IA64_TLB_H */
Index: linux-2.6/arch/ia64/include/asm/tlbflush.h
===================================================================
--- linux-2.6.orig/arch/ia64/include/asm/tlbflush.h
+++ linux-2.6/arch/ia64/include/asm/tlbflush.h
@@ -13,6 +13,31 @@
 #include <asm/mmu_context.h>
 #include <asm/page.h>
 
+struct ia64_tr_entry {
+	u64 ifa;
+	u64 itir;
+	u64 pte;
+	u64 rr;
+}; /*Record for tr entry!*/
+
+extern int ia64_itr_entry(u64 target_mask, u64 va, u64 pte, u64 log_size);
+extern void ia64_ptr_entry(u64 target_mask, int slot);
+extern struct ia64_tr_entry *ia64_idtrs[NR_CPUS];
+
+/*
+ region register macros
+*/
+#define RR_TO_VE(val)   (((val) >> 0) & 0x0000000000000001)
+#define RR_VE(val)     (((val) & 0x0000000000000001) << 0)
+#define RR_VE_MASK     0x0000000000000001L
+#define RR_VE_SHIFT    0
+#define RR_TO_PS(val)  (((val) >> 2) & 0x000000000000003f)
+#define RR_PS(val)     (((val) & 0x000000000000003f) << 2)
+#define RR_PS_MASK     0x00000000000000fcL
+#define RR_PS_SHIFT    2
+#define RR_RID_MASK    0x00000000ffffff00L
+#define RR_TO_RID(val)         ((val >> 8) & 0xffffff)
+
 /*
  * Now for some TLB flushing routines.  This is the kind of stuff that
  * can be very expensive, so try to avoid them whenever possible.
Index: linux-2.6/arch/ia64/mm/tlb.c
===================================================================
--- linux-2.6.orig/arch/ia64/mm/tlb.c
+++ linux-2.6/arch/ia64/mm/tlb.c
@@ -297,9 +297,8 @@ local_flush_tlb_all (void)
 	ia64_srlz_i();			/* srlz.i implies srlz.d */
 }
 
-void
-flush_tlb_range (struct vm_area_struct *vma, unsigned long start,
-		 unsigned long end)
+void __flush_tlb_range(struct vm_area_struct *vma,
+		  unsigned long start, unsigned long end)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long size = end - start;
@@ -335,6 +334,25 @@ flush_tlb_range (struct vm_area_struct *
 	preempt_enable();
 	ia64_srlz_i();			/* srlz.i implies srlz.d */
 }
+
+void flush_tlb_range(struct vm_area_struct *vma,
+		     unsigned long start, unsigned long end)
+{
+	if (unlikely(end - start >= 1024*1024*1024*1024UL
+			|| REGION_NUMBER(start) != REGION_NUMBER(end - 1))) {
+		/*
+		 * If we flush more than a tera-byte or across regions, we're
+		 * probably better off just flushing the entire TLB(s).  This
+		 * should be very rare and is not worth optimizing for.
+		 */
+		flush_tlb_all();
+	} else {
+		/* flush the address range from the tlb */
+		__flush_tlb_range(vma, start, end);
+		/* flush the virt. page-table area mapping the addr range */
+		__flush_tlb_range(vma, ia64_thash(start), ia64_thash(end));
+	}
+}
 EXPORT_SYMBOL(flush_tlb_range);
 
 void __devinit




* [RFC][PATCH 09/15] mm, sh: Convert sh to generic tlb
  2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
                   ` (7 preceding siblings ...)
  2011-03-07 17:13 ` [RFC][PATCH 08/15] mm, ia64: Convert ia64 " Peter Zijlstra
@ 2011-03-07 17:13 ` Peter Zijlstra
  2011-03-07 17:14 ` [RFC][PATCH 10/15] mm, um, Convert um " Peter Zijlstra
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:13 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky,
	Paul Mundt

[-- Attachment #1: mm-sh-tlb-range.patch --]
[-- Type: text/plain, Size: 4304 bytes --]

Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 arch/sh/Kconfig           |    1 
 arch/sh/include/asm/tlb.h |   98 ++--------------------------------------------
 2 files changed, 6 insertions(+), 93 deletions(-)

Index: linux-2.6/arch/sh/Kconfig
===================================================================
--- linux-2.6.orig/arch/sh/Kconfig
+++ linux-2.6/arch/sh/Kconfig
@@ -24,6 +24,7 @@ config SUPERH
 	select HAVE_SPARSE_IRQ
 	select RTC_LIB
 	select GENERIC_ATOMIC64
+	select HAVE_MMU_GATHER_RANGE if MMU
 	# Support the deprecated APIs until MFD and GPIOLIB catch up.
 	select GENERIC_HARDIRQS_NO_DEPRECATED if !MFD_SUPPORT && !GPIOLIB
 	help
Index: linux-2.6/arch/sh/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/sh/include/asm/tlb.h
+++ linux-2.6/arch/sh/include/asm/tlb.h
@@ -9,100 +9,14 @@
 #include <linux/pagemap.h>
 
 #ifdef CONFIG_MMU
-#include <asm/pgalloc.h>
-#include <asm/tlbflush.h>
-#include <asm/mmu_context.h>
-
-/*
- * TLB handling.  This allows us to remove pages from the page
- * tables, and efficiently handle the TLB issues.
- */
-struct mmu_gather {
-	struct mm_struct	*mm;
-	unsigned int		fullmm;
-	unsigned long		start, end;
-};
 
-static inline void init_tlb_gather(struct mmu_gather *tlb)
-{
-	tlb->start = TASK_SIZE;
-	tlb->end = 0;
-
-	if (tlb->fullmm) {
-		tlb->start = 0;
-		tlb->end = TASK_SIZE;
-	}
-}
-
-static inline void
-tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned int full_mm_flush)
-{
-	tlb->mm = mm;
-	tlb->fullmm = full_mm_flush;
-
-	init_tlb_gather(tlb);
-}
-
-static inline void
-tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
-{
-	if (tlb->fullmm)
-		flush_tlb_mm(tlb->mm);
-
-	/* keep the page table cache within bounds */
-	check_pgt_cache();
-}
-
-static inline void
-tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long address)
-{
-	if (tlb->start > address)
-		tlb->start = address;
-	if (tlb->end < address + PAGE_SIZE)
-		tlb->end = address + PAGE_SIZE;
-}
-
-/*
- * In the case of tlb vma handling, we can optimise these away in the
- * case where we're doing a full MM flush.  When we're doing a munmap,
- * the vmas are adjusted to only cover the region to be torn down.
- */
-static inline void
-tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
-{
-	if (!tlb->fullmm)
-		flush_cache_range(vma, vma->vm_start, vma->vm_end);
-}
-
-static inline void
-tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
-{
-	if (!tlb->fullmm && tlb->end) {
-		flush_tlb_range(vma, tlb->start, tlb->end);
-		init_tlb_gather(tlb);
-	}
-}
+#define __tlb_remove_tlb_entry(tlb, ptep, addr) do { } while (0)
 
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
-{
-}
+#define __pte_free_tlb(tlb, ptep, addr)	pte_free((tlb)->mm, ptep)
+#define __pmd_free_tlb(tlb, pmdp, addr)	pmd_free((tlb)->mm, pmdp)
+#define __pud_free_tlb(tlb, pudp, addr)	pud_free((tlb)->mm, pudp)
 
-static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	free_page_and_swap_cache(page);
-	return 0;
-}
-
-static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	__tlb_remove_page(tlb, page);
-}
-
-#define pte_free_tlb(tlb, ptep, addr)	pte_free((tlb)->mm, ptep)
-#define pmd_free_tlb(tlb, pmdp, addr)	pmd_free((tlb)->mm, pmdp)
-#define pud_free_tlb(tlb, pudp, addr)	pud_free((tlb)->mm, pudp)
-
-#define tlb_migrate_finish(mm)		do { } while (0)
+#include <asm-generic/tlb.h>
 
 #if defined(CONFIG_CPU_SH4) || defined(CONFIG_SUPERH64)
 extern void tlb_wire_entry(struct vm_area_struct *, unsigned long, pte_t);
@@ -122,8 +36,6 @@ static inline void tlb_unwire_entry(void
 
 #else /* CONFIG_MMU */
 
-#define tlb_start_vma(tlb, vma)				do { } while (0)
-#define tlb_end_vma(tlb, vma)				do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, pte, address)	do { } while (0)
 
 #include <asm-generic/tlb.h>
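
The gather deleted above tracked a [start, end) window by hand, widening
it for every pte that was unmapped and covering the whole address space
for a full-mm teardown, then issued a single flush_tlb_range() over the
result.  The generic code selected via HAVE_MMU_GATHER_RANGE is expected
to provide the same behaviour; the snippet below only illustrates that
accumulation invariant with made-up names and an illustrative TASK_SIZE,
it is not the asm-generic implementation.

#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define TASK_SIZE	(1UL << 47)	/* illustrative user address-space limit */

struct toy_gather {
	unsigned long start, end;
	int fullmm;
};

static void toy_init(struct toy_gather *tlb, int fullmm)
{
	tlb->fullmm = fullmm;
	/* empty window, or the whole address space for a full-mm flush */
	tlb->start = fullmm ? 0 : TASK_SIZE;
	tlb->end   = fullmm ? TASK_SIZE : 0;
}

static void toy_track(struct toy_gather *tlb, unsigned long address)
{
	/* widen the window to cover this page, as the deleted code did */
	if (tlb->start > address)
		tlb->start = address;
	if (tlb->end < address + PAGE_SIZE)
		tlb->end = address + PAGE_SIZE;
}

int main(void)
{
	struct toy_gather tlb;

	toy_init(&tlb, 0);
	toy_track(&tlb, 0x7000);
	toy_track(&tlb, 0x3000);
	assert(tlb.start == 0x3000 && tlb.end == 0x8000);
	printf("flush range [%#lx, %#lx)\n", tlb.start, tlb.end);
	return 0;
}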



^ permalink raw reply	[flat|nested] 17+ messages in thread

* [RFC][PATCH 10/15] mm, um, Convert um to generic tlb
  2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
                   ` (8 preceding siblings ...)
  2011-03-07 17:13 ` [RFC][PATCH 09/15] mm, sh: Convert sh " Peter Zijlstra
@ 2011-03-07 17:14 ` Peter Zijlstra
  2011-03-07 17:14 ` [RFC][PATCH 11/15] mm, avr32: Convert avr32 " Peter Zijlstra
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:14 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky,
	Jeff Dike

[-- Attachment #1: um-tlb-range.patch --]
[-- Type: text/plain, Size: 4976 bytes --]


Cc: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 arch/um/Kconfig.common    |    1 
 arch/um/include/asm/tlb.h |  111 +---------------------------------------------
 arch/um/kernel/tlb.c      |   13 -----
 3 files changed, 4 insertions(+), 121 deletions(-)

Index: linux-2.6/arch/um/Kconfig.common
===================================================================
--- linux-2.6.orig/arch/um/Kconfig.common
+++ linux-2.6/arch/um/Kconfig.common
@@ -8,6 +8,7 @@ config UML
 	default y
 	select HAVE_GENERIC_HARDIRQS
 	select GENERIC_HARDIRQS_NO_DEPRECATED
+	select HAVE_MMU_GATHER_RANGE
 
 config MMU
 	bool
Index: linux-2.6/arch/um/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/um/include/asm/tlb.h
+++ linux-2.6/arch/um/include/asm/tlb.h
@@ -7,114 +7,9 @@
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
 
-#define tlb_start_vma(tlb, vma) do { } while (0)
-#define tlb_end_vma(tlb, vma) do { } while (0)
-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
-
-/* struct mmu_gather is an opaque type used by the mm code for passing around
- * any data needed by arch specific code for tlb_remove_page.
- */
-struct mmu_gather {
-	struct mm_struct	*mm;
-	unsigned int		need_flush; /* Really unmapped some ptes? */
-	unsigned long		start;
-	unsigned long		end;
-	unsigned int		fullmm; /* non-zero means full mm flush */
-};
-
-static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
-					  unsigned long address)
-{
-	if (tlb->start > address)
-		tlb->start = address;
-	if (tlb->end < address + PAGE_SIZE)
-		tlb->end = address + PAGE_SIZE;
-}
-
-static inline void init_tlb_gather(struct mmu_gather *tlb)
-{
-	tlb->need_flush = 0;
-
-	tlb->start = TASK_SIZE;
-	tlb->end = 0;
-
-	if (tlb->fullmm) {
-		tlb->start = 0;
-		tlb->end = TASK_SIZE;
-	}
-}
-
-static inline void
-tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned int full_mm_flush)
-{
-	tlb->mm = mm;
-	tlb->fullmm = full_mm_flush;
-
-	init_tlb_gather(tlb);
-}
-
-extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
-			       unsigned long end);
-
-static inline void
-tlb_flush_mmu(struct mmu_gather *tlb)
-{
-	if (!tlb->need_flush)
-		return;
-
-	flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end);
-	init_tlb_gather(tlb);
-}
-
-/* tlb_finish_mmu
- *	Called at the end of the shootdown operation to free up any resources
- *	that were required.
- */
-static inline void
-tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
-{
-	tlb_flush_mmu(tlb);
-
-	/* keep the page table cache within bounds */
-	check_pgt_cache();
-}
-
-/* tlb_remove_page
- *	Must perform the equivalent to __free_pte(pte_get_and_clear(ptep)),
- *	while handling the additional races in SMP caused by other CPUs
- *	caching valid mappings in their TLBs.
- */
-static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	tlb->need_flush = 1;
-	free_page_and_swap_cache(page);
-	return 0;
-}
-
-static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	__tlb_remove_page(tlb, page);
-}
-
-/**
- * tlb_remove_tlb_entry - remember a pte unmapping for later tlb invalidation.
- *
- * Record the fact that pte's were really umapped in ->need_flush, so we can
- * later optimise away the tlb invalidate.   This helps when userspace is
- * unmapping already-unmapped pages, which happens quite a lot.
- */
-#define tlb_remove_tlb_entry(tlb, ptep, address)		\
-	do {							\
-		tlb->need_flush = 1;				\
-		__tlb_remove_tlb_entry(tlb, ptep, address);	\
-	} while (0)
-
-#define pte_free_tlb(tlb, ptep, addr) __pte_free_tlb(tlb, ptep, addr)
-
-#define pud_free_tlb(tlb, pudp, addr) __pud_free_tlb(tlb, pudp, addr)
-
-#define pmd_free_tlb(tlb, pmdp, addr) __pmd_free_tlb(tlb, pmdp, addr)
-
+#define __tlb_remove_tlb_entry(tlb, ptep, addr) do { } while (0)
 #define tlb_migrate_finish(mm) do {} while (0)
 
+#include <asm-generic/tlb.h>
+
 #endif
Index: linux-2.6/arch/um/kernel/tlb.c
===================================================================
--- linux-2.6.orig/arch/um/kernel/tlb.c
+++ linux-2.6/arch/um/kernel/tlb.c
@@ -500,19 +500,6 @@ void flush_tlb_range(struct vm_area_stru
 	else fix_range(vma->vm_mm, start, end, 0);
 }
 
-void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
-			unsigned long end)
-{
-	/*
-	 * Don't bother flushing if this address space is about to be
-	 * destroyed.
-	 */
-	if (atomic_read(&mm->mm_users) == 0)
-		return;
-
-	fix_range(mm, start, end, 0);
-}
-
 void flush_tlb_mm(struct mm_struct *mm)
 {
 	struct vm_area_struct *vma = mm->mmap;
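
One detail worth keeping in mind from the deleted um code: ->need_flush
let tlb_flush_mmu() skip the invalidate entirely when no ptes had
actually been unmapped, which the removed comment notes is common when
userspace unmaps already-unmapped pages.  The toy sketch below only
illustrates that short-circuit; the names are made up and it is not the
generic mmu_gather.

#include <stdio.h>

struct toy_gather {
	int need_flush;		/* did we really unmap anything? */
};

static void toy_remove_page(struct toy_gather *tlb)
{
	/* only a real unmapping marks the gather dirty */
	tlb->need_flush = 1;
}

static void toy_flush_mmu(struct toy_gather *tlb)
{
	if (!tlb->need_flush) {
		printf("no ptes unmapped, TLB invalidate skipped\n");
		return;
	}
	printf("TLB invalidate issued\n");
	tlb->need_flush = 0;
}

int main(void)
{
	struct toy_gather tlb = { .need_flush = 0 };

	toy_flush_mmu(&tlb);	/* munmap of an already-unmapped range */
	toy_remove_page(&tlb);	/* a pte was really torn down */
	toy_flush_mmu(&tlb);
	return 0;
}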



^ permalink raw reply	[flat|nested] 17+ messages in thread

* [RFC][PATCH 11/15] mm, avr32: Convert avr32 to generic tlb
  2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
                   ` (9 preceding siblings ...)
  2011-03-07 17:14 ` [RFC][PATCH 10/15] mm, um, Convert um " Peter Zijlstra
@ 2011-03-07 17:14 ` Peter Zijlstra
  2011-03-07 17:14 ` [RFC][PATCH 12/15] mm, mips: Convert mips " Peter Zijlstra
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:14 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky,
	Hans-Christian Egtvedt

[-- Attachment #1: avr32-mmu_range.patch --]
[-- Type: text/plain, Size: 1571 bytes --]

Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 arch/avr32/Kconfig           |    1 +
 arch/avr32/include/asm/tlb.h |    6 ------
 2 files changed, 1 insertion(+), 6 deletions(-)

Index: linux-2.6/arch/avr32/Kconfig
===================================================================
--- linux-2.6.orig/arch/avr32/Kconfig
+++ linux-2.6/arch/avr32/Kconfig
@@ -7,6 +7,7 @@ config AVR32
 	select HAVE_OPROFILE
 	select HAVE_KPROBES
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
+	select HAVE_MMU_GATHER_RANGE
 	help
 	  AVR32 is a high-performance 32-bit RISC microprocessor core,
 	  designed for cost-sensitive embedded applications, with particular
Index: linux-2.6/arch/avr32/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/avr32/include/asm/tlb.h
+++ linux-2.6/arch/avr32/include/asm/tlb.h
@@ -8,12 +8,6 @@
 #ifndef __ASM_AVR32_TLB_H
 #define __ASM_AVR32_TLB_H
 
-#define tlb_start_vma(tlb, vma) \
-	flush_cache_range(vma, vma->vm_start, vma->vm_end)
-
-#define tlb_end_vma(tlb, vma) \
-	flush_tlb_range(vma, vma->vm_start, vma->vm_end)
-
 #define __tlb_remove_tlb_entry(tlb, pte, address) do { } while(0)
 
 #include <asm-generic/tlb.h>



^ permalink raw reply	[flat|nested] 17+ messages in thread

* [RFC][PATCH 12/15] mm, mips: Convert mips to generic tlb
  2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
                   ` (10 preceding siblings ...)
  2011-03-07 17:14 ` [RFC][PATCH 11/15] mm, avr32: Convert avr32 " Peter Zijlstra
@ 2011-03-07 17:14 ` Peter Zijlstra
  2011-03-07 17:14 ` [RFC][PATCH 13/15] mm, parisc: Convert parisc " Peter Zijlstra
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:14 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky,
	Ralf Baechle

[-- Attachment #1: mips-mmu_range.patch --]
[-- Type: text/plain, Size: 1586 bytes --]

Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 arch/mips/Kconfig           |    1 +
 arch/mips/include/asm/tlb.h |   10 ----------
 2 files changed, 1 insertion(+), 10 deletions(-)

Index: linux-2.6/arch/mips/Kconfig
===================================================================
--- linux-2.6.orig/arch/mips/Kconfig
+++ linux-2.6/arch/mips/Kconfig
@@ -22,6 +22,7 @@ config MIPS
 	select HAVE_GENERIC_HARDIRQS
 	select GENERIC_IRQ_PROBE
 	select HAVE_ARCH_JUMP_LABEL
+	select HAVE_MMU_GATHER_RANGE
 
 menu "Machine selection"
 
Index: linux-2.6/arch/mips/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/mips/include/asm/tlb.h
+++ linux-2.6/arch/mips/include/asm/tlb.h
@@ -1,16 +1,6 @@
 #ifndef __ASM_TLB_H
 #define __ASM_TLB_H
 
-/*
- * MIPS doesn't need any special per-pte or per-vma handling, except
- * we need to flush cache for area to be unmapped.
- */
-#define tlb_start_vma(tlb, vma) 				\
-	do {							\
-		if (!tlb->fullmm)				\
-			flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-	}  while (0)
-#define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
 
 #include <asm-generic/tlb.h>



^ permalink raw reply	[flat|nested] 17+ messages in thread

* [RFC][PATCH 13/15] mm, parisc: Convert parisc to generic tlb
  2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
                   ` (11 preceding siblings ...)
  2011-03-07 17:14 ` [RFC][PATCH 12/15] mm, mips: Convert mips " Peter Zijlstra
@ 2011-03-07 17:14 ` Peter Zijlstra
  2011-03-07 17:14 ` [RFC][PATCH 14/15] mm, sparc32: Convert sparc32 " Peter Zijlstra
  2011-03-07 17:14 ` [RFC][PATCH 15/15] mm, xtensa: Convert xtensa " Peter Zijlstra
  14 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:14 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky,
	Kyle McMartin, James Bottomley

[-- Attachment #1: parisc-mmu_range.patch --]
[-- Type: text/plain, Size: 1616 bytes --]

Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: James Bottomley <jejb@parisc-linux.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 arch/parisc/Kconfig           |    1 +
 arch/parisc/include/asm/tlb.h |   10 ----------
 2 files changed, 1 insertion(+), 10 deletions(-)

Index: linux-2.6/arch/parisc/Kconfig
===================================================================
--- linux-2.6.orig/arch/parisc/Kconfig
+++ linux-2.6/arch/parisc/Kconfig
@@ -17,6 +17,7 @@ config PARISC
 	select IRQ_PER_CPU
 	select GENERIC_HARDIRQS_NO_DEPRECATED
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
+	select HAVE_MMU_GATHER_RANGE
 
 	help
 	  The PA-RISC microprocessor is designed by Hewlett-Packard and used
Index: linux-2.6/arch/parisc/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/parisc/include/asm/tlb.h
+++ linux-2.6/arch/parisc/include/asm/tlb.h
@@ -1,16 +1,6 @@
 #ifndef _PARISC_TLB_H
 #define _PARISC_TLB_H
 
-#define tlb_start_vma(tlb, vma) \
-do {	if (!(tlb)->fullmm)	\
-		flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-} while (0)
-
-#define tlb_end_vma(tlb, vma)	\
-do {	if (!(tlb)->fullmm)	\
-		flush_tlb_range(vma, vma->vm_start, vma->vm_end); \
-} while (0)
-
 #define __tlb_remove_tlb_entry(tlb, pte, address) \
 	do { } while (0)
 



^ permalink raw reply	[flat|nested] 17+ messages in thread

* [RFC][PATCH 14/15] mm, sparc32: Convert sparc32 to generic tlb
  2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
                   ` (12 preceding siblings ...)
  2011-03-07 17:14 ` [RFC][PATCH 13/15] mm, parisc: Convert parisc " Peter Zijlstra
@ 2011-03-07 17:14 ` Peter Zijlstra
  2011-03-07 17:54   ` Peter Zijlstra
  2011-03-07 17:14 ` [RFC][PATCH 15/15] mm, xtensa: Convert xtensa " Peter Zijlstra
  14 siblings, 1 reply; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:14 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky

[-- Attachment #1: sparc32-mmu_range.patch --]
[-- Type: text/plain, Size: 1491 bytes --]

Cc: David Miller <davem@davemloft.net>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 arch/sparc/Kconfig              |    1 +
 arch/sparc/include/asm/tlb_32.h |   10 ----------
 2 files changed, 1 insertion(+), 10 deletions(-)

Index: linux-2.6/arch/sparc/Kconfig
===================================================================
--- linux-2.6.orig/arch/sparc/Kconfig
+++ linux-2.6/arch/sparc/Kconfig
@@ -25,6 +25,7 @@ config SPARC
 	select HAVE_DMA_ATTRS
 	select HAVE_DMA_API_DEBUG
 	select HAVE_ARCH_JUMP_LABEL
+	select HAVE_MMU_GATHER_RANGE
 
 config SPARC32
 	def_bool !64BIT
Index: linux-2.6/arch/sparc/include/asm/tlb_32.h
===================================================================
--- linux-2.6.orig/arch/sparc/include/asm/tlb_32.h
+++ linux-2.6/arch/sparc/include/asm/tlb_32.h
@@ -1,16 +1,6 @@
 #ifndef _SPARC_TLB_H
 #define _SPARC_TLB_H
 
-#define tlb_start_vma(tlb, vma) \
-do {								\
-	flush_cache_range(vma, vma->vm_start, vma->vm_end);	\
-} while (0)
-
-#define tlb_end_vma(tlb, vma) \
-do {								\
-	flush_tlb_range(vma, vma->vm_start, vma->vm_end);	\
-} while (0)
-
 #define __tlb_remove_tlb_entry(tlb, pte, address) \
 	do { } while (0)
 



^ permalink raw reply	[flat|nested] 17+ messages in thread

* [RFC][PATCH 15/15] mm, xtensa: Convert xtensa to generic tlb
  2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
                   ` (13 preceding siblings ...)
  2011-03-07 17:14 ` [RFC][PATCH 14/15] mm, sparc32: Convert sparc32 " Peter Zijlstra
@ 2011-03-07 17:14 ` Peter Zijlstra
  14 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:14 UTC (permalink / raw)
  To: Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm, Linus Torvalds
  Cc: linux-kernel, linux-arch, linux-mm, Benjamin Herrenschmidt,
	David Miller, Hugh Dickins, Mel Gorman, Nick Piggin,
	Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky,
	Chris Zankel

[-- Attachment #1: xtensa-mmu_range.patch --]
[-- Type: text/plain, Size: 1915 bytes --]

Cc: Chris Zankel <chris@zankel.net>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 arch/xtensa/Kconfig           |    1 +
 arch/xtensa/include/asm/tlb.h |   23 -----------------------
 2 files changed, 1 insertion(+), 23 deletions(-)

Index: linux-2.6/arch/xtensa/Kconfig
===================================================================
--- linux-2.6.orig/arch/xtensa/Kconfig
+++ linux-2.6/arch/xtensa/Kconfig
@@ -7,6 +7,7 @@ config ZONE_DMA
 config XTENSA
 	def_bool y
 	select HAVE_IDE
+	select HAVE_MMU_GATHER_RANGE
 	help
 	  Xtensa processors are 32-bit RISC machines designed by Tensilica
 	  primarily for embedded systems.  These processors are both
Index: linux-2.6/arch/xtensa/include/asm/tlb.h
===================================================================
--- linux-2.6.orig/arch/xtensa/include/asm/tlb.h
+++ linux-2.6/arch/xtensa/include/asm/tlb.h
@@ -14,29 +14,6 @@
 #include <asm/cache.h>
 #include <asm/page.h>
 
-#if (DCACHE_WAY_SIZE <= PAGE_SIZE)
-
-/* Note, read http://lkml.org/lkml/2004/1/15/6 */
-
-# define tlb_start_vma(tlb,vma)			do { } while (0)
-# define tlb_end_vma(tlb,vma)			do { } while (0)
-
-#else
-
-# define tlb_start_vma(tlb, vma)					      \
-	do {								      \
-		if (!tlb->fullmm)					      \
-			flush_cache_range(vma, vma->vm_start, vma->vm_end);   \
-	} while(0)
-
-# define tlb_end_vma(tlb, vma)						      \
-	do {								      \
-		if (!tlb->fullmm)					      \
-			flush_tlb_range(vma, vma->vm_start, vma->vm_end);     \
-	} while(0)
-
-#endif
-
 #define __tlb_remove_tlb_entry(tlb,pte,addr)	do { } while (0)
 
 #include <asm-generic/tlb.h>



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC][PATCH 14/15] mm, sparc32: Convert sparc32 to generic tlb
  2011-03-07 17:14 ` [RFC][PATCH 14/15] mm, sparc32: Convert sparc32 " Peter Zijlstra
@ 2011-03-07 17:54   ` Peter Zijlstra
  0 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2011-03-07 17:54 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Rik van Riel, Ingo Molnar, akpm, Linus Torvalds, linux-kernel,
	linux-arch, linux-mm, Benjamin Herrenschmidt, David Miller,
	Hugh Dickins, Mel Gorman, Nick Piggin, Russell King,
	Chris Metcalf, Martin Schwidefsky

On Mon, 2011-03-07 at 18:14 +0100, Peter Zijlstra wrote:
> plain text document attachment (sparc32-mmu_range.patch)
> Cc: David Miller <davem@davemloft.net>
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> ---
>  arch/sparc/Kconfig              |    1 +
>  arch/sparc/include/asm/tlb_32.h |   10 ----------
>  2 files changed, 1 insertion(+), 10 deletions(-)
> 
> Index: linux-2.6/arch/sparc/Kconfig
> ===================================================================
> --- linux-2.6.orig/arch/sparc/Kconfig
> +++ linux-2.6/arch/sparc/Kconfig
> @@ -25,6 +25,7 @@ config SPARC
>  	select HAVE_DMA_ATTRS
>  	select HAVE_DMA_API_DEBUG
>  	select HAVE_ARCH_JUMP_LABEL
> +	select HAVE_MMU_GATHER_RANGE
>  
>  config SPARC32
>  	def_bool !64BIT

Ah, I probably should put that in the SPARC32 bit.. ;-)


^ permalink raw reply	[flat|nested] 17+ messages in thread


Thread overview: 17+ messages
2011-03-07 17:13 [RFC][PATCH 00/15] Unify TLB gather implementations -v2 Peter Zijlstra
2011-03-07 17:13 ` [RFC][PATCH 01/15] mm, powerpc: Dont use tlb_flush for external tlb flushes Peter Zijlstra
2011-03-07 17:13 ` [RFC][PATCH 02/15] mm, sparc64: " Peter Zijlstra
2011-03-07 17:13 ` [RFC][PATCH 03/15] mm, arch: Remove tlb_flush() Peter Zijlstra
2011-03-07 17:13 ` [RFC][PATCH 04/15] mm: Optimize fullmm TLB flushing Peter Zijlstra
2011-03-07 17:13 ` [RFC][PATCH 05/15] mm, tile: Change flush_tlb_range() VM_HUGETLB semantics Peter Zijlstra
2011-03-07 17:13 ` [RFC][PATCH 06/15] mm: Provide generic range tracking and flushing Peter Zijlstra
2011-03-07 17:13 ` [RFC][PATCH 07/15] mm, arm: Convert arm to generic tlb Peter Zijlstra
2011-03-07 17:13 ` [RFC][PATCH 08/15] mm, ia64: Convert ia64 " Peter Zijlstra
2011-03-07 17:13 ` [RFC][PATCH 09/15] mm, sh: Convert sh " Peter Zijlstra
2011-03-07 17:14 ` [RFC][PATCH 10/15] mm, um, Convert um " Peter Zijlstra
2011-03-07 17:14 ` [RFC][PATCH 11/15] mm, avr32: Convert avr32 " Peter Zijlstra
2011-03-07 17:14 ` [RFC][PATCH 12/15] mm, mips: Convert mips " Peter Zijlstra
2011-03-07 17:14 ` [RFC][PATCH 13/15] mm, parisc: Convert parisc " Peter Zijlstra
2011-03-07 17:14 ` [RFC][PATCH 14/15] mm, sparc32: Convert sparc32 " Peter Zijlstra
2011-03-07 17:54   ` Peter Zijlstra
2011-03-07 17:14 ` [RFC][PATCH 15/15] mm, xtensa: Convert xtensa " Peter Zijlstra
