linux-arm-kernel.lists.infradead.org archive mirror
* [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition
@ 2011-11-29 12:22 Catalin Marinas
  2011-11-29 12:22 ` [RFC PATCH 1/6] sched: Introduce the finish_arch_post_lock_switch() scheduler hook Catalin Marinas
                   ` (7 more replies)
  0 siblings, 8 replies; 13+ messages in thread
From: Catalin Marinas @ 2011-11-29 12:22 UTC (permalink / raw)
  To: linux-arm-kernel

Hi,

This set of patches removes the use of __ARCH_WANT_INTERRUPTS_ON_CTXSW
on ARM.

As background, the ARM architecture versions fall into two main
categories with regard to their MMU switching needs:

1. ARMv5 and earlier have VIVT caches and they require a full cache and
   TLB flush at every context switch.
2. ARMv6 and later have VIPT caches and TLBs tagged with an ASID
   (Address Space ID). The number of ASIDs is limited to 256 and the
   allocation algorithm requires IPIs when all the ASIDs have been
   used.

Both cases above require interrupts to be enabled during the context
switch: for latency reasons in case (1), and for deadlock avoidance in
case (2), since issuing an IPI with interrupts disabled can deadlock
when two CPUs cross-call each other and neither can service the other's
request.

The first patch in the series introduces a new scheduler hook invoked
after the rq->lock is released and interrupts enabled. The subsequent
two patches change the ARM context switching code (for processors in
category 2 above) to use a reserved TTBR value instead of a reserved
ASID. The 4th patch removes the __ARCH_WANT_INTERRUPTS_ON_CTXSW
definition for ASID-capable processors by deferring the new ASID
allocation to the post-lock switch hook.

The last patch also removes __ARCH_WANT_INTERRUPTS_ON_CTXSW for ARMv5
and earlier processors by deferring the cpu_switch_mm() call to the
post-lock switch hook. Since this code only runs on UP systems and
preemption is disabled during the context switch, the old mm remains
valid until the post-lock switch hook runs.
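
For illustration, the deferral pattern common to patches 4 and 6 boils
down to roughly the following (simplified sketch only; the real code in
the patches below also handles the new ASID allocation and masks
interrupts around cpu_switch_mm()):

/* called from switch_mm() with interrupts disabled, rq->lock held */
static inline void check_and_switch_context(struct mm_struct *mm,
					    struct task_struct *tsk)
{
	/* defer the expensive/unsafe work until IRQs are enabled */
	set_ti_thread_flag(task_thread_info(tsk), TIF_SWITCH_MM);
}

/* called by the scheduler once rq->lock is dropped and IRQs are on */
static inline void finish_arch_post_lock_switch(void)
{
	if (test_and_clear_thread_flag(TIF_SWITCH_MM)) {
		struct mm_struct *mm = current->mm;

		cpu_switch_mm(mm->pgd, mm);
	}
}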

The series has been tested on Cortex-A9 (vexpress) and ARM926
(versatile). Comments are welcome.

Thanks,

Catalin


Catalin Marinas (4):
  sched: Introduce the finish_arch_post_lock_switch() scheduler hook
  ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs
  ARM: Remove current_mm per-cpu variable
  ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on pre-ARMv6 CPUs

Will Deacon (2):
  ARM: Use TTBR1 instead of reserved context ID
  ARM: Allow ASID 0 to be allocated to tasks

 arch/arm/include/asm/mmu_context.h |  106 +++++++++++++++++++++++++++---------
 arch/arm/include/asm/system.h      |    7 ---
 arch/arm/include/asm/thread_info.h |    1 +
 arch/arm/mm/context.c              |   42 +++++++--------
 arch/arm/mm/proc-v7.S              |    9 +---
 kernel/sched.c                     |    4 ++
 6 files changed, 107 insertions(+), 62 deletions(-)

* [RFC PATCH 1/6] sched: Introduce the finish_arch_post_lock_switch() scheduler hook
  2011-11-29 12:22 [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition Catalin Marinas
@ 2011-11-29 12:22 ` Catalin Marinas
  2011-11-29 12:22 ` [RFC PATCH 2/6] ARM: Use TTBR1 instead of reserved context ID Catalin Marinas
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Catalin Marinas @ 2011-11-29 12:22 UTC (permalink / raw)
  To: linux-arm-kernel

This hook is called by the scheduler after rq->lock has been released
and interrupts enabled. It will be used in subsequent patches on the ARM
architecture.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
---
 kernel/sched.c |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 0e9344a..7b46a39 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -983,6 +983,9 @@ static inline u64 global_rt_runtime(void)
 #ifndef finish_arch_switch
 # define finish_arch_switch(prev)	do { } while (0)
 #endif
+#ifndef finish_arch_post_lock_switch
+# define finish_arch_post_lock_switch()	do { } while (0)
+#endif
 
 static inline int task_current(struct rq *rq, struct task_struct *p)
 {
@@ -3203,6 +3206,7 @@ static void finish_task_switch(struct rq *rq, struct task_struct *prev)
 	local_irq_enable();
 #endif /* __ARCH_WANT_INTERRUPTS_ON_CTXSW */
 	finish_lock_switch(rq, prev);
+	finish_arch_post_lock_switch();
 
 	fire_sched_in_preempt_notifiers(current);
 	if (mm)
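
A note on usage: the hook defaults to a no-op, so an architecture opts
in by providing its own definition somewhere visible to kernel/sched.c
(the ARM patches later in this series use asm/mmu_context.h), along
these lines (sketch only; patch 4 below does exactly this for ARM):

#define finish_arch_post_lock_switch finish_arch_post_lock_switch
static inline void finish_arch_post_lock_switch(void)
{
	/*
	 * Arch-specific work that needs to run with interrupts enabled,
	 * e.g. a deferred mm switch or new ASID allocation.
	 */
}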

* [RFC PATCH 2/6] ARM: Use TTBR1 instead of reserved context ID
  2011-11-29 12:22 [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition Catalin Marinas
  2011-11-29 12:22 ` [RFC PATCH 1/6] sched: Introduce the finish_arch_post_lock_switch() scheduler hook Catalin Marinas
@ 2011-11-29 12:22 ` Catalin Marinas
  2011-11-29 12:22 ` [RFC PATCH 3/6] ARM: Allow ASID 0 to be allocated to tasks Catalin Marinas
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Catalin Marinas @ 2011-11-29 12:22 UTC (permalink / raw)
  To: linux-arm-kernel

From: Will Deacon <will.deacon@arm.com>

On ARMv7 CPUs that cache first level page table entries (like the
Cortex-A15), using a reserved ASID while changing the TTBR or flushing
the TLB is unsafe.

This is because the CPU may cache the first level entry as the result of
a speculative memory access while the reserved ASID is assigned. After
the process owning the page tables dies, the memory will be reallocated
and may be written with junk values which can be interpreted as global,
valid PTEs by the processor. This will result in the TLB being populated
with bogus global entries.

This patch avoids the use of a reserved context ID in the v7 switch_mm
and ASID rollover code by temporarily using the swapper_pg_dir pointed
at by TTBR1, which contains only global entries that are not tagged
with ASIDs.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
---
 arch/arm/mm/context.c |   22 ++++++++++++++--------
 arch/arm/mm/proc-v7.S |   10 ++++------
 2 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index 93aac06..a062230 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -22,11 +22,20 @@ unsigned int cpu_last_asid = ASID_FIRST_VERSION;
 DEFINE_PER_CPU(struct mm_struct *, current_mm);
 #endif
 
+static void cpu_set_reserved_ttbr0(void)
+{
+	u32 ttb;
+	/* Copy TTBR1 into TTBR0 */
+	asm volatile(
+	"	mrc	p15, 0, %0, c2, c0, 1		@ read TTBR1\n"
+	"	mcr	p15, 0, %0, c2, c0, 0		@ set TTBR0\n"
+	: "=r" (ttb));
+	isb();
+}
+
 /*
  * We fork()ed a process, and we need a new context for the child
- * to run in.  We reserve version 0 for initial tasks so we will
- * always allocate an ASID. The ASID 0 is reserved for the TTBR
- * register changing sequence.
+ * to run in.
  */
 void __init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 {
@@ -36,9 +45,7 @@ void __init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 
 static void flush_context(void)
 {
-	/* set the reserved ASID before flushing the TLB */
-	asm("mcr	p15, 0, %0, c13, c0, 1\n" : : "r" (0));
-	isb();
+	cpu_set_reserved_ttbr0();
 	local_flush_tlb_all();
 	if (icache_is_vivt_asid_tagged()) {
 		__flush_icache_all();
@@ -99,8 +106,7 @@ static void reset_context(void *info)
 	set_mm_context(mm, asid);
 
 	/* set the new ASID */
-	asm("mcr	p15, 0, %0, c13, c0, 1\n" : : "r" (mm->context.id));
-	isb();
+	cpu_switch_mm(mm->pgd, mm);
 }
 
 #else
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index 2c559ac..2faff3b 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -116,18 +116,16 @@ ENTRY(cpu_v7_switch_mm)
 #ifdef CONFIG_ARM_ERRATA_430973
 	mcr	p15, 0, r2, c7, c5, 6		@ flush BTAC/BTB
 #endif
-#ifdef CONFIG_ARM_ERRATA_754322
-	dsb
-#endif
-	mcr	p15, 0, r2, c13, c0, 1		@ set reserved context ID
-	isb
-1:	mcr	p15, 0, r0, c2, c0, 0		@ set TTB 0
+	mrc	p15, 0, r2, c2, c0, 1		@ load TTB 1
+	mcr	p15, 0, r2, c2, c0, 0		@ into TTB 0
 	isb
 #ifdef CONFIG_ARM_ERRATA_754322
 	dsb
 #endif
 	mcr	p15, 0, r1, c13, c0, 1		@ set context ID
 	isb
+	mcr	p15, 0, r0, c2, c0, 0		@ set TTB 0
+	isb
 #endif
 	mov	pc, lr
 ENDPROC(cpu_v7_switch_mm)

* [RFC PATCH 3/6] ARM: Allow ASID 0 to be allocated to tasks
  2011-11-29 12:22 [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition Catalin Marinas
  2011-11-29 12:22 ` [RFC PATCH 1/6] sched: Introduce the finish_arch_post_lock_switch() scheduler hook Catalin Marinas
  2011-11-29 12:22 ` [RFC PATCH 2/6] ARM: Use TTBR1 instead of reserved context ID Catalin Marinas
@ 2011-11-29 12:22 ` Catalin Marinas
  2011-11-29 12:22 ` [RFC PATCH 4/6] ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs Catalin Marinas
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Catalin Marinas @ 2011-11-29 12:22 UTC (permalink / raw)
  To: linux-arm-kernel

From: Will Deacon <will.deacon@arm.com>

Now that ASID 0 is no longer used as a reserved value, allow it to be
allocated to tasks.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
---
 arch/arm/mm/context.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index a062230..1d5014b 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -100,7 +100,7 @@ static void reset_context(void *info)
 		return;
 
 	smp_rmb();
-	asid = cpu_last_asid + cpu + 1;
+	asid = cpu_last_asid + cpu;
 
 	flush_context();
 	set_mm_context(mm, asid);
@@ -149,13 +149,13 @@ void __new_context(struct mm_struct *mm)
 	 * to start a new version and flush the TLB.
 	 */
 	if (unlikely((asid & ~ASID_MASK) == 0)) {
-		asid = cpu_last_asid + smp_processor_id() + 1;
+		asid = cpu_last_asid + smp_processor_id();
 		flush_context();
 #ifdef CONFIG_SMP
 		smp_wmb();
 		smp_call_function(reset_context, NULL, 1);
 #endif
-		cpu_last_asid += NR_CPUS;
+		cpu_last_asid += NR_CPUS - 1;
 	}
 
 	set_mm_context(mm, asid);
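
To make the numbering change concrete, assume ASID_BITS == 8 (so
ASID_FIRST_VERSION == 0x100) and a two-CPU system, with a rollover
leaving cpu_last_asid at 0x200 (the values are only illustrative).
Before this patch, the triggering CPU 1 took 0x200 + 1 + 1 = 0x202,
CPU 0 took 0x201 in reset_context(), cpu_last_asid advanced by NR_CPUS
to 0x202, and ASID 0 of the new generation was never handed out. With
this patch, CPU 1 takes 0x201, CPU 0 takes 0x200 (ASID 0), and
cpu_last_asid advances by NR_CPUS - 1 to 0x201, so the next
__new_context() call allocates 0x202 with no collision; one extra ASID
per generation becomes usable.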

* [RFC PATCH 4/6] ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs
  2011-11-29 12:22 [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition Catalin Marinas
                   ` (2 preceding siblings ...)
  2011-11-29 12:22 ` [RFC PATCH 3/6] ARM: Allow ASID 0 to be allocated to tasks Catalin Marinas
@ 2011-11-29 12:22 ` Catalin Marinas
  2011-12-01  2:57   ` Frank Rowand
  2011-11-29 12:22 ` [RFC PATCH 5/6] ARM: Remove current_mm per-cpu variable Catalin Marinas
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 13+ messages in thread
From: Catalin Marinas @ 2011-11-29 12:22 UTC (permalink / raw)
  To: linux-arm-kernel

Since the ASIDs must be unique to an mm across all the CPUs in a system,
the __new_context() function needs to broadcast a context reset event to
all the CPUs during ASID allocation if a roll-over occurred. Such IPIs
cannot be issued with interrupts disabled and ARM had to define
__ARCH_WANT_INTERRUPTS_ON_CTXSW.

This patch replaces the check_context() function with
check_and_switch_context(), called from switch_mm(). On ASID-capable
CPUs (ARMv6 onwards), if a new ASID is needed, the __new_context() and
cpu_switch_mm() calls are deferred to the post-lock switch hook, where
interrupts are enabled. Setting the reserved TTBR0 is also moved from
cpu_v7_switch_mm() to check_and_switch_context().

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
---
 arch/arm/include/asm/mmu_context.h |   81 ++++++++++++++++++++++++++++--------
 arch/arm/include/asm/system.h      |    2 +
 arch/arm/include/asm/thread_info.h |    1 +
 arch/arm/mm/context.c              |    4 +-
 arch/arm/mm/proc-v7.S              |    3 -
 5 files changed, 69 insertions(+), 22 deletions(-)

diff --git a/arch/arm/include/asm/mmu_context.h b/arch/arm/include/asm/mmu_context.h
index 71605d9..3e4b219 100644
--- a/arch/arm/include/asm/mmu_context.h
+++ b/arch/arm/include/asm/mmu_context.h
@@ -48,39 +48,75 @@ DECLARE_PER_CPU(struct mm_struct *, current_mm);
 
 void __init_new_context(struct task_struct *tsk, struct mm_struct *mm);
 void __new_context(struct mm_struct *mm);
+void cpu_set_reserved_ttbr0(void);
 
-static inline void check_context(struct mm_struct *mm)
+static inline void check_and_switch_context(struct mm_struct *mm,
+					    struct task_struct *tsk)
 {
+	if (unlikely(mm->context.kvm_seq != init_mm.context.kvm_seq))
+		__check_kvm_seq(mm);
+
 	/*
-	 * This code is executed with interrupts enabled. Therefore,
-	 * mm->context.id cannot be updated to the latest ASID version
-	 * on a different CPU (and condition below not triggered)
-	 * without first getting an IPI to reset the context. The
-	 * alternative is to take a read_lock on mm->context.id_lock
-	 * (after changing its type to rwlock_t).
+	 * Required during context switch to avoid speculative page table
+	 * walking with the wrong TTBR.
 	 */
-	if (unlikely((mm->context.id ^ cpu_last_asid) >> ASID_BITS))
-		__new_context(mm);
+	cpu_set_reserved_ttbr0();
 
-	if (unlikely(mm->context.kvm_seq != init_mm.context.kvm_seq))
-		__check_kvm_seq(mm);
+	/*
+	 * This code is executed with interrupts disabled. If mm->context.id
+	 * and cpu_last_asid are from the same generation (condition below
+	 * false), they cannot be updated on a different CPU without an IPI
+	 * being issued to reset the context. However, smp_call_function() on
+	 * a different CPU will need to wait for the current context switch to
+	 * complete and interrupts to be enabled before using the new
+	 * generation of ASIDs.
+	 */
+	if (unlikely((mm->context.id ^ cpu_last_asid) >> ASID_BITS))
+		/*
+		 * Defer the new ASID allocation until after the context
+		 * switch critical region since __new_context() cannot be
+		 * called with interrupts disabled.
+		 */
+		set_ti_thread_flag(task_thread_info(tsk), TIF_SWITCH_MM);
+	else
+		cpu_switch_mm(mm->pgd, mm);
 }
 
 #define init_new_context(tsk,mm)	(__init_new_context(tsk,mm),0)
 
-#else
+#define finish_arch_post_lock_switch \
+	finish_arch_post_lock_switch
+static inline void finish_arch_post_lock_switch(void)
+{
+	if (test_and_clear_thread_flag(TIF_SWITCH_MM)) {
+		struct mm_struct *mm = current->mm;
+		unsigned long flags;
+
+		__new_context(mm);
+
+		local_irq_save(flags);
+		cpu_switch_mm(mm->pgd, mm);
+		local_irq_restore(flags);
+	}
+}
+
+#else	/* !CONFIG_CPU_HAS_ASID */
 
-static inline void check_context(struct mm_struct *mm)
+static inline void check_and_switch_context(struct mm_struct *mm,
+					    struct task_struct *tsk)
 {
 #ifdef CONFIG_MMU
 	if (unlikely(mm->context.kvm_seq != init_mm.context.kvm_seq))
 		__check_kvm_seq(mm);
+	cpu_switch_mm(mm->pgd, mm);
 #endif
 }
 
 #define init_new_context(tsk,mm)	0
 
-#endif
+#define finish_arch_post_lock_switch()	do { } while (0)
+
+#endif	/* CONFIG_CPU_HAS_ASID */
 
 #define destroy_context(mm)		do { } while(0)
 
@@ -122,8 +158,7 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 		struct mm_struct **crt_mm = &per_cpu(current_mm, cpu);
 		*crt_mm = next;
 #endif
-		check_context(next);
-		cpu_switch_mm(next->pgd, next);
+		check_and_switch_context(next, tsk);
 		if (cache_is_vivt())
 			cpumask_clear_cpu(cpu, mm_cpumask(prev));
 	}
@@ -131,7 +166,19 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 }
 
 #define deactivate_mm(tsk,mm)	do { } while (0)
-#define activate_mm(prev,next)	switch_mm(prev, next, NULL)
+
+static inline void activate_mm(struct mm_struct *prev, struct mm_struct *next)
+{
+#ifdef CONFIG_MMU
+	unsigned long flags;
+
+	local_irq_save(flags);
+	switch_mm(prev, next, current);
+	local_irq_restore(flags);
+
+	finish_arch_post_lock_switch();
+#endif
+}
 
 /*
  * We are inserting a "fake" vma for the user-accessible vector page so
diff --git a/arch/arm/include/asm/system.h b/arch/arm/include/asm/system.h
index 984014b..3daebde 100644
--- a/arch/arm/include/asm/system.h
+++ b/arch/arm/include/asm/system.h
@@ -222,7 +222,9 @@ static inline void set_copro_access(unsigned int val)
  * so enable interrupts over the context switch to avoid high
  * latency.
  */
+#ifndef CONFIG_CPU_HAS_ASID
 #define __ARCH_WANT_INTERRUPTS_ON_CTXSW
+#endif
 
 /*
  * switch_to(prev, next) should switch from task `prev' to `next'
diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
index 7b5cc8d..119e4eb 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -145,6 +145,7 @@ extern void vfp_flush_hwstate(struct thread_info *);
 #define TIF_FREEZE		19
 #define TIF_RESTORE_SIGMASK	20
 #define TIF_SECCOMP		21
+#define TIF_SWITCH_MM		22	/* deferred switch_mm */
 
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index 1d5014b..d80aef0 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -22,7 +22,7 @@ unsigned int cpu_last_asid = ASID_FIRST_VERSION;
 DEFINE_PER_CPU(struct mm_struct *, current_mm);
 #endif
 
-static void cpu_set_reserved_ttbr0(void)
+void cpu_set_reserved_ttbr0(void)
 {
 	u32 ttb;
 	/* Copy TTBR1 into TTBR0 */
@@ -43,7 +43,7 @@ void __init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 	raw_spin_lock_init(&mm->context.id_lock);
 }
 
-static void flush_context(void)
+void flush_context(void)
 {
 	cpu_set_reserved_ttbr0();
 	local_flush_tlb_all();
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index 2faff3b..d5334d9 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -116,9 +116,6 @@ ENTRY(cpu_v7_switch_mm)
 #ifdef CONFIG_ARM_ERRATA_430973
 	mcr	p15, 0, r2, c7, c5, 6		@ flush BTAC/BTB
 #endif
-	mrc	p15, 0, r2, c2, c0, 1		@ load TTB 1
-	mcr	p15, 0, r2, c2, c0, 0		@ into TTB 0
-	isb
 #ifdef CONFIG_ARM_ERRATA_754322
 	dsb
 #endif

* [RFC PATCH 5/6] ARM: Remove current_mm per-cpu variable
  2011-11-29 12:22 [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition Catalin Marinas
                   ` (3 preceding siblings ...)
  2011-11-29 12:22 ` [RFC PATCH 4/6] ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs Catalin Marinas
@ 2011-11-29 12:22 ` Catalin Marinas
  2011-11-29 12:22 ` [RFC PATCH 6/6] ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on pre-ARMv6 CPUs Catalin Marinas
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Catalin Marinas @ 2011-11-29 12:22 UTC (permalink / raw)
  To: linux-arm-kernel

The current_mm per-cpu variable was used to store the new mm between
the switch_mm() and switch_to() calls, where an IPI to reset the
context could otherwise have picked up the wrong mm. Since interrupts
are now disabled during the context switch, there is no need for this
variable: current->active_mm already points to the current mm once
interrupts are re-enabled.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
---
 arch/arm/include/asm/mmu_context.h |    7 -------
 arch/arm/mm/context.c              |   12 +-----------
 2 files changed, 1 insertions(+), 18 deletions(-)

diff --git a/arch/arm/include/asm/mmu_context.h b/arch/arm/include/asm/mmu_context.h
index 3e4b219..56710d8 100644
--- a/arch/arm/include/asm/mmu_context.h
+++ b/arch/arm/include/asm/mmu_context.h
@@ -42,9 +42,6 @@ void __check_kvm_seq(struct mm_struct *mm);
 #define ASID_FIRST_VERSION	(1 << ASID_BITS)
 
 extern unsigned int cpu_last_asid;
-#ifdef CONFIG_SMP
-DECLARE_PER_CPU(struct mm_struct *, current_mm);
-#endif
 
 void __init_new_context(struct task_struct *tsk, struct mm_struct *mm);
 void __new_context(struct mm_struct *mm);
@@ -154,10 +151,6 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 		__flush_icache_all();
 #endif
 	if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next) {
-#ifdef CONFIG_SMP
-		struct mm_struct **crt_mm = &per_cpu(current_mm, cpu);
-		*crt_mm = next;
-#endif
 		check_and_switch_context(next, tsk);
 		if (cache_is_vivt())
 			cpumask_clear_cpu(cpu, mm_cpumask(prev));
diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index d80aef0..cbca8a4 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -18,9 +18,6 @@
 
 static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
 unsigned int cpu_last_asid = ASID_FIRST_VERSION;
-#ifdef CONFIG_SMP
-DEFINE_PER_CPU(struct mm_struct *, current_mm);
-#endif
 
 void cpu_set_reserved_ttbr0(void)
 {
@@ -90,14 +87,7 @@ static void reset_context(void *info)
 {
 	unsigned int asid;
 	unsigned int cpu = smp_processor_id();
-	struct mm_struct *mm = per_cpu(current_mm, cpu);
-
-	/*
-	 * Check if a current_mm was set on this CPU as it might still
-	 * be in the early booting stages and using the reserved ASID.
-	 */
-	if (!mm)
-		return;
+	struct mm_struct *mm = current->active_mm;
 
 	smp_rmb();
 	asid = cpu_last_asid + cpu;

* [RFC PATCH 6/6] ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on pre-ARMv6 CPUs
  2011-11-29 12:22 [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition Catalin Marinas
                   ` (4 preceding siblings ...)
  2011-11-29 12:22 ` [RFC PATCH 5/6] ARM: Remove current_mm per-cpu variable Catalin Marinas
@ 2011-11-29 12:22 ` Catalin Marinas
  2011-11-29 12:48 ` [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition Peter Zijlstra
  2011-12-01  3:14 ` Frank Rowand
  7 siblings, 0 replies; 13+ messages in thread
From: Catalin Marinas @ 2011-11-29 12:22 UTC (permalink / raw)
  To: linux-arm-kernel

This patch removes the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition for
ARMv5 and earlier processors. On such processors, the context switch
requires a full cache flush. To avoid high interrupt latencies, the mm
switching is deferred to the post-lock switch hook.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
---
 arch/arm/include/asm/mmu_context.h |   26 +++++++++++++++++++++-----
 arch/arm/include/asm/system.h      |    9 ---------
 2 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/arch/arm/include/asm/mmu_context.h b/arch/arm/include/asm/mmu_context.h
index 56710d8..f52729c 100644
--- a/arch/arm/include/asm/mmu_context.h
+++ b/arch/arm/include/asm/mmu_context.h
@@ -99,19 +99,35 @@ static inline void finish_arch_post_lock_switch(void)
 
 #else	/* !CONFIG_CPU_HAS_ASID */
 
+#ifdef CONFIG_MMU
+
 static inline void check_and_switch_context(struct mm_struct *mm,
 					    struct task_struct *tsk)
 {
-#ifdef CONFIG_MMU
 	if (unlikely(mm->context.kvm_seq != init_mm.context.kvm_seq))
 		__check_kvm_seq(mm);
-	cpu_switch_mm(mm->pgd, mm);
-#endif
+
+	/*
+	 * Defer the cpu_switch_mm() call and continue running with the old
+	 * mm. Since we only support UP systems on non-ASID CPUs, the old mm
+	 * will remain valid until the finish_arch_post_lock_switch() call.
+	 */
+	set_ti_thread_flag(task_thread_info(tsk), TIF_SWITCH_MM);
 }
 
-#define init_new_context(tsk,mm)	0
+#define finish_arch_post_lock_switch \
+	finish_arch_post_lock_switch
+static inline void finish_arch_post_lock_switch(void)
+{
+	if (test_and_clear_thread_flag(TIF_SWITCH_MM)) {
+		struct mm_struct *mm = current->mm;
+		cpu_switch_mm(mm->pgd, mm);
+	}
+}
 
-#define finish_arch_post_lock_switch()	do { } while (0)
+#endif	/* CONFIG_MMU */
+
+#define init_new_context(tsk,mm)	0
 
 #endif	/* CONFIG_CPU_HAS_ASID */
 
diff --git a/arch/arm/include/asm/system.h b/arch/arm/include/asm/system.h
index 3daebde..ac7fade 100644
--- a/arch/arm/include/asm/system.h
+++ b/arch/arm/include/asm/system.h
@@ -218,15 +218,6 @@ static inline void set_copro_access(unsigned int val)
 }
 
 /*
- * switch_mm() may do a full cache flush over the context switch,
- * so enable interrupts over the context switch to avoid high
- * latency.
- */
-#ifndef CONFIG_CPU_HAS_ASID
-#define __ARCH_WANT_INTERRUPTS_ON_CTXSW
-#endif
-
-/*
  * switch_to(prev, next) should switch from task `prev' to `next'
  * `prev' will never be the same as `next'.  schedule() itself
  * contains the memory barrier to tell GCC not to cache `current'.

* [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition
  2011-11-29 12:22 [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition Catalin Marinas
                   ` (5 preceding siblings ...)
  2011-11-29 12:22 ` [RFC PATCH 6/6] ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on pre-ARMv6 CPUs Catalin Marinas
@ 2011-11-29 12:48 ` Peter Zijlstra
  2011-12-01  3:14 ` Frank Rowand
  7 siblings, 0 replies; 13+ messages in thread
From: Peter Zijlstra @ 2011-11-29 12:48 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, 2011-11-29 at 12:22 +0000, Catalin Marinas wrote:
> Hi,
> 
> This set of patches removes the use of __ARCH_WANT_INTERRUPTS_ON_CTXSW
> on ARM.
> 
> As a background, the ARM architecture versions consist of two main sets
> with regards to the MMU switching needs:
> 
> 1. ARMv5 and earlier have VIVT caches and they require a full cache and
>    TLB flush at every context switch.
> 2. ARMv6 and later have VIPT caches and the TLBs are tagged with an ASID
>    (application specific ID). The number of ASIDs is limited to 256 and
>    the allocation algorithm requires IPIs when all the ASIDs have been
>    used.
> 
> Both cases above require interrupts enabled during context switch for
> latency reasons (1) or deadlock avoidance (2).
> 
> The first patch in the series introduces a new scheduler hook invoked
> after the rq->lock is released and interrupts enabled. The subsequent
> two patches change the ARM context switching code (for processors in
> category 2 above) to use a reserved TTBR value instead of a reserved
> ASID. The 4th patch removes the __ARCH_WANT_INTERRUPTS_ON_CTXSW
> definition for ASID-capable processors by deferring the new ASID
> allocation to the post-lock switch hook.
> 
> The last patch also removes __ARCH_WANT_INTERRUPTS_ON_CTXSW for ARMv5
> and earlier processors. It defers the cpu_switch_mm call to the
> post-lock switch hook. Since this is only running on UP systems and the
> preemption is disabled during context switching, it assumes that the old
> mm is still valid until the post-lock switch hook.

Yeah, see how there's an if (mm) mmdrop(mm) after that; the reference
to the old mm is only dropped at that point.

> The series has been tested on Cortex-A9 (vexpress) and ARM926
> (versatile). Comments are welcome.

Yay!!! One heads-up: there's a tiny merge conflict between your tree
and tip, since we moved kernel/sched.c around; you'll find it in
kernel/sched/core.c after you merge up.

---
Subject: sched: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
Date: Tue Nov 29 13:44:40 CET 2011

Now that the last user is dead, remove support for
__ARCH_WANT_INTERRUPTS_ON_CTXSW.

Much-thanks-to: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/fork.c        |    4 ----
 kernel/sched/core.c  |   40 +---------------------------------------
 kernel/sched/sched.h |    6 ------
 3 files changed, 1 insertion(+), 49 deletions(-)

Index: linux-2.6/kernel/fork.c
===================================================================
--- linux-2.6.orig/kernel/fork.c
+++ linux-2.6/kernel/fork.c
@@ -1191,11 +1191,7 @@ static struct task_struct *copy_process(
 #endif
 #ifdef CONFIG_TRACE_IRQFLAGS
 	p->irq_events = 0;
-#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
-	p->hardirqs_enabled = 1;
-#else
 	p->hardirqs_enabled = 0;
-#endif
 	p->hardirq_enable_ip = 0;
 	p->hardirq_enable_event = 0;
 	p->hardirq_disable_ip = _THIS_IP_;
Index: linux-2.6/kernel/sched/core.c
===================================================================
--- linux-2.6.orig/kernel/sched/core.c
+++ linux-2.6/kernel/sched/core.c
@@ -1460,25 +1460,6 @@ static void ttwu_queue_remote(struct tas
 	if (llist_add(&p->wake_entry, &cpu_rq(cpu)->wake_list))
 		smp_send_reschedule(cpu);
 }
-
-#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
-static int ttwu_activate_remote(struct task_struct *p, int wake_flags)
-{
-	struct rq *rq;
-	int ret = 0;
-
-	rq = __task_rq_lock(p);
-	if (p->on_cpu) {
-		ttwu_activate(rq, p, ENQUEUE_WAKEUP);
-		ttwu_do_wakeup(rq, p, wake_flags);
-		ret = 1;
-	}
-	__task_rq_unlock(rq);
-
-	return ret;
-
-}
-#endif /* __ARCH_WANT_INTERRUPTS_ON_CTXSW */
 #endif /* CONFIG_SMP */
 
 static int ttwu_share_cache(int this_cpu, int cpu)
@@ -1559,21 +1540,8 @@ try_to_wake_up(struct task_struct *p, un
 	 * If the owning (remote) cpu is still in the middle of schedule() with
 	 * this task as prev, wait until its done referencing the task.
 	 */
-	while (p->on_cpu) {
-#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
-		/*
-		 * In case the architecture enables interrupts in
-		 * context_switch(), we cannot busy wait, since that
-		 * would lead to deadlocks when an interrupt hits and
-		 * tries to wake up @prev. So bail and do a complete
-		 * remote wakeup.
-		 */
-		if (ttwu_activate_remote(p, wake_flags))
-			goto stat;
-#else
+	while (p->on_cpu)
 		cpu_relax();
-#endif
-	}
 	/*
 	 * Pairs with the smp_wmb() in finish_lock_switch().
 	 */
@@ -1916,13 +1884,7 @@ static void finish_task_switch(struct rq
 	 */
 	prev_state = prev->state;
 	finish_arch_switch(prev);
-#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
-	local_irq_disable();
-#endif /* __ARCH_WANT_INTERRUPTS_ON_CTXSW */
 	perf_event_task_sched_in(prev, current);
-#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
-	local_irq_enable();
-#endif /* __ARCH_WANT_INTERRUPTS_ON_CTXSW */
 	finish_lock_switch(rq, prev);
 
 	fire_sched_in_preempt_notifiers(current);
Index: linux-2.6/kernel/sched/sched.h
===================================================================
--- linux-2.6.orig/kernel/sched/sched.h
+++ linux-2.6/kernel/sched/sched.h
@@ -685,11 +685,7 @@ static inline void prepare_lock_switch(s
 	 */
 	next->on_cpu = 1;
 #endif
-#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
-	raw_spin_unlock_irq(&rq->lock);
-#else
 	raw_spin_unlock(&rq->lock);
-#endif
 }
 
 static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
@@ -703,9 +699,7 @@ static inline void finish_lock_switch(st
 	smp_wmb();
 	prev->on_cpu = 0;
 #endif
-#ifndef __ARCH_WANT_INTERRUPTS_ON_CTXSW
 	local_irq_enable();
-#endif
 }
 #endif /* __ARCH_WANT_UNLOCKED_CTXSW */
 

* [RFC PATCH 4/6] ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs
  2011-11-29 12:22 ` [RFC PATCH 4/6] ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs Catalin Marinas
@ 2011-12-01  2:57   ` Frank Rowand
  2011-12-01  9:26     ` Catalin Marinas
  0 siblings, 1 reply; 13+ messages in thread
From: Frank Rowand @ 2011-12-01  2:57 UTC (permalink / raw)
  To: linux-arm-kernel

On 11/29/11 04:22, Catalin Marinas wrote:
> Since the ASIDs must be unique to an mm across all the CPUs in a system,
> the __new_context() function needs to broadcast a context reset event to
> all the CPUs during ASID allocation if a roll-over occurred. Such IPIs
> cannot be issued with interrupts disabled and ARM had to define
> __ARCH_WANT_INTERRUPTS_ON_CTXSW.
> 
> This patch changes the check_context() function to
> check_and_switch_context() called from switch_mm(). In case of
> ASID-capable CPUs (ARMv6 onwards), if a new ASID is needed, it defers
> the __new_context() and cpu_switch_mm() calls to the post-lock switch
> hook where the interrupts are enabled. Setting the reserved TTBR0 was
> also moved to check_and_switch_context() from cpu_v7_switch_mm().
> 
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Russell King <linux@arm.linux.org.uk>
> ---
>  arch/arm/include/asm/mmu_context.h |   81 ++++++++++++++++++++++++++++--------
>  arch/arm/include/asm/system.h      |    2 +
>  arch/arm/include/asm/thread_info.h |    1 +
>  arch/arm/mm/context.c              |    4 +-
>  arch/arm/mm/proc-v7.S              |    3 -
>  5 files changed, 69 insertions(+), 22 deletions(-)
> 

< snip >

> diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
> index 2faff3b..d5334d9 100644
> --- a/arch/arm/mm/proc-v7.S
> +++ b/arch/arm/mm/proc-v7.S
> @@ -116,9 +116,6 @@ ENTRY(cpu_v7_switch_mm)
>  #ifdef CONFIG_ARM_ERRATA_430973
>  	mcr	p15, 0, r2, c7, c5, 6		@ flush BTAC/BTB
>  #endif
> -	mrc	p15, 0, r2, c2, c0, 1		@ load TTB 1
> -	mcr	p15, 0, r2, c2, c0, 0		@ into TTB 0
> -	isb
>  #ifdef CONFIG_ARM_ERRATA_754322
>  	dsb
>  #endif

I do not have a tree that matches this version of cpu_v7_switch_mm().
Can you point me at a tree that I can see this in?

-Frank

* [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition
  2011-11-29 12:22 [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition Catalin Marinas
                   ` (6 preceding siblings ...)
  2011-11-29 12:48 ` [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition Peter Zijlstra
@ 2011-12-01  3:14 ` Frank Rowand
  2011-12-01  9:26   ` Catalin Marinas
  7 siblings, 1 reply; 13+ messages in thread
From: Frank Rowand @ 2011-12-01  3:14 UTC (permalink / raw)
  To: linux-arm-kernel

On 11/29/11 04:22, Catalin Marinas wrote:
> Hi,
> 
> This set of patches removes the use of __ARCH_WANT_INTERRUPTS_ON_CTXSW
> on ARM.

All 6 patches:

Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>

* [RFC PATCH 4/6] ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs
  2011-12-01  2:57   ` Frank Rowand
@ 2011-12-01  9:26     ` Catalin Marinas
  2011-12-01 19:42       ` Frank Rowand
  0 siblings, 1 reply; 13+ messages in thread
From: Catalin Marinas @ 2011-12-01  9:26 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Dec 01, 2011 at 02:57:07AM +0000, Frank Rowand wrote:
> On 11/29/11 04:22, Catalin Marinas wrote:
> > Since the ASIDs must be unique to an mm across all the CPUs in a system,
> > the __new_context() function needs to broadcast a context reset event to
> > all the CPUs during ASID allocation if a roll-over occurred. Such IPIs
> > cannot be issued with interrupts disabled and ARM had to define
> > __ARCH_WANT_INTERRUPTS_ON_CTXSW.
> > 
> > This patch changes the check_context() function to
> > check_and_switch_context() called from switch_mm(). In case of
> > ASID-capable CPUs (ARMv6 onwards), if a new ASID is needed, it defers
> > the __new_context() and cpu_switch_mm() calls to the post-lock switch
> > hook where the interrupts are enabled. Setting the reserved TTBR0 was
> > also moved to check_and_switch_context() from cpu_v7_switch_mm().
> > 
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Russell King <linux@arm.linux.org.uk>
> > ---
> >  arch/arm/include/asm/mmu_context.h |   81 ++++++++++++++++++++++++++++--------
> >  arch/arm/include/asm/system.h      |    2 +
> >  arch/arm/include/asm/thread_info.h |    1 +
> >  arch/arm/mm/context.c              |    4 +-
> >  arch/arm/mm/proc-v7.S              |    3 -
> >  5 files changed, 69 insertions(+), 22 deletions(-)
> > 
> 
> < snip >
> 
> > diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
> > index 2faff3b..d5334d9 100644
> > --- a/arch/arm/mm/proc-v7.S
> > +++ b/arch/arm/mm/proc-v7.S
> > @@ -116,9 +116,6 @@ ENTRY(cpu_v7_switch_mm)
> >  #ifdef CONFIG_ARM_ERRATA_430973
> >  	mcr	p15, 0, r2, c7, c5, 6		@ flush BTAC/BTB
> >  #endif
> > -	mrc	p15, 0, r2, c2, c0, 1		@ load TTB 1
> > -	mcr	p15, 0, r2, c2, c0, 0		@ into TTB 0
> > -	isb
> >  #ifdef CONFIG_ARM_ERRATA_754322
> >  	dsb
> >  #endif
> 
> I do not have a tree that matches this version of cpu_v7_switch_mm().
> Can you point me at a tree that I can see this in?

That's added by the second patch in the series (and removed again by a
later patch, but the change is logical in both places and keeps the
code bisectable).

-- 
Catalin

* [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition
  2011-12-01  3:14 ` Frank Rowand
@ 2011-12-01  9:26   ` Catalin Marinas
  0 siblings, 0 replies; 13+ messages in thread
From: Catalin Marinas @ 2011-12-01  9:26 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Dec 01, 2011 at 03:14:05AM +0000, Frank Rowand wrote:
> On 11/29/11 04:22, Catalin Marinas wrote:
> > Hi,
> > 
> > This set of patches removes the use of __ARCH_WANT_INTERRUPTS_ON_CTXSW
> > on ARM.
> 
> All 6 patches:
> 
> Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>

Thanks.

-- 
Catalin

* [RFC PATCH 4/6] ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs
  2011-12-01  9:26     ` Catalin Marinas
@ 2011-12-01 19:42       ` Frank Rowand
  0 siblings, 0 replies; 13+ messages in thread
From: Frank Rowand @ 2011-12-01 19:42 UTC (permalink / raw)
  To: linux-arm-kernel

On 12/01/11 01:26, Catalin Marinas wrote:
> On Thu, Dec 01, 2011 at 02:57:07AM +0000, Frank Rowand wrote:
>> On 11/29/11 04:22, Catalin Marinas wrote:
>>> Since the ASIDs must be unique to an mm across all the CPUs in a system,
>>> the __new_context() function needs to broadcast a context reset event to
>>> all the CPUs during ASID allocation if a roll-over occurred. Such IPIs
>>> cannot be issued with interrupts disabled and ARM had to define
>>> __ARCH_WANT_INTERRUPTS_ON_CTXSW.
>>>
>>> This patch changes the check_context() function to
>>> check_and_switch_context() called from switch_mm(). In case of
>>> ASID-capable CPUs (ARMv6 onwards), if a new ASID is needed, it defers
>>> the __new_context() and cpu_switch_mm() calls to the post-lock switch
>>> hook where the interrupts are enabled. Setting the reserved TTBR0 was
>>> also moved to check_and_switch_context() from cpu_v7_switch_mm().
>>>
>>> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
>>> Cc: Russell King <linux@arm.linux.org.uk>
>>> ---
>>>  arch/arm/include/asm/mmu_context.h |   81 ++++++++++++++++++++++++++++--------
>>>  arch/arm/include/asm/system.h      |    2 +
>>>  arch/arm/include/asm/thread_info.h |    1 +
>>>  arch/arm/mm/context.c              |    4 +-
>>>  arch/arm/mm/proc-v7.S              |    3 -
>>>  5 files changed, 69 insertions(+), 22 deletions(-)
>>>
>>
>> < snip >
>>
>>> diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
>>> index 2faff3b..d5334d9 100644
>>> --- a/arch/arm/mm/proc-v7.S
>>> +++ b/arch/arm/mm/proc-v7.S
>>> @@ -116,9 +116,6 @@ ENTRY(cpu_v7_switch_mm)
>>>  #ifdef CONFIG_ARM_ERRATA_430973
>>>  	mcr	p15, 0, r2, c7, c5, 6		@ flush BTAC/BTB
>>>  #endif
>>> -	mrc	p15, 0, r2, c2, c0, 1		@ load TTB 1
>>> -	mcr	p15, 0, r2, c2, c0, 0		@ into TTB 0
>>> -	isb
>>>  #ifdef CONFIG_ARM_ERRATA_754322
>>>  	dsb
>>>  #endif
>>
>> I do not have a tree that matches this version of cpu_v7_switch_mm().
>> Can you point me at a tree that I can see this in?
> 
> That's added by the second patch in the series (and removed in a later
> patch but it is a logical change in both situations and keeps the code
> bisectable).
> 

Ah, yes!  Thanks.

-Frank
