linux-arm-kernel.lists.infradead.org archive mirror
From: peterz@infradead.org (Peter Zijlstra)
To: linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition
Date: Tue, 29 Nov 2011 13:48:01 +0100	[thread overview]
Message-ID: <1322570881.2921.230.camel@twins> (raw)
In-Reply-To: <1322569352-23584-1-git-send-email-catalin.marinas@arm.com>

On Tue, 2011-11-29 at 12:22 +0000, Catalin Marinas wrote:
> Hi,
> 
> This set of patches removes the use of __ARCH_WANT_INTERRUPTS_ON_CTXSW
> on ARM.
> 
> As a background, the ARM architecture versions consist of two main sets
> with regards to the MMU switching needs:
> 
> 1. ARMv5 and earlier have VIVT caches and they require a full cache and
>    TLB flush at every context switch.
> 2. ARMv6 and later have VIPT caches and the TLBs are tagged with an ASID
>    (Address Space ID). The number of ASIDs is limited to 256 and
>    the allocation algorithm requires IPIs when all the ASIDs have been
>    used.
> 
> Both cases above require interrupts enabled during context switch for
> latency reasons (1) or deadlock avoidance (2).
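
As an aside for readers: the rollover scheme in (2) can be modelled in a
few lines of plain C. This is a toy userspace sketch (all names are mine,
not the actual arch/arm/mm/context.c code): each mm's context carries a
generation tag plus an 8-bit ASID, and once the 256 ASIDs run out a new
generation starts and everything must be flushed -- which is exactly where
the cross-CPU IPIs come in.

```c
#include <stdint.h>

#define NUM_ASIDS 256          /* 8-bit ASID field on ARMv6+ */

static uint64_t asid_generation = 1;  /* bumped on every rollover */
static uint32_t next_asid = 1;        /* ASID 0 is reserved */
static int flushes;                   /* stand-in for the TLB-flush IPIs */

/* A task's context is (generation << 8) | asid; 0 means "none yet". */
static uint64_t new_context(uint64_t ctx)
{
	/* Still tagged with the current generation? Keep it. */
	if ((ctx >> 8) == asid_generation)
		return ctx;

	if (next_asid == NUM_ASIDS) {
		/* Out of ASIDs: start a new generation, flush everyone. */
		asid_generation++;
		next_asid = 1;
		flushes++;	/* the real code IPIs the other CPUs here */
	}
	return (asid_generation << 8) | next_asid++;
}
```

Allocating, say, 300 fresh contexts trips one rollover; a busy box churns
through generations like this and pays the flush each time.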
> 
> The first patch in the series introduces a new scheduler hook invoked
> after the rq->lock is released and interrupts enabled. The subsequent
> two patches change the ARM context switching code (for processors in
> category 2 above) to use a reserved TTBR value instead of a reserved
> ASID. The 4th patch removes the __ARCH_WANT_INTERRUPTS_ON_CTXSW
> definition for ASID-capable processors by deferring the new ASID
> allocation to the post-lock switch hook.
> 
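
For anyone skimming, here is a toy model of the ordering that hook buys
you (all names are mine except finish_arch_post_lock_switch(), which
patch 1/6 introduces): the hook only runs once the rq lock has been
dropped and interrupts re-enabled, so the arch code can do work there --
like the IPI-needing ASID allocation -- that would deadlock under the
lock.

```c
#include <stdbool.h>

/* Toy flags standing in for rq->lock and the local IRQ state. */
static bool rq_locked, irqs_disabled;
static bool hook_ran, hook_saw_lock, hook_saw_irqs_off;

/* Patch 1/6's hook: records what state it observes when called. */
static void finish_arch_post_lock_switch(void)
{
	hook_ran = true;
	hook_saw_lock = rq_locked;
	hook_saw_irqs_off = irqs_disabled;
}

static void context_switch(void)
{
	rq_locked = true;		/* lock held on entry */
	irqs_disabled = true;		/* prepare_lock_switch() */
	/* ... switch_mm() / switch_to() run with IRQs off ... */
	rq_locked = false;		/* finish_lock_switch(): unlock ... */
	irqs_disabled = false;		/* ... then local_irq_enable() */
	finish_arch_post_lock_switch();	/* the new hook runs last */
}
```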
> The last patch also removes __ARCH_WANT_INTERRUPTS_ON_CTXSW for ARMv5
> and earlier processors. It defers the cpu_switch_mm call to the
> post-lock switch hook. Since this is only running on UP systems and the
> preemption is disabled during context switching, it assumes that the old
> mm is still valid until the post-lock switch hook.

Yeah, note how there's an if (mm) mmdrop(mm) after that.
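
And in case the "old mm stays valid" part sounds fishy, a minimal
refcount sketch of why the hook can still dereference it (toy names;
the real thing is the mm_count reference that mmdrop() releases):
context_switch() holds a reference that is only dropped after the hook.

```c
/* Toy mm with a refcount; 0 means freed. */
struct mm { int count; };

static void mm_grab(struct mm *mm) { mm->count++; }
static void mm_drop(struct mm *mm) { mm->count--; }

/* Returns 1 if the old mm was still valid when the hook ran. */
static int switch_away_from(struct mm *oldmm)
{
	int valid_at_hook;

	mm_grab(oldmm);			/* context_switch() takes a ref */
	/* ... post-lock switch hook runs here ... */
	valid_at_hook = oldmm->count > 0;
	mm_drop(oldmm);			/* the if (mm) mmdrop(mm) above */
	return valid_at_hook;
}
```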

> The series has been tested on Cortex-A9 (vexpress) and ARM926
> (versatile). Comments are welcome.

Yay!!! One heads-up though: there's a tiny merge conflict between your
tree and tip -- we moved kernel/sched.c around, so you'll find it in
kernel/sched/core.c after you merge up.

---
Subject: sched: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
Date: Tue Nov 29 13:44:40 CET 2011

Now that the last user is dead, remove support for
__ARCH_WANT_INTERRUPTS_ON_CTXSW.

Much-thanks-to: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/fork.c        |    4 ----
 kernel/sched/core.c  |   40 +---------------------------------------
 kernel/sched/sched.h |    6 ------
 3 files changed, 1 insertion(+), 49 deletions(-)

Index: linux-2.6/kernel/fork.c
===================================================================
--- linux-2.6.orig/kernel/fork.c
+++ linux-2.6/kernel/fork.c
@@ -1191,11 +1191,7 @@ static struct task_struct *copy_process(
 #endif
 #ifdef CONFIG_TRACE_IRQFLAGS
 	p->irq_events = 0;
-#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
-	p->hardirqs_enabled = 1;
-#else
 	p->hardirqs_enabled = 0;
-#endif
 	p->hardirq_enable_ip = 0;
 	p->hardirq_enable_event = 0;
 	p->hardirq_disable_ip = _THIS_IP_;
Index: linux-2.6/kernel/sched/core.c
===================================================================
--- linux-2.6.orig/kernel/sched/core.c
+++ linux-2.6/kernel/sched/core.c
@@ -1460,25 +1460,6 @@ static void ttwu_queue_remote(struct tas
 	if (llist_add(&p->wake_entry, &cpu_rq(cpu)->wake_list))
 		smp_send_reschedule(cpu);
 }
-
-#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
-static int ttwu_activate_remote(struct task_struct *p, int wake_flags)
-{
-	struct rq *rq;
-	int ret = 0;
-
-	rq = __task_rq_lock(p);
-	if (p->on_cpu) {
-		ttwu_activate(rq, p, ENQUEUE_WAKEUP);
-		ttwu_do_wakeup(rq, p, wake_flags);
-		ret = 1;
-	}
-	__task_rq_unlock(rq);
-
-	return ret;
-
-}
-#endif /* __ARCH_WANT_INTERRUPTS_ON_CTXSW */
 #endif /* CONFIG_SMP */
 
 static int ttwu_share_cache(int this_cpu, int cpu)
@@ -1559,21 +1540,8 @@ try_to_wake_up(struct task_struct *p, un
 	 * If the owning (remote) cpu is still in the middle of schedule() with
 	 * this task as prev, wait until its done referencing the task.
 	 */
-	while (p->on_cpu) {
-#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
-		/*
-		 * In case the architecture enables interrupts in
-		 * context_switch(), we cannot busy wait, since that
-		 * would lead to deadlocks when an interrupt hits and
-		 * tries to wake up @prev. So bail and do a complete
-		 * remote wakeup.
-		 */
-		if (ttwu_activate_remote(p, wake_flags))
-			goto stat;
-#else
+	while (p->on_cpu)
 		cpu_relax();
-#endif
-	}
 	/*
 	 * Pairs with the smp_wmb() in finish_lock_switch().
 	 */
@@ -1916,13 +1884,7 @@ static void finish_task_switch(struct rq
 	 */
 	prev_state = prev->state;
 	finish_arch_switch(prev);
-#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
-	local_irq_disable();
-#endif /* __ARCH_WANT_INTERRUPTS_ON_CTXSW */
 	perf_event_task_sched_in(prev, current);
-#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
-	local_irq_enable();
-#endif /* __ARCH_WANT_INTERRUPTS_ON_CTXSW */
 	finish_lock_switch(rq, prev);
 
 	fire_sched_in_preempt_notifiers(current);
Index: linux-2.6/kernel/sched/sched.h
===================================================================
--- linux-2.6.orig/kernel/sched/sched.h
+++ linux-2.6/kernel/sched/sched.h
@@ -685,11 +685,7 @@ static inline void prepare_lock_switch(s
 	 */
 	next->on_cpu = 1;
 #endif
-#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
-	raw_spin_unlock_irq(&rq->lock);
-#else
 	raw_spin_unlock(&rq->lock);
-#endif
 }
 
 static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
@@ -703,9 +699,7 @@ static inline void finish_lock_switch(st
 	smp_wmb();
 	prev->on_cpu = 0;
 #endif
-#ifndef __ARCH_WANT_INTERRUPTS_ON_CTXSW
 	local_irq_enable();
-#endif
 }
 #endif /* __ARCH_WANT_UNLOCKED_CTXSW */
 


Thread overview: 13+ messages
2011-11-29 12:22 [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition Catalin Marinas
2011-11-29 12:22 ` [RFC PATCH 1/6] sched: Introduce the finish_arch_post_lock_switch() scheduler hook Catalin Marinas
2011-11-29 12:22 ` [RFC PATCH 2/6] ARM: Use TTBR1 instead of reserved context ID Catalin Marinas
2011-11-29 12:22 ` [RFC PATCH 3/6] ARM: Allow ASID 0 to be allocated to tasks Catalin Marinas
2011-11-29 12:22 ` [RFC PATCH 4/6] ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs Catalin Marinas
2011-12-01  2:57   ` Frank Rowand
2011-12-01  9:26     ` Catalin Marinas
2011-12-01 19:42       ` Frank Rowand
2011-11-29 12:22 ` [RFC PATCH 5/6] ARM: Remove current_mm per-cpu variable Catalin Marinas
2011-11-29 12:22 ` [RFC PATCH 6/6] ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on pre-ARMv6 CPUs Catalin Marinas
2011-11-29 12:48 ` Peter Zijlstra [this message]
2011-12-01  3:14 ` [RFC PATCH 0/6] ARM: Remove the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition Frank Rowand
2011-12-01  9:26   ` Catalin Marinas
