From: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
To: tglx@linutronix.de, peterz@infradead.org,
paulmck@linux.vnet.ibm.com, rusty@rustcorp.com.au,
mingo@kernel.org, akpm@linux-foundation.org, namhyung@kernel.org,
vincent.guittot@linaro.org
Cc: sbw@mit.edu, tj@kernel.org, amit.kucheria@linaro.org,
rostedt@goodmis.org, rjw@sisk.pl,
srivatsa.bhat@linux.vnet.ibm.com, wangyun@linux.vnet.ibm.com,
xiaoguangrong@linux.vnet.ibm.com, nikunj@linux.vnet.ibm.com,
linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 04/10] sched, cpu hotplug: Use stable online cpus in try_to_wake_up() & select_task_rq()
Date: Tue, 04 Dec 2012 14:24:50 +0530
Message-ID: <20121204085447.25919.39629.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20121204085149.25919.29920.stgit@srivatsabhat.in.ibm.com>
With stop_machine() gone from the CPU offline path, we can no longer depend on
preempt_disable() to prevent CPUs from going offline from under us.

Use the get/put_online_cpus_stable_atomic() APIs instead, to prevent CPUs from
going offline while we are in atomic context.
Scheduler functions such as try_to_wake_up() and select_task_rq() (and even
select_fallback_rq()) deal with picking new CPUs to run tasks on. They should
pick those CPUs from the set of CPUs that are going to remain online, so use
the cpu_online_stable_mask while making those decisions.
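
To illustrate the intended usage, here is a rough sketch (not part of this
patch) of how an atomic hotplug reader would pick a CPU from the stable online
mask. The pick_stable_cpu() helper below is made up for the example; the
get/put_online_cpus_stable_atomic() and cpu_online_stable() primitives are the
ones introduced in patch 01 of this series:

  /*
   * Example only: an atomic hotplug reader picking a CPU from the
   * stable online mask. Such a CPU is guaranteed not to go offline
   * until the matching put_online_cpus_stable_atomic().
   */
  static int pick_stable_cpu(const struct cpumask *allowed)
  {
  	int cpu, chosen = -1;

  	get_online_cpus_stable_atomic();	/* stable CPUs can't go offline now */

  	for_each_cpu(cpu, allowed) {
  		if (!cpu_online_stable(cpu))	/* skip CPUs on their way out */
  			continue;
  		chosen = cpu;
  		break;
  	}

  	put_online_cpus_stable_atomic();
  	return chosen;
  }
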
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---
kernel/sched/core.c | 20 +++++++++++++++++---
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2d8927f..ef6ada4 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1103,6 +1103,10 @@ EXPORT_SYMBOL_GPL(kick_process);
#ifdef CONFIG_SMP
/*
* ->cpus_allowed is protected by both rq->lock and p->pi_lock
+ *
+ * Must be called under get/put_online_cpus_stable_atomic() or
+ * equivalent, to prevent CPUs from going offline from underneath
+ * us.
*/
static int select_fallback_rq(int cpu, struct task_struct *p)
{
@@ -1112,7 +1116,7 @@ static int select_fallback_rq(int cpu, struct task_struct *p)
/* Look for allowed, online CPU in same node. */
for_each_cpu(dest_cpu, nodemask) {
- if (!cpu_online(dest_cpu))
+ if (!cpu_online_stable(dest_cpu))
continue;
if (!cpu_active(dest_cpu))
continue;
@@ -1123,7 +1127,7 @@ static int select_fallback_rq(int cpu, struct task_struct *p)
for (;;) {
/* Any allowed, online CPU? */
for_each_cpu(dest_cpu, tsk_cpus_allowed(p)) {
- if (!cpu_online(dest_cpu))
+ if (!cpu_online_stable(dest_cpu))
continue;
if (!cpu_active(dest_cpu))
continue;
@@ -1166,6 +1170,9 @@ out:
/*
* The caller (fork, wakeup) owns p->pi_lock, ->cpus_allowed is stable.
+ *
+ * Must be called under get/put_online_cpus_stable_atomic(), to prevent
+ * CPUs from going offline from underneath us.
*/
static inline
int select_task_rq(struct task_struct *p, int sd_flags, int wake_flags)
@@ -1183,7 +1190,7 @@ int select_task_rq(struct task_struct *p, int sd_flags, int wake_flags)
* not worry about this generic constraint ]
*/
if (unlikely(!cpumask_test_cpu(cpu, tsk_cpus_allowed(p)) ||
- !cpu_online(cpu)))
+ !cpu_online_stable(cpu)))
cpu = select_fallback_rq(task_cpu(p), p);
return cpu;
@@ -1406,6 +1413,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
int cpu, success = 0;
smp_wmb();
+ get_online_cpus_stable_atomic();
raw_spin_lock_irqsave(&p->pi_lock, flags);
if (!(p->state & state))
goto out;
@@ -1446,6 +1454,7 @@ stat:
ttwu_stat(p, cpu, wake_flags);
out:
raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+ put_online_cpus_stable_atomic();
return success;
}
@@ -1624,6 +1633,7 @@ void wake_up_new_task(struct task_struct *p)
unsigned long flags;
struct rq *rq;
+ get_online_cpus_stable_atomic();
raw_spin_lock_irqsave(&p->pi_lock, flags);
#ifdef CONFIG_SMP
/*
@@ -1644,6 +1654,7 @@ void wake_up_new_task(struct task_struct *p)
p->sched_class->task_woken(rq, p);
#endif
task_rq_unlock(rq, p, &flags);
+ put_online_cpus_stable_atomic();
}
#ifdef CONFIG_PREEMPT_NOTIFIERS
@@ -2541,6 +2552,7 @@ void sched_exec(void)
unsigned long flags;
int dest_cpu;
+ get_online_cpus_stable_atomic();
raw_spin_lock_irqsave(&p->pi_lock, flags);
dest_cpu = p->sched_class->select_task_rq(p, SD_BALANCE_EXEC, 0);
if (dest_cpu == smp_processor_id())
@@ -2550,11 +2562,13 @@ void sched_exec(void)
struct migration_arg arg = { p, dest_cpu };
raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+ put_online_cpus_stable_atomic();
stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
return;
}
unlock:
raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+ put_online_cpus_stable_atomic();
}
#endif