From: Tejun Heo <tj@kernel.org>
To: torvalds@linux-foundation.org, awalls@radix.net,
linux-kernel@vger.kernel.org, jeff@garzik.org, mingo@elte.hu,
akpm@linux-foundation.org, jens.axboe@oracle.com,
rusty@rustcorp.com.au, cl@linux-foundation.org,
dhowells@redhat.com, arjan@linux.intel.com, avi@redhat.com,
peterz@infradead.org, johannes@sipsolutions.net,
andi@firstfloor.org
Cc: Tejun Heo <tj@kernel.org>, Mike Galbraith <efault@gmx.de>
Subject: [PATCH 03/27] sched: implement __set_cpus_allowed()
Date: Fri, 18 Dec 2009 21:57:44 +0900
Message-ID: <1261141088-2014-4-git-send-email-tj@kernel.org>
In-Reply-To: <1261141088-2014-1-git-send-email-tj@kernel.org>

set_cpus_allowed_ptr() modifies the allowed CPU mask of a task. The
function performs the following checks before applying the new mask
(sketched below).

* Check whether PF_THREAD_BOUND is set. This is set for bound
  kthreads so that they can't be moved around.

* Check whether the target CPU is still marked active - cpu_active().
  Active state is cleared early while downing a CPU.
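
In pseudo-C, the gate described above looks roughly like this (a
minimal sketch condensed from the current code, not a verbatim
excerpt):

	if (!cpumask_intersects(new_mask, cpu_active_mask))
		return -EINVAL;		/* no active CPU in the new mask */

	if ((p->flags & PF_THREAD_BOUND) && p != current &&
	    !cpumask_equal(&p->cpus_allowed, new_mask))
		return -EINVAL;		/* bound kthread, refuse to move */
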
This patch adds __set_cpus_allowed(), which takes a @force parameter.
When @force is true, the PF_THREAD_BOUND check is skipped and CPU
online state is used instead of active state, so that tasks can be
migrated to any CPU as long as it is online. set_cpus_allowed_ptr()
is implemented as an inline wrapper around __set_cpus_allowed().
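
For illustration, the intended call patterns are as follows
(hypothetical call sites; only the signatures come from this patch):

	/* Normal affinity change: the PF_THREAD_BOUND and
	 * cpu_active_mask restrictions apply. */
	ret = set_cpus_allowed_ptr(p, new_mask);

	/* Hotplug-side rebinding: @force bypasses PF_THREAD_BOUND and
	 * accepts any online CPU even if it's no longer active. */
	ret = __set_cpus_allowed(p, new_mask, true);
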
Due to the way migration is implemented, the @force parameter needs
to be passed on to the migration thread. It is therefore added to
struct migration_req and handed through to __migrate_task().
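
Condensed from the diff below, the flag travels like this:

	/* requester: queue a migration request with @force attached */
	req->task = p;
	req->dest_cpu = dest_cpu;
	req->force = force;
	list_add(&req->list, &rq->migration_queue);

	/* migration thread: @force reaches the final validity check */
	__migrate_task(req->task, cpu, req->dest_cpu, req->force);
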
Please note the naming discrepancy between set_cpus_allowed_ptr() and
the new function. The _ptr suffix is from the days when the cpumask
API wasn't mature, and future changes should drop it from
set_cpus_allowed_ptr() too.
NOTE: It would be nice to implement kthread_bind() in terms of
      __set_cpus_allowed() if we can drop the capability to bind to a
      dead CPU from kthread_bind(), which doesn't seem too popular
      anyway. With such a change, we'll have set_cpus_allowed_ptr()
      for regular tasks and kthread_bind() for kthreads and can use
      PF_THREAD_BOUND instead of passing the @force parameter around.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Ingo Molnar <mingo@elte.hu>
---
 include/linux/sched.h |   14 +++++++++---
 kernel/sched.c        |   55 ++++++++++++++++++++++++++++++++-----------------
 2 files changed, 46 insertions(+), 23 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6231576..c3a9c3d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1864,11 +1864,11 @@ static inline void rcu_copy_process(struct task_struct *p)
#endif
#ifdef CONFIG_SMP
-extern int set_cpus_allowed_ptr(struct task_struct *p,
- const struct cpumask *new_mask);
+extern int __set_cpus_allowed(struct task_struct *p,
+ const struct cpumask *new_mask, bool force);
#else
-static inline int set_cpus_allowed_ptr(struct task_struct *p,
- const struct cpumask *new_mask)
+static inline int __set_cpus_allowed(struct task_struct *p,
+ const struct cpumask *new_mask, bool force)
{
if (!cpumask_test_cpu(0, new_mask))
return -EINVAL;
@@ -1876,6 +1876,12 @@ static inline int set_cpus_allowed_ptr(struct task_struct *p,
}
#endif
+static inline int set_cpus_allowed_ptr(struct task_struct *p,
+ const struct cpumask *new_mask)
+{
+ return __set_cpus_allowed(p, new_mask, false);
+}
+
#ifndef CONFIG_CPUMASK_OFFSTACK
static inline int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask)
{
diff --git a/kernel/sched.c b/kernel/sched.c
index 910bf74..f7e2c32 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2128,6 +2128,7 @@ struct migration_req {
struct task_struct *task;
int dest_cpu;
+ bool force;
struct completion done;
};
@@ -2136,8 +2137,8 @@ struct migration_req {
* The task's runqueue lock must be held.
* Returns true if you have to wait for migration thread.
*/
-static int
-migrate_task(struct task_struct *p, int dest_cpu, struct migration_req *req)
+static int migrate_task(struct task_struct *p, int dest_cpu, bool force,
+ struct migration_req *req)
{
struct rq *rq = task_rq(p);
@@ -2154,6 +2155,7 @@ migrate_task(struct task_struct *p, int dest_cpu, struct migration_req *req)
init_completion(&req->done);
req->task = p;
req->dest_cpu = dest_cpu;
+ req->force = force;
list_add(&req->list, &rq->migration_queue);
return 1;
@@ -3112,7 +3114,7 @@ static void sched_migrate_task(struct task_struct *p, int dest_cpu)
goto out;
/* force the process onto the specified CPU */
- if (migrate_task(p, dest_cpu, &req)) {
+ if (migrate_task(p, dest_cpu, false, &req)) {
/* Need to wait for migration thread (might exit: take ref). */
struct task_struct *mt = rq->migration_thread;
@@ -7078,29 +7080,39 @@ static inline void sched_init_granularity(void)
* 7) we wake up and the migration is done.
*/
-/*
- * Change a given task's CPU affinity. Migrate the thread to a
- * proper CPU and schedule it away if the CPU it's executing on
- * is removed from the allowed bitmask.
+/**
+ * __set_cpus_allowed - change a task's CPU affinity
+ * @p: task to change CPU affinity for
+ * @new_mask: new CPU affinity
+ * @force: override CPU active status and PF_THREAD_BOUND check
+ *
+ * Migrate the thread to a proper CPU and schedule it away if the CPU
+ * it's executing on is removed from the allowed bitmask.
+ *
+ * The caller must have a valid reference to the task, the task must
+ * not exit() & deallocate itself prematurely. The call is not atomic;
+ * no spinlocks may be held.
*
- * NOTE: the caller must have a valid reference to the task, the
- * task must not exit() & deallocate itself prematurely. The
- * call is not atomic; no spinlocks may be held.
+ * If @force is %true, PF_THREAD_BOUND test is bypassed and CPU active
+ * state is ignored as long as the CPU is online.
*/
-int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
+int __set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask,
+ bool force)
{
+ const struct cpumask *cpu_cand_mask =
+ force ? cpu_online_mask : cpu_active_mask;
struct migration_req req;
unsigned long flags;
struct rq *rq;
int ret = 0;
rq = task_rq_lock(p, &flags);
- if (!cpumask_intersects(new_mask, cpu_active_mask)) {
+ if (!cpumask_intersects(new_mask, cpu_cand_mask)) {
ret = -EINVAL;
goto out;
}
- if (unlikely((p->flags & PF_THREAD_BOUND) && p != current &&
+ if (unlikely((p->flags & PF_THREAD_BOUND) && !force && p != current &&
!cpumask_equal(&p->cpus_allowed, new_mask))) {
ret = -EINVAL;
goto out;
@@ -7117,7 +7129,8 @@ int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
if (cpumask_test_cpu(task_cpu(p), new_mask))
goto out;
- if (migrate_task(p, cpumask_any_and(cpu_active_mask, new_mask), &req)) {
+ if (migrate_task(p, cpumask_any_and(cpu_cand_mask, new_mask), force,
+ &req)) {
/* Need help from migration thread: drop lock and wait. */
struct task_struct *mt = rq->migration_thread;
@@ -7134,7 +7147,7 @@ out:
return ret;
}
-EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
+EXPORT_SYMBOL_GPL(__set_cpus_allowed);
/*
* Move (not current) task off this cpu, onto dest cpu. We're doing
@@ -7147,12 +7160,15 @@ EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
*
* Returns non-zero if task was successfully migrated.
*/
-static int __migrate_task(struct task_struct *p, int src_cpu, int dest_cpu)
+static int __migrate_task(struct task_struct *p, int src_cpu, int dest_cpu,
+ bool force)
{
+ const struct cpumask *cpu_cand_mask =
+ force ? cpu_online_mask : cpu_active_mask;
struct rq *rq_dest, *rq_src;
int ret = 0, on_rq;
- if (unlikely(!cpu_active(dest_cpu)))
+ if (unlikely(!cpumask_test_cpu(dest_cpu, cpu_cand_mask)))
return ret;
rq_src = cpu_rq(src_cpu);
@@ -7231,7 +7247,8 @@ static int migration_thread(void *data)
if (req->task != NULL) {
raw_spin_unlock(&rq->lock);
- __migrate_task(req->task, cpu, req->dest_cpu);
+ __migrate_task(req->task, cpu, req->dest_cpu,
+ req->force);
} else if (likely(cpu == (badcpu = smp_processor_id()))) {
req->dest_cpu = RCU_MIGRATION_GOT_QS;
raw_spin_unlock(&rq->lock);
@@ -7256,7 +7273,7 @@ static int __migrate_task_irq(struct task_struct *p, int src_cpu, int dest_cpu)
int ret;
local_irq_disable();
- ret = __migrate_task(p, src_cpu, dest_cpu);
+ ret = __migrate_task(p, src_cpu, dest_cpu, false);
local_irq_enable();
return ret;
}
--
1.6.4.2