On 10/25/2015 03:26 AM, Peter Zijlstra wrote:
> On Sat, Oct 24, 2015 at 10:23:14PM -0700, Joonwoo Park wrote:
>> @@ -1069,7 +1069,7 @@ static struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new
>>  {
>>  	lockdep_assert_held(&rq->lock);
>>  
>> -	dequeue_task(rq, p, 0);
>> +	dequeue_task(rq, p, DEQUEUE_MIGRATING);
>>  	p->on_rq = TASK_ON_RQ_MIGRATING;
>>  	set_task_cpu(p, new_cpu);
>>  	raw_spin_unlock(&rq->lock);
> 
>> @@ -5656,7 +5671,7 @@ static void detach_task(struct task_struct *p, struct lb_env *env)
>>  {
>>  	lockdep_assert_held(&env->src_rq->lock);
>>  
>> -	deactivate_task(env->src_rq, p, 0);
>> +	deactivate_task(env->src_rq, p, DEQUEUE_MIGRATING);
>>  	p->on_rq = TASK_ON_RQ_MIGRATING;
>>  	set_task_cpu(p, env->dst_cpu);
>>  }
> 
> Also note that on both sites we also set TASK_ON_RQ_MIGRATING -- albeit
> late.  Can't you simply set that earlier (and back to QUEUED later) and
> test for task_on_rq_migrating() instead of blowing up the fastpath like
> you did?
> 

Yes, it's doable, and I find it much simpler.  Please find patch v2.
I verified that v2 does the same job as v1 by comparing the
sched_stat_wait time against the sched_switch - sched_wakeup timestamp
delta.

Thanks,
Joonwoo