From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754078AbbIMK7k (ORCPT );
	Sun, 13 Sep 2015 06:59:40 -0400
Received: from terminus.zytor.com ([198.137.202.10]:43319 "EHLO terminus.zytor.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753824AbbIMK7h (ORCPT );
	Sun, 13 Sep 2015 06:59:37 -0400
Date: Sun, 13 Sep 2015 03:59:01 -0700
From: tip-bot for Byungchul Park
Message-ID:
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org,
	hpa@zytor.com, byungchul.park@lge.com, efault@gmx.de,
	tglx@linutronix.de, mingo@kernel.org, peterz@infradead.org
Reply-To: peterz@infradead.org, mingo@kernel.org, tglx@linutronix.de,
	efault@gmx.de, hpa@zytor.com, byungchul.park@lge.com,
	linux-kernel@vger.kernel.org, torvalds@linux-foundation.org
In-Reply-To: <1440069720-27038-3-git-send-email-byungchul.park@lge.com>
References: <1440069720-27038-3-git-send-email-byungchul.park@lge.com>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:sched/core] sched/fair: Have task_move_group_fair()
	unconditionally add the entity load to the runqueue
Git-Commit-ID: 50a2a3b246149d041065a67ccb3e98145f780a2f
X-Mailer: tip-git-log-daemon
Robot-ID:
Robot-Unsubscribe: Contact to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  50a2a3b246149d041065a67ccb3e98145f780a2f
Gitweb:     http://git.kernel.org/tip/50a2a3b246149d041065a67ccb3e98145f780a2f
Author:     Byungchul Park
AuthorDate: Thu, 20 Aug 2015 20:21:57 +0900
Committer:  Ingo Molnar
CommitDate: Sun, 13 Sep 2015 09:52:46 +0200

sched/fair: Have task_move_group_fair() unconditionally add the entity load to the runqueue

Currently we conditionally add the entity load to the rq when moving the
task between cgroups.
This doesn't make sense as we always 'migrate' the task between cgroups,
so we should always migrate the load too.

[ The history here is that we used to only migrate the blocked load
  which was only meaningful when !queued. ]

Signed-off-by: Byungchul Park
[ Rewrote the changelog. ]
Signed-off-by: Peter Zijlstra (Intel)
Cc: Linus Torvalds
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1440069720-27038-3-git-send-email-byungchul.park@lge.com
Signed-off-by: Ingo Molnar
---
 kernel/sched/fair.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a72a71b..959b2ea 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8041,13 +8041,12 @@ static void task_move_group_fair(struct task_struct *p, int queued)
 	se->vruntime -= cfs_rq_of(se)->min_vruntime;
 	set_task_rq(p, task_cpu(p));
 	se->depth = se->parent ? se->parent->depth + 1 : 0;
-	if (!queued) {
-		cfs_rq = cfs_rq_of(se);
+	cfs_rq = cfs_rq_of(se);
+	if (!queued)
 		se->vruntime += cfs_rq->min_vruntime;
-		/* Virtually synchronize task with its new cfs_rq */
-		attach_entity_load_avg(cfs_rq, se);
-	}
+	/* Virtually synchronize task with its new cfs_rq */
+	attach_entity_load_avg(cfs_rq, se);
 }

 void free_fair_sched_group(struct task_group *tg)