From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752783AbbJTJcQ (ORCPT );
	Tue, 20 Oct 2015 05:32:16 -0400
Received: from terminus.zytor.com ([198.137.202.10]:49669 "EHLO terminus.zytor.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752541AbbJTJcN (ORCPT );
	Tue, 20 Oct 2015 05:32:13 -0400
Date: Tue, 20 Oct 2015 02:31:22 -0700
From: tip-bot for Yuyang Du
Message-ID:
Cc: peterz@infradead.org, torvalds@linux-foundation.org, mingo@kernel.org,
	linux-kernel@vger.kernel.org, efault@gmx.de, yuyang.du@intel.com,
	hpa@zytor.com, tglx@linutronix.de, dietmar.eggemann@arm.com
Reply-To: peterz@infradead.org, efault@gmx.de, linux-kernel@vger.kernel.org,
	mingo@kernel.org, torvalds@linux-foundation.org, yuyang.du@intel.com,
	dietmar.eggemann@arm.com, hpa@zytor.com, tglx@linutronix.de
In-Reply-To: <1444699103-20272-2-git-send-email-yuyang.du@intel.com>
References: <1444699103-20272-2-git-send-email-yuyang.du@intel.com>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:sched/core] sched/fair: Update task group's load_avg after task migration
Git-Commit-ID: 3e386d56bafbb6d2540b49367444997fc671ea69
X-Mailer: tip-git-log-daemon
Robot-ID:
Robot-Unsubscribe: Contact to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  3e386d56bafbb6d2540b49367444997fc671ea69
Gitweb:     http://git.kernel.org/tip/3e386d56bafbb6d2540b49367444997fc671ea69
Author:     Yuyang Du
AuthorDate: Tue, 13 Oct 2015 09:18:23 +0800
Committer:  Ingo Molnar
CommitDate: Tue, 20 Oct 2015 10:13:35 +0200

sched/fair: Update task group's load_avg after task migration

When cfs_rq has cfs_rq->removed_load_avg set (when a task migrates from
this cfs_rq), we need to update its contribution to the group's load_avg.
This should not increase tg's update too much, because in most cases, the
cfs_rq has already decayed its load_avg.

Tested-by: Dietmar Eggemann
Signed-off-by: Yuyang Du
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Dietmar Eggemann
Cc: Linus Torvalds
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/1444699103-20272-2-git-send-email-yuyang.du@intel.com
Signed-off-by: Ingo Molnar
---
 kernel/sched/fair.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bc62c50..9a5e60f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2664,13 +2664,14 @@ static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq);
 /* Group cfs_rq's load_avg is used for task_h_load and update_cfs_share */
 static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 {
-	int decayed;
 	struct sched_avg *sa = &cfs_rq->avg;
+	int decayed, removed = 0;
 
 	if (atomic_long_read(&cfs_rq->removed_load_avg)) {
 		long r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
 		sa->load_avg = max_t(long, sa->load_avg - r, 0);
 		sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);
+		removed = 1;
 	}
 
 	if (atomic_long_read(&cfs_rq->removed_util_avg)) {
@@ -2688,7 +2689,7 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 	cfs_rq->load_last_update_time_copy = sa->last_update_time;
 #endif
 
-	return decayed;
+	return decayed || removed;
 }
 
 /* Update task and its cfs_rq load average */