From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 5 May 2016 02:42:32 -0700
From: tip-bot for Dietmar Eggemann
To: linux-tip-commits@vger.kernel.org
Cc: torvalds@linux-foundation.org, hpa@zytor.com, dietmar.eggemann@arm.com,
	efault@gmx.de, linux-kernel@vger.kernel.org, mingo@kernel.org,
	peterz@infradead.org, tglx@linutronix.de, morten.rasmussen@arm.com
In-Reply-To: <1461958364-675-2-git-send-email-dietmar.eggemann@arm.com>
References: <1461958364-675-2-git-send-email-dietmar.eggemann@arm.com>
Subject: [tip:sched/core] sched/fair: Remove stale power aware scheduling comments
Git-Commit-ID: 0a9b23ce46cd5d3a360fbefca8ffce441c55046e

Commit-ID:  0a9b23ce46cd5d3a360fbefca8ffce441c55046e
Gitweb:     http://git.kernel.org/tip/0a9b23ce46cd5d3a360fbefca8ffce441c55046e
Author:     Dietmar Eggemann
AuthorDate: Fri, 29 Apr 2016 20:32:38 +0100
Committer:  Ingo Molnar
CommitDate: Thu, 5 May 2016 09:41:09 +0200

sched/fair: Remove stale power aware scheduling comments

Commit 8e7fbcbc22c1 ("sched: Remove stale power aware scheduling
remnants and dysfunctional knobs") deleted the power aware scheduling
support.
This patch gets rid of the remaining power aware scheduling related
comments in the code as well.

Signed-off-by: Dietmar Eggemann
Signed-off-by: Peter Zijlstra (Intel)
Cc: Linus Torvalds
Cc: Mike Galbraith
Cc: Morten Rasmussen
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/1461958364-675-2-git-send-email-dietmar.eggemann@arm.com
Signed-off-by: Ingo Molnar
---
 kernel/sched/fair.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7a00c7c..537d71e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7027,9 +7027,8 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	 * We're trying to get all the cpus to the average_load, so we don't
 	 * want to push ourselves above the average load, nor do we wish to
 	 * reduce the max loaded cpu below the average load. At the same time,
-	 * we also don't want to reduce the group load below the group capacity
-	 * (so that we can implement power-savings policies etc). Thus we look
-	 * for the minimum possible imbalance.
+	 * we also don't want to reduce the group load below the group
+	 * capacity. Thus we look for the minimum possible imbalance.
 	 */
 	max_pull = min(busiest->avg_load - sds->avg_load, load_above_capacity);
@@ -7053,10 +7052,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s

 /**
  * find_busiest_group - Returns the busiest group within the sched_domain
- * if there is an imbalance. If there isn't an imbalance, and
- * the user has opted for power-savings, it returns a group whose
- * CPUs can be put to idle by rebalancing those tasks elsewhere, if
- * such a group exists.
+ * if there is an imbalance.
  *
  * Also calculates the amount of weighted load which should be moved
  * to restore balance.
@@ -7064,9 +7060,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
  * @env: The load balancing environment.
  *
  * Return:	- The busiest group if imbalance exists.
- *		- If no imbalance and user has opted for power-savings balance,
- *		   return the least loaded group whose CPUs can be
- *		   put to idle by rebalancing its tasks onto our group.
  */
 static struct sched_group *find_busiest_group(struct lb_env *env)
 {