public inbox for linux-kernel@vger.kernel.org
From: tip-bot for Nikhil Rao <ncrao@google.com>
To: linux-tip-commits@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, hpa@zytor.com, mingo@redhat.com,
	a.p.zijlstra@chello.nl, ncrao@google.com, tglx@linutronix.de,
	mingo@elte.hu
Subject: [tip:sched/core] sched: Set group_imb only if a task can be pulled from the busiest cpu
Date: Mon, 18 Oct 2010 19:23:28 GMT	[thread overview]
Message-ID: <tip-2582f0eba54066b5e98ff2b27ef0cfa833b59f54@git.kernel.org> (raw)
In-Reply-To: <1286996978-7007-3-git-send-email-ncrao@google.com>

Commit-ID:  2582f0eba54066b5e98ff2b27ef0cfa833b59f54
Gitweb:     http://git.kernel.org/tip/2582f0eba54066b5e98ff2b27ef0cfa833b59f54
Author:     Nikhil Rao <ncrao@google.com>
AuthorDate: Wed, 13 Oct 2010 12:09:36 -0700
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Mon, 18 Oct 2010 20:52:17 +0200

sched: Set group_imb only if a task can be pulled from the busiest cpu

When cycling through sched groups to determine the busiest group, set
group_imb only if the busiest cpu has more than one runnable task. This patch
fixes the case where two cpus in a group each have one runnable task, but there
is a large weight differential between the two tasks. The load balancer is
unable to migrate any task from such a group, and hence should not consider
the group to be imbalanced.

Signed-off-by: Nikhil Rao <ncrao@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1286996978-7007-3-git-send-email-ncrao@google.com>
[ small code readability edits ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/sched_fair.c |   12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index bf87192..3656480 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -2378,7 +2378,7 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
 			int local_group, const struct cpumask *cpus,
 			int *balance, struct sg_lb_stats *sgs)
 {
-	unsigned long load, max_cpu_load, min_cpu_load;
+	unsigned long load, max_cpu_load, min_cpu_load, max_nr_running;
 	int i;
 	unsigned int balance_cpu = -1, first_idle_cpu = 0;
 	unsigned long avg_load_per_task = 0;
@@ -2389,6 +2389,7 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
 	/* Tally up the load of all CPUs in the group */
 	max_cpu_load = 0;
 	min_cpu_load = ~0UL;
+	max_nr_running = 0;
 
 	for_each_cpu_and(i, sched_group_cpus(group), cpus) {
 		struct rq *rq = cpu_rq(i);
@@ -2406,8 +2407,10 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
 			load = target_load(i, load_idx);
 		} else {
 			load = source_load(i, load_idx);
-			if (load > max_cpu_load)
+			if (load > max_cpu_load) {
 				max_cpu_load = load;
+				max_nr_running = rq->nr_running;
+			}
 			if (min_cpu_load > load)
 				min_cpu_load = load;
 		}
@@ -2447,11 +2450,10 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
 	if (sgs->sum_nr_running)
 		avg_load_per_task = sgs->sum_weighted_load / sgs->sum_nr_running;
 
-	if ((max_cpu_load - min_cpu_load) > 2*avg_load_per_task)
+	if ((max_cpu_load - min_cpu_load) > 2*avg_load_per_task && max_nr_running > 1)
 		sgs->group_imb = 1;
 
-	sgs->group_capacity =
-		DIV_ROUND_CLOSEST(group->cpu_power, SCHED_LOAD_SCALE);
+	sgs->group_capacity = DIV_ROUND_CLOSEST(group->cpu_power, SCHED_LOAD_SCALE);
 	if (!sgs->group_capacity)
 		sgs->group_capacity = fix_small_capacity(sd, group);
 }


Thread overview: 21+ messages
2010-10-13 19:09 [PATCH 0/4][RFC v2] Improve load balancing when tasks have large weight differential Nikhil Rao
2010-10-13 19:09 ` [PATCH 1/4] sched: do not consider SCHED_IDLE tasks to be cache hot Nikhil Rao
2010-10-13 19:09 ` [PATCH 2/4] sched: set group_imb only a task can be pulled from the busiest cpu Nikhil Rao
2010-10-18 19:23   ` tip-bot for Nikhil Rao [this message]
2010-10-13 19:09 ` [PATCH 3/4] sched: drop group_capacity to 1 only if local group has extra capacity Nikhil Rao
2010-10-14  5:48   ` Nikhil Rao
2010-10-14 23:42     ` Suresh Siddha
2010-10-15 11:50     ` Peter Zijlstra
2010-10-15 16:13       ` Nikhil Rao
2010-10-15 17:05         ` Peter Zijlstra
2010-10-15 17:13           ` Suresh Siddha
2010-10-15 17:24             ` Peter Zijlstra
2010-10-15 17:27           ` Nikhil Rao
2010-10-13 19:09 ` [PATCH 4/4] sched: force balancing on newidle balance if local group has capacity Nikhil Rao
2010-10-15 12:06   ` Peter Zijlstra
2010-10-15 12:18     ` Mike Galbraith
2010-10-15 12:20       ` Peter Zijlstra
2010-10-15 12:35         ` Mike Galbraith
2010-10-15 16:19           ` Nikhil Rao
2010-10-15 12:08   ` Peter Zijlstra
2010-10-15 16:20     ` Nikhil Rao
