From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: Ingo Molnar <mingo@elte.hu>
Cc: linux-kernel@vger.kernel.org, Gautham R Shenoy <ego@in.ibm.com>,
	Andreas Herrmann <andreas.herrmann3@amd.com>,
	Balbir Singh <balbir@in.ibm.com>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>
Subject: [RFC][PATCH 6/6] sched: try to deal with low capacity
Date: Thu, 27 Aug 2009 17:00:57 +0200 [thread overview]
Message-ID: <20090827150524.238884869@chello.nl> (raw)
In-Reply-To: <20090827150051.846026837@chello.nl>
[-- Attachment #1: sched-lb-6.patch --]
[-- Type: text/plain, Size: 1784 bytes --]
When a CPU's capacity drops low (for instance because RT activity eats most of
its cpu_power), we want to migrate load away from it. Allow the load-balancer
to pull even the last task off such a CPU once its capacity rounds down to
zero, and make the prefer_sibling override respect an already lowered group
capacity instead of forcing it back to 1.
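To see what the new arithmetic does, here is a stand-alone user-space sketch
(not kernel code; the per-cpu numbers are made up, only the SCHED_LOAD_* values
match the kernel's): once a cpu's power drops below SCHED_LOAD_SCALE its
capacity rounds down to 0, its weighted load gets inflated by the power
normalisation, and the single-task escape in find_busiest_queue() no longer
triggers:

/* stand-alone illustration only, not part of the patch */
#include <stdio.h>

#define SCHED_LOAD_SHIFT	10
#define SCHED_LOAD_SCALE	(1UL << SCHED_LOAD_SHIFT)

int main(void)
{
	/* hypothetical per-cpu numbers, for illustration only */
	unsigned long power = 400;	/* cpu_power after RT scaling */
	unsigned long load = 512;	/* weighted_cpuload() */
	unsigned long imbalance = 600;
	int nr_running = 1;

	/* capacity rounds down to 0 once power < SCHED_LOAD_SCALE */
	unsigned long capacity = power >> SCHED_LOAD_SHIFT;

	/* load is normalised by cpu power, so a weak cpu looks busier */
	unsigned long wl = load * SCHED_LOAD_SCALE / power;

	printf("capacity=%lu wl=%lu\n", capacity, wl);	/* capacity=0 wl=1310 */

	/*
	 * With capacity == 0 the "nr_running == 1 && wl > imbalance" escape
	 * no longer applies, so even a lone task may be pulled away.
	 */
	if (capacity && nr_running == 1 && wl > imbalance)
		printf("skipped: single task, load exceeds imbalance\n");
	else
		printf("eligible for balancing despite a single task\n");

	return 0;
}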
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
---
kernel/sched.c | 29 ++++++++++++++++++++++++++---
1 file changed, 26 insertions(+), 3 deletions(-)
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -3951,7 +3951,7 @@ static inline void update_sd_lb_stats(st
 		 * and move all the excess tasks away.
 		 */
 		if (prefer_sibling)
-			sgs.group_capacity = 1;
+			sgs.group_capacity = min(sgs.group_capacity, 1);
 
 		if (local_group) {
 			sds->this_load = sgs.avg_load;
@@ -4183,6 +4183,26 @@ ret:
 	return NULL;
 }
 
+static struct sched_group *group_of(int cpu)
+{
+	struct sched_domain *sd = rcu_dereference(cpu_rq(cpu)->sd);
+
+	if (!sd)
+		return NULL;
+
+	return sd->groups;
+}
+
+static unsigned long power_of(int cpu)
+{
+	struct sched_group *group = group_of(cpu);
+
+	if (!group)
+		return SCHED_LOAD_SCALE;
+
+	return group->__cpu_power;
+}
+
 /*
  * find_busiest_queue - find the busiest runqueue among the cpus in group.
  */
@@ -4195,15 +4215,18 @@ find_busiest_queue(struct sched_group *g
 	int i;
 
 	for_each_cpu(i, sched_group_cpus(group)) {
+		unsigned long power = power_of(i);
+		unsigned long capacity = power >> SCHED_LOAD_SHIFT;
 		unsigned long wl;
 
 		if (!cpumask_test_cpu(i, cpus))
 			continue;
 
 		rq = cpu_rq(i);
-		wl = weighted_cpuload(i);
+		wl = weighted_cpuload(i) * SCHED_LOAD_SCALE;
+		wl /= power;
 
-		if (rq->nr_running == 1 && wl > imbalance)
+		if (capacity && rq->nr_running == 1 && wl > imbalance)
 			continue;
 
 		if (wl > max_load) {
--
Thread overview: 8+ messages
2009-08-27 15:00 [RFC][PATCH 0/6] load-balancing and cpu_power Peter Zijlstra
2009-08-27 15:00 ` [RFC][PATCH 1/6] sched: restore __cpu_power to a straight sum of power Peter Zijlstra
2009-08-27 15:00 ` [RFC][PATCH 2/6] sched: SD_PREFER_SIBLING Peter Zijlstra
2009-08-27 15:00 ` [RFC][PATCH 3/6] sched: update the cpu_power sum during load-balance Peter Zijlstra
2009-08-27 15:00 ` [RFC][PATCH 4/6] sched: dynamic cpu_power Peter Zijlstra
2009-08-27 15:00 ` [RFC][PATCH 5/6] sched: scale down cpu_power due to RT tasks Peter Zijlstra
2009-08-27 15:00 ` Peter Zijlstra [this message]
2009-08-28 18:17 ` [RFC][PATCH 0/6] load-balancing and cpu_power Balbir Singh