From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: Ingo Molnar <mingo@elte.hu>
Cc: linux-kernel@vger.kernel.org, Gautham R Shenoy <ego@in.ibm.com>,
Andreas Herrmann <andreas.herrmann3@amd.com>,
Balbir Singh <balbir@in.ibm.com>,
Peter Zijlstra <a.p.zijlstra@chello.nl>
Subject: [RFC][PATCH 3/8] sched: update the cpu_power sum during load-balance
Date: Tue, 01 Sep 2009 10:34:34 +0200 [thread overview]
Message-ID: <20090901083825.985050292@chello.nl> (raw)
In-Reply-To: <20090901083431.748830771@chello.nl>
In order to prepare for a more dynamic cpu_power, update the group sum
while walking the sched domains during load-balance.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
kernel/sched.c | 33 +++++++++++++++++++++++++++++----
1 file changed, 29 insertions(+), 4 deletions(-)
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -3699,6 +3699,28 @@ static inline int check_power_save_busie
}
#endif /* CONFIG_SCHED_MC || CONFIG_SCHED_SMT */
+static void update_sched_power(struct sched_domain *sd)
+{
+ struct sched_domain *child = sd->child;
+ struct sched_group *group, *sdg = sd->groups;
+ unsigned long power = sdg->__cpu_power;
+
+ if (!child) {
+ /* compute cpu power for this cpu */
+ return;
+ }
+
+ sdg->__cpu_power = 0;
+
+ group = child->groups;
+ do {
+ sdg->__cpu_power += group->__cpu_power;
+ group = group->next;
+ } while (group != child->groups);
+
+ if (power != sdg->__cpu_power)
+ sdg->reciprocal_cpu_power = reciprocal_value(sdg->__cpu_power);
+}
/**
* update_sg_lb_stats - Update sched_group's statistics for load balancing.
@@ -3712,7 +3734,8 @@ static inline int check_power_save_busie
* @balance: Should we balance.
* @sgs: variable to hold the statistics for this group.
*/
-static inline void update_sg_lb_stats(struct sched_group *group, int this_cpu,
+static inline void update_sg_lb_stats(struct sched_domain *sd,
+ struct sched_group *group, int this_cpu,
enum cpu_idle_type idle, int load_idx, int *sd_idle,
int local_group, const struct cpumask *cpus,
int *balance, struct sg_lb_stats *sgs)
@@ -3723,8 +3746,11 @@ static inline void update_sg_lb_stats(st
unsigned long sum_avg_load_per_task;
unsigned long avg_load_per_task;
- if (local_group)
+ if (local_group) {
balance_cpu = group_first_cpu(group);
+ if (balance_cpu == this_cpu)
+ update_sched_power(sd);
+ }
/* Tally up the load of all CPUs in the group */
sum_avg_load_per_task = avg_load_per_task = 0;
@@ -3828,7 +3854,7 @@ static inline void update_sd_lb_stats(st
local_group = cpumask_test_cpu(this_cpu,
sched_group_cpus(group));
memset(&sgs, 0, sizeof(sgs));
- update_sg_lb_stats(group, this_cpu, idle, load_idx, sd_idle,
+ update_sg_lb_stats(sd, group, this_cpu, idle, load_idx, sd_idle,
local_group, cpus, balance, &sgs);
if (local_group && balance && !(*balance))
@@ -3863,7 +3889,6 @@ static inline void update_sd_lb_stats(st
update_sd_power_savings_stats(group, sds, local_group, &sgs);
group = group->next;
} while (group != sd->groups);
-
}
/**
--
Thread overview: 30+ messages
2009-09-01 8:34 [RFC][PATCH 0/8] load-balancing and cpu_power -v2 Peter Zijlstra
2009-09-01 8:34 ` [RFC][PATCH 1/8] sched: restore __cpu_power to a straight sum of power Peter Zijlstra
2009-09-04 8:54 ` [tip:sched/balancing] sched: Restore " tip-bot for Peter Zijlstra
2009-09-01 8:34 ` [RFC][PATCH 2/8] sched: SD_PREFER_SIBLING Peter Zijlstra
2009-09-04 8:55 ` [tip:sched/balancing] sched: Add SD_PREFER_SIBLING tip-bot for Peter Zijlstra
2009-09-01 8:34 ` Peter Zijlstra [this message]
2009-09-02 11:17 ` [RFC][PATCH 3/8] sched: update the cpu_power sum during load-balance Gautham R Shenoy
2009-09-02 11:25 ` Peter Zijlstra
2009-09-04 8:55 ` [tip:sched/balancing] sched: Update " tip-bot for Peter Zijlstra
2009-09-01 8:34 ` [RFC][PATCH 4/8] sched: add smt_gain Peter Zijlstra
2009-09-02 11:22 ` Gautham R Shenoy
2009-09-02 11:26 ` Peter Zijlstra
2009-09-04 8:55 ` [tip:sched/balancing] sched: Add smt_gain tip-bot for Peter Zijlstra
2009-09-01 8:34 ` [RFC][PATCH 5/8] sched: dynamic cpu_power Peter Zijlstra
2009-09-02 11:24 ` Gautham R Shenoy
2009-09-04 8:55 ` [tip:sched/balancing] sched: Implement " tip-bot for Peter Zijlstra
2009-09-01 8:34 ` [RFC][PATCH 6/8] sched: scale down cpu_power due to RT tasks Peter Zijlstra
2009-09-04 8:56 ` [tip:sched/balancing] sched: Scale " tip-bot for Peter Zijlstra
2009-09-01 8:34 ` [RFC][PATCH 7/8] sched: try to deal with low capacity Peter Zijlstra
2009-09-02 11:29 ` Gautham R Shenoy
2009-09-04 8:56 ` [tip:sched/balancing] sched: Try " tip-bot for Peter Zijlstra
2009-09-01 8:34 ` [RFC][PATCH 8/8] sched: remove reciprocal for cpu_power Peter Zijlstra
2009-09-03 12:12 ` Andreas Herrmann
2009-09-04 8:56 ` [tip:sched/balancing] sched: Remove " tip-bot for Peter Zijlstra
2009-09-02 10:57 ` [RFC][PATCH 0/8] load-balancing and cpu_power -v2 Gautham R Shenoy
2009-09-03 12:10 ` Andreas Herrmann
2009-09-03 13:38 ` Peter Zijlstra
2009-09-04 7:19 ` Ingo Molnar
2009-09-04 9:27 ` [crash] " Ingo Molnar
2009-09-04 10:25 ` [tip:sched/balancing] sched: Fix dynamic power-balancing crash tip-bot for Ingo Molnar