From: dino@in.ibm.com
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@elte.hu>,
Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org,
John Stultz <johnstul@us.ibm.com>,
Darren Hart <dvhltc@us.ibm.com>, John Kacur <jkacur@redhat.com>
Subject: [patch -rt 03/17] sched: update the cpu_power sum during load-balance
Date: Thu, 22 Oct 2009 18:07:46 +0530 [thread overview]
Message-ID: <20091022124110.216791990@spinlock.in.ibm.com> (raw)
In-Reply-To: <20091022123743.506956796@spinlock.in.ibm.com>
[-- Attachment #1: sched-lb-3.patch --]
[-- Type: text/plain, Size: 2625 bytes --]
In order to prepare for a more dynamic cpu_power, update the group sum
while walking the sched domains during load-balance.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Dinakar Guniguntala <dino@in.ibm.com>
---
kernel/sched.c | 33 +++++++++++++++++++++++++++++----
1 file changed, 29 insertions(+), 4 deletions(-)
Index: linux-2.6.31.4-rt14/kernel/sched.c
===================================================================
--- linux-2.6.31.4-rt14.orig/kernel/sched.c 2009-10-16 09:15:30.000000000 -0400
+++ linux-2.6.31.4-rt14/kernel/sched.c 2009-10-16 09:15:32.000000000 -0400
@@ -3780,6 +3780,28 @@
 }
 #endif /* CONFIG_SCHED_MC || CONFIG_SCHED_SMT */
 
+static void update_sched_power(struct sched_domain *sd)
+{
+	struct sched_domain *child = sd->child;
+	struct sched_group *group, *sdg = sd->groups;
+	unsigned long power = sdg->__cpu_power;
+
+	if (!child) {
+		/* compute cpu power for this cpu */
+		return;
+	}
+
+	sdg->__cpu_power = 0;
+
+	group = child->groups;
+	do {
+		sdg->__cpu_power += group->__cpu_power;
+		group = group->next;
+	} while (group != child->groups);
+
+	if (power != sdg->__cpu_power)
+		sdg->reciprocal_cpu_power = reciprocal_value(sdg->__cpu_power);
+}
 
 /**
  * update_sg_lb_stats - Update sched_group's statistics for load balancing.
@@ -3793,7 +3815,8 @@
  * @balance: Should we balance.
  * @sgs: variable to hold the statistics for this group.
  */
-static inline void update_sg_lb_stats(struct sched_group *group, int this_cpu,
+static inline void update_sg_lb_stats(struct sched_domain *sd,
+			struct sched_group *group, int this_cpu,
 			enum cpu_idle_type idle, int load_idx, int *sd_idle,
 			int local_group, const struct cpumask *cpus,
 			int *balance, struct sg_lb_stats *sgs)
@@ -3804,8 +3827,11 @@
 	unsigned long sum_avg_load_per_task;
 	unsigned long avg_load_per_task;
 
-	if (local_group)
+	if (local_group) {
 		balance_cpu = group_first_cpu(group);
+		if (balance_cpu == this_cpu)
+			update_sched_power(sd);
+	}
 
 	/* Tally up the load of all CPUs in the group */
 	sum_avg_load_per_task = avg_load_per_task = 0;
@@ -3909,7 +3935,7 @@
 		local_group = cpumask_test_cpu(this_cpu,
 					       sched_group_cpus(group));
 		memset(&sgs, 0, sizeof(sgs));
-		update_sg_lb_stats(group, this_cpu, idle, load_idx, sd_idle,
+		update_sg_lb_stats(sd, group, this_cpu, idle, load_idx, sd_idle,
 				   local_group, cpus, balance, &sgs);
 
 		if (local_group && balance && !(*balance))
@@ -3944,7 +3970,6 @@
 		update_sd_power_savings_stats(group, sds, local_group, &sgs);
 		group = group->next;
 	} while (group != sd->groups);
-
 }
 
 /**
--
Thread overview: 18+ messages
2009-10-22 12:37 [patch -rt 00/17] [patch -rt] Sched load balance backport dino
2009-10-22 12:37 ` [patch -rt 01/17] sched: restore __cpu_power to a straight sum of power dino
2009-10-22 12:37 ` [patch -rt 02/17] sched: SD_PREFER_SIBLING dino
2009-10-22 12:37 ` dino [this message]
2009-10-22 12:37 ` [patch -rt 04/17] sched: add smt_gain dino
2009-10-22 12:37 ` [patch -rt 05/17] sched: dynamic cpu_power dino
2009-10-22 12:37 ` [patch -rt 06/17] sched: scale down cpu_power due to RT tasks dino
2009-10-22 12:37 ` [patch -rt 07/17] sched: try to deal with low capacity dino
2009-10-22 12:37 ` [patch -rt 08/17] sched: remove reciprocal for cpu_power dino
2009-10-22 12:37 ` [patch -rt 09/17] x86: move APERF/MPERF into a X86_FEATURE dino
2009-10-22 12:37 ` [patch -rt 10/17] x86: Add generic aperf/mperf code dino
2009-10-22 12:37 ` [patch -rt 11/17] Provide an arch specific hook for cpufreq based scaling of cpu_power dino
2009-10-22 12:37 ` [patch -rt 12/17] x86: sched: provide arch implementations using aperf/mperf dino
2009-10-22 12:37 ` [patch -rt 13/17] sched: cleanup wake_idle power saving dino
2009-10-22 12:37 ` [patch -rt 14/17] sched: cleanup wake_idle dino
2009-10-22 12:37 ` [patch -rt 15/17] sched: Add a missing = dino
2009-10-22 12:37 ` [patch -rt 16/17] sched: Deal with low-load in wake_affine() dino
2009-10-22 12:38 ` [patch -rt 17/17] sched: Fix dynamic power-balancing crash dino