From: Gautham R Shenoy <ego@in.ibm.com>
To: "Ingo Molnar" <mingo@elte.hu>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
"Vaidyanathan Srinivasan" <svaidy@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org,
Suresh Siddha <suresh.b.siddha@intel.com>,
"Balbir Singh" <balbir@in.ibm.com>,
Nick Piggin <nickpiggin@yahoo.com.au>,
"Dhaval Giani" <dhaval@linux.vnet.ibm.com>,
Bharata B Rao <bharata@linux.vnet.ibm.com>,
Gautham R Shenoy <ego@in.ibm.com>
Subject: [RFC PATCH 07/11] sched: Create helper to calculate small_imbalance in find_busiest_group.
Date: Wed, 25 Mar 2009 14:44:06 +0530
Message-ID: <20090325091406.13992.54316.stgit@sofia.in.ibm.com>
In-Reply-To: <20090325091239.13992.96090.stgit@sofia.in.ibm.com>
We have two places in find_busiest_group() where we need to calculate the
minor imbalance before returning the busiest group. Encapsulate this
functionality into a separate helper function.
Credit: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
---
kernel/sched.c | 131 ++++++++++++++++++++++++++++++--------------------------
1 files changed, 70 insertions(+), 61 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 5e01162..364866f 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3380,6 +3380,71 @@ group_next:
} while (group != sd->groups);
}
+
+/**
+ * fix_small_imbalance - Calculate the minor imbalance that exists
+ * amongst the groups of a sched_domain, during
+ * load balancing.
+ * @sds: Statistics of the sched_domain whose imbalance is to be calculated.
+ * @this_cpu: The cpu at whose sched_domain we're performing load-balance.
+ * @imbalance: Variable to store the imbalance.
+ */
+static inline void fix_small_imbalance(struct sd_lb_stats *sds,
+ int this_cpu, unsigned long *imbalance)
+{
+ unsigned long tmp, pwr_now = 0, pwr_move = 0;
+ unsigned int imbn = 2;
+
+ if (sds->this_nr_running) {
+ sds->this_load_per_task /= sds->this_nr_running;
+ if (sds->busiest_load_per_task >
+ sds->this_load_per_task)
+ imbn = 1;
+ } else
+ sds->this_load_per_task =
+ cpu_avg_load_per_task(this_cpu);
+
+ if (sds->max_load - sds->this_load + sds->busiest_load_per_task >=
+ sds->busiest_load_per_task * imbn) {
+ *imbalance = sds->busiest_load_per_task;
+ return;
+ }
+
+ /*
+ * OK, we don't have enough imbalance to justify moving tasks,
+ * however we may be able to increase total CPU power used by
+ * moving them.
+ */
+
+ pwr_now += sds->busiest->__cpu_power *
+ min(sds->busiest_load_per_task, sds->max_load);
+ pwr_now += sds->this->__cpu_power *
+ min(sds->this_load_per_task, sds->this_load);
+ pwr_now /= SCHED_LOAD_SCALE;
+
+ /* Amount of load we'd subtract */
+ tmp = sg_div_cpu_power(sds->busiest,
+ sds->busiest_load_per_task * SCHED_LOAD_SCALE);
+ if (sds->max_load > tmp)
+ pwr_move += sds->busiest->__cpu_power *
+ min(sds->busiest_load_per_task, sds->max_load - tmp);
+
+ /* Amount of load we'd add */
+ if (sds->max_load * sds->busiest->__cpu_power <
+ sds->busiest_load_per_task * SCHED_LOAD_SCALE)
+ tmp = sg_div_cpu_power(sds->this,
+ sds->max_load * sds->busiest->__cpu_power);
+ else
+ tmp = sg_div_cpu_power(sds->this,
+ sds->busiest_load_per_task * SCHED_LOAD_SCALE);
+ pwr_move += sds->this->__cpu_power *
+ min(sds->this_load_per_task, sds->this_load + tmp);
+ pwr_move /= SCHED_LOAD_SCALE;
+
+ /* Move if we gain throughput */
+ if (pwr_move > pwr_now)
+ *imbalance = sds->busiest_load_per_task;
+}
/******* find_busiest_group() helpers end here *********************/
/*
@@ -3443,7 +3508,8 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
*/
if (sds.max_load < sds.avg_load) {
*imbalance = 0;
- goto small_imbalance;
+ fix_small_imbalance(&sds, this_cpu, imbalance);
+ goto ret_busiest;
}
/* Don't want to pull so many tasks that a group would go idle */
@@ -3461,67 +3527,10 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
* a think about bumping its value to force at least one task to be
* moved
*/
- if (*imbalance < sds.busiest_load_per_task) {
- unsigned long tmp, pwr_now, pwr_move;
- unsigned int imbn;
-
-small_imbalance:
- pwr_move = pwr_now = 0;
- imbn = 2;
- if (sds.this_nr_running) {
- sds.this_load_per_task /= sds.this_nr_running;
- if (sds.busiest_load_per_task >
- sds.this_load_per_task)
- imbn = 1;
- } else
- sds.this_load_per_task =
- cpu_avg_load_per_task(this_cpu);
-
- if (sds.max_load - sds.this_load +
- sds.busiest_load_per_task >=
- sds.busiest_load_per_task * imbn) {
- *imbalance = sds.busiest_load_per_task;
- return sds.busiest;
- }
-
- /*
- * OK, we don't have enough imbalance to justify moving tasks,
- * however we may be able to increase total CPU power used by
- * moving them.
- */
-
- pwr_now += sds.busiest->__cpu_power *
- min(sds.busiest_load_per_task, sds.max_load);
- pwr_now += sds.this->__cpu_power *
- min(sds.this_load_per_task, sds.this_load);
- pwr_now /= SCHED_LOAD_SCALE;
-
- /* Amount of load we'd subtract */
- tmp = sg_div_cpu_power(sds.busiest,
- sds.busiest_load_per_task * SCHED_LOAD_SCALE);
- if (sds.max_load > tmp)
- pwr_move += sds.busiest->__cpu_power *
- min(sds.busiest_load_per_task,
- sds.max_load - tmp);
-
- /* Amount of load we'd add */
- if (sds.max_load * sds.busiest->__cpu_power <
- sds.busiest_load_per_task * SCHED_LOAD_SCALE)
- tmp = sg_div_cpu_power(sds.this,
- sds.max_load * sds.busiest->__cpu_power);
- else
- tmp = sg_div_cpu_power(sds.this,
- sds.busiest_load_per_task * SCHED_LOAD_SCALE);
- pwr_move += sds.this->__cpu_power *
- min(sds.this_load_per_task,
- sds.this_load + tmp);
- pwr_move /= SCHED_LOAD_SCALE;
-
- /* Move if we gain throughput */
- if (pwr_move > pwr_now)
- *imbalance = sds.busiest_load_per_task;
- }
+ if (*imbalance < sds.busiest_load_per_task)
+ fix_small_imbalance(&sds, this_cpu, imbalance);
+ret_busiest:
return sds.busiest;
out_balanced:
Thread overview: 33+ messages
2009-03-25 9:13 [RFC PATCH 00/11] sched: find_busiest_group() cleanup Gautham R Shenoy
2009-03-25 9:13 ` [RFC PATCH 01/11] sched: Simple helper functions for find_busiest_group() Gautham R Shenoy
2009-03-25 9:46 ` [tip:sched/balancing] " Gautham R Shenoy
2009-03-25 9:13 ` [RFC PATCH 02/11] sched: Fix indentations in find_busiest_group using gotos Gautham R Shenoy
2009-03-25 9:46 ` [tip:sched/balancing] sched: Fix indentations in find_busiest_group() " Gautham R Shenoy
2009-03-25 9:13 ` [RFC PATCH 03/11] sched: Define structure to store the sched_group statistics for fbg() Gautham R Shenoy
2009-03-25 9:46 ` [tip:sched/balancing] " Gautham R Shenoy
2009-03-25 9:13 ` [RFC PATCH 04/11] sched: Create a helper function to calculate sched_group stats " Gautham R Shenoy
2009-03-25 9:46 ` [tip:sched/balancing] " Gautham R Shenoy
2009-03-25 9:13 ` [RFC PATCH 05/11] sched: Define structure to store the sched_domain statistics " Gautham R Shenoy
2009-03-25 9:46 ` [tip:sched/balancing] " Gautham R Shenoy
2009-03-25 9:14 ` [RFC PATCH 06/11] sched: Create a helper function to calculate sched_domain stats " Gautham R Shenoy
2009-03-25 9:46 ` [tip:sched/balancing] " Gautham R Shenoy
2009-03-25 9:14 ` Gautham R Shenoy [this message]
2009-03-25 9:46 ` [tip:sched/balancing] sched: Create helper to calculate small_imbalance in fbg() Gautham R Shenoy
2009-03-25 9:14 ` [RFC PATCH 08/11] sched: Create a helper function to calculate imbalance Gautham R Shenoy
2009-03-25 9:46 ` [tip:sched/balancing] " Gautham R Shenoy
2009-03-25 9:14 ` [RFC PATCH 09/11] sched: Optimize the !power_savings_balance during find_busiest_group Gautham R Shenoy
2009-03-25 9:47 ` [tip:sched/balancing] sched: Optimize the !power_savings_balance during fbg() Gautham R Shenoy
2009-03-25 9:14 ` [RFC PATCH 10/11] sched: Refactor the power savings balance code Gautham R Shenoy
2009-03-25 9:47 ` [tip:sched/balancing] " Gautham R Shenoy
2009-03-25 9:14 ` [RFC PATCH 11/11] sched: Add comments to find_busiest_group() function Gautham R Shenoy
2009-03-25 9:47 ` [tip:sched/balancing] " Gautham R Shenoy
2009-03-25 11:43 ` [RFC PATCH 11/11] " Gautham R Shenoy
2009-03-25 12:29 ` Ingo Molnar
2009-03-25 13:07 ` Gautham R Shenoy
2009-03-25 13:10 ` Ingo Molnar
2009-03-25 12:30 ` [tip:sched/balancing] " Gautham R Shenoy
2009-03-25 16:04 ` Ray Lee
2009-03-25 16:17 ` Ingo Molnar
2009-03-25 19:17 ` Gautham R Shenoy
2009-03-25 9:30 ` [RFC PATCH 00/11] sched: find_busiest_group() cleanup Ingo Molnar
2009-03-25 9:42 ` Ingo Molnar