From: Peter Zijlstra <peterz@infradead.org>
To: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Ingo Molnar <mingo@redhat.com>,
LKML <linux-kernel@vger.kernel.org>,
"Ma, Ling" <ling.ma@intel.com>,
"Zhang, Yanmin" <yanmin_zhang@linux.intel.com>,
"ego@in.ibm.com" <ego@in.ibm.com>,
"svaidy@linux.vnet.ibm.com" <svaidy@linux.vnet.ibm.com>,
Arun R Bharadwaj <arun@linux.vnet.ibm.com>
Subject: Re: change in sched cpu_power causing regressions with SCHED_MC
Date: Wed, 24 Feb 2010 18:43:38 +0100 [thread overview]
Message-ID: <1267033418.16023.326.camel@laptop> (raw)
In-Reply-To: <1266970432.11588.22.camel@sbs-t61.sc.intel.com>
On Tue, 2010-02-23 at 16:13 -0800, Suresh Siddha wrote:
>
> Ok. Here is the patch with complete changelog. I added "Cc stable" tag
> so that it can be picked up for 2.6.32 and 2.6.33, as we would like to
> see this regression addressed in those kernels. Peter/Ingo: Can you
> please queue this patch for -tip for 2.6.34?
>
Have picked it up with the following changes, Thanks!
Index: linux-2.6/kernel/sched_fair.c
===================================================================
--- linux-2.6.orig/kernel/sched_fair.c
+++ linux-2.6/kernel/sched_fair.c
@@ -2471,10 +2471,6 @@ static inline void update_sg_lb_stats(st
/* Adjust by relative CPU power of the group */
sgs->avg_load = (sgs->group_load * SCHED_LOAD_SCALE) / group->cpu_power;
-
- if (sgs->sum_nr_running)
- avg_load_per_task =
- sgs->sum_weighted_load / sgs->sum_nr_running;
/*
* Consider the group unbalanced when the imbalance is larger
* than the average weight of two tasks.
@@ -2484,6 +2480,9 @@ static inline void update_sg_lb_stats(st
* normalized nr_running number somewhere that negates
* the hierarchy?
*/
+ if (sgs->sum_nr_running)
+ avg_load_per_task = sgs->sum_weighted_load / sgs->sum_nr_running;
+
if ((max_cpu_load - min_cpu_load) > 2*avg_load_per_task)
sgs->group_imb = 1;
@@ -2642,6 +2641,13 @@ static inline void calculate_imbalance(s
unsigned long *imbalance)
{
unsigned long max_pull, load_above_capacity = ~0UL;
+
+ sds->busiest_load_per_task /= sds->busiest_nr_running;
+ if (sds->group_imb) {
+ sds->busiest_load_per_task =
+ min(sds->busiest_load_per_task, sds->avg_load);
+ }
+
/*
* In the presence of smp nice balancing, certain scenarios can have
* max load less than avg load(as we skip the groups at or below
@@ -2742,7 +2748,6 @@ find_busiest_group(struct sched_domain *
* 4) This group is more busy than the avg busieness at this
* sched_domain.
* 5) The imbalance is within the specified limit.
- * 6) Any rebalance would lead to ping-pong
*/
if (!(*balance))
goto ret;
@@ -2761,12 +2766,6 @@ find_busiest_group(struct sched_domain *
if (100 * sds.max_load <= sd->imbalance_pct * sds.this_load)
goto out_balanced;
- sds.busiest_load_per_task /= sds.busiest_nr_running;
- if (sds.group_imb)
- sds.busiest_load_per_task =
- min(sds.busiest_load_per_task, sds.avg_load);
-
-
/* Looks like there is an imbalance. Compute it */
calculate_imbalance(&sds, this_cpu, imbalance);
return sds.busiest;
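The group-imbalance heuristic that the first hunk reorders can be sketched as a standalone check. This is a hedged illustration, not the kernel implementation: the function name `group_is_imbalanced` is hypothetical, and the scalar arguments stand in for the fields of `struct sg_lb_stats` (`sum_weighted_load`, `sum_nr_running`) and the per-CPU load extremes computed earlier in `update_sg_lb_stats()`.

```c
/*
 * Hypothetical standalone sketch of the check:
 *   (max_cpu_load - min_cpu_load) > 2*avg_load_per_task
 * A group is flagged imbalanced when the spread between its most-
 * and least-loaded CPU exceeds twice the average task weight.
 */
static int group_is_imbalanced(unsigned long max_cpu_load,
			       unsigned long min_cpu_load,
			       unsigned long sum_weighted_load,
			       unsigned long sum_nr_running)
{
	unsigned long avg_load_per_task = 0;

	/* Guard the division, as the patched code does. */
	if (sum_nr_running)
		avg_load_per_task = sum_weighted_load / sum_nr_running;

	return (max_cpu_load - min_cpu_load) > 2 * avg_load_per_task;
}
```

For example, two weight-1024 tasks stacked on one CPU of a two-CPU group give a spread of 2048 and an average task weight of 1024; 2048 is not greater than 2*1024, so the group is not flagged. The point of moving the `avg_load_per_task` computation below the comment block is purely ordering within `update_sg_lb_stats()`; the arithmetic is unchanged.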
Thread overview: 40+ messages
2010-02-13 1:14 [patch] sched: fix SMT scheduler regression in find_busiest_queue() Suresh Siddha
2010-02-13 1:31 ` change in sched cpu_power causing regressions with SCHED_MC Suresh Siddha
2010-02-13 10:36 ` Peter Zijlstra
2010-02-13 10:42 ` Peter Zijlstra
2010-02-13 18:37 ` Vaidyanathan Srinivasan
2010-02-13 18:49 ` Suresh Siddha
2010-02-13 18:39 ` Vaidyanathan Srinivasan
2010-02-19 2:16 ` Suresh Siddha
2010-02-19 12:32 ` Arun R Bharadwaj
2010-02-19 13:03 ` Vaidyanathan Srinivasan
2010-02-19 19:15 ` Suresh Siddha
2010-02-19 14:05 ` Peter Zijlstra
2010-02-19 18:36 ` Suresh Siddha
2010-02-19 19:47 ` Peter Zijlstra
2010-02-19 19:50 ` Suresh Siddha
2010-02-19 20:02 ` Peter Zijlstra
2010-02-20 1:13 ` Suresh Siddha
2010-02-22 18:50 ` Peter Zijlstra
2010-02-24 0:13 ` Suresh Siddha
2010-02-24 17:43 ` Peter Zijlstra [this message]
2010-02-24 19:31 ` Suresh Siddha
2010-02-26 10:24 ` [tip:sched/core] sched: Fix SCHED_MC regression caused by change in sched cpu_power tip-bot for Suresh Siddha
2010-02-26 14:55 ` tip-bot for Suresh Siddha
2010-02-19 19:52 ` change in sched cpu_power causing regressions with SCHED_MC Peter Zijlstra
2010-02-13 18:33 ` Vaidyanathan Srinivasan
2010-02-13 18:27 ` [patch] sched: fix SMT scheduler regression in find_busiest_queue() Vaidyanathan Srinivasan
2010-02-13 18:39 ` Suresh Siddha
2010-02-13 18:56 ` Vaidyanathan Srinivasan
2010-02-13 20:25 ` Vaidyanathan Srinivasan
2010-02-13 20:36 ` Vaidyanathan Srinivasan
2010-02-14 10:11 ` Peter Zijlstra
2010-02-15 12:35 ` Vaidyanathan Srinivasan
2010-02-15 13:00 ` Peter Zijlstra
2010-02-16 15:59 ` Vaidyanathan Srinivasan
2010-02-16 17:28 ` Peter Zijlstra
2010-02-16 18:25 ` Vaidyanathan Srinivasan
2010-02-16 18:46 ` Vaidyanathan Srinivasan
2010-02-16 18:48 ` Peter Zijlstra
2010-02-15 22:29 ` Peter Zijlstra
2010-02-16 14:16 ` [tip:sched/urgent] sched: Fix " tip-bot for Suresh Siddha