* [patch] sched: fix active load balance
@ 2005-04-13 19:07 Siddha, Suresh B
2005-04-13 20:08 ` Ingo Molnar
0 siblings, 1 reply; 5+ messages in thread
From: Siddha, Suresh B @ 2005-04-13 19:07 UTC (permalink / raw)
To: nickpiggin, akpm; +Cc: linux-kernel
Recent changes to active load balance (sched2-fix-smt-scheduling-problems.patch
in -mm) are not resulting in any task movement from busiest to target_cpu.
The attached patch (on top of -mm) fixes this issue.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
--- linux-2.6.12-rc2-mm3/kernel/sched.c 2005-04-11 23:08:30.875103512 -0700
+++ linux/kernel/sched.c 2005-04-13 10:44:37.400829992 -0700
@@ -2131,7 +2131,7 @@
*/
static void active_load_balance(runqueue_t *busiest_rq, int busiest_cpu)
{
- struct sched_domain *tmp = NULL, *sd;
+ struct sched_domain *sd;
runqueue_t *target_rq;
int target_cpu = busiest_rq->push_cpu;
@@ -2152,13 +2152,10 @@
double_lock_balance(busiest_rq, target_rq);
/* Search for an sd spanning us and the target CPU. */
- for_each_domain(target_cpu, sd) {
+ for_each_domain(target_cpu, sd)
if ((sd->flags & SD_LOAD_BALANCE) &&
- cpu_isset(busiest_cpu, sd->span)) {
- sd = tmp;
+ cpu_isset(busiest_cpu, sd->span))
break;
- }
- }
if (unlikely(sd == NULL))
goto out;
* Re: [patch] sched: fix active load balance
2005-04-13 19:07 [patch] sched: fix active load balance Siddha, Suresh B
@ 2005-04-13 20:08 ` Ingo Molnar
2005-04-13 20:28 ` Siddha, Suresh B
0 siblings, 1 reply; 5+ messages in thread
From: Ingo Molnar @ 2005-04-13 20:08 UTC (permalink / raw)
To: Siddha, Suresh B; +Cc: nickpiggin, akpm, linux-kernel
* Siddha, Suresh B <suresh.b.siddha@intel.com> wrote:
> - for_each_domain(target_cpu, sd) {
> + for_each_domain(target_cpu, sd)
> if ((sd->flags & SD_LOAD_BALANCE) &&
> - cpu_isset(busiest_cpu, sd->span)) {
> - sd = tmp;
> + cpu_isset(busiest_cpu, sd->span))
> break;
> - }
> - }
hm, the right fix i think is to do:
for_each_domain(target_cpu, tmp) {
if ((tmp->flags & SD_LOAD_BALANCE) &&
cpu_isset(busiest_cpu, tmp->span)) {
sd = tmp;
break;
}
}
because when balancing we want to match the widest-scope domain, not the
first one. The s/tmp/sd thing was a typo i suspect.
Ingo
* Re: [patch] sched: fix active load balance
2005-04-13 20:08 ` Ingo Molnar
@ 2005-04-13 20:28 ` Siddha, Suresh B
2005-04-13 23:06 ` Nick Piggin
2005-04-14 7:23 ` Ingo Molnar
0 siblings, 2 replies; 5+ messages in thread
From: Siddha, Suresh B @ 2005-04-13 20:28 UTC (permalink / raw)
To: Ingo Molnar; +Cc: Siddha, Suresh B, nickpiggin, akpm, linux-kernel
On Wed, Apr 13, 2005 at 10:08:28PM +0200, Ingo Molnar wrote:
>
> * Siddha, Suresh B <suresh.b.siddha@intel.com> wrote:
>
> > - for_each_domain(target_cpu, sd) {
> > + for_each_domain(target_cpu, sd)
> > if ((sd->flags & SD_LOAD_BALANCE) &&
> > - cpu_isset(busiest_cpu, sd->span)) {
> > - sd = tmp;
> > + cpu_isset(busiest_cpu, sd->span))
> > break;
> > - }
> > - }
>
> hm, the right fix i think is to do:
>
> for_each_domain(target_cpu, tmp) {
> if ((tmp->flags & SD_LOAD_BALANCE) &&
> cpu_isset(busiest_cpu, tmp->span)) {
> sd = tmp;
> break;
> }
> }
Your suggestion looks similar to my patch: you are also breaking on the
first matching domain.
> because when balancing we want to match the widest-scope domain, not the
> first one.
We want the first domain spanning both CPUs. That is the domain where
normal load balancing failed and we resort to active load balancing.
thanks,
suresh
* Re: [patch] sched: fix active load balance
2005-04-13 20:28 ` Siddha, Suresh B
@ 2005-04-13 23:06 ` Nick Piggin
2005-04-14 7:23 ` Ingo Molnar
1 sibling, 0 replies; 5+ messages in thread
From: Nick Piggin @ 2005-04-13 23:06 UTC (permalink / raw)
To: Siddha, Suresh B; +Cc: Ingo Molnar, akpm, linux-kernel
Siddha, Suresh B wrote:
> On Wed, Apr 13, 2005 at 10:08:28PM +0200, Ingo Molnar wrote:
>
>>* Siddha, Suresh B <suresh.b.siddha@intel.com> wrote:
>>
>>
>>>- for_each_domain(target_cpu, sd) {
>>>+ for_each_domain(target_cpu, sd)
>>> if ((sd->flags & SD_LOAD_BALANCE) &&
>>>- cpu_isset(busiest_cpu, sd->span)) {
>>>- sd = tmp;
>>>+ cpu_isset(busiest_cpu, sd->span))
>>> break;
>>>- }
>>>- }
>>
Yep that was broken :(
Thanks for picking that up Suresh.
>>hm, the right fix i think is to do:
>>
>> for_each_domain(target_cpu, tmp) {
>> if ((tmp->flags & SD_LOAD_BALANCE) &&
>> cpu_isset(busiest_cpu, tmp->span)) {
>> sd = tmp;
>> break;
>> }
>> }
>
>
> Your suggestion looks similar to my patch: you are also breaking on the
> first matching domain.
>
>
>>because when balancing we want to match the widest-scope domain, not the
>>first one.
>
>
> We want the first domain spanning both CPUs. That is the domain where
> normal load balancing failed and we resort to active load balancing.
>
Yes, that's right.
--
SUSE Labs, Novell Inc.
* Re: [patch] sched: fix active load balance
2005-04-13 20:28 ` Siddha, Suresh B
2005-04-13 23:06 ` Nick Piggin
@ 2005-04-14 7:23 ` Ingo Molnar
1 sibling, 0 replies; 5+ messages in thread
From: Ingo Molnar @ 2005-04-14 7:23 UTC (permalink / raw)
To: Siddha, Suresh B; +Cc: nickpiggin, akpm, linux-kernel
* Siddha, Suresh B <suresh.b.siddha@intel.com> wrote:
> Your suggestion looks similar to my patch: you are also breaking
> on the first matching domain.
yeah.
> We want the first domain spanning both CPUs. That is the domain
> where normal load balancing failed and we resort to active load
> balancing.
indeed.
Acked-by: Ingo Molnar <mingo@elte.hu>
Ingo