public inbox for linux-kernel@vger.kernel.org
From: Peter Williams <pwil3058@bigpond.net.au>
To: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: Con Kolivas <kernel@kolivas.org>,
	mingo@elte.hu, nickpiggin@yahoo.com.au,
	linux-kernel@vger.kernel.org
Subject: Re: smp 'nice' bias support breaks scheduler behavior
Date: Fri, 27 Jan 2006 10:36:36 +1100	[thread overview]
Message-ID: <43D95D04.8050802@bigpond.net.au> (raw)
In-Reply-To: <200601262325.05296.kernel@kolivas.org>

Con Kolivas wrote:
> On Thursday 26 January 2006 21:52, Siddha, Suresh B wrote:
> 
>>Con,
>>
>>
>>>[PATCH] sched: implement nice support across physical cpus on SMP
>>
>>I don't see imbalance calculations in find_busiest_group() take
>>prio_bias into account. This will result in wrong imbalance value and
>>will cause issues.
> 
> 
> 
> in 2.6.16-rc1:
> 
> find_busiest_group(....
> 
> 	load = __target_load(i, load_idx, idle);
> else
> 	load = __source_load(i, load_idx, idle);
> 
> where __target_load() and __source_load() are the places that take prio_bias into account.
> 
> I'm not sure which code you're looking at, but Peter Williams is working on 
> rewriting the smp nice balancing code in -mm at the moment so that is quite 
> different from current linus tree.
> 

Yes, indeed.  And it would be very helpful if people interested in this 
topic (and who have test suites designed to check whether "niceness" is 
being well balanced across CPUs) could test it.  This is especially the 
case for larger systems, as I do not have ready access to them for testing.

> 
> 
>>For example, on a DP system with HT, if there are three runnable processes
>>(simple infinite loops with the same nice value), this patch results in
>>these 3 processes bouncing from one processor to another... Let's assume
>>the 3 processes are scheduled as 2 in package-0 and 1 in package-1.
>>Now when the busy processor in package-1 does load balancing, because the
>>imbalance doesn't take "prio_bias" into account, it will kick off active
>>load balancing on package-0... And this continues forever, resulting
>>in bouncing from one processor to another.
>>
>>Even when the system is completely loaded, if there is an imbalance,
>>this patch produces wrong imbalance counts and causes unoptimized
>>movements.
>>
>>Do you want to look into this and post a patch for 2.6.16?

Thanks,
Peter
-- 
Peter Williams                                   pwil3058@bigpond.net.au

"Learning, n. The kind of ignorance distinguishing the studious."
  -- Ambrose Bierce


Thread overview: 9+ messages
2006-01-26 10:52 smp 'nice' bias support breaks scheduler behavior Siddha, Suresh B
2006-01-26 12:25 ` Con Kolivas
2006-01-26 23:36   ` Peter Williams [this message]
2006-01-26 23:56     ` Siddha, Suresh B
2006-01-27  1:29       ` Con Kolivas
2006-01-27  1:34         ` Siddha, Suresh B
2006-01-27  1:54           ` Con Kolivas
2006-01-27  2:11             ` Siddha, Suresh B
2006-01-27  2:58               ` Con Kolivas
