public inbox for linux-kernel@vger.kernel.org
From: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
To: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>,
	Ingo Molnar <mingo@elte.hu>,
	akpm@osdl.org, linux-kernel@vger.kernel.org, steiner@sgi.com,
	dvhltc@us.ibm.com, mbligh@mbligh.org
Subject: Re: allow the load to grow upto its cpu_power (was Re: [Patch] don't kick ALB in the presence of pinned task)
Date: Tue, 9 Aug 2005 19:03:53 -0700	[thread overview]
Message-ID: <20050809190352.D1938@unix-os.sc.intel.com> (raw)
In-Reply-To: <42F94A00.3070504@yahoo.com.au>; from nickpiggin@yahoo.com.au on Wed, Aug 10, 2005 at 10:27:44AM +1000

On Wed, Aug 10, 2005 at 10:27:44AM +1000, Nick Piggin wrote:
> Yeah this makes sense. Thanks.
> 
> I think we'll only need your first line change to fix this, though.
> 
> Your second change will break situations where a single group is very
> loaded, but it is in a domain with lots of cpu_power
> (total_load <= total_power).

In that case, we will move the excess load from that group to some
other group that is below its capacity. Instead of bringing everyone
to the average load, we make sure that everyone is at or below its
cpu_power. This minimizes movement between the nodes.

For example, let us assume sched groups node-0 and node-1 each have
4*SCHED_LOAD_SCALE as their cpu_power.

With 6 tasks on node-0 and 0 on node-1, the current load balancer
will move 3 tasks from node-0 to node-1. But with my patch, it will
move only 2 tasks to node-1. Is this what you are referring to as
breakage?

Even with just the first line change, we can still end up in a state
of 4 tasks on node-0 and 2 on node-1.

With the second hunk of the patch, we minimize the movement between
nodes while making sure everyone is at or below its cpu_power when the
system is lightly loaded.

If a group's resources are very critical, the group's cpu_power should
reflect that criticality.

thanks,
suresh


Thread overview: 16+ messages
2005-08-02  0:42 [Patch] don't kick ALB in the presence of pinned task Siddha, Suresh B
2005-08-02  6:09 ` Nick Piggin
2005-08-02  9:43   ` Ingo Molnar
2005-08-02 10:06     ` Nick Piggin
2005-08-02 21:12   ` Siddha, Suresh B
2005-08-02  9:27 ` Ingo Molnar
2005-08-09 23:08   ` allow the load to grow upto its cpu_power (was Re: [Patch] don't kick ALB in the presence of pinned task) Siddha, Suresh B
2005-08-10  0:27     ` Nick Piggin
2005-08-10  2:03       ` Siddha, Suresh B [this message]
2005-08-11  3:09         ` Nick Piggin
2005-08-11 18:14           ` Siddha, Suresh B
2005-08-11 23:49             ` Nick Piggin
2005-08-12  0:39               ` Siddha, Suresh B
2005-08-12  1:24                 ` Nick Piggin
2005-08-12  1:44                   ` Siddha, Suresh B
2005-08-10  7:16     ` Ingo Molnar
