public inbox for linux-kernel@vger.kernel.org
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: mingo@kernel.org, pjt@google.com, vatsa@in.ibm.com,
	suresh.b.siddha@intel.com, efault@gmx.de
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra <a.p.zijlstra@chello.nl>
Subject: [RFC][PATCH 1/5] sched, fair: Let minimally loaded cpu balance the group
Date: Tue, 01 May 2012 20:14:31 +0200	[thread overview]
Message-ID: <20120501182610.745844312@chello.nl> (raw)
In-Reply-To: <20120501181430.007891123@chello.nl>

[-- Attachment #1: sched-balance-min-cpu.patch --]
[-- Type: text/plain, Size: 1476 bytes --]

Currently we let the leftmost (or first idle) cpu ascend the
sched_domain tree and perform load-balancing. The result is that the
busiest cpu in the group might end up performing this function and
pull even more load onto itself. The next load-balance pass then has
to equalize this again.

Change this to pick the least loaded cpu in the group to perform the
higher-domain balancing.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/sched/fair.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

Index: linux-2.6/kernel/sched/fair.c
===================================================================
--- linux-2.6.orig/kernel/sched/fair.c
+++ linux-2.6/kernel/sched/fair.c
@@ -3779,7 +3779,8 @@ static inline void update_sg_lb_stats(st
 {
 	unsigned long load, max_cpu_load, min_cpu_load, max_nr_running;
 	int i;
-	unsigned int balance_cpu = -1, first_idle_cpu = 0;
+	unsigned int balance_cpu = -1;
+	unsigned long balance_load = ~0UL;
 	unsigned long avg_load_per_task = 0;
 
 	if (local_group)
@@ -3795,12 +3796,11 @@ static inline void update_sg_lb_stats(st
 
 		/* Bias balancing toward cpus of our domain */
 		if (local_group) {
-			if (idle_cpu(i) && !first_idle_cpu) {
-				first_idle_cpu = 1;
+			load = target_load(i, load_idx);
+			if (load < balance_load || idle_cpu(i)) {
+				balance_load = load;
 				balance_cpu = i;
 			}
-
-			load = target_load(i, load_idx);
 		} else {
 			load = source_load(i, load_idx);
 			if (load > max_cpu_load) {



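Not part of the patch, just an illustration: the small userspace C
program below applies the same selection rule as the hunk above to a
toy group of cpus (smallest load wins, an idle cpu always wins).
NR_GROUP_CPUS, group_load[] and group_idle[] are made-up stand-ins
for the group iteration, target_load() and idle_cpu(); this is a
sketch under those assumptions, not kernel code.

#include <stdio.h>

#define NR_GROUP_CPUS 4

int main(void)
{
	/* Per-cpu load and idle state for a hypothetical group. */
	unsigned long group_load[NR_GROUP_CPUS] = { 750, 120, 480, 260 };
	int group_idle[NR_GROUP_CPUS]           = {   0,   0,   1,   0 };

	unsigned long balance_load = ~0UL;	/* no cpu picked yet */
	int balance_cpu = -1;
	int i;

	for (i = 0; i < NR_GROUP_CPUS; i++) {
		unsigned long load = group_load[i];

		/* Same condition as the patch: least load, idle wins. */
		if (load < balance_load || group_idle[i]) {
			balance_load = load;
			balance_cpu = i;
		}
	}

	printf("cpu %d balances the parent domain (load %lu)\n",
	       balance_cpu, balance_load);
	return 0;
}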

Thread overview: 10+ messages
2012-05-01 18:14 [RFC][PATCH 0/5] various sched and numa bits Peter Zijlstra
2012-05-01 18:14 ` Peter Zijlstra [this message]
2012-05-02 10:25   ` [RFC][PATCH 1/5] sched, fair: Let minimally loaded cpu balance the group Srivatsa Vaddagiri
2012-05-02 10:31     ` Peter Zijlstra
2012-05-02 10:34       ` Srivatsa Vaddagiri
2012-05-04  0:05         ` Suresh Siddha
2012-05-04 16:09           ` Peter Zijlstra
2012-05-01 18:14 ` [RFC][PATCH 2/5] sched, fair: Add some serialization to the sched_domain load-balance walk Peter Zijlstra
2012-05-01 18:14 ` [RFC][PATCH 3/5] x86: Allow specifying node_distance() for numa=fake Peter Zijlstra
2012-05-01 18:14 ` [RFC][PATCH 4/5] x86: Hard partition cpu topology masks on node boundaries Peter Zijlstra
