public inbox for linux-kernel@vger.kernel.org
* [PATCH] sched: Reduce the rate of needless idle load balancing
@ 2014-05-20 20:17 Tim Chen
  2014-05-20 20:51 ` Jason Low
  0 siblings, 1 reply; 14+ messages in thread
From: Tim Chen @ 2014-05-20 20:17 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: Andrew Morton, Len Brown, Russ Anderson, Dimitri Sivanich,
	Hedi Berriche, Andi Kleen, Michel Lespinasse, Rik van Riel,
	Peter Hurley, linux-kernel

The current nohz idle load balancer does load balancing on behalf of
*all* idle cpus, even though the next load balance for a particular
idle cpu may still be a while in the future.  This causes a much
higher load balancing rate than necessary.  This patch changes the
behavior so that idle load balancing is done on behalf of an idle cpu
only when its load balance is actually due.

On SGI systems with over 3000 cores, the cpu responsible for idle load
balancing got overwhelmed with idle balancing work and introduced a lot
of OS noise to workloads.  This patch fixes the issue.

Thanks.

Tim

Acked-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 kernel/sched/fair.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9b4c4f3..97132db 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6764,12 +6764,17 @@ static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle)
 
 		rq = cpu_rq(balance_cpu);
 
-		raw_spin_lock_irq(&rq->lock);
-		update_rq_clock(rq);
-		update_idle_cpu_load(rq);
-		raw_spin_unlock_irq(&rq->lock);
-
-		rebalance_domains(rq, CPU_IDLE);
+		/*
+		 * If time for next balance is due,
+		 * do the balance.
+		 */
+		if (time_after(jiffies + 1, rq->next_balance)) {
+			raw_spin_lock_irq(&rq->lock);
+			update_rq_clock(rq);
+			update_idle_cpu_load(rq);
+			raw_spin_unlock_irq(&rq->lock);
+			rebalance_domains(rq, CPU_IDLE);
+		}
 
 		if (time_after(this_rq->next_balance, rq->next_balance))
 			this_rq->next_balance = rq->next_balance;
-- 
1.7.11.7





Thread overview: 14+ messages
2014-05-20 20:17 [PATCH] sched: Reduce the rate of needless idle load balancing Tim Chen
2014-05-20 20:51 ` Jason Low
2014-05-20 20:58   ` Rik van Riel
2014-05-20 20:59   ` Tim Chen
2014-05-20 21:04     ` Tim Chen
2014-05-21  1:15       ` Joe Perches
2014-05-21 16:37         ` Tim Chen
2014-05-21 18:26           ` Davidlohr Bueso
2014-05-21 18:49             ` Tim Chen
2014-05-20 21:09     ` Jason Low
2014-05-20 21:12       ` Tim Chen
2014-05-20 21:39       ` Tim Chen
2014-05-21  6:38         ` Peter Zijlstra
2014-06-05 14:34         ` [tip:sched/core] sched/balancing: " tip-bot for Tim Chen
