public inbox for linux-kernel@vger.kernel.org
* [PATCH] clarify find_busiest_group
@ 2004-01-23  6:31 Martin J. Bligh
  2004-01-23  6:44 ` Nick Piggin
  0 siblings, 1 reply; 6+ messages in thread
From: Martin J. Bligh @ 2004-01-23  6:31 UTC (permalink / raw)
  To: Nick Piggin, Andrew Morton; +Cc: linux-kernel, nevdull, dvhart

Fix a minor nit in the find_busiest_group code. No functional change,
but it makes the code simpler and clearer. The patch does two things ... 
adds some more expansive comments, and removes this if clause:

      if (*imbalance < SCHED_LOAD_SCALE
                      && max_load - this_load > SCHED_LOAD_SCALE)
		*imbalance = SCHED_LOAD_SCALE;

If we remove the scaling factor, we're basically conditionally doing:

	if (*imbalance < 1)
		*imbalance = 1;

That is pointless, because the very next statement removes the scaling
factor anyway, rounding up to the nearest integer as it does so:

	*imbalance = (*imbalance + SCHED_LOAD_SCALE - 1) >> SCHED_LOAD_SHIFT;

Thus the if statement is redundant, and only makes the code harder to read ;-)

M.

diff -aurpN -X /home/fletch/.diff.exclude mm5/kernel/sched.c mm5-find_busiest_group/kernel/sched.c
--- mm5/kernel/sched.c	Wed Jan 21 22:07:11 2004
+++ mm5-find_busiest_group/kernel/sched.c	Thu Jan 22 22:21:22 2004
@@ -1440,11 +1440,16 @@ nextgroup:
 	if (idle == NOT_IDLE && 100*max_load <= domain->imbalance_pct*this_load)
 		goto out_balanced;
 
-	/* Take the minimum possible imbalance. */
+	/* 
+	 * We're trying to get all the cpus to the average_load, so we don't 
+	 * want to push ourselves above the average load, nor do we wish to 
+	 * reduce the max loaded cpu below the average load, as either of these
+	 * actions would just result in more rebalancing later, and ping-pong
+	 * tasks around. Thus we look for the minimum possible imbalance.
+	 */
 	*imbalance = min(max_load - avg_load, avg_load - this_load);
-	if (*imbalance < SCHED_LOAD_SCALE
-			&& max_load - this_load > SCHED_LOAD_SCALE)
-		*imbalance = SCHED_LOAD_SCALE;
+	
+	/* Get rid of the scaling factor now, rounding *up* as we divide */
 	*imbalance = (*imbalance + SCHED_LOAD_SCALE - 1) >> SCHED_LOAD_SHIFT;
 
 	if (*imbalance == 0) {



Thread overview: 6+ messages
2004-01-23  6:31 [PATCH] clarify find_busiest_group Martin J. Bligh
2004-01-23  6:44 ` Nick Piggin
2004-01-23  7:02   ` Andrew Morton
2004-01-23 12:43   ` vts have stopped working here Ed Tomlinson
2004-01-23 13:05     ` Vojtech Pavlik
2004-01-23 15:05       ` Ed Tomlinson
