public inbox for linux-kernel@vger.kernel.org
From: Matthew Dobson <colpatch@us.ibm.com>
To: Ingo Molnar <mingo@elte.hu>
Cc: Nick Piggin <piggin@cyberone.com.au>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Michael Hohnbaum <hohnbaum@us.ibm.com>
Subject: Re: [rfc][patch] kernel/sched.c oddness?
Date: Thu, 03 Oct 2002 14:15:09 -0700	[thread overview]
Message-ID: <3D9CB35D.90503@us.ibm.com> (raw)
In-Reply-To: Pine.LNX.4.44.0210030840110.4477-100000@localhost.localdomain

[-- Attachment #1: Type: text/plain, Size: 1054 bytes --]

Ingo Molnar wrote:
> this was done intentionally, and this scenario (1+2 tasks) is the very
> worst scenario. The problem is that by trying to balance all 3 tasks we
> now have 3 tasks that trash their cache going from one CPU to another.  
> (this is what happens with your patch - even with another approach we'd
> have to trash at least one task)
> 
> By keeping 2 tasks on one CPU and 1 task on the other CPU we avoid
> cross-CPU migration of threads. Think about the 2+3 or 4+5 tasks case
> rather, do we want absolutely perfect balancing, or good SMP affinity and
> good combined performance?
OK...  But what about the (imbalance / 2) part?  Either the comment 
needs to change, or the code.  Attached is a slightly revised patch for 
the code.  The comment patch would be even easier:

-	/* It needs an at least ~25% imbalance to trigger balancing. */
+	/* It needs an at least ~50% imbalance to trigger balancing. */

Either way works for me.  I'd like to see something done, as the 
comments don't match the code right now...

Cheers!

-Matt

[-- Attachment #2: sched_cleanup-2.5.40.patch --]
[-- Type: text/plain, Size: 912 bytes --]

diff -Nur --exclude-from=/usr/src/.dontdiff linux-2.5.40-vanilla/kernel/sched.c linux-2.5.40-sched_cleanup/kernel/sched.c
--- linux-2.5.40-vanilla/kernel/sched.c	Tue Oct  1 00:07:35 2002
+++ linux-2.5.40-sched_cleanup/kernel/sched.c	Thu Oct  3 14:09:31 2002
@@ -689,10 +689,10 @@
 	if (likely(!busiest))
 		goto out;
 
-	*imbalance = (max_load - nr_running) / 2;
+	*imbalance = max_load - nr_running;
 
 	/* It needs an at least ~25% imbalance to trigger balancing. */
-	if (!idle && (*imbalance < (max_load + 3)/4)) {
+	if (!idle && (*imbalance <= (max_load + 3)/4)) {
 		busiest = NULL;
 		goto out;
 	}
@@ -746,6 +746,11 @@
 	task_t *tmp;
 
 	busiest = find_busiest_queue(this_rq, this_cpu, idle, &imbalance);
+	/*
+	 * We only want to steal a number of tasks equal to 1/2 the imbalance,
+	 * otherwise, we'll just shift the imbalance to the new queue.
+	 */
+	imbalance /= 2;
 	if (!busiest)
 		goto out;
 


Thread overview: 7+ messages
2002-10-02 18:41 [rfc][patch] kernel/sched.c oddness? Matthew Dobson
2002-10-03  0:06 ` Nick Piggin
2002-10-03  0:30   ` Matthew Dobson
2002-10-03  6:54   ` Ingo Molnar
2002-10-03  8:32     ` Nick Piggin
2002-10-03 21:15     ` Matthew Dobson [this message]
2002-10-04  7:46       ` Ingo Molnar
