From: Nick Piggin <nickpiggin@yahoo.com.au>
To: Ingo Molnar <mingo@elte.hu>
Cc: Andrew Morton <akpm@osdl.org>,
"Siddha, Suresh B" <suresh.b.siddha@intel.com>,
linux-kernel <linux-kernel@vger.kernel.org>,
Jack Steiner <steiner@sgi.com>
Subject: [patch 2/2] sched: reduce locking in periodic balancing
Date: Tue, 02 Aug 2005 22:25:12 +1000 [thread overview]
Message-ID: <42EF6628.4070102@yahoo.com.au> (raw)
In-Reply-To: <42EF65FF.2000102@yahoo.com.au>
[-- Attachment #1: Type: text/plain, Size: 33 bytes --]
2/2
--
SUSE Labs, Novell Inc.
[-- Attachment #2: sched-less-locking.patch --]
[-- Type: text/plain, Size: 2013 bytes --]
During periodic load balancing, don't hold this runqueue's lock while
scanning remote runqueues, which can take a non-trivial amount of time,
especially on very large systems.
Holding the runqueue lock only helps to stabilise ->nr_running; however,
this doesn't achieve much, because tasks being woken will simply get held
up on the runqueue lock, so ->nr_running would not give an accurate
picture of runqueue load in that case anyway.
What's more, ->nr_running (and possibly the cpu_load averages) of
remote runqueues won't be stable anyway, so load balancing is always
an inexact operation.
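The key difference between the two locking helpers is their entry state:
double_lock_balance() expects this_rq->lock to already be held and only
acquires busiest->lock (dropping and retaking this_rq->lock when needed
to preserve lock ordering), whereas double_rq_lock() starts with neither
lock held and takes both in a fixed order. Roughly (a simplified sketch
of the existing helpers for illustration, not a verbatim copy of
kernel/sched.c):

/* Caller already holds this_rq->lock; also take busiest->lock,
 * dropping and retaking this_rq->lock if required so that the
 * lower-addressed runqueue is always locked first. */
static void double_lock_balance(runqueue_t *this_rq, runqueue_t *busiest)
{
	if (unlikely(!spin_trylock(&busiest->lock))) {
		if (busiest < this_rq) {
			spin_unlock(&this_rq->lock);
			spin_lock(&busiest->lock);
			spin_lock(&this_rq->lock);
		} else
			spin_lock(&busiest->lock);
	}
}

/* Caller holds neither lock; take both, lower address first. */
static void double_rq_lock(runqueue_t *rq1, runqueue_t *rq2)
{
	if (rq1 == rq2)
		spin_lock(&rq1->lock);
	else if (rq1 < rq2) {
		spin_lock(&rq1->lock);
		spin_lock(&rq2->lock);
	} else {
		spin_lock(&rq2->lock);
		spin_lock(&rq1->lock);
	}
}

Since this patch no longer takes this_rq->lock at the top of
load_balance(), the precondition of double_lock_balance() would no longer
hold, hence the switch to double_rq_lock()/double_rq_unlock() around
move_tasks().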
Signed-off-by: Nick Piggin <npiggin@suse.de>
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c 2005-08-02 21:35:38.000000000 +1000
+++ linux-2.6/kernel/sched.c 2005-08-02 21:35:38.000000000 +1000
@@ -2051,7 +2051,6 @@ static int load_balance(int this_cpu, ru
 	int nr_moved, all_pinned = 0;
 	int active_balance = 0;
 
-	spin_lock(&this_rq->lock);
 	schedstat_inc(sd, lb_cnt[idle]);
 
 	group = find_busiest_group(sd, this_cpu, &imbalance, idle);
@@ -2078,18 +2077,16 @@ static int load_balance(int this_cpu, ru
 		 * still unbalanced. nr_moved simply stays zero, so it is
 		 * correctly treated as an imbalance.
 		 */
-		double_lock_balance(this_rq, busiest);
+		double_rq_lock(this_rq, busiest);
 		nr_moved = move_tasks(this_rq, this_cpu, busiest,
 					imbalance, sd, idle, &all_pinned);
-		spin_unlock(&busiest->lock);
+		double_rq_unlock(this_rq, busiest);
 
 		/* All tasks on this runqueue were pinned by CPU affinity */
 		if (unlikely(all_pinned))
 			goto out_balanced;
 	}
 
-	spin_unlock(&this_rq->lock);
-
 	if (!nr_moved) {
 		schedstat_inc(sd, lb_failed[idle]);
 		sd->nr_balance_failed++;
@@ -2132,8 +2129,6 @@ static int load_balance(int this_cpu, ru
 	return nr_moved;
 
 out_balanced:
-	spin_unlock(&this_rq->lock);
-
 	schedstat_inc(sd, lb_balanced[idle]);
 
 	sd->nr_balance_failed = 0;
Thread overview: 7+ messages
2005-08-02 12:23 [patch 0/2] sched: reduce locking Nick Piggin
2005-08-02 12:24 ` [patch 1/2] sched: reduce locking in newidle balancing Nick Piggin
2005-08-02 12:25 ` Nick Piggin [this message]
2005-08-03 7:59 ` [patch 2/2] sched: reduce locking in periodic balancing Ingo Molnar
2005-08-03 10:25 ` Nick Piggin
2005-08-03 7:51 ` [patch 1/2] sched: reduce locking in newidle balancing Ingo Molnar
2005-08-02 12:40 ` [patch 0/2] sched: reduce locking Ingo Molnar