public inbox for linux-kernel@vger.kernel.org
* [patch 0/2] sched: reduce locking
@ 2005-08-02 12:23 Nick Piggin
  2005-08-02 12:24 ` [patch 1/2] sched: reduce locking in newidle balancing Nick Piggin
  2005-08-02 12:40 ` [patch 0/2] sched: reduce locking Ingo Molnar
  0 siblings, 2 replies; 7+ messages in thread
From: Nick Piggin @ 2005-08-02 12:23 UTC (permalink / raw)
  To: Ingo Molnar, Andrew Morton, Siddha, Suresh B, linux-kernel,
	Jack Steiner

Hi,
I've had these patches around for a while, and I'd like to
get rid of them. They could possibly even go in 2.6.13.

I haven't really done performance testing because it is
difficult to get real workloads going that genuinely stress
these paths. There are small improvements on things like
tbench on bigger systems, but nothing greatly interesting.

I think on real workloads, things could get more interesting.

Actually, it would be interesting to know how these go on the
_really_ big systems, and whether lock and cacheline contention
in the scheduler is still a problem for them.

-- 
SUSE Labs, Novell Inc.


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [patch 1/2] sched: reduce locking in newidle balancing
  2005-08-02 12:23 [patch 0/2] sched: reduce locking Nick Piggin
@ 2005-08-02 12:24 ` Nick Piggin
  2005-08-02 12:25   ` [patch 2/2] sched: reduce locking in periodic balancing Nick Piggin
  2005-08-03  7:51   ` [patch 1/2] sched: reduce locking in newidle balancing Ingo Molnar
  2005-08-02 12:40 ` [patch 0/2] sched: reduce locking Ingo Molnar
  1 sibling, 2 replies; 7+ messages in thread
From: Nick Piggin @ 2005-08-02 12:24 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Andrew Morton, Siddha, Suresh B, linux-kernel, Jack Steiner

[-- Attachment #1: Type: text/plain, Size: 33 bytes --]

1/2

-- 
SUSE Labs, Novell Inc.


[-- Attachment #2: sched-less-newidle-locking.patch --]
[-- Type: text/plain, Size: 1600 bytes --]

Similarly to the earlier change in load_balance, only lock the runqueue
in load_balance_newidle if the busiest queue found has nr_running > 1.
This will reduce the frequency of expensive remote runqueue lock
acquisitions in the schedule() path on some workloads.

Signed-off-by: Nick Piggin <npiggin@suse.de>

Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c	2005-08-02 21:35:36.000000000 +1000
+++ linux-2.6/kernel/sched.c	2005-08-02 21:56:40.000000000 +1000
@@ -2080,8 +2080,7 @@ static int load_balance(int this_cpu, ru
 		 */
 		double_lock_balance(this_rq, busiest);
 		nr_moved = move_tasks(this_rq, this_cpu, busiest,
-						imbalance, sd, idle,
-						&all_pinned);
+					imbalance, sd, idle, &all_pinned);
 		spin_unlock(&busiest->lock);
 
 		/* All tasks on this runqueue were pinned by CPU affinity */
@@ -2176,18 +2175,22 @@ static int load_balance_newidle(int this
 
 	BUG_ON(busiest == this_rq);
 
-	/* Attempt to move tasks */
-	double_lock_balance(this_rq, busiest);
-
 	schedstat_add(sd, lb_imbalance[NEWLY_IDLE], imbalance);
-	nr_moved = move_tasks(this_rq, this_cpu, busiest,
+
+	nr_moved = 0;
+	if (busiest->nr_running > 1) {
+		/* Attempt to move tasks */
+		double_lock_balance(this_rq, busiest);
+		nr_moved = move_tasks(this_rq, this_cpu, busiest,
 					imbalance, sd, NEWLY_IDLE, NULL);
+		spin_unlock(&busiest->lock);
+	}
+
 	if (!nr_moved)
 		schedstat_inc(sd, lb_failed[NEWLY_IDLE]);
 	else
 		sd->nr_balance_failed = 0;
 
-	spin_unlock(&busiest->lock);
 	return nr_moved;
 
 out_balanced:


* [patch 2/2] sched: reduce locking in periodic balancing
  2005-08-02 12:24 ` [patch 1/2] sched: reduce locking in newidle balancing Nick Piggin
@ 2005-08-02 12:25   ` Nick Piggin
  2005-08-03  7:59     ` Ingo Molnar
  2005-08-03  7:51   ` [patch 1/2] sched: reduce locking in newidle balancing Ingo Molnar
  1 sibling, 1 reply; 7+ messages in thread
From: Nick Piggin @ 2005-08-02 12:25 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Andrew Morton, Siddha, Suresh B, linux-kernel, Jack Steiner

[-- Attachment #1: Type: text/plain, Size: 33 bytes --]

2/2

-- 
SUSE Labs, Novell Inc.


[-- Attachment #2: sched-less-locking.patch --]
[-- Type: text/plain, Size: 2013 bytes --]

During periodic load balancing, don't hold this runqueue's lock while
scanning remote runqueues, which can take a non trivial amount of time
especially on very large systems.

Holding the runqueue lock will only help to stabalise ->nr_running,
however this isn't doesn't do much to help because tasks being woken
will simply get held up on the runqueue lock, so ->nr_running would
not provide a really accurate picture of runqueue load in that case
anyway.

What's more, ->nr_running (and possibly the cpu_load averages) of
remote runqueues won't be stable anyway, so load balancing is always
an inexact operation.

Signed-off-by: Nick Piggin <npiggin@suse.de>


Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c	2005-08-02 21:35:38.000000000 +1000
+++ linux-2.6/kernel/sched.c	2005-08-02 21:35:38.000000000 +1000
@@ -2051,7 +2051,6 @@ static int load_balance(int this_cpu, ru
 	int nr_moved, all_pinned = 0;
 	int active_balance = 0;
 
-	spin_lock(&this_rq->lock);
 	schedstat_inc(sd, lb_cnt[idle]);
 
 	group = find_busiest_group(sd, this_cpu, &imbalance, idle);
@@ -2078,18 +2077,16 @@ static int load_balance(int this_cpu, ru
 		 * still unbalanced. nr_moved simply stays zero, so it is
 		 * correctly treated as an imbalance.
 		 */
-		double_lock_balance(this_rq, busiest);
+		double_rq_lock(this_rq, busiest);
 		nr_moved = move_tasks(this_rq, this_cpu, busiest,
 					imbalance, sd, idle, &all_pinned);
-		spin_unlock(&busiest->lock);
+		double_rq_unlock(this_rq, busiest);
 
 		/* All tasks on this runqueue were pinned by CPU affinity */
 		if (unlikely(all_pinned))
 			goto out_balanced;
 	}
 
-	spin_unlock(&this_rq->lock);
-
 	if (!nr_moved) {
 		schedstat_inc(sd, lb_failed[idle]);
 		sd->nr_balance_failed++;
@@ -2132,8 +2129,6 @@ static int load_balance(int this_cpu, ru
 	return nr_moved;
 
 out_balanced:
-	spin_unlock(&this_rq->lock);
-
 	schedstat_inc(sd, lb_balanced[idle]);
 
 	sd->nr_balance_failed = 0;


* Re: [patch 0/2] sched: reduce locking
  2005-08-02 12:23 [patch 0/2] sched: reduce locking Nick Piggin
  2005-08-02 12:24 ` [patch 1/2] sched: reduce locking in newidle balancing Nick Piggin
@ 2005-08-02 12:40 ` Ingo Molnar
  1 sibling, 0 replies; 7+ messages in thread
From: Ingo Molnar @ 2005-08-02 12:40 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Andrew Morton, Siddha, Suresh B, linux-kernel, Jack Steiner


* Nick Piggin <nickpiggin@yahoo.com.au> wrote:

> Hi,
> I've had these patches around for a while, and I'd like to
> get rid of them. They could possibly even go in 2.6.13.

Looks good. Regarding timeline: unless 2.6.13 reopens (due to whatever 
delay), these should not get into 2.6.13 - but they are perfectly fine 
for -mm and for 2.6.14.

	Ingo


* Re: [patch 1/2] sched: reduce locking in newidle balancing
  2005-08-02 12:24 ` [patch 1/2] sched: reduce locking in newidle balancing Nick Piggin
  2005-08-02 12:25   ` [patch 2/2] sched: reduce locking in periodic balancing Nick Piggin
@ 2005-08-03  7:51   ` Ingo Molnar
  1 sibling, 0 replies; 7+ messages in thread
From: Ingo Molnar @ 2005-08-03  7:51 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Andrew Morton, Siddha, Suresh B, linux-kernel, Jack Steiner


* Nick Piggin <nickpiggin@yahoo.com.au> wrote:

> Similarly to the earlier change in load_balance, only lock the 
> runqueue in load_balance_newidle if the busiest queue found has 
> nr_running > 1. This will reduce the frequency of expensive remote 
> runqueue lock acquisitions in the schedule() path on some workloads.
> 
> Signed-off-by: Nick Piggin <npiggin@suse.de>

Acked-by: Ingo Molnar <mingo@elte.hu>

	Ingo


* Re: [patch 2/2] sched: reduce locking in periodic balancing
  2005-08-02 12:25   ` [patch 2/2] sched: reduce locking in periodic balancing Nick Piggin
@ 2005-08-03  7:59     ` Ingo Molnar
  2005-08-03 10:25       ` Nick Piggin
  0 siblings, 1 reply; 7+ messages in thread
From: Ingo Molnar @ 2005-08-03  7:59 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Andrew Morton, Siddha, Suresh B, linux-kernel, Jack Steiner


* Nick Piggin <nickpiggin@yahoo.com.au> wrote:

> During periodic load balancing, don't hold this runqueue's lock while
> scanning remote runqueues, which can take a non trivial amount of time
> especially on very large systems.
> 
> Holding the runqueue lock will only help to stabalise ->nr_running,

s/stabalise/stabilise/

> however this isn't doesn't do much to help because tasks being woken 

s/isn't //

> will simply get held up on the runqueue lock, so ->nr_running would 
> not provide a really accurate picture of runqueue load in that case 
> anyway.
> 
> What's more, ->nr_running (and possibly the cpu_load averages) of
> remote runqueues won't be stable anyway, so load balancing is always
> an inexact operation.
> 
> Signed-off-by: Nick Piggin <npiggin@suse.de>

btw., holding the runqueue lock during the initial scanning portion of 
load-balancing is one of the top PREEMPT_RT critical paths on SMP. (It's 
not bad, but it's one of the factors that makes SMP latencies higher.)

Acked-by: Ingo Molnar <mingo@elte.hu>

	Ingo


* Re: [patch 2/2] sched: reduce locking in periodic balancing
  2005-08-03  7:59     ` Ingo Molnar
@ 2005-08-03 10:25       ` Nick Piggin
  0 siblings, 0 replies; 7+ messages in thread
From: Nick Piggin @ 2005-08-03 10:25 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Andrew Morton, Siddha, Suresh B, linux-kernel, Jack Steiner

Ingo Molnar wrote:

[...]

Thanks for the corrections.

> 
> btw., holding the runqueue lock during the initial scanning portion of 
> load-balancing is one of the top PREEMPT_RT critical paths on SMP. (It's 
> not bad, but it's one of the factors that makes SMP latencies higher.)
> 

Good, I'm glad they will be of some immediate help.

> Acked-by: Ingo Molnar <mingo@elte.hu>
> 

Thanks.

-- 
SUSE Labs, Novell Inc.


