public inbox for linux-kernel@vger.kernel.org
* [PATCH 0/3] sched: pending patches
@ 2009-01-28 13:51 Peter Zijlstra
  2009-01-28 13:51 ` [PATCH 1/3] sched: symmetric sync vs avg_overlap Peter Zijlstra
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Peter Zijlstra @ 2009-01-28 13:51 UTC (permalink / raw)
  To: mingo, efault; +Cc: linux-kernel, Peter Zijlstra



^ permalink raw reply	[flat|nested] 5+ messages in thread

* [PATCH 1/3] sched: symmetric sync vs avg_overlap
  2009-01-28 13:51 [PATCH 0/3] sched: pending patches Peter Zijlstra
@ 2009-01-28 13:51 ` Peter Zijlstra
  2009-01-28 13:51 ` [PATCH 2/3] sched: clear buddies more aggressively Peter Zijlstra
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Peter Zijlstra @ 2009-01-28 13:51 UTC (permalink / raw)
  To: mingo, efault; +Cc: linux-kernel, Peter Zijlstra

[-- Attachment #1: sched-sync_vs_overlap.patch --]
[-- Type: text/plain, Size: 1028 bytes --]

Reinstate the weakening of the sync hint when it is set: if either task shows
a large avg_overlap, drop the hint again. This yields a more symmetric usage
of avg_overlap.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/sched.c |   12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -2360,9 +2360,15 @@ static int try_to_wake_up(struct task_st
 	if (!sched_feat(SYNC_WAKEUPS))
 		sync = 0;
 
-	if (!sync && (current->se.avg_overlap < sysctl_sched_migration_cost &&
-			    p->se.avg_overlap < sysctl_sched_migration_cost))
-		sync = 1;
+	if (!sync) {
+		if (current->se.avg_overlap < sysctl_sched_migration_cost &&
+			  p->se.avg_overlap < sysctl_sched_migration_cost)
+			sync = 1;
+	} else {
+		if (current->se.avg_overlap >= sysctl_sched_migration_cost ||
+			  p->se.avg_overlap >= sysctl_sched_migration_cost)
+			sync = 0;
+	}
 
 #ifdef CONFIG_SMP
 	if (sched_feat(LB_WAKEUP_UPDATE)) {

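The symmetric rule above can be sketched as a small stand-alone function. This
is an illustrative user-space model, not kernel code: the name adjust_sync()
and the threshold parameter are made up, standing in for try_to_wake_up()'s
use of sysctl_sched_migration_cost.

```c
#include <assert.h>

/*
 * Toy model of the symmetric sync-hint logic: avg_overlap measures how long
 * waker and wakee tend to run concurrently. Small overlap on both sides
 * suggests a truly synchronous wakeup; large overlap on either side weakens
 * a caller-supplied sync hint.
 */
static int adjust_sync(int sync,
		       unsigned long long curr_overlap,
		       unsigned long long wakee_overlap,
		       unsigned long long migration_cost)
{
	if (!sync) {
		/* Both tasks show little overlap: treat the wakeup as sync. */
		if (curr_overlap < migration_cost &&
		    wakee_overlap < migration_cost)
			sync = 1;
	} else {
		/* Either task overlaps a lot: weaken the sync hint. */
		if (curr_overlap >= migration_cost ||
		    wakee_overlap >= migration_cost)
			sync = 0;
	}
	return sync;
}
```

Before the patch only the first branch existed, so a stale sync hint could
never be revoked; the second branch is what makes the usage symmetric.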
-- 



* [PATCH 2/3] sched: clear buddies more aggressively
  2009-01-28 13:51 [PATCH 0/3] sched: pending patches Peter Zijlstra
  2009-01-28 13:51 ` [PATCH 1/3] sched: symmetric sync vs avg_overlap Peter Zijlstra
@ 2009-01-28 13:51 ` Peter Zijlstra
  2009-01-28 13:51 ` [PATCH 3/3] sched: fix buddie group latency Peter Zijlstra
  2009-01-29 12:27 ` [PATCH 0/3] sched: pending patches Ingo Molnar
  3 siblings, 0 replies; 5+ messages in thread
From: Peter Zijlstra @ 2009-01-28 13:51 UTC (permalink / raw)
  To: mingo, efault; +Cc: linux-kernel, Peter Zijlstra

[-- Attachment #1: sched-clear-buddies.patch --]
[-- Type: text/plain, Size: 1460 bytes --]

From: Mike Galbraith <efault@gmx.de>

It was noticed that a task could get re-elected past its run quota due to
buddy affinities, which could increase latency a little. Cure it by clearing
buddy state more aggressively.

We do so in two situations:
 - when we force preempt
 - when we select a buddy to run

Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/sched_fair.c |   13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

Index: linux-2.6/kernel/sched_fair.c
===================================================================
--- linux-2.6.orig/kernel/sched_fair.c
+++ linux-2.6/kernel/sched_fair.c
@@ -768,8 +768,14 @@ check_preempt_tick(struct cfs_rq *cfs_rq
 
 	ideal_runtime = sched_slice(cfs_rq, curr);
 	delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
-	if (delta_exec > ideal_runtime)
+	if (delta_exec > ideal_runtime) {
 		resched_task(rq_of(cfs_rq)->curr);
+		/*
+		 * The current task ran long enough, ensure it doesn't get
+		 * re-elected due to buddy favours.
+		 */
+		clear_buddies(cfs_rq, curr);
+	}
 }
 
 static void
@@ -1492,6 +1498,11 @@ static struct task_struct *pick_next_tas
 
 	do {
 		se = pick_next_entity(cfs_rq);
+		/*
+		 * If se was a buddy, clear it so that it will have to earn
+		 * the favour again.
+		 */
+		clear_buddies(cfs_rq, se);
 		set_next_entity(cfs_rq, se);
 		cfs_rq = group_cfs_rq(se);
 	} while (cfs_rq);

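The forced-preemption site above can be sketched as a user-space toy model.
All names here (toy_se, toy_cfs_rq, toy_check_preempt_tick) are made up for
illustration; they mirror the shape of clear_buddies() and the patched
check_preempt_tick(), not the kernel's actual structures.

```c
#include <assert.h>
#include <stddef.h>

/* A runqueue keeps "last" and "next" buddy hints pointing at entities. */
struct toy_se { int id; };

struct toy_cfs_rq {
	struct toy_se *last;
	struct toy_se *next;
};

/* Drop whichever buddy hint points at se. */
static void toy_clear_buddies(struct toy_cfs_rq *cfs_rq, struct toy_se *se)
{
	if (cfs_rq->last == se)
		cfs_rq->last = NULL;
	if (cfs_rq->next == se)
		cfs_rq->next = NULL;
}

/*
 * Mirrors the patched check_preempt_tick(): once curr has exceeded its ideal
 * slice it is preempted, and its buddy status is cleared so buddy favours
 * cannot re-elect it immediately.
 */
static int toy_check_preempt_tick(struct toy_cfs_rq *cfs_rq,
				  struct toy_se *curr,
				  unsigned long long delta_exec,
				  unsigned long long ideal_runtime)
{
	if (delta_exec > ideal_runtime) {
		toy_clear_buddies(cfs_rq, curr);
		return 1;	/* the kernel calls resched_task() here */
	}
	return 0;
}

/* curr ran 12 units of a 10-unit slice: expect preempt + cleared hints. */
static int toy_demo(void)
{
	struct toy_se se = { 1 };
	struct toy_cfs_rq rq = { &se, &se };

	return toy_check_preempt_tick(&rq, &se, 12, 10) == 1 &&
	       rq.last == NULL && rq.next == NULL;
}
```

Without the clear, rq.last would still point at the exhausted task and the
next pick could favour it again, which is exactly the latency source the
patch removes.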
-- 



* [PATCH 3/3] sched: fix buddie group latency
  2009-01-28 13:51 [PATCH 0/3] sched: pending patches Peter Zijlstra
  2009-01-28 13:51 ` [PATCH 1/3] sched: symmetric sync vs avg_overlap Peter Zijlstra
  2009-01-28 13:51 ` [PATCH 2/3] sched: clear buddies more aggressively Peter Zijlstra
@ 2009-01-28 13:51 ` Peter Zijlstra
  2009-01-29 12:27 ` [PATCH 0/3] sched: pending patches Ingo Molnar
  3 siblings, 0 replies; 5+ messages in thread
From: Peter Zijlstra @ 2009-01-28 13:51 UTC (permalink / raw)
  To: mingo, efault; +Cc: linux-kernel, Peter Zijlstra

[-- Attachment #1: sched-group-buddies.patch --]
[-- Type: text/plain, Size: 1529 bytes --]

Similar to the previous patch, failing to clear buddies lets us select
entities past their run quota, which can increase latency. This means we have
to clear group buddies as well.

Do not use the group-wide clear in pick_next_task(), otherwise the pick loop
would become O(n^2).

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/sched_fair.c |   10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

Index: linux-2.6/kernel/sched_fair.c
===================================================================
--- linux-2.6.orig/kernel/sched_fair.c
+++ linux-2.6/kernel/sched_fair.c
@@ -719,7 +719,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, st
 		__enqueue_entity(cfs_rq, se);
 }
 
-static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se)
+static void __clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	if (cfs_rq->last == se)
 		cfs_rq->last = NULL;
@@ -728,6 +728,12 @@ static void clear_buddies(struct cfs_rq 
 		cfs_rq->next = NULL;
 }
 
+static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se)
+{
+	for_each_sched_entity(se)
+		__clear_buddies(cfs_rq_of(se), se);
+}
+
 static void
 dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int sleep)
 {
@@ -1502,7 +1508,7 @@ static struct task_struct *pick_next_tas
 		 * If se was a buddy, clear it so that it will have to earn
 		 * the favour again.
 		 */
-		clear_buddies(cfs_rq, se);
+		__clear_buddies(cfs_rq, se);
 		set_next_entity(cfs_rq, se);
 		cfs_rq = group_cfs_rq(se);
 	} while (cfs_rq);

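The split between __clear_buddies() and the hierarchy-walking clear_buddies()
can be sketched with a two-level toy model. The names below are illustrative
only; the parent pointer stands in for for_each_sched_entity() walking up the
group hierarchy.

```c
#include <assert.h>
#include <stddef.h>

struct toy_rq;

/* Each entity sits on one runqueue; group entities have child runqueues. */
struct toy_se {
	struct toy_se *parent;	/* NULL at the top of the hierarchy */
	struct toy_rq *rq;	/* the runqueue this entity is enqueued on */
};

struct toy_rq {
	struct toy_se *last;
	struct toy_se *next;
};

/* Single-level clear: what pick_next_task() keeps using, staying O(1). */
static void toy_clear_one(struct toy_rq *rq, struct toy_se *se)
{
	if (rq->last == se)
		rq->last = NULL;
	if (rq->next == se)
		rq->next = NULL;
}

/*
 * Hierarchy-wide clear: when a task exhausts its quota, its parent group
 * entities must also lose buddy status, or the group gets re-elected on the
 * task's behalf.
 */
static void toy_clear_hierarchy(struct toy_se *se)
{
	for (; se; se = se->parent)
		toy_clear_one(se->rq, se);
}

/* Buddy hints set at both levels; a leaf clear must drop both. */
static int toy_demo(void)
{
	struct toy_rq top = { NULL, NULL }, sub = { NULL, NULL };
	struct toy_se group = { NULL, &top };
	struct toy_se task = { &group, &sub };

	top.last = &group;
	sub.next = &task;

	toy_clear_hierarchy(&task);
	return top.last == NULL && sub.next == NULL;
}
```

This also shows why pick_next_task() switches to the single-level variant:
the pick loop already descends one level per iteration, so an upward walk at
every level would multiply into O(n^2).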
-- 



* Re: [PATCH 0/3] sched: pending patches
  2009-01-28 13:51 [PATCH 0/3] sched: pending patches Peter Zijlstra
                   ` (2 preceding siblings ...)
  2009-01-28 13:51 ` [PATCH 3/3] sched: fix buddie group latency Peter Zijlstra
@ 2009-01-29 12:27 ` Ingo Molnar
  3 siblings, 0 replies; 5+ messages in thread
From: Ingo Molnar @ 2009-01-29 12:27 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: efault, linux-kernel


applied to tip/sched/urgent, thanks Peter and Mike!

	Ingo


end of thread, other threads:[~2009-01-29 12:27 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-01-28 13:51 [PATCH 0/3] sched: pending patches Peter Zijlstra
2009-01-28 13:51 ` [PATCH 1/3] sched: symmetric sync vs avg_overlap Peter Zijlstra
2009-01-28 13:51 ` [PATCH 2/3] sched: clear buddies more aggressively Peter Zijlstra
2009-01-28 13:51 ` [PATCH 3/3] sched: fix buddie group latency Peter Zijlstra
2009-01-29 12:27 ` [PATCH 0/3] sched: pending patches Ingo Molnar

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox