* [PATCH 0/2] sched: select_task_rq_fair() cleanup/changes @ 2009-11-12 14:55 Peter Zijlstra 2009-11-12 14:55 ` [PATCH 1/2] sched: cleanup select_task_rq_fair() Peter Zijlstra 2009-11-12 14:55 ` [PATCH 2/2] sched: More generic WAKE_AFFINE vs select_idle_sibling() Peter Zijlstra 0 siblings, 2 replies; 5+ messages in thread From: Peter Zijlstra @ 2009-11-12 14:55 UTC (permalink / raw) To: Ingo Molnar, Mike Galbraith; +Cc: LKML, Peter Zijlstra Some bits I came up with while trying to grok the recent changes in the mentioned function. Please have a careful look. ^ permalink raw reply [flat|nested] 5+ messages in thread
* [PATCH 1/2] sched: cleanup select_task_rq_fair() 2009-11-12 14:55 [PATCH 0/2] sched: select_task_rq_fair() cleanup/changes Peter Zijlstra @ 2009-11-12 14:55 ` Peter Zijlstra 2009-11-13 9:30 ` [tip:sched/core] sched: Cleanup select_task_rq_fair() tip-bot for Peter Zijlstra 2009-11-12 14:55 ` [PATCH 2/2] sched: More generic WAKE_AFFINE vs select_idle_sibling() Peter Zijlstra 1 sibling, 1 reply; 5+ messages in thread
From: Peter Zijlstra @ 2009-11-12 14:55 UTC (permalink / raw)
To: Ingo Molnar, Mike Galbraith; +Cc: LKML, Peter Zijlstra

[-- Attachment #1: sched-idle-sibling.patch --]
[-- Type: text/plain, Size: 2980 bytes --]

Clean up the new affine-to-idle-sibling bits while trying to grok them.
Should not have any functional differences.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
---
 kernel/sched_fair.c |   73 ++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 51 insertions(+), 22 deletions(-)

Index: linux-2.6/kernel/sched_fair.c
===================================================================
--- linux-2.6.orig/kernel/sched_fair.c
+++ linux-2.6/kernel/sched_fair.c
@@ -1345,6 +1345,41 @@ find_idlest_cpu(struct sched_group *grou
 }
 
 /*
+ * Try and locate an idle CPU in the sched_domain.
+ */
+static int
+select_idle_sibling(struct task_struct *p, struct sched_domain *sd, int target)
+{
+	int cpu = smp_processor_id();
+	int prev_cpu = task_cpu(p);
+	int i;
+
+	/*
+	 * If this domain spans both cpu and prev_cpu (see the SD_WAKE_AFFINE
+	 * test in select_task_rq_fair) and the prev_cpu is idle then that's
+	 * always a better target than the current cpu.
+	 */
+	if (target == cpu) {
+		if (!cpu_rq(prev_cpu)->cfs.nr_running)
+			target = prev_cpu;
+	}
+
+	/*
+	 * Otherwise, iterate the domain and find an eligible idle cpu.
+	 */
+	if (target == -1 || target == cpu) {
+		for_each_cpu_and(i, sched_domain_span(sd), &p->cpus_allowed) {
+			if (!cpu_rq(i)->cfs.nr_running) {
+				target = i;
+				break;
+			}
+		}
+	}
+
+	return target;
+}
+
+/*
  * sched_balance_self: balance the current task (running on cpu) in domains
  * that have the 'flag' flag set. In practice, this is SD_BALANCE_FORK and
  * SD_BALANCE_EXEC.
@@ -1399,36 +1434,30 @@ static int select_task_rq_fair(struct ta
 		}
 
 		if (want_affine && (tmp->flags & SD_WAKE_AFFINE)) {
-			int candidate = -1, i;
+			int target = -1;
 
+			/*
+			 * If both cpu and prev_cpu are part of this domain,
+			 * cpu is a valid SD_WAKE_AFFINE target.
+			 */
 			if (cpumask_test_cpu(prev_cpu, sched_domain_span(tmp)))
-				candidate = cpu;
+				target = cpu;
 
 			/*
-			 * Check for an idle shared cache.
+			 * If there's an idle sibling in this domain, make that
+			 * the wake_affine target instead of the current cpu.
+			 *
+			 * XXX: should we possibly do this outside of
+			 * WAKE_AFFINE, in case the shared cache domain is
+			 * smaller than the WAKE_AFFINE domain?
 			 */
-			if (tmp->flags & SD_PREFER_SIBLING) {
-				if (candidate == cpu) {
-					if (!cpu_rq(prev_cpu)->cfs.nr_running)
-						candidate = prev_cpu;
-				}
-
-				if (candidate == -1 || candidate == cpu) {
-					for_each_cpu(i, sched_domain_span(tmp)) {
-						if (!cpumask_test_cpu(i, &p->cpus_allowed))
-							continue;
-						if (!cpu_rq(i)->cfs.nr_running) {
-							candidate = i;
-							break;
-						}
-					}
-				}
-			}
+			if (tmp->flags & SD_PREFER_SIBLING)
+				target = select_idle_sibling(p, tmp, target);
 
-			if (candidate >= 0) {
+			if (target >= 0) {
 				affine_sd = tmp;
 				want_affine = 0;
-				cpu = candidate;
+				cpu = target;
 			}
 		}
-- 
* [tip:sched/core] sched: Cleanup select_task_rq_fair() 2009-11-12 14:55 ` [PATCH 1/2] sched: cleanup select_task_rq_fair() Peter Zijlstra @ 2009-11-13 9:30 ` tip-bot for Peter Zijlstra 0 siblings, 0 replies; 5+ messages in thread
From: tip-bot for Peter Zijlstra @ 2009-11-13 9:30 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, hpa, mingo, a.p.zijlstra, efault, tglx, mingo

Commit-ID:  a50bde5130f65733142b32975616427d0ea50856
Gitweb:     http://git.kernel.org/tip/a50bde5130f65733142b32975616427d0ea50856
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Thu, 12 Nov 2009 15:55:28 +0100
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Fri, 13 Nov 2009 10:09:58 +0100

sched: Cleanup select_task_rq_fair()

Clean up the new affine-to-idle-sibling bits while trying to grok them.
Should not have any functional differences.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20091112145610.832503781@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/sched_fair.c |   73 +++++++++++++++++++++++++++++++++++---------------
 1 files changed, 51 insertions(+), 22 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index e4d4483..a32df15 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1319,6 +1319,41 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 }
 
 /*
+ * Try and locate an idle CPU in the sched_domain.
+ */
+static int
+select_idle_sibling(struct task_struct *p, struct sched_domain *sd, int target)
+{
+	int cpu = smp_processor_id();
+	int prev_cpu = task_cpu(p);
+	int i;
+
+	/*
+	 * If this domain spans both cpu and prev_cpu (see the SD_WAKE_AFFINE
+	 * test in select_task_rq_fair) and the prev_cpu is idle then that's
+	 * always a better target than the current cpu.
+	 */
+	if (target == cpu) {
+		if (!cpu_rq(prev_cpu)->cfs.nr_running)
+			target = prev_cpu;
+	}
+
+	/*
+	 * Otherwise, iterate the domain and find an eligible idle cpu.
+	 */
+	if (target == -1 || target == cpu) {
+		for_each_cpu_and(i, sched_domain_span(sd), &p->cpus_allowed) {
+			if (!cpu_rq(i)->cfs.nr_running) {
+				target = i;
+				break;
+			}
+		}
+	}
+
+	return target;
+}
+
+/*
  * sched_balance_self: balance the current task (running on cpu) in domains
  * that have the 'flag' flag set. In practice, this is SD_BALANCE_FORK and
  * SD_BALANCE_EXEC.
@@ -1373,36 +1408,30 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag
 		}
 
 		if (want_affine && (tmp->flags & SD_WAKE_AFFINE)) {
-			int candidate = -1, i;
+			int target = -1;
 
+			/*
+			 * If both cpu and prev_cpu are part of this domain,
+			 * cpu is a valid SD_WAKE_AFFINE target.
+			 */
 			if (cpumask_test_cpu(prev_cpu, sched_domain_span(tmp)))
-				candidate = cpu;
+				target = cpu;
 
 			/*
-			 * Check for an idle shared cache.
+			 * If there's an idle sibling in this domain, make that
+			 * the wake_affine target instead of the current cpu.
+			 *
+			 * XXX: should we possibly do this outside of
+			 * WAKE_AFFINE, in case the shared cache domain is
+			 * smaller than the WAKE_AFFINE domain?
 			 */
-			if (tmp->flags & SD_PREFER_SIBLING) {
-				if (candidate == cpu) {
-					if (!cpu_rq(prev_cpu)->cfs.nr_running)
-						candidate = prev_cpu;
-				}
-
-				if (candidate == -1 || candidate == cpu) {
-					for_each_cpu(i, sched_domain_span(tmp)) {
-						if (!cpumask_test_cpu(i, &p->cpus_allowed))
-							continue;
-						if (!cpu_rq(i)->cfs.nr_running) {
-							candidate = i;
-							break;
-						}
-					}
-				}
-			}
+			if (tmp->flags & SD_PREFER_SIBLING)
+				target = select_idle_sibling(p, tmp, target);
 
-			if (candidate >= 0) {
+			if (target >= 0) {
 				affine_sd = tmp;
 				want_affine = 0;
-				cpu = candidate;
+				cpu = target;
 			}
 		}
* [PATCH 2/2] sched: More generic WAKE_AFFINE vs select_idle_sibling() 2009-11-12 14:55 [PATCH 0/2] sched: select_task_rq_fair() cleanup/changes Peter Zijlstra 2009-11-12 14:55 ` [PATCH 1/2] sched: cleanup select_task_rq_fair() Peter Zijlstra @ 2009-11-12 14:55 ` Peter Zijlstra 2009-11-13 9:30 ` [tip:sched/core] " tip-bot for Peter Zijlstra 1 sibling, 1 reply; 5+ messages in thread
From: Peter Zijlstra @ 2009-11-12 14:55 UTC (permalink / raw)
To: Ingo Molnar, Mike Galbraith; +Cc: LKML, Peter Zijlstra

[-- Attachment #1: sched-idle-sibling-more.patch --]
[-- Type: text/plain, Size: 2339 bytes --]

Instead of only considering SD_WAKE_AFFINE | SD_PREFER_SIBLING domains,
also allow all SD_PREFER_SIBLING domains below a SD_WAKE_AFFINE domain
to change the affinity target.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
---
 kernel/sched_fair.c |   33 ++++++++++++++++-----------------
 1 file changed, 16 insertions(+), 17 deletions(-)

Index: linux-2.6/kernel/sched_fair.c
===================================================================
--- linux-2.6.orig/kernel/sched_fair.c
+++ linux-2.6/kernel/sched_fair.c
@@ -1359,20 +1359,16 @@ select_idle_sibling(struct task_struct *
 	 * test in select_task_rq_fair) and the prev_cpu is idle then that's
 	 * always a better target than the current cpu.
 	 */
-	if (target == cpu) {
-		if (!cpu_rq(prev_cpu)->cfs.nr_running)
-			target = prev_cpu;
-	}
+	if (target == cpu && !cpu_rq(prev_cpu)->cfs.nr_running)
+		return prev_cpu;
 
 	/*
 	 * Otherwise, iterate the domain and find an eligible idle cpu.
 	 */
-	if (target == -1 || target == cpu) {
-		for_each_cpu_and(i, sched_domain_span(sd), &p->cpus_allowed) {
-			if (!cpu_rq(i)->cfs.nr_running) {
-				target = i;
-				break;
-			}
+	for_each_cpu_and(i, sched_domain_span(sd), &p->cpus_allowed) {
+		if (!cpu_rq(i)->cfs.nr_running) {
+			target = i;
+			break;
 		}
 	}
 
@@ -1433,7 +1429,12 @@ static int select_task_rq_fair(struct ta
 			want_sd = 0;
 		}
 
-		if (want_affine && (tmp->flags & SD_WAKE_AFFINE)) {
+		/*
+		 * While iterating the domains looking for a spanning
+		 * WAKE_AFFINE domain, adjust the affine target to any idle cpu
+		 * in cache sharing domains along the way.
+		 */
+		if (want_affine) {
 			int target = -1;
 
 			/*
@@ -1446,17 +1447,15 @@ static int select_task_rq_fair(struct ta
 			/*
 			 * If there's an idle sibling in this domain, make that
 			 * the wake_affine target instead of the current cpu.
-			 *
-			 * XXX: should we possibly do this outside of
-			 * WAKE_AFFINE, in case the shared cache domain is
-			 * smaller than the WAKE_AFFINE domain?
 			 */
 			if (tmp->flags & SD_PREFER_SIBLING)
 				target = select_idle_sibling(p, tmp, target);
 
 			if (target >= 0) {
-				affine_sd = tmp;
-				want_affine = 0;
+				if (tmp->flags & SD_WAKE_AFFINE) {
+					affine_sd = tmp;
+					want_affine = 0;
+				}
 				cpu = target;
 			}
 		}
-- 
* [tip:sched/core] sched: More generic WAKE_AFFINE vs select_idle_sibling() 2009-11-12 14:55 ` [PATCH 2/2] sched: More generic WAKE_AFFINE vs select_idle_sibling() Peter Zijlstra @ 2009-11-13 9:30 ` tip-bot for Peter Zijlstra 0 siblings, 0 replies; 5+ messages in thread
From: tip-bot for Peter Zijlstra @ 2009-11-13 9:30 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, hpa, mingo, a.p.zijlstra, efault, tglx, mingo

Commit-ID:  fe3bcfe1f6c1fc4ea7706ac2d05e579fd9092682
Gitweb:     http://git.kernel.org/tip/fe3bcfe1f6c1fc4ea7706ac2d05e579fd9092682
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Thu, 12 Nov 2009 15:55:29 +0100
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Fri, 13 Nov 2009 10:09:59 +0100

sched: More generic WAKE_AFFINE vs select_idle_sibling()

Instead of only considering SD_WAKE_AFFINE | SD_PREFER_SIBLING domains,
also allow all SD_PREFER_SIBLING domains below a SD_WAKE_AFFINE domain
to change the affinity target.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20091112145610.909723612@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/sched_fair.c |   33 ++++++++++++++++-----------------
 1 files changed, 16 insertions(+), 17 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index a32df15..f28a267 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1333,20 +1333,16 @@ select_idle_sibling(struct task_struct *p, struct sched_domain *sd, int target)
 	 * test in select_task_rq_fair) and the prev_cpu is idle then that's
 	 * always a better target than the current cpu.
 	 */
-	if (target == cpu) {
-		if (!cpu_rq(prev_cpu)->cfs.nr_running)
-			target = prev_cpu;
-	}
+	if (target == cpu && !cpu_rq(prev_cpu)->cfs.nr_running)
+		return prev_cpu;
 
 	/*
 	 * Otherwise, iterate the domain and find an eligible idle cpu.
 	 */
-	if (target == -1 || target == cpu) {
-		for_each_cpu_and(i, sched_domain_span(sd), &p->cpus_allowed) {
-			if (!cpu_rq(i)->cfs.nr_running) {
-				target = i;
-				break;
-			}
+	for_each_cpu_and(i, sched_domain_span(sd), &p->cpus_allowed) {
+		if (!cpu_rq(i)->cfs.nr_running) {
+			target = i;
+			break;
 		}
 	}
 
@@ -1407,7 +1403,12 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag
 			want_sd = 0;
 		}
 
-		if (want_affine && (tmp->flags & SD_WAKE_AFFINE)) {
+		/*
+		 * While iterating the domains looking for a spanning
+		 * WAKE_AFFINE domain, adjust the affine target to any idle cpu
+		 * in cache sharing domains along the way.
+		 */
+		if (want_affine) {
 			int target = -1;
 
 			/*
@@ -1420,17 +1421,15 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag
 			/*
 			 * If there's an idle sibling in this domain, make that
 			 * the wake_affine target instead of the current cpu.
-			 *
-			 * XXX: should we possibly do this outside of
-			 * WAKE_AFFINE, in case the shared cache domain is
-			 * smaller than the WAKE_AFFINE domain?
 			 */
 			if (tmp->flags & SD_PREFER_SIBLING)
 				target = select_idle_sibling(p, tmp, target);
 
 			if (target >= 0) {
-				affine_sd = tmp;
-				want_affine = 0;
+				if (tmp->flags & SD_WAKE_AFFINE) {
+					affine_sd = tmp;
+					want_affine = 0;
+				}
 				cpu = target;
 			}
 		}