From: vincent.guittot@linaro.org (Vincent Guittot)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 02/12] sched: remove a wake_affine condition
Date: Mon, 28 Jul 2014 19:51:36 +0200	[thread overview]
Message-ID: <1406569906-9763-3-git-send-email-vincent.guittot@linaro.org> (raw)
In-Reply-To: <1406569906-9763-1-git-send-email-vincent.guittot@linaro.org>

I have tried to understand the meaning of the condition:
 (this_load <= load &&
  this_load + target_load(prev_cpu, idx) <= tl_per_task)
but I failed to find a use case that can take advantage of it, and I haven't
found a clear description in the previous commits' logs.
Furthermore, the comment above the condition still refers to the task_hot
function that was used before it was replaced by the current condition:
/*
 * This domain has SD_WAKE_AFFINE and
 * p is cache cold in this domain, and
 * there is no bad imbalance.
 */

If we look more closely at the condition below:
 this_load + target_load(prev_cpu, idx) <= tl_per_task

When sync is clear, we have:
 tl_per_task = runnable_load_avg / nr_running
 this_load = max(runnable_load_avg, cpuload[idx])
 target_load =  max(runnable_load_avg', cpuload'[idx])

In order to match the condition, we must have runnable_load_avg' == 0 and
nr_running <= 1. This in turn implies that runnable_load_avg == 0 as well,
because of the other condition: this_load <= load. But if this_load is null,
balanced is already set and the test is redundant.
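
To make the arithmetic concrete, here is a small standalone sketch (not kernel
code: the helper, variable names and numbers below are made-up stand-ins for
the definitions quoted above) showing the only shape of values that can
satisfy the second clause:

/* Simplified model of the removed clause; names and numbers are made up. */
#include <stdio.h>

static unsigned long max_load(unsigned long a, unsigned long b)
{
	return a > b ? a : b;
}

int main(void)
{
	/* this_cpu (sync clear): runnable_load_avg, cpu_load[idx], nr_running */
	unsigned long this_runnable = 1024, this_cpuload = 900, nr_running = 1;
	/* prev_cpu: runnable_load_avg', cpu_load'[idx] */
	unsigned long prev_runnable = 0, prev_cpuload = 0;

	unsigned long this_load = max_load(this_runnable, this_cpuload);
	unsigned long target_load = max_load(prev_runnable, prev_cpuload);
	unsigned long tl_per_task = this_runnable / nr_running;

	/*
	 * this_load >= runnable_load_avg >= tl_per_task whenever
	 * nr_running >= 1, so adding a non-negative target_load can only
	 * keep the sum <= tl_per_task if target_load == 0 (prev_cpu idle)
	 * and nr_running <= 1 (at most one task here).
	 */
	printf("%d\n", this_load + target_load <= tl_per_task);	/* prints 1 */
	return 0;
}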

If sync is set, it's not as straightforward as above (especially if cgroups
are involved), but the policy should be similar, as we have removed a task
that is going to sleep in order to get more accurate load and this_load values.
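
For context, the sync handling earlier in wake_affine() (which this patch does
not touch) looks roughly like the sketch below; this is a paraphrase for
illustration, not part of the diff:

	if (sync) {
		tg = task_group(current);
		weight = current->se.load.weight;

		/* the waker is about to sleep: take it out of this_cpu's load */
		this_load += effective_load(tg, this_cpu, -weight, -weight);
		load += effective_load(tg, prev_cpu, 0, -weight);
	}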

The current conclusion is that these additional conditions don't give any
benefit, so we can remove them.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 30 ++++++------------------------
 1 file changed, 6 insertions(+), 24 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7eb9126..57f8d8c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4285,7 +4285,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 {
 	s64 this_load, load;
 	int idx, this_cpu, prev_cpu;
-	unsigned long tl_per_task;
 	struct task_group *tg;
 	unsigned long weight;
 	int balanced;
@@ -4343,32 +4342,15 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 		balanced = this_eff_load <= prev_eff_load;
 	} else
 		balanced = true;
-
-	/*
-	 * If the currently running task will sleep within
-	 * a reasonable amount of time then attract this newly
-	 * woken task:
-	 */
-	if (sync && balanced)
-		return 1;
-
 	schedstat_inc(p, se.statistics.nr_wakeups_affine_attempts);
-	tl_per_task = cpu_avg_load_per_task(this_cpu);
 
-	if (balanced ||
-	    (this_load <= load &&
-	     this_load + target_load(prev_cpu, idx) <= tl_per_task)) {
-		/*
-		 * This domain has SD_WAKE_AFFINE and
-		 * p is cache cold in this domain, and
-		 * there is no bad imbalance.
-		 */
-		schedstat_inc(sd, ttwu_move_affine);
-		schedstat_inc(p, se.statistics.nr_wakeups_affine);
+	if (!balanced)
+		return 0;
 
-		return 1;
-	}
-	return 0;
+	schedstat_inc(sd, ttwu_move_affine);
+	schedstat_inc(p, se.statistics.nr_wakeups_affine);
+
+	return 1;
 }
 
 /*
-- 
1.9.1
