linux-kernel.vger.kernel.org archive mirror
From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Mike Galbraith <efault@gmx.de>,
	Matt Fleming <matt@codeblueprint.co.uk>,
	LKML <linux-kernel@vger.kernel.org>,
	Mel Gorman <mgorman@techsingularity.net>
Subject: [PATCH 2/4] sched/fair: Restructure wake_affine to return a CPU id
Date: Tue, 30 Jan 2018 10:45:53 +0000
Message-ID: <20180130104555.4125-3-mgorman@techsingularity.net> (raw)
In-Reply-To: <20180130104555.4125-1-mgorman@techsingularity.net>

This is a preparation patch that has wake_affine() return a CPU id instead of
a boolean. The intent is to allow the wake_affine() helpers to be skipped
when a decision has already been made. This patch has no functional change.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 kernel/sched/fair.c | 35 +++++++++++++++++------------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9a5d8462c13c..1aebe79da2ab 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5696,7 +5696,7 @@ static int wake_wide(struct task_struct *p)
  *			  scheduling latency of the CPUs. This seems to work
  *			  for the overloaded case.
  */
-static bool
+static int
 wake_affine_idle(int this_cpu, int prev_cpu, int sync)
 {
 	/*
@@ -5706,15 +5706,15 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
 	 * node depending on the IO topology or IRQ affinity settings.
 	 */
 	if (idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
-		return true;
+		return this_cpu;
 
 	if (sync && cpu_rq(this_cpu)->nr_running == 1)
-		return true;
+		return this_cpu;
 
-	return false;
+	return nr_cpumask_bits;
 }
 
-static bool
+static int
 wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 		   int this_cpu, int prev_cpu, int sync)
 {
@@ -5728,7 +5728,7 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 		unsigned long current_load = task_h_load(current);
 
 		if (current_load > this_eff_load)
-			return true;
+			return this_cpu;
 
 		this_eff_load -= current_load;
 	}
@@ -5745,28 +5745,28 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 		prev_eff_load *= 100 + (sd->imbalance_pct - 100) / 2;
 	prev_eff_load *= capacity_of(this_cpu);
 
-	return this_eff_load <= prev_eff_load;
+	return this_eff_load <= prev_eff_load ? this_cpu : nr_cpumask_bits;
 }
 
 static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 		       int prev_cpu, int sync)
 {
 	int this_cpu = smp_processor_id();
-	bool affine = false;
+	int target = nr_cpumask_bits;
 
 	if (sched_feat(WA_IDLE))
-		affine = wake_affine_idle(this_cpu, prev_cpu, sync);
+		target = wake_affine_idle(this_cpu, prev_cpu, sync);
 
-	if (sched_feat(WA_WEIGHT) && !affine)
-		affine = wake_affine_weight(sd, p, this_cpu, prev_cpu, sync);
+	if (sched_feat(WA_WEIGHT) && target == nr_cpumask_bits)
+		target = wake_affine_weight(sd, p, this_cpu, prev_cpu, sync);
 
 	schedstat_inc(p->se.statistics.nr_wakeups_affine_attempts);
-	if (affine) {
-		schedstat_inc(sd->ttwu_move_affine);
-		schedstat_inc(p->se.statistics.nr_wakeups_affine);
-	}
+	if (target == nr_cpumask_bits)
+		return prev_cpu;
 
-	return affine;
+	schedstat_inc(sd->ttwu_move_affine);
+	schedstat_inc(p->se.statistics.nr_wakeups_affine);
+	return target;
 }
 
 static inline int task_util(struct task_struct *p);
@@ -6359,8 +6359,7 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 		if (cpu == prev_cpu)
 			goto pick_cpu;
 
-		if (wake_affine(affine_sd, p, prev_cpu, sync))
-			new_cpu = cpu;
+		new_cpu = wake_affine(affine_sd, p, prev_cpu, sync);
 	}
 
 	if (sd && !(sd_flag & SD_BALANCE_FORK)) {
-- 
2.15.1


Thread overview: 48+ messages
2018-01-30 10:45 [PATCH 0/4] Reduce migrations and unnecessary spreading of load to multiple CPUs Mel Gorman
2018-01-30 10:45 ` [PATCH 1/4] sched/fair: Remove unnecessary parameters from wake_affine_idle Mel Gorman
2018-02-06 11:55   ` [tip:sched/urgent] sched/fair: Remove unnecessary parameters from wake_affine_idle() tip-bot for Mel Gorman
2018-01-30 10:45 ` Mel Gorman [this message]
2018-02-06 11:56   ` [tip:sched/urgent] sched/fair: Restructure wake_affine*() to return a CPU id tip-bot for Mel Gorman
2018-01-30 10:45 ` [PATCH 3/4] sched/fair: Do not migrate if the prev_cpu is idle Mel Gorman
2018-02-06 11:56   ` [tip:sched/urgent] " tip-bot for Mel Gorman
2018-01-30 10:45 ` [PATCH 4/4] sched/fair: Use a recently used CPU as an idle candidate and the basis for SIS Mel Gorman
2018-01-30 11:50   ` Peter Zijlstra
2018-01-30 12:57     ` Mel Gorman
2018-01-30 13:15       ` Peter Zijlstra
2018-01-30 13:25         ` Mel Gorman
2018-01-30 13:40           ` Peter Zijlstra
2018-01-30 14:06             ` Mel Gorman
2018-01-31  9:22         ` Rafael J. Wysocki
2018-01-31 10:17           ` Peter Zijlstra
2018-01-31 11:54             ` Mel Gorman
2018-01-31 17:44             ` Srinivas Pandruvada
2018-02-01  9:11               ` Peter Zijlstra
2018-02-01  7:50             ` Rafael J. Wysocki
2018-02-01  9:11               ` Peter Zijlstra
2018-02-01 13:18                 ` Srinivas Pandruvada
2018-02-02 11:00                   ` Rafael J. Wysocki
2018-02-02 14:54                     ` Srinivas Pandruvada
2018-02-02 19:48                       ` Mel Gorman
2018-02-02 20:01                         ` Srinivas Pandruvada
2018-02-05 11:10                           ` Mel Gorman
2018-02-05 17:04                             ` Srinivas Pandruvada
2018-02-05 17:50                               ` Mel Gorman
2018-02-04  8:42                         ` Rafael J. Wysocki
2018-02-04  8:38                       ` Rafael J. Wysocki
2018-02-02 11:42                 ` Rafael J. Wysocki
2018-02-02 12:46                   ` Peter Zijlstra
2018-02-02 12:55                     ` Peter Zijlstra
2018-02-02 14:08                     ` Peter Zijlstra
2018-02-03 16:30                       ` Srinivas Pandruvada
2018-02-05 10:44                         ` Peter Zijlstra
2018-02-05 10:58                           ` Ingo Molnar
2018-02-02 12:58                   ` Peter Zijlstra
2018-02-02 13:27                   ` Mel Gorman
2018-01-30 13:15       ` Mike Galbraith
2018-01-30 13:25         ` Peter Zijlstra
2018-01-30 13:35           ` Mike Galbraith
2018-01-30 11:53   ` Peter Zijlstra
2018-01-30 12:59     ` Mel Gorman
2018-01-30 13:06     ` Peter Zijlstra
2018-01-30 13:18       ` Mel Gorman
2018-02-06 11:56   ` [tip:sched/urgent] " tip-bot for Mel Gorman
