From: Ingo Molnar <mingo@kernel.org>
To: Yuyang Du <yuyang.du@intel.com>
Cc: peterz@infradead.org, linux-kernel@vger.kernel.org,
	bsegall@google.com, pjt@google.com, morten.rasmussen@arm.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	juri.lelli@arm.com
Subject: Re: [PATCH v3 08/12] sched/fair: Remove SCHED_LOAD_SHIFT and SCHED_LOAD_SCALE
Date: Thu, 5 May 2016 09:46:53 +0200	[thread overview]
Message-ID: <20160505074653.GA20065@gmail.com> (raw)
In-Reply-To: <1462305773-7832-9-git-send-email-yuyang.du@intel.com>


* Yuyang Du <yuyang.du@intel.com> wrote:

> After cleaning up the sched metrics, these two definitions that cause
> ambiguity are not needed any more. Use NICE_0_LOAD_SHIFT and NICE_0_LOAD
> instead (the names clearly suggest what they are).
> 
> Suggested-by: Ben Segall <bsegall@google.com>
> Signed-off-by: Yuyang Du <yuyang.du@intel.com>

Yeah, so this patch was a bit of a trainwreck:

 - it didn't build on 32-bit kernels

 - a stale SCHED_LOAD_SHIFT definition was left around

 - the title and the changelog actively lie: it's not a removal, but a complex
   combination of a rename, a replacement and a removal ...

I've fixed all that with the patch below, but _please_ be more careful in the 
future when changing scheduler code, and please also read your changelogs and 
patch titles before sending them out to make sure the label matches the contents.

I'll push it all out in tip:sched/core if it passes testing.

Thanks,

	Ingo

=======================>
From 172895e6b5216eba3e0880460829a8baeefd55f3 Mon Sep 17 00:00:00 2001
From: Yuyang Du <yuyang.du@intel.com>
Date: Tue, 5 Apr 2016 12:12:27 +0800
Subject: [PATCH] sched/fair: Rename SCHED_LOAD_SHIFT to NICE_0_LOAD_SHIFT and remove SCHED_LOAD_SCALE

After cleaning up the sched metrics, there are two definitions that are
ambiguous and confusing: SCHED_LOAD_SHIFT and SCHED_LOAD_SCALE.

Resolve this:

 - Rename SCHED_LOAD_SHIFT to NICE_0_LOAD_SHIFT, which better reflects what
   it is.

 - Replace SCHED_LOAD_SCALE uses with SCHED_CAPACITY_SCALE (or its equivalent,
   scale_load_down(NICE_0_LOAD)) and remove SCHED_LOAD_SCALE.

Suggested-by: Ben Segall <bsegall@google.com>
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: lizefan@huawei.com
Cc: morten.rasmussen@arm.com
Cc: pjt@google.com
Cc: umgwanakikbuti@gmail.com
Cc: vincent.guittot@linaro.org
Link: http://lkml.kernel.org/r/1459829551-21625-3-git-send-email-yuyang.du@intel.com
[ Rewrote the changelog and fixed the build on 32-bit kernels. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c  |  4 ++--
 kernel/sched/sched.h | 22 +++++++++++-----------
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 76ca86e9fc20..e1485710d1ec 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -719,7 +719,7 @@ void post_init_entity_util_avg(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	struct sched_avg *sa = &se->avg;
-	long cap = (long)(scale_load_down(SCHED_LOAD_SCALE) - cfs_rq->avg.util_avg) / 2;
+	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
 
 	if (cap > 0) {
 		if (cfs_rq->avg.util_avg != 0) {
@@ -7010,7 +7010,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	if (busiest->group_type == group_overloaded &&
 	    local->group_type   == group_overloaded) {
 		load_above_capacity = busiest->sum_nr_running *
-					SCHED_LOAD_SCALE;
+				      scale_load_down(NICE_0_LOAD);
 		if (load_above_capacity > busiest->group_capacity)
 			load_above_capacity -= busiest->group_capacity;
 		else
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ad83361f9e67..d24e91b0a722 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -56,25 +56,25 @@ static inline void cpu_load_update_active(struct rq *this_rq) { }
  * increase coverage and consistency always enable it on 64bit platforms.
  */
 #ifdef CONFIG_64BIT
-# define SCHED_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
+# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
 # define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
 # define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)
 #else
-# define SCHED_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
+# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
 # define scale_load(w)		(w)
 # define scale_load_down(w)	(w)
 #endif
 
-#define SCHED_LOAD_SCALE	(1L << SCHED_LOAD_SHIFT)
-
 /*
- * NICE_0's weight (visible to users) and its load (invisible to users) have
- * independent ranges, but they should be well calibrated. We use scale_load()
- * and scale_load_down(w) to convert between them, and the following must be true:
- * scale_load(sched_prio_to_weight[20]) == NICE_0_LOAD
+ * Task weight (visible to users) and its load (invisible to users) have
+ * independent resolution, but they should be well calibrated. We use
+ * scale_load() and scale_load_down(w) to convert between them. The
+ * following must be true:
+ *
+ *  scale_load(sched_prio_to_weight[USER_PRIO(NICE_TO_PRIO(0))]) == NICE_0_LOAD
+ *
  */
-#define NICE_0_LOAD		SCHED_LOAD_SCALE
-#define NICE_0_SHIFT		SCHED_LOAD_SHIFT
+#define NICE_0_LOAD		(1L << NICE_0_LOAD_SHIFT)
 
 /*
  * Single value that decides SCHED_DEADLINE internal math precision.
@@ -863,7 +863,7 @@ DECLARE_PER_CPU(struct sched_domain *, sd_asym);
 struct sched_group_capacity {
 	atomic_t ref;
 	/*
-	 * CPU capacity of this group, SCHED_LOAD_SCALE being max capacity
+	 * CPU capacity of this group, SCHED_CAPACITY_SCALE being max capacity
 	 * for a single CPU.
 	 */
 	unsigned int capacity;
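
As a sanity check of the invariant documented in the sched.h hunk, here is a
minimal user-space sketch (not kernel code: it copies the 64-bit variants of
the macros from the patch and assumes SCHED_FIXEDPOINT_SHIFT == 10 with a
nice-0 weight of 1024, i.e. sched_prio_to_weight[20]):

#include <assert.h>
#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT	10
#define SCHED_CAPACITY_SCALE	(1L << SCHED_FIXEDPOINT_SHIFT)

/* CONFIG_64BIT variants from the patch: load carries extra resolution */
#define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
#define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
#define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)
#define NICE_0_LOAD		(1L << NICE_0_LOAD_SHIFT)

int main(void)
{
	long nice_0_weight = 1024;	/* assumed sched_prio_to_weight[20] */

	/* Calibration invariant from the comment added to sched.h above */
	assert(scale_load(nice_0_weight) == NICE_0_LOAD);

	/*
	 * The two fair.c hunks use the same value: SCHED_CAPACITY_SCALE in
	 * post_init_entity_util_avg() and scale_load_down(NICE_0_LOAD) in
	 * calculate_imbalance() are the same number.
	 */
	assert(scale_load_down(NICE_0_LOAD) == SCHED_CAPACITY_SCALE);

	printf("NICE_0_LOAD=%ld SCHED_CAPACITY_SCALE=%ld\n",
	       (long)NICE_0_LOAD, (long)SCHED_CAPACITY_SCALE);
	return 0;
}

Under those assumptions both new expressions evaluate to 1024, so the two
fair.c replacements are consistent with each other.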

Thread overview: 21+ messages
2016-05-03 20:02 [PATCH v3 00/12] sched/fair: Optimize and clean up sched averages Yuyang Du
2016-05-03 20:02 ` [PATCH v3 01/12] sched/fair: Optimize sum computation with a lookup table Yuyang Du
2016-05-03 20:02 ` [PATCH v3 02/12] sched/fair: Rename variable names for sched averages Yuyang Du
2016-05-03 20:02 ` [PATCH v3 03/12] sched/fair: Change the variable to hold the number of periods to 32bit Yuyang Du
2016-05-05 11:13   ` Morten Rasmussen
2016-05-05 11:23     ` Vincent Guittot
2016-05-05 18:19     ` Yuyang Du
2016-05-10  9:10       ` Morten Rasmussen
2016-05-10  2:05         ` Yuyang Du
2016-05-03 20:02 ` [PATCH v3 04/12] sched/fair: Add __always_inline compiler attribute to __accumulate_sum() Yuyang Du
2016-05-03 20:02 ` [PATCH v3 05/12] sched/fair: Optimize __update_sched_avg() Yuyang Du
2016-05-10  8:46   ` Morten Rasmussen
2016-05-10  2:27     ` Yuyang Du
2016-05-03 20:02 ` [PATCH v3 06/12] documentation: Add scheduler/sched-avg.txt Yuyang Du
2016-05-03 20:02 ` [PATCH v3 07/12] sched/fair: Generalize the load/util averages resolution definition Yuyang Du
2016-05-03 20:02 ` [PATCH v3 08/12] sched/fair: Remove SCHED_LOAD_SHIFT and SCHED_LOAD_SCALE Yuyang Du
2016-05-05  7:46   ` Ingo Molnar [this message]
2016-05-03 20:02 ` [PATCH v3 09/12] sched/fair: Add introduction to the sched average metrics Yuyang Du
2016-05-03 20:02 ` [PATCH v3 10/12] sched/fair: Remove scale_load_down() for load_avg Yuyang Du
2016-05-03 20:02 ` [PATCH v3 11/12] sched/fair: Rename scale_load() and scale_load_down() Yuyang Du
2016-05-03 20:02 ` [PATCH v3 12/12] sched/fair: Enable increased scale for kernel load Yuyang Du
