public inbox for linux-kernel@vger.kernel.org
From: Nikhil Rao <ncrao@google.com>
To: Ingo Molnar <mingo@elte.hu>,
	Peter Zijlstra <peterz@infradead.org>,
	Mike Galbraith <efault@gmx.de>
Cc: linux-kernel@vger.kernel.org,
	"Nikunj A. Dadhania" <nikunj@linux.vnet.ibm.com>,
	Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>,
	Stephan Barwolf <stephan.baerwolf@tu-ilmenau.de>,
	Nikhil Rao <ncrao@google.com>
Subject: [PATCH v1 02/19] sched: increase SCHED_LOAD_SCALE resolution
Date: Sun,  1 May 2011 18:19:00 -0700	[thread overview]
Message-ID: <1304299157-25769-3-git-send-email-ncrao@google.com> (raw)
In-Reply-To: <1304299157-25769-1-git-send-email-ncrao@google.com>

Introduce SCHED_LOAD_RESOLUTION, which is added to SCHED_LOAD_SHIFT and
increases the resolution of SCHED_LOAD_SCALE. This patch sets the value of
SCHED_LOAD_RESOLUTION to 10, scaling up the weights of all sched entities by
a factor of 1024. With this extra resolution, we can handle deeper cgroup
hierarchies and the scheduler can do better shares distribution and load
balancing on larger systems (especially for low-weight task groups).

This does not change the existing user interface; the scaled weights are only
used internally. We do not modify the prio_to_weight values or their inverses,
but use the original weights when calculating the inverse that is used to scale
the execution time delta in calc_delta_mine(). This ensures we do not lose
accuracy when accounting time to the sched entities. Thanks to Nikunj Dadhania
for fixing a bug in c_d_m() that broke fairness.

Signed-off-by: Nikhil Rao <ncrao@google.com>
---
 include/linux/sched.h |    3 ++-
 kernel/sched.c        |   20 +++++++++++---------
 2 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8d1ff2b..d2c3bab 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -794,7 +794,8 @@ enum cpu_idle_type {
 /*
  * Increase resolution of nice-level calculations:
  */
-#define SCHED_LOAD_SHIFT	10
+#define SCHED_LOAD_RESOLUTION	10
+#define SCHED_LOAD_SHIFT	(10 + SCHED_LOAD_RESOLUTION)
 #define SCHED_LOAD_SCALE	(1L << SCHED_LOAD_SHIFT)
 
 /*
diff --git a/kernel/sched.c b/kernel/sched.c
index f4b4679..05e3fe2 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -293,7 +293,7 @@ static DEFINE_SPINLOCK(task_group_lock);
  *  limitation from this.)
  */
 #define MIN_SHARES	2
-#define MAX_SHARES	(1UL << 18)
+#define MAX_SHARES	(1UL << (18 + SCHED_LOAD_RESOLUTION))
 
 static int root_task_group_load = ROOT_TASK_GROUP_LOAD;
 #endif
@@ -1307,14 +1307,14 @@ calc_delta_mine(unsigned long delta_exec, u64 weight, struct load_weight *lw)
 	u64 tmp;
 
 	if (!lw->inv_weight) {
-		if (BITS_PER_LONG > 32 && unlikely(lw->weight >= WMULT_CONST))
+		unsigned long w = lw->weight >> SCHED_LOAD_RESOLUTION;
+		if (BITS_PER_LONG > 32 && unlikely(w >= WMULT_CONST))
 			lw->inv_weight = 1;
 		else
-			lw->inv_weight = 1 + (WMULT_CONST-lw->weight/2)
-				/ (lw->weight+1);
+			lw->inv_weight = 1 + (WMULT_CONST - w/2) / (w + 1);
 	}
 
-	tmp = (u64)delta_exec * weight;
+	tmp = (u64)delta_exec * ((weight >> SCHED_LOAD_RESOLUTION) + 1);
 	/*
 	 * Check whether we'd overflow the 64-bit multiplication:
 	 */
@@ -1758,12 +1758,13 @@ static void set_load_weight(struct task_struct *p)
 	 * SCHED_IDLE tasks get minimal weight:
 	 */
 	if (p->policy == SCHED_IDLE) {
-		p->se.load.weight = WEIGHT_IDLEPRIO;
+		p->se.load.weight = WEIGHT_IDLEPRIO << SCHED_LOAD_RESOLUTION;
 		p->se.load.inv_weight = WMULT_IDLEPRIO;
 		return;
 	}
 
-	p->se.load.weight = prio_to_weight[p->static_prio - MAX_RT_PRIO];
+	p->se.load.weight = prio_to_weight[p->static_prio - MAX_RT_PRIO]
+				<< SCHED_LOAD_RESOLUTION;
 	p->se.load.inv_weight = prio_to_wmult[p->static_prio - MAX_RT_PRIO];
 }
 
@@ -9129,14 +9130,15 @@ cpu_cgroup_exit(struct cgroup_subsys *ss, struct cgroup *cgrp,
 static int cpu_shares_write_u64(struct cgroup *cgrp, struct cftype *cftype,
 				u64 shareval)
 {
-	return sched_group_set_shares(cgroup_tg(cgrp), shareval);
+	return sched_group_set_shares(cgroup_tg(cgrp),
+				      shareval << SCHED_LOAD_RESOLUTION);
 }
 
 static u64 cpu_shares_read_u64(struct cgroup *cgrp, struct cftype *cft)
 {
 	struct task_group *tg = cgroup_tg(cgrp);
 
-	return (u64) tg->shares;
+	return (u64) tg->shares >> SCHED_LOAD_RESOLUTION;
 }
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 
-- 
1.7.3.1



Thread overview: 34+ messages
2011-05-02  1:18 [PATCH v1 00/19] Increase resolution of load weights Nikhil Rao
2011-05-02  1:18 ` [PATCH v1 01/19] sched: introduce SCHED_POWER_SCALE to scale cpu_power calculations Nikhil Rao
2011-05-02  1:19 ` Nikhil Rao [this message]
2011-05-02  1:19 ` [PATCH v1 03/19] sched: use u64 for load_weight fields Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 04/19] sched: update cpu_load to be u64 Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 05/19] sched: update this_cpu_load() to return u64 value Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 06/19] sched: update source_load(), target_load() and weighted_cpuload() to use u64 Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 07/19] sched: update find_idlest_cpu() to use u64 for load Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 08/19] sched: update find_idlest_group() to use u64 Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 09/19] sched: update division in cpu_avg_load_per_task to use div_u64 Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 10/19] sched: update wake_affine path to use u64, s64 for weights Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 11/19] sched: update update_sg_lb_stats() to use u64 Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 12/19] sched: Update update_sd_lb_stats() " Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 13/19] sched: update f_b_g() to use u64 for weights Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 14/19] sched: change type of imbalance to be u64 Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 15/19] sched: update h_load to use u64 Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 16/19] sched: update move_task() and helper functions to use u64 for weights Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 17/19] sched: update f_b_q() to use u64 for weighted cpuload Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 18/19] sched: update shares distribution to use u64 Nikhil Rao
2011-05-02  1:19 ` [PATCH v1 19/19] sched: convert atomic ops in shares update to use atomic64_t ops Nikhil Rao
2011-05-02  6:14 ` [PATCH v1 00/19] Increase resolution of load weights Ingo Molnar
2011-05-04  0:58   ` Nikhil Rao
2011-05-04  1:07     ` Nikhil Rao
2011-05-04 11:13     ` Ingo Molnar
2011-05-06  1:29       ` Nikhil Rao
2011-05-06  6:59         ` Ingo Molnar
2011-05-11  0:14           ` Nikhil Rao
2011-05-11  6:59             ` Ingo Molnar
2011-05-12  8:56               ` Nikhil Rao
2011-05-12 10:55                 ` Ingo Molnar
2011-05-12 18:44                   ` Nikhil Rao
2011-05-12  9:08         ` Peter Zijlstra
2011-05-12 17:30           ` Nikhil Rao
2011-05-13  7:19             ` Peter Zijlstra
