From: Nikhil Rao <ncrao@google.com>
To: Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <peterz@infradead.org>
Cc: Paul Turner <pjt@google.com>, Mike Galbraith <efault@gmx.de>,
linux-kernel@vger.kernel.org, Nikhil Rao <ncrao@google.com>
Subject: [RFC][Patch 18/18] sched: update shares distribution to use u64
Date: Wed, 20 Apr 2011 13:51:37 -0700
Message-ID: <1303332697-16426-19-git-send-email-ncrao@google.com>
In-Reply-To: <1303332697-16426-1-git-send-email-ncrao@google.com>
Update the shares distribution code to use u64. tg->shares itself is still
maintained as an unsigned long, since sched entity weights cannot exceed
MAX_SHARES (2^28); all intermediate calculations used to estimate shares are
updated to use u64.
Signed-off-by: Nikhil Rao <ncrao@google.com>
---
kernel/sched.c | 2 +-
kernel/sched_debug.c | 6 +++---
kernel/sched_fair.c | 13 ++++++++-----
3 files changed, 12 insertions(+), 9 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 7c1f3fc..a9e85a0 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -367,7 +367,7 @@ struct cfs_rq {
u64 load_period;
u64 load_stamp, load_last, load_unacc_exec_time;
- unsigned long load_contribution;
+ u64 load_contribution;
#endif
#endif
};
diff --git a/kernel/sched_debug.c b/kernel/sched_debug.c
index d22b666..b809651 100644
--- a/kernel/sched_debug.c
+++ b/kernel/sched_debug.c
@@ -204,11 +204,11 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
SEQ_printf(m, " .%-30s: %lld\n", "load", cfs_rq->load.weight);
#ifdef CONFIG_FAIR_GROUP_SCHED
#ifdef CONFIG_SMP
- SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "load_avg",
+ SEQ_printf(m, " .%-30s: %lld.%06ld\n", "load_avg",
SPLIT_NS(cfs_rq->load_avg));
- SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "load_period",
+ SEQ_printf(m, " .%-30s: %lld.%06ld\n", "load_period",
SPLIT_NS(cfs_rq->load_period));
- SEQ_printf(m, " .%-30s: %ld\n", "load_contrib",
+ SEQ_printf(m, " .%-30s: %lld\n", "load_contrib",
cfs_rq->load_contribution);
SEQ_printf(m, " .%-30s: %d\n", "load_tg",
atomic_read(&cfs_rq->tg->load_weight));
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 33c36f1..6808f26 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -708,12 +708,13 @@ static void update_cfs_rq_load_contribution(struct cfs_rq *cfs_rq,
int global_update)
{
struct task_group *tg = cfs_rq->tg;
- long load_avg;
+ s64 load_avg;
load_avg = div64_u64(cfs_rq->load_avg, cfs_rq->load_period+1);
load_avg -= cfs_rq->load_contribution;
if (global_update || abs(load_avg) > cfs_rq->load_contribution / 8) {
+ /* TODO: fix atomics for 64-bit additions */
atomic_add(load_avg, &tg->load_weight);
cfs_rq->load_contribution += load_avg;
}
@@ -723,7 +724,7 @@ static void update_cfs_load(struct cfs_rq *cfs_rq, int global_update)
{
u64 period = sysctl_sched_shares_window;
u64 now, delta;
- unsigned long load = cfs_rq->load.weight;
+ u64 load = cfs_rq->load.weight;
if (cfs_rq->tg == &root_task_group)
return;
@@ -745,6 +746,7 @@ static void update_cfs_load(struct cfs_rq *cfs_rq, int global_update)
if (load) {
cfs_rq->load_last = now;
cfs_rq->load_avg += delta * load;
+ /* TODO: detect overflow and fix */
}
/* consider updating load contribution on each fold or truncate */
@@ -769,24 +771,25 @@ static void update_cfs_load(struct cfs_rq *cfs_rq, int global_update)
static long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg)
{
- long load_weight, load, shares;
+ s64 load_weight, load, shares;
load = cfs_rq->load.weight;
+ /* TODO: fixup atomics to handle u64 in 32-bit */
load_weight = atomic_read(&tg->load_weight);
load_weight += load;
load_weight -= cfs_rq->load_contribution;
shares = (tg->shares * load);
if (load_weight)
- shares /= load_weight;
+ shares = div64_u64(shares, load_weight);
if (shares < MIN_SHARES)
shares = MIN_SHARES;
if (shares > tg->shares)
shares = tg->shares;
- return shares;
+ return (long)shares;
}
static void update_entity_shares_tick(struct cfs_rq *cfs_rq)
--
1.7.3.1
Thread overview: 34+ messages
2011-04-20 20:51 [RFC][PATCH 00/18] Increase resolution of load weights Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 01/18] sched: introduce SCHED_POWER_SCALE to scale cpu_power calculations Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 02/18] sched: increase SCHED_LOAD_SCALE resolution Nikhil Rao
2011-04-28 9:54 ` Nikunj A. Dadhania
2011-04-28 17:11 ` Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 03/18] sched: use u64 for load_weight fields Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 04/18] sched: update cpu_load to be u64 Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 05/18] sched: update this_cpu_load() to return u64 value Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 06/18] sched: update source_load(), target_load() and weighted_cpuload() to use u64 Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 07/18] sched: update find_idlest_cpu() to use u64 for load Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 08/18] sched: update find_idlest_group() to use u64 Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 09/18] sched: update division in cpu_avg_load_per_task to use div_u64 Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 10/18] sched: update wake_affine path to use u64, s64 for weights Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 11/18] sched: update update_sg_lb_stats() to use u64 Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 12/18] sched: Update update_sd_lb_stats() " Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 13/18] sched: update f_b_g() to use u64 for weights Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 14/18] sched: change type of imbalance to be u64 Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 15/18] sched: update h_load to use u64 Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 16/18] sched: update move_task() and helper functions to use u64 for weights Nikhil Rao
2011-04-20 20:51 ` [RFC][Patch 17/18] sched: update f_b_q() to use u64 for weighted cpuload Nikhil Rao
2011-04-20 20:51 ` Nikhil Rao [this message]
2011-04-21 6:16 ` [RFC][PATCH 00/18] Increase resolution of load weights Ingo Molnar
2011-04-21 16:32 ` Peter Zijlstra
2011-04-26 16:11 ` Nikhil Rao
2011-04-21 16:40 ` Peter Zijlstra
2011-04-28 7:07 ` Nikunj A. Dadhania
2011-04-28 11:48 ` Nikunj A. Dadhania
2011-04-28 12:12 ` Srivatsa Vaddagiri
2011-04-28 18:33 ` Nikhil Rao
2011-04-28 18:51 ` Paul Turner
2011-04-28 18:53 ` Paul Turner
2011-04-28 21:27 ` Nikhil Rao
2011-04-29 16:55 ` Paul Turner
2011-04-28 18:20 ` Nikhil Rao