From: preeti@linux.vnet.ibm.com (Preeti U Murthy)
To: linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 13/13] sched: Modifying wake_affine to use PJT's metric
Date: Thu, 25 Oct 2012 15:56:22 +0530 [thread overview]
Message-ID: <20121025102622.21022.59828.stgit@preeti.in.ibm.com> (raw)
In-Reply-To: <20121025102045.21022.92489.stgit@preeti.in.ibm.com>
Additional parameters, calculated using PJT's metric and its helpers, are
introduced to perform this function.
Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
---
kernel/sched/fair.c | 34 +++++++++++++++-------------------
1 file changed, 15 insertions(+), 19 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 15ec528..b4b572c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2931,19 +2931,6 @@ static unsigned long power_of(int cpu)
return cpu_rq(cpu)->cpu_power;
}
-static unsigned long cpu_avg_load_per_task(int cpu)
-{
- struct rq *rq = cpu_rq(cpu);
- unsigned long nr_running = ACCESS_ONCE(rq->nr_running);
-
- if (nr_running) {
- return rq->load.weight / nr_running;
- }
-
- return 0;
-}
-
-
static void task_waking_fair(struct task_struct *p)
{
struct sched_entity *se = &p->se;
@@ -3085,16 +3072,18 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
{
s64 this_load, load;
int idx, this_cpu, prev_cpu;
- unsigned long tl_per_task;
+ u64 tl_per_task; /* Modified to reflect PJT's metric */
struct task_group *tg;
- unsigned long weight;
+ unsigned long weight, nr_running;
int balanced;
idx = sd->wake_idx;
this_cpu = smp_processor_id();
prev_cpu = task_cpu(p);
- load = source_load(prev_cpu, idx);
- this_load = target_load(this_cpu, idx);
+ /* Both of the below have been modified to use PJT's metric */
+ load = cpu_rq(prev_cpu)->cfs.runnable_load_avg;
+ this_load = cpu_rq(this_cpu)->cfs.runnable_load_avg;
+ nr_running = cpu_rq(this_cpu)->nr_running;
/*
* If sync wakeup then subtract the (maximum possible)
@@ -3104,6 +3093,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
if (sync) {
tg = task_group(current);
weight = current->se.load.weight;
+ weight = current->se.avg.load_avg_contrib;
this_load += effective_load(tg, this_cpu, -weight, -weight);
load += effective_load(tg, prev_cpu, 0, -weight);
@@ -3111,6 +3101,8 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
tg = task_group(p);
weight = p->se.load.weight;
+ /* The below change reflects PJT's metric */
+ weight = p->se.avg.load_avg_contrib;
/*
* In low-load situations, where prev_cpu is idle and this_cpu is idle
@@ -3146,11 +3138,15 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
return 1;
schedstat_inc(p, se.statistics.nr_wakeups_affine_attempts);
- tl_per_task = cpu_avg_load_per_task(this_cpu);
+ /* Below modification to use PJT's metric */
+ if (nr_running)
+ tl_per_task = cpu_rq(this_cpu)->cfs.runnable_load_avg / nr_running;
+ else
+ tl_per_task = 0;
if (balanced ||
(this_load <= load &&
- this_load + target_load(prev_cpu, idx) <= tl_per_task)) {
+ this_load + cpu_rq(prev_cpu)->cfs.runnable_load_avg <= tl_per_task)) {
/*
* This domain has SD_WAKE_AFFINE and
* p is cache cold in this domain, and