From: Peter Zijlstra <peterz@infradead.org>
To: mingo@kernel.org, linux-kernel@vger.kernel.org
Cc: subhra.mazumdar@oracle.com, steven.sistare@oracle.com,
dhaval.giani@oracle.com, rohit.k.jain@oracle.com,
umgwanakikbuti@gmail.com, matt@codeblueprint.co.uk,
riel@surriel.com, peterz@infradead.org
Subject: [RFC 02/11] sched/fair: Age the average idle time
Date: Wed, 30 May 2018 16:22:38 +0200 [thread overview]
Message-ID: <20180530143105.977759909@infradead.org> (raw)
In-Reply-To: <20180530142236.667774973@infradead.org>
[-- Attachment #1: peterz-sis-again-2.patch --]
[-- Type: text/plain, Size: 2908 bytes --]
Currently select_idle_cpu()'s proportional scheme uses the average
idle time *for when we are idle*, which is temporally challenged:
when we're not idle at all, we'll happily continue using whatever
value we last saw when we did go idle. To fix this, introduce a
separate average idle time and age it while the CPU is busy (the
existing value still makes sense for things like new-idle balancing,
which happens when we do go idle).
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
kernel/sched/core.c | 5 +++++
kernel/sched/fair.c | 29 ++++++++++++++++++++++++-----
kernel/sched/features.h | 2 ++
kernel/sched/sched.h | 3 +++
4 files changed, 34 insertions(+), 5 deletions(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1674,6 +1674,9 @@ static void ttwu_do_wakeup(struct rq *rq
 		if (rq->avg_idle > max)
 			rq->avg_idle = max;
 
+		rq->wake_stamp = jiffies;
+		rq->wake_avg = rq->avg_idle / 2;
+
 		rq->idle_stamp = 0;
 	}
 #endif
@@ -6051,6 +6054,8 @@ void __init sched_init(void)
 		rq->online = 0;
 		rq->idle_stamp = 0;
 		rq->avg_idle = 2*sysctl_sched_migration_cost;
+		rq->wake_stamp = jiffies;
+		rq->wake_avg = rq->avg_idle;
 		rq->max_idle_balance_cost = sysctl_sched_migration_cost;
 
 		INIT_LIST_HEAD(&rq->cfs_tasks);
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6378,11 +6378,30 @@ static int select_idle_cpu(struct task_s
 	if (!this_sd)
 		return -1;
 
-	/*
-	 * Due to large variance we need a large fuzz factor; hackbench in
-	 * particularly is sensitive here.
-	 */
-	avg_idle = this_rq()->avg_idle / 512;
+	if (sched_feat(SIS_AGE)) {
+		unsigned long now = jiffies;
+		struct rq *this_rq = this_rq();
+
+		/*
+		 * If we're busy, the assumption that the last idle period
+		 * predicts the future is flawed; age away the remaining
+		 * predicted idle time.
+		 */
+		if (unlikely(this_rq->wake_stamp < now)) {
+			while (this_rq->wake_stamp < now && this_rq->wake_avg) {
+				this_rq->wake_stamp++;
+				this_rq->wake_avg >>= 1;
+			}
+		}
+
+		avg_idle = this_rq->wake_avg;
+	} else {
+		/*
+		 * Due to large variance we need a large fuzz factor;
+		 * hackbench in particular is sensitive here.
+		 */
+		avg_idle = this_rq()->avg_idle / 512;
+	}
+
 	avg_cost = this_sd->avg_scan_cost + 1;
 
 	if (sched_feat(SIS_AVG_CPU) && avg_idle < avg_cost)
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -58,6 +58,8 @@ SCHED_FEAT(TTWU_QUEUE, true)
 SCHED_FEAT(SIS_AVG_CPU, false)
 SCHED_FEAT(SIS_PROP, true)
 
+SCHED_FEAT(SIS_AGE, true)
+
 /*
  * Issue a WARN when we do multiple update_rq_clock() calls
  * in a single rq->lock section. Default disabled because the
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -831,6 +831,9 @@ struct rq {
 	u64			idle_stamp;
 	u64			avg_idle;
 
+	unsigned long		wake_stamp;
+	u64			wake_avg;
+
 	/* This is used to determine avg_idle's max value */
 	u64			max_idle_balance_cost;
 #endif
Thread overview: 14+ messages
2018-05-30 14:22 [RFC 00/11] select_idle_sibling rework Peter Zijlstra
2018-05-30 14:22 ` [RFC 01/11] sched/fair: Fix select_idle_cpu()s cost accounting Peter Zijlstra
2018-05-30 14:22 ` Peter Zijlstra [this message]
2018-05-30 14:22 ` [RFC 03/11] sched/fair: Only use time once Peter Zijlstra
2018-05-30 14:22 ` [RFC 04/11] sched/topology: Introduce sched_domain_cores() Peter Zijlstra
2018-05-30 14:22 ` [RFC 05/11] sched/fair: Re-arrange select_idle_cpu() Peter Zijlstra
2018-05-30 14:22 ` [RFC 06/11] sched/fair: Make select_idle_cpu() proportional to cores Peter Zijlstra
2018-05-30 14:22 ` [RFC 07/11] sched/fair: Fold the select_idle_sibling() scans Peter Zijlstra
2018-05-30 14:22 ` [RFC 08/11] sched/fair: Optimize SIS_FOLD Peter Zijlstra
2018-05-30 14:22 ` [RFC 09/11] sched/fair: Remove SIS_AVG_PROP Peter Zijlstra
2018-05-30 14:22 ` [RFC 10/11] sched/fair: Remove SIS_AGE/SIS_ONCE Peter Zijlstra
2018-05-30 14:22 ` [RFC 11/11] sched/fair: Remove SIS_FOLD Peter Zijlstra
2018-06-19 22:06 ` [RFC 00/11] select_idle_sibling rework Matt Fleming
2018-06-20 22:20 ` Steven Sistare