From: Mel Gorman <mgorman@suse.de>
To: Vincent Guittot <vincent.guittot@linaro.org>
Cc: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
dietmar.eggemann@arm.com, rostedt@goodmis.org,
bsegall@google.com, linux-kernel@vger.kernel.org,
pauld@redhat.com, parth@linux.ibm.com,
valentin.schneider@arm.com, hdanton@sina.com
Subject: Re: [PATCH v4 0/5] remove runnable_load_avg and improve group_classify
Date: Fri, 21 Feb 2020 15:09:05 +0000
Message-ID: <20200221150904.GU3420@suse.de>
In-Reply-To: <20200221132715.20648-1-vincent.guittot@linaro.org>
On Fri, Feb 21, 2020 at 02:27:10PM +0100, Vincent Guittot wrote:
> This new version stays quite close to the previous one and should
> replace without problems the previous one that part of Mel's patchset:
> https://lkml.org/lkml/2020/2/14/156
>
Thanks Vincent, just in time for tests to run over the weekend!

I can confirm the patches slotted easily into a yet-to-be-released v6 of
my series, which still has my fix inserted after patch 2. After looking
at your series, I see that only patches 3-5 need to be retested, as well
as my own patches on top. This should take less time as I can reuse some
of the old results. I'll post v6 if the tests complete successfully.

The overall diff is as follows in case you want to double-check it is
what you expect.
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3060ba94e813..3f51586365f3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -740,10 +740,8 @@ void init_entity_runnable_average(struct sched_entity *se)
	 * Group entities are initialized with zero load to reflect the fact that
	 * nothing has been attached to the task group yet.
	 */
-	if (entity_is_task(se)) {
-		sa->runnable_avg = SCHED_CAPACITY_SCALE;
+	if (entity_is_task(se))
		sa->load_avg = scale_load_down(se->load.weight);
-	}

	/* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */
 }
@@ -796,6 +794,8 @@ void post_init_entity_util_avg(struct task_struct *p)
		}
	}

+	sa->runnable_avg = cpu_scale;
+
	if (p->sched_class != &fair_sched_class) {
		/*
		 * For !fair tasks do:
@@ -3083,9 +3083,9 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
 #endif

	enqueue_load_avg(cfs_rq, se);
-	if (se->on_rq) {
+	if (se->on_rq)
		account_entity_enqueue(cfs_rq, se);
-	}
+
 }

 void reweight_task(struct task_struct *p, int prio)
@@ -5613,6 +5613,24 @@ static unsigned long cpu_runnable(struct rq *rq)
	return cfs_rq_runnable_avg(&rq->cfs);
 }

+static unsigned long cpu_runnable_without(struct rq *rq, struct task_struct *p)
+{
+	struct cfs_rq *cfs_rq;
+	unsigned int runnable;
+
+	/* Task has no contribution or is new */
+	if (cpu_of(rq) != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
+		return cpu_runnable(rq);
+
+	cfs_rq = &rq->cfs;
+	runnable = READ_ONCE(cfs_rq->avg.runnable_avg);
+
+	/* Discount task's runnable from CPU's runnable */
+	lsub_positive(&runnable, p->se.avg.runnable_avg);
+
+	return runnable;
+}
+
 static unsigned long capacity_of(int cpu)
 {
	return cpu_rq(cpu)->cpu_capacity;
 }
@@ -8521,6 +8539,7 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,

		sgs->group_load += cpu_load_without(rq, p);
		sgs->group_util += cpu_util_without(i, p);
+		sgs->group_runnable += cpu_runnable_without(rq, p);

		local = task_running_on_cpu(i, p);
		sgs->sum_h_nr_running += rq->cfs.h_nr_running - local;
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 2cc88d9e3b38..c40d57a2a248 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -267,8 +265,6 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
  *   load_sum := runnable
  *   load_avg = se_weight(se) * load_sum
  *
- * XXX collapse load_sum and runnable_load_sum
- *
  * cfq_rq:
  *
  *   runnable_sum = \Sum se->avg.runnable_sum
@@ -325,7 +323,7 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq)
  *   util_sum = cpu_scale * load_sum
  *   runnable_sum = util_sum
  *
- *   load_avg and runnable_load_avg are not supported and meaningless.
+ *   load_avg and runnable_avg are not supported and meaningless.
  *
  */

@@ -351,7 +349,7 @@ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running)
  *   util_sum = cpu_scale * load_sum
  *   runnable_sum = util_sum
  *
- *   load_avg and runnable_load_avg are not supported and meaningless.
+ *   load_avg and runnable_avg are not supported and meaningless.
  *
  */

@@ -378,7 +376,7 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
  *   util_sum = cpu_scale * load_sum
  *   runnable_sum = util_sum
  *
- *   load_avg and runnable_load_avg are not supported and meaningless.
+ *   load_avg and runnable_avg are not supported and meaningless.
  *
  */