From: Vincent Guittot <vincent.guittot@linaro.org>
To: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@kernel.org>,
linux-kernel <linux-kernel@vger.kernel.org>,
Yuyang Du <yuyang.du@intel.com>,
Morten Rasmussen <Morten.Rasmussen@arm.com>,
Linaro Kernel Mailman List <linaro-kernel@lists.linaro.org>,
Paul Turner <pjt@google.com>,
Benjamin Segall <bsegall@google.com>
Subject: Re: [PATCH 4/7 v3] sched: propagate load during synchronous attach/detach
Date: Thu, 15 Sep 2016 16:31:24 +0200 [thread overview]
Message-ID: <CAKfTPtC0dpLYsfn7Li__nr3MX45DX2fBL=ZTDqSuDCYKe2XS_w@mail.gmail.com> (raw)
In-Reply-To: <896df1f8-c5ee-ae4c-46f0-4f4e76ad19b1@arm.com>
On 15 September 2016 at 15:11, Dietmar Eggemann
<dietmar.eggemann@arm.com> wrote:
> On 12/09/16 08:47, Vincent Guittot wrote:
>> When a task moves from/to a cfs_rq, we set a flag which is then used to
>> propagate the change at parent level (sched_entity and cfs_rq) during
>> the next update. If the cfs_rq is throttled, the flag will stay pending
>> until the cfs_rq is unthrottled.
>>
>> For propagating the utilization, we copy the utilization of child cfs_rq to
>
> s/child/group ?
>
>> the sched_entity.
>>
>> For propagating the load, we have to take into account the load of the
>> whole task group in order to evaluate the load of the sched_entity.
>> Similarly to what was done before the rewrite of PELT, we add a correction
>> factor in case the task group's load is less than its share so it will
>> contribute the same load as a task of equal weight.
>
> What about cfs_rq->runnable_load_avg?
The sched_entity's load is updated before being enqueued, so the up-to-date
value will be added to cfs_rq->runnable_load_avg... unless the se is
already enqueued. So cfs_rq->runnable_load_avg should also be updated
when the se is already on_rq. I'm going to add this case.
Thanks for pointing this out.
>
> [...]
>
>> +/* Take into account change of load of a child task group */
>> +static inline void
>> +update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se)
>> +{
>> + struct cfs_rq *gcfs_rq = group_cfs_rq(se);
>> + long delta, load = gcfs_rq->avg.load_avg;
>> +
>> + /* If the load of group cfs_rq is null, the load of the
>> + * sched_entity will also be null so we can skip the formula
>> + */
>> + if (load) {
>> + long tg_load;
>> +
>> + /* Get tg's load and ensure tg_load > 0 */
>> + tg_load = atomic_long_read(&gcfs_rq->tg->load_avg) + 1;
>> +
>> + /* Ensure tg_load >= load and updated with current load */
>> + tg_load -= gcfs_rq->tg_load_avg_contrib;
>> + tg_load += load;
>> +
>> + /* scale gcfs_rq's load into tg's shares */
>> + load *= scale_load_down(gcfs_rq->tg->shares);
>> + load /= tg_load;
>> +
>> + /*
>> + * we need to compute a correction term in the case that the
>> + * task group is consuming <1 cpu so that we would contribute
>> + * the same load as a task of equal weight.
>
> Wasn't 'consuming <1' related to 'NICE_0_LOAD' and not
> scale_load_down(gcfs_rq->tg->shares) before the rewrite of PELT (v4.2,
> __update_group_entity_contrib())?
Yes, before the rewrite the condition (tg->runnable_avg < NICE_0_LOAD)
was used. I have used the following examples to choose the condition:

A task group with only one always-running task TA, whose weight equals
tg->shares, will have a tg load (cfs_rq->tg->load_avg) equal to TA's
weight == scale_load_down(tg->shares): the load of the CPU on which the
task runs will be scale_load_down(task's weight) ==
scale_load_down(tg->shares) and the load of the other CPUs will be null.
In this case, all shares will be given to the cfs_rq CFS1 on which TA
runs, and the load of the sched_entity SB that represents CFS1 at
parent level will be scale_load_down(SB's weight) ==
scale_load_down(tg->shares).

If TA is not an always-running task, its load will be less than its
weight, hence less than scale_load_down(tg->shares), and as a result
tg->load_avg will be less than scale_load_down(tg->shares).
Nevertheless, the weight of SB is still scale_load_down(tg->shares) and
its load should be the same as TA's. But the 1st part of the
calculation gives a load of scale_load_down(gcfs_rq->tg->shares)
because tg_load == gcfs_rq->tg_load_avg_contrib == load. So if tg_load
< scale_load_down(gcfs_rq->tg->shares), we have to correct the load
that we set to SB.
>
>> + */
>> + if (tg_load < scale_load_down(gcfs_rq->tg->shares)) {
>> + load *= tg_load;
>> + load /= scale_load_down(gcfs_rq->tg->shares);
>> + }
>> + }
>
> [...]