From: Dietmar Eggemann <dietmar.eggemann@arm.com>
To: Yuyang Du <yuyang.du@intel.com>,
Morten Rasmussen <Morten.Rasmussen@arm.com>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>,
Peter Zijlstra <peterz@infradead.org>,
Rabin Vincent <rabin.vincent@axis.com>,
"mingo@redhat.com" <mingo@redhat.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Paul Turner <pjt@google.com>, Ben Segall <bsegall@google.com>
Subject: Re: [PATCH?] Livelock in pick_next_task_fair() / idle_balance()
Date: Mon, 06 Jul 2015 18:36:56 +0100 [thread overview]
Message-ID: <559ABCB8.6020209@arm.com> (raw)
In-Reply-To: <20150705201241.GE5197@intel.com>
Hi Yuyang,
On 05/07/15 21:12, Yuyang Du wrote:
> Hi Morten,
>
> On Fri, Jul 03, 2015 at 10:34:41AM +0100, Morten Rasmussen wrote:
>>>> IOW, since task groups include blocked load in the load_avg_contrib (see
>>>> __update_group_entity_contrib() and __update_cfs_rq_tg_load_contrib()) the
>>>> imbalance includes blocked load and hence env->imbalance >=
>>>> sum(task_h_load(p)) for all tasks p on the rq. Which leads to
>>>> detach_tasks() emptying the rq completely in the reported scenario where
>>>> blocked load > runnable load.
>>>
>>> Whenever I want to know the load avg concerning task group, I need to
>>> walk through the complete codes again, I prefer not to do it this time.
>>> But it should not be that simple to say "the 118 comes from the blocked load".
>>
>> But the whole hierarchy of group entities is updated each time we enqueue
>> or dequeue a task. I don't see how the group entity load_avg_contrib is
>> not up to date? Why do you need to update it again?
>>
>> In any case, we have one task in the group hierarchy which has a
>> load_avg_contrib of 0 and the grand-grand parent group entity has a
>> load_avg_contrib of 118 and no additional tasks. That load contribution
>> must be from tasks which are no longer around on the rq? No?
>
> load_avg_contrib has WEIGHT inside, so the most I can say is:
> SE: 8f456e00's load_avg_contrib 118 = (its cfs_rq's runnable + blocked) / (tg->load_avg + 1) * tg->shares
>
> The tg->shares is probably 1024 (at least 911). So we are just left with:
>
> cfs_rq / tg = 11.5%
>
> I myself did question the sudden jump from 0 to 118 (see a previous reply).
Do you mean the jump from system-rngd.slice (0) (tg.css.id=3) to
system.slice (118) (tg.css.id=2)?
The 118 might come from another tg hierarchy (w/ tg.css.id >= 3)
inside the system.slice group, representing another service.
Rabin, could you share the content of your
/sys/fs/cgroup/cpu/system.slice directory and of /proc/cgroups ?
Whether the 118 comes from the cfs_rq->blocked_load_avg of one of the
tg levels of another system.slice tg hierarchy, or whether it results
from not immediately updating the se.avg.load_avg_contrib values of the
se's representing tg's, is not that important, I guess.
Even if we were able to sync both things (task en/dequeue and tg
se.avg.load_avg_contrib update) perfectly (by always calling
update_cfs_rq_blocked_load() w/ force_update=1 and immediately
afterwards update_entity_load_avg() for all tg se's in one hierarchy),
we would still have to deal w/ the blocked load part if the tg se
representing system.slice contributes to
cpu_rq(cpu)->cfs.runnable_load_avg.
-- Dietmar
>
> But anyway, this really is irrelevant to the discussion.
>
[...]
Thread overview: 31+ messages
2015-06-30 14:30 [PATCH?] Livelock in pick_next_task_fair() / idle_balance() Rabin Vincent
2015-07-01 5:36 ` Mike Galbraith
2015-07-01 14:55 ` Rabin Vincent
2015-07-01 15:47 ` Mike Galbraith
2015-07-01 20:44 ` Peter Zijlstra
2015-07-01 23:25 ` Yuyang Du
2015-07-02 8:05 ` Mike Galbraith
2015-07-02 1:05 ` Yuyang Du
2015-07-02 10:25 ` Mike Galbraith
2015-07-02 11:40 ` Morten Rasmussen
2015-07-02 19:37 ` Yuyang Du
2015-07-03 9:34 ` Morten Rasmussen
2015-07-03 16:38 ` Peter Zijlstra
2015-07-05 22:31 ` Yuyang Du
2015-07-09 14:32 ` Morten Rasmussen
2015-07-09 23:24 ` Yuyang Du
2015-07-05 20:12 ` Yuyang Du
2015-07-06 17:36 ` Dietmar Eggemann [this message]
2015-07-07 11:17 ` Rabin Vincent
2015-07-13 17:43 ` Dietmar Eggemann
2015-07-09 13:53 ` Morten Rasmussen
2015-07-09 22:34 ` Yuyang Du
2015-07-02 10:53 ` Peter Zijlstra
2015-07-02 11:44 ` Morten Rasmussen
2015-07-02 18:42 ` Yuyang Du
2015-07-03 4:42 ` Mike Galbraith
2015-07-03 16:39 ` Peter Zijlstra
2015-07-05 22:11 ` Yuyang Du
2015-07-09 6:15 ` Stefan Ekenberg
2015-07-26 18:57 ` Yuyang Du
2015-08-03 17:05 ` [tip:sched/core] sched/fair: Avoid pulling all tasks in idle balancing tip-bot for Yuyang Du