From: Yuyang Du <yuyang.du@intel.com>
To: Boqun Feng <boqun.feng@gmail.com>
Cc: mingo@kernel.org, peterz@infradead.org,
linux-kernel@vger.kernel.org, pjt@google.com, bsegall@google.com,
morten.rasmussen@arm.com, vincent.guittot@linaro.org,
dietmar.eggemann@arm.com, len.brown@intel.com,
rafael.j.wysocki@intel.com, fengguang.wu@intel.com,
srikar@linux.vnet.ibm.com
Subject: Re: [PATCH v8 2/4] sched: Rewrite runnable load and utilization average tracking
Date: Fri, 19 Jun 2015 11:11:16 +0800
Message-ID: <20150619031116.GA3933@intel.com>
In-Reply-To: <20150619075724.GA5331@fixme-laptop.cn.ibm.com>
On Fri, Jun 19, 2015 at 03:57:24PM +0800, Boqun Feng wrote:
> >
> > This rewrite patch does not NEED to aggregate the entity's load to the
> > cfs_rq, but rather directly updates the cfs_rq's load (both runnable and
> > blocked), so there is NO NEED to iterate over all of the cfs_rqs.
>
> Actually, I'm not sure whether we NEED to aggregate or NOT.
>
> >
> > So simply updating the top cfs_rq is already equivalent to the stock.
> >
Ok. On aggregation: the rewrite patch does not need it, because the cfs_rq's
load is computed at once, with all of its runnable and blocked tasks counted,
assuming all the children's weights are up-to-date, of course. Please refer
to the changelog to get an idea.
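To illustrate (a rough sketch; the field names follow the rewrite patch, but
treat the exact layout as illustrative, not as the final code): the rewrite
keeps a single sched_avg per cfs_rq, in which the runnable and blocked
contributions are already folded together, so there is nothing left to
aggregate per entity:

	/* Sketch of the rewrite's per-cfs_rq average (illustrative). */
	struct sched_avg {
		u64		last_update_time;
		u64		load_sum;	/* geometric-series sum of load */
		u32		util_sum;
		u32		period_contrib;
		unsigned long	load_avg;	/* ~load_sum / LOAD_AVG_MAX */
		unsigned long	util_avg;
	};

	/*
	 * One sched_avg embedded in the cfs_rq covers all runnable and
	 * blocked children at once; there is no separate blocked_load_avg
	 * to aggregate bottom-up as in the stock code.
	 */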
>
> The stock does have a bottom up update, so simply updating the top
> cfs_rq is not equivalent to it. Simply updating the top cfs_rq is
> equivalent to the rewrite patch, because the rewrite patch lacks the
> aggregation.
It is not that the rewrite patch "lacks" aggregation; aggregation is simply
not needed. The stock has to do a bottom-up update and aggregate because
1) it updates the load at entity granularity, and 2) it tracks the blocked
load separately.
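Roughly, the two shapes of update_blocked_averages() look like this (a
sketch from memory of both versions, not the exact code):

	/* Stock: bottom-up, per-entity aggregation into each cfs_rq. */
	for_each_leaf_cfs_rq(rq, cfs_rq)
		__update_blocked_averages_cpu(cfs_rq->tg, rq->cpu);

	/* Rewrite: each cfs_rq's own average is updated as a whole. */
	for_each_leaf_cfs_rq(rq, cfs_rq)
		update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);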
> > It is better if we iterate the cfs_rqs to update the actual weights
> > (update_cfs_shares), because the weights may have already changed, which
> > would in turn change the load. But update_cfs_shares is not cheap.
> >
> > Right?
>
> You got me right for the most part ;-)
>
> My points are:
>
> 1. We *may not* need to aggregate entity's load to cfs_rq in
> update_blocked_averages(), simply updating the top cfs_rq may be just
> fine, but I'm not sure, so scheduler experts' insights are needed here.
Then I don't need to say anything about this.
> 2. Whether we need to aggregate or not, the update_blocked_averages() in
> the rewrite patch could be improved. If we need to aggregate, we have to
> add something like update_cfs_shares(). If we don't need to, we can just
> replace the loop with one update_cfs_rq_load_avg() on root cfs_rq.
Doing update_cfs_shares() here would be good, but it is probably not
necessary. However, we do need update_tg_load_avg() here, because if a
cfs_rq's load changes, the parent tg's load_avg should change too. I will
post the next version soon.
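Something like this, roughly (a sketch of what the next version might do;
the update_tg_load_avg() signature follows the rewrite patch):

	static void update_blocked_averages(int cpu)
	{
		struct rq *rq = cpu_rq(cpu);
		struct cfs_rq *cfs_rq;
		unsigned long flags;

		raw_spin_lock_irqsave(&rq->lock, flags);
		update_rq_clock(rq);

		/* Bottom-up over the leaf cfs_rq list, as in the stock code. */
		for_each_leaf_cfs_rq(rq, cfs_rq) {
			update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
			/* Propagate the new cfs_rq load into its task group. */
			update_tg_load_avg(cfs_rq, 0);
		}

		raw_spin_unlock_irqrestore(&rq->lock, flags);
	}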
In addition, an update on the stress + dbench test case:

I have a Core i7, not a Xeon Nehalem, plus an extra patch applied that should
not impact the result. Here, dbench runs at very low CPU utilization, ~1%.
Boqun said this may result from cgroup control, since the dbench I/O is low.

Anyway, I can't reproduce the reported results: CPU0's utilization is 92+%,
and the other CPUs are at ~100%.
Thanks,
Yuyang