From: Yuyang Du <yuyang.du@intel.com>
To: Boqun Feng <boqun.feng@gmail.com>
Cc: mingo@kernel.org, peterz@infradead.org,
linux-kernel@vger.kernel.org, pjt@google.com, bsegall@google.com,
morten.rasmussen@arm.com, vincent.guittot@linaro.org,
dietmar.eggemann@arm.com, len.brown@intel.com,
rafael.j.wysocki@intel.com, fengguang.wu@intel.com,
srikar@linux.vnet.ibm.com
Subject: Re: [Resend PATCH v8 0/4] sched: Rewrite runnable load and utilization average tracking
Date: Thu, 18 Jun 2015 03:04:02 +0800
Message-ID: <20150617190402.GD1244@intel.com>
In-Reply-To: <20150617130617.GB7154@fixme-laptop.cn.ibm.com>
On Wed, Jun 17, 2015 at 09:06:17PM +0800, Boqun Feng wrote:
>
> > So the problem is:
> >
> > 1) The tasks in the workload have too small a weight (only 79), because
> > they share a task group.
> >
> > 2) Probably some "high" weight task, even if runnable for only a short
> > time, contributes a "big" amount to the cfs_rq's load_avg.
>
> Thank you for your analysis.
>
> Some updates:
>
> I created a task group /g and set /g/cpu.shares to 13312 (1024 * 13),
> and then ran `stress --cpu 12` and `dbench 1` simultaneously in that
> group. The situation is much better: only one CPU is not fully loaded,
> and its utilization rate stays around 85%.
>
Hi,
That is good. You could also disable autogroup, adjust the autogroup's nice
value (e.g. via /proc/<pid>/autogroup), or run dbench from another shell, etc.
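For reference, the quoted setup can be reproduced roughly as follows. This is
only a sketch: it assumes the cgroup v1 cpu controller is mounted at
/sys/fs/cgroup/cpu, which is not stated in the original mail.

  # create the group and give it 13 * 1024 shares
  mkdir /sys/fs/cgroup/cpu/g
  echo 13312 > /sys/fs/cgroup/cpu/g/cpu.shares
  # move the current shell into /g, then start the workload from it
  echo $$ > /sys/fs/cgroup/cpu/g/tasks
  stress --cpu 12 &
  dbench 1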
Thank you for the tests. This may not be intuitive, but the results actually
show that:
1) the patchset improves task group share management and finally delivers
the fair share it is supposed to provide.
2) the seamlessly combined runnable + blocked load_avg improves the share of
tasks that are sometimes runnable and sometimes blocked, by preserving the
blocked load in the average. Fairness is achieved because dbench has the same
weight as the 12 stress tasks, so the performance of dbench (buried among the
CPU-hogging tasks) improves.
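As a rough back-of-envelope check (assuming the group's shares end up split
roughly evenly across its 13 runnable tasks, which is only an approximation),
with the default 1024 shares of an autogroup each task effectively gets

  w_{\mathrm{task}} \approx \frac{1024}{13} \approx 79

which matches the "only 79" weight noted above, whereas cpu.shares = 13312 =
13 * 1024 gives each task back roughly a full nice-0 weight of 1024.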
Peter?
In addition, the following patch should correct the odd util_avg value.
I am sending it here before I send another version.
Thanks,
Yuyang
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a8fd7b9..2b0907c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -687,7 +687,7 @@ void init_entity_runnable_average(struct sched_entity *se)
 	sa->load_avg = scale_load_down(se->load.weight);
 	sa->load_sum = sa->load_avg * LOAD_AVG_MAX;
 	sa->util_avg = scale_load_down(SCHED_LOAD_SCALE);
-	sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
+	sa->util_sum = LOAD_AVG_MAX;
 	/* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */
 }
 #else
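For completeness, the reason util_sum must be LOAD_AVG_MAX rather than
util_avg * LOAD_AVG_MAX is the way the rewritten __update_load_avg() derives
the average from the sum; roughly (as in the version that was later merged,
the exact shift may differ in v8):

  \mathrm{util\_avg} = \frac{\mathrm{util\_sum} \times \mathrm{SCHED\_LOAD\_SCALE}}{\mathrm{LOAD\_AVG\_MAX}}

so initializing util_sum to LOAD_AVG_MAX (47742, the maximum value the decayed
series can reach) makes a new entity start at util_avg = SCHED_LOAD_SCALE as
intended, while util_avg * LOAD_AVG_MAX overshoots by a factor of
SCHED_LOAD_SCALE, which is the odd util_avg value this patch corrects.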
Thread overview: 23+ messages
2015-06-15 19:26 [Resend PATCH v8 0/4] sched: Rewrite runnable load and utilization average tracking Yuyang Du
2015-06-15 19:26 ` [PATCH v8 1/4] sched: Remove rq's runnable avg Yuyang Du
2015-06-19 18:27 ` Dietmar Eggemann
2015-06-21 22:26 ` Yuyang Du
2015-06-22 18:18 ` Dietmar Eggemann
2015-06-15 19:26 ` [PATCH v8 2/4] sched: Rewrite runnable load and utilization average tracking Yuyang Du
2015-06-19 6:00 ` Boqun Feng
2015-06-18 23:05 ` Yuyang Du
2015-06-19 7:57 ` Boqun Feng
2015-06-19 3:11 ` Yuyang Du
2015-06-19 12:22 ` Boqun Feng
2015-06-21 22:43 ` Yuyang Du
2015-06-15 19:26 ` [PATCH v8 3/4] sched: Init cfs_rq's sched_entity load average Yuyang Du
2015-06-15 19:26 ` [PATCH v8 4/4] sched: Remove task and group entity load when they are dead Yuyang Du
[not found] ` <20150617030650.GB5695@fixme-laptop.cn.ibm.com>
2015-06-17 5:15 ` [Resend PATCH v8 0/4] sched: Rewrite runnable load and utilization average tracking Boqun Feng
2015-06-17 3:11 ` Yuyang Du
2015-06-17 13:06 ` Boqun Feng
2015-06-17 19:04 ` Yuyang Du [this message]
2015-06-18 6:31 ` Wanpeng Li
2015-06-17 22:46 ` Yuyang Du
2015-06-18 11:48 ` Wanpeng Li
2015-06-18 18:25 ` Yuyang Du
2015-06-19 3:33 ` Wanpeng Li