From: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
To: Vladimir Davydov <vdavydov@parallels.com>
Cc: Paul Turner <pjt@google.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Bharata B Rao <bharata@linux.vnet.ibm.com>,
Dhaval Giani <dhaval.giani@gmail.com>,
Balbir Singh <balbir@linux.vnet.ibm.com>,
Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>,
Srivatsa Vaddagiri <vatsa@in.ibm.com>,
Ingo Molnar <mingo@elte.hu>,
Pavel Emelianov <xemul@parallels.com>
Subject: Re: CFS Bandwidth Control - Test results of cgroups tasks pinned vs unpinned
Date: Wed, 8 Jun 2011 22:02:34 +0530 [thread overview]
Message-ID: <20110608163234.GA23031@linux.vnet.ibm.com> (raw)
In-Reply-To: <1307529966.4928.8.camel@dhcp-10-30-22-158.sw.ru>
* Vladimir Davydov <vdavydov@parallels.com> [2011-06-08 14:46:06]:
> On Tue, 2011-06-07 at 19:45 +0400, Kamalesh Babulal wrote:
> > Hi All,
> >
> > In our test environment, while testing the CFS Bandwidth V6 patch set
> > on top of 55922c9d1b84, we observed that the CPUs' idle time is between
> > 30% and 40% while running a CPU-bound test with the cgroup tasks not
> > pinned to the CPUs. Whereas in the inverse case, where the cgroup
> > tasks are pinned to the CPUs, the idle time seen is nearly zero.
>
> (snip)
>
> > load_tasks()
> > {
> >         for (( i=1; i<=5; i++ ))
> >         do
> >                 jj=$(eval echo "\$NR_TASKS$i")
> >                 shares="1024"
> >                 if [ $PRO_SHARES -eq 1 ]
> >                 then
> >                         eval shares=$(echo "$jj * 1024" | bc)
> >                 fi
> >                 echo $hares > $MOUNT/$i/cpu.shares
>                          ^^^^^
> a fatal misprint? must be shares, I guess
>
> (Setting cpu.shares to "", i.e. to the minimal possible value, will
> definitely confuse the load balancer)
My bad, that was a fatal typo; thanks for pointing it out. It makes a big
difference to the idle time reported. After correcting it to $shares, the CPU
idle time reported is now 20% to 22%, which is about 10% less than the
previously reported number.
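For reference, here is the quoted portion of load_tasks() with the typo fixed
(a minimal sketch; whatever else the original loop body does remains snipped,
and NR_TASKS1..NR_TASKS5, PRO_SHARES and MOUNT are the variables from the
original script):

load_tasks()
{
        for (( i=1; i<=5; i++ ))
        do
                jj=$(eval echo "\$NR_TASKS$i")          # number of tasks for group $i
                shares="1024"
                if [ $PRO_SHARES -eq 1 ]
                then
                        shares=$(echo "$jj * 1024" | bc)  # proportional: 1024 per task
                fi
                echo $shares > $MOUNT/$i/cpu.shares
                # ... rest of the loop body as in the original script (snipped)
        done
}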
(snip)
There have been questions about how to interpret the results. Consider the
following test run, without pinning of the cgroup tasks:
Average CPU Idle percentage 20%
Bandwidth shared with remaining non-Idle 80%
Bandwidth of Group 1 = 7.9700% i.e = 6.3700% of non-Idle CPU time 80%
|...... subgroup 1/1 = 50.0200 i.e = 3.1800% of 6.3700% Groups non-Idle CPU time
|...... subgroup 1/2 = 49.9700 i.e = 3.1800% of 6.3700% Groups non-Idle CPU time
For example, consider cgroup1. sum_exec is the 7th field captured from
/proc/sched_debug:
while1  27273   30665.912793   1988   120   30665.912793   30909.566767   0.021951  /1/2
while1  27272   30511.105690   1995   120   30511.105690   30942.998099   0.017369  /1/1
                                                            -----------------
                                                            61852.564866
                                                            -----------------
- The bandwidth of sub-cgroup 1/1 of cgroup1 is calculated as
      (30942.998099 * 100) / 61852.564866 = ~50%
  and that of sub-cgroup 1/2 as
      (30909.566767 * 100) / 61852.564866 = ~50%
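These per-subgroup percentages can be reproduced directly with bc (a small
sketch; the sum_exec values are the ones captured above):

        # sum_exec of /1/1 and /1/2 taken from /proc/sched_debug
        se_1_1=30942.998099
        se_1_2=30909.566767
        total=$(echo "$se_1_1 + $se_1_2" | bc)           # 61852.564866
        echo "scale=4; $se_1_1 * 100 / $total" | bc      # ~50.02
        echo "scale=4; $se_1_2 * 100 / $total" | bc      # ~49.97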
Similarly, if we add up the sum_exec of all the groups:
------------------------------------------------------------------------------------------------
Group1          Group2          Group3           Group4           Group5           sum_exec
------------------------------------------------------------------------------------------------
61852.564866 + 61686.604930 + 122840.294858 + 232576.303937 + 296166.889155 = 775122.657746
Taking cgroup1 as an example again:
Total percentage of bandwidth allocated to cgroup1 = (61852.564866 * 100) / 775122.657746
                                                   = ~7.9% of the total bandwidth of all the cgroups
The non-idle time is calculated as
Total (execution time * 100) / (number of CPUs * 60000 ms)   [the script runs for 60 seconds]
i.e. = (775122.657746 * 100) / (16 * 60000)
     = ~80% non-idle time
The percentage of the non-idle time allocated to cgroup1 is then derived as
   (cgroup bandwidth percentage * non-idle time) / 100
   i.e. for cgroup1 = (7.9700 * 80) / 100
                    = 6.376% of the non-Idle CPU time.
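The whole derivation can be scripted end to end with bc as well (a sketch,
assuming the 16-CPU, 60-second run described above):

        group1=61852.564866
        total=775122.657746                     # sum_exec of all five groups
        nr_cpus=16
        runtime_ms=60000                        # the script runs for 60 seconds

        # percentage of the total bandwidth consumed by group1 (~7.9%)
        grp_pct=$(echo "scale=4; $group1 * 100 / $total" | bc)

        # non-idle percentage of the machine (~80%)
        nonidle=$(echo "scale=4; $total * 100 / ($nr_cpus * $runtime_ms)" | bc)

        # group1's share of the non-idle time: ~6.4% at full precision,
        # ~6.37% with the rounded 7.97 / 80 figures used in the summary above
        echo "scale=4; $grp_pct * $nonidle / 100" | bc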
Bandwidth of Group 2 = 7.9500% i.e = 6.3600% of non-Idle CPU time 80%
|...... subgroup 2/1 = 49.9900 i.e = 3.1700% of 6.3600% Groups non-Idle CPU time
|...... subgroup 2/2 = 50.0000 i.e = 3.1800% of 6.3600% Groups non-Idle CPU time
Bandwidth of Group 3 = 15.8400% i.e = 12.6700% of non-Idle CPU time 80%
|...... subgroup 3/1 = 24.9900 i.e = 3.1600% of 12.6700% Groups non-Idle CPU time
|...... subgroup 3/2 = 24.9900 i.e = 3.1600% of 12.6700% Groups non-Idle CPU time
|...... subgroup 3/3 = 25.0600 i.e = 3.1700% of 12.6700% Groups non-Idle CPU time
|...... subgroup 3/4 = 24.9400 i.e = 3.1500% of 12.6700% Groups non-Idle CPU time
Bandwidth of Group 4 = 30.0000% i.e = 24.0000% of non-Idle CPU time 80%
|...... subgroup 4/1 = 13.1600 i.e = 3.1500% of 24.0000% Groups non-Idle CPU time
|...... subgroup 4/2 = 11.3800 i.e = 2.7300% of 24.0000% Groups non-Idle CPU time
|...... subgroup 4/3 = 13.1100 i.e = 3.1400% of 24.0000% Groups non-Idle CPU time
|...... subgroup 4/4 = 12.3100 i.e = 2.9500% of 24.0000% Groups non-Idle CPU time
|...... subgroup 4/5 = 12.8200 i.e = 3.0700% of 24.0000% Groups non-Idle CPU time
|...... subgroup 4/6 = 11.0600 i.e = 2.6500% of 24.0000% Groups non-Idle CPU time
|...... subgroup 4/7 = 13.0600 i.e = 3.1300% of 24.0000% Groups non-Idle CPU time
|...... subgroup 4/8 = 13.0600 i.e = 3.1300% of 24.0000% Groups non-Idle CPU time
Bandwidth of Group 5 = 38.2000% i.e = 30.5600% of non-Idle CPU time 80%
|...... subgroup 5/1 = 48.1000 i.e = 14.6900% of 30.5600% Groups non-Idle CPU time
|...... subgroup 5/2 = 6.7900 i.e = 2.0700% of 30.5600% Groups non-Idle CPU time
|...... subgroup 5/3 = 6.3700 i.e = 1.9400% of 30.5600% Groups non-Idle CPU time
|...... subgroup 5/4 = 5.1800 i.e = 1.5800% of 30.5600% Groups non-Idle CPU time
|...... subgroup 5/5 = 5.0400 i.e = 1.5400% of 30.5600% Groups non-Idle CPU time
|...... subgroup 5/6 = 10.1400 i.e = 3.0900% of 30.5600% Groups non-Idle CPU time
|...... subgroup 5/7 = 5.0700 i.e = 1.5400% of 30.5600% Groups non-Idle CPU time
|...... subgroup 5/8 = 6.3900 i.e = 1.9500% of 30.5600% Groups non-Idle CPU time
|...... subgroup 5/9 = 6.8800 i.e = 2.1000% of 30.5600% Groups non-Idle CPU time
|...... subgroup 5/10 = 6.4700 i.e = 1.9700% of 30.5600% Groups non-Idle CPU time
|...... subgroup 5/11 = 6.5600 i.e = 2.0000% of 30.5600% Groups non-Idle CPU time
|...... subgroup 5/12 = 4.6400 i.e = 1.4100% of 30.5600% Groups non-Idle CPU time
|...... subgroup 5/13 = 7.4900 i.e = 2.2800% of 30.5600% Groups non-Idle CPU time
|...... subgroup 5/14 = 5.8200 i.e = 1.7700% of 30.5600% Groups non-Idle CPU time
|...... subgroup 5/15 = 6.5500 i.e = 2.0000% of 30.5600% Groups non-Idle CPU time
|...... subgroup 5/16 = 5.2700 i.e = 1.6100% of 30.5600% Groups non-Idle CPU time
Thanks,
Kamalesh.