From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: Ingo Molnar <mingo@elte.hu>
Cc: Paul Turner <pjt@google.com>,
linux-kernel@vger.kernel.org,
Bharata B Rao <bharata@linux.vnet.ibm.com>,
Dhaval Giani <dhaval.giani@gmail.com>,
Balbir Singh <balbir@linux.vnet.ibm.com>,
Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>,
Srivatsa Vaddagiri <vatsa@in.ibm.com>,
Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>,
Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>,
Pavel Emelyanov <xemul@openvz.org>, Hu Tao <hutao@cn.fujitsu.com>
Subject: Re: [patch 00/17] CFS Bandwidth Control v7.1
Date: Thu, 07 Jul 2011 13:28:21 +0200
Message-ID: <1310038101.3282.557.camel@twins>
In-Reply-To: <20110707112302.GB8227@elte.hu>

On Thu, 2011-07-07 at 13:23 +0200, Ingo Molnar wrote:
> Well, the most recent run Hu Tao sent (with lockdep disabled) are
> different:
>
> table 2. shows the differences between patch and no-patch. quota is set
> to a large value to avoid processes being throttled.
>
> quota/period                 cycles                   instructions             branches
> ---------------------------------------------------------------------------------------------
> base                         1,146,384,132            1,151,216,688            212,431,532
> patch cgroup disabled        1,163,717,547 ( 1.51%)   1,165,238,015 ( 1.22%)   215,092,327 ( 1.25%)
> patch 10000000000/1000       1,244,889,136 ( 8.59%)   1,299,128,502 (12.85%)   243,162,542 (14.47%)
> patch 10000000000/10000      1,253,305,706 ( 9.33%)   1,299,167,897 (12.85%)   243,175,027 (14.47%)
> patch 10000000000/100000     1,252,374,134 ( 9.25%)   1,299,314,357 (12.86%)   243,203,923 (14.49%)
> patch 10000000000/1000000    1,254,165,824 ( 9.40%)   1,299,751,347 (12.90%)   243,288,600 (14.53%)
> ---------------------------------------------------------------------------------------------
>
>
> The +1.5% increase in vanilla kernel context switching performance is
> unfortunate - where does that overhead come from?
>
> The +9% increase in cgroups context-switching overhead looks rather
> brutal.
As to those, do they run pipe-test in a cgroup or are you always using
the root cgroup?
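[Editorial note, not part of the original mail: the quota values in the table above are deliberately huge relative to the periods, so the bandwidth machinery is configured but throttling never triggers. A quick sanity check of that ratio, assuming the microsecond units of the series' cpu.cfs_quota_us / cpu.cfs_period_us interface:]

```shell
# For each period used in the table, the effective CPU cap is quota/period.
# Even the largest period (1s) leaves a 10000-CPU allowance, so a pipe-test
# run in such a group exercises the accounting path but is never throttled.
QUOTA_US=10000000000
for PERIOD_US in 1000 10000 100000 1000000; do
    echo "period=${PERIOD_US}us cap=$((QUOTA_US / PERIOD_US)) CPUs"
done
```

To actually reproduce a row, the hypothetical setup would write these values into a group's cpu.cfs_period_us and cpu.cfs_quota_us files, move the shell into the group, and measure with perf stat counting cycles, instructions, and branches.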