From: William Cohen <wcohen@redhat.com>
To: Andi Kleen <andi@firstfloor.org>
Cc: linux-perf-users <linux-perf-users@vger.kernel.org>
Subject: Re: Using perf with cgroups and containers
Date: Wed, 26 Nov 2014 16:29:13 -0500
Message-ID: <54764629.9050601@redhat.com>
In-Reply-To: <87wq6hhegq.fsf@tassilo.jf.intel.com>
On 11/26/2014 03:52 PM, Andi Kleen wrote:
> William Cohen <wcohen@redhat.com> writes:
>
>> Hi,
>>
>> I have been looking at how perf supports cgroups and containers. The
>> "-G" option allows limiting the data collected to a particular cgroup.
>> Thus, one can use the option to collect some information about a
>> particular cgroup with something like:
>>
>> $ sudo perf stat -a -e cycles -G
>> machine.slice/machine-qemu\\x2drhel7\\x2dx86_64.scope -e instructions
>> -G machine.slice/machine-qemu\\x2drhel7\\x2dx86_64.scope -- sleep 1
>
> You can specify multiple events with -e. Typically you should anyways,
> to define appropiate groups with {}
>
> perf record -a -e cycles,instructions -G cgroup ...
>
> -Andi
>
Hi Andi,
Is there somewhere that explains the use of "{}" for event grouping? The various perf man pages I have looked at (perf-record, perf-stat, and perf) don't seem to mention it. When reading the following from "man perf-record", it sounded like the comma-separated event list wouldn't work:
    -G name,..., --cgroup name,...
        monitor only in the container (cgroup) called "name". This option
        is available only in per-cpu mode. The cgroup filesystem must be
        mounted. All threads belonging to container "name" are monitored
        when they run on the monitored CPUs. Multiple cgroups can be
        provided. Each cgroup is applied to the corresponding event, i.e.,
        first cgroup to first event, second cgroup to second event and so
        on. It is possible to provide an empty cgroup (monitor all the
        time) using, e.g., -G foo,,bar. Cgroups must have corresponding
        events, i.e., they always refer to events defined earlier on the
        command line.
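If I am reading that right, the cgroups given to -G are matched to the -e events by position, so applying the same cgroup to both events would mean listing it twice ("foo" and "bar" below are just placeholder cgroup names, not ones from my setup):

    $ perf stat -a -e cycles,instructions -G foo,bar -- sleep 1    # cycles counted in foo, instructions in bar
    $ perf stat -a -e cycles,instructions -G foo,foo -- sleep 1    # both events restricted to foo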
The results look pretty questionable on my machine with the version of perf and kernel I am using:
$ uname -a
Linux santana 3.17.3-200.fc20.x86_64 #1 SMP Fri Nov 14 19:45:42 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
$ rpm -q perf
perf-3.17.3-200.fc20.x86_64
$ sudo perf stat -a -e cycles,instructions -G machine.slice/machine-qemu\\x2drhel7\\x2dx86_64.scope -- sleep .2
 Performance counter stats for 'system wide':

         2,983,776      cycles                    machine.slice/machine-qemu\x2drhel7\x2dx86_64.scope [74.96%]
        87,972,874      instructions              #    29.48  insns per cycle          [100.00%]

       0.201151985 seconds time elapsed
$ sudo perf stat -a -e "{cycles,instructions}" -G machine.slice/machine-qemu\\x2drhel7\\x2dx86_64.scope -- sleep .2
 Performance counter stats for 'system wide':

         2,512,934      cycles                    machine.slice/machine-qemu\x2drhel7\x2dx86_64.scope [82.53%]
       813,334,082      instructions              #   323.66  insns per cycle          [ 0.09%]

       0.201360285 seconds time elapsed
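My guess, based on the man page's one-cgroup-per-event mapping and untested beyond the runs above, is that with only one cgroup in -G just the first event (cycles) gets restricted to the cgroup while instructions ends up counting system wide, which would explain the odd insns per cycle numbers. If that is right, repeating the cgroup for each event should restrict both:

$ sudo perf stat -a -e cycles,instructions -G machine.slice/machine-qemu\\x2drhel7\\x2dx86_64.scope,machine.slice/machine-qemu\\x2drhel7\\x2dx86_64.scope -- sleep .2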
-Will