linux-perf-users.vger.kernel.org archive mirror
From: David Ahern <dsahern@gmail.com>
To: Manuel Selva <selva.manuel@gmail.com>
Cc: Michael Ellerman <michael@ellerman.id.au>,
	linux-perf-users@vger.kernel.org,
	Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Subject: Re: How does perf collect per thread/process events?
Date: Mon, 22 Jul 2013 07:44:35 -0600	[thread overview]
Message-ID: <51ED3743.4010802@gmail.com> (raw)
In-Reply-To: <51ECD63E.5070305@gmail.com>

On 7/22/13 12:50 AM, Manuel Selva wrote:
> Thanks for the answer Michael.
>
> I just created an account to be able to edit the perf wiki page. Before
> doing that, I am asking here whether someone knows the policy for
> updating this wiki, or who the maintainers are so I can ask them.

Arnaldo is the current one, I believe.
David


>
> On 07/22/2013 06:19 AM, Michael Ellerman wrote:
>> On Tue, 2013-07-16 at 17:29 +0200, Manuel Selva wrote:
>>> Hi,
>>>
>>> My question concerns a platform equipped with two Intel Xeon X5650 CPUs.
>>> According to the perf wiki page
>>> (https://perf.wiki.kernel.org/index.php/Tutorial), "by default perf stat
>>> counts for all threads of the process and subsequent child processes and
>>> threads" and "By default, perf stat counts in per-thread mode".
>>>
>>> So my first question is: what is the default, per thread or per process?
>>
>> It's per process, as described in the first quote above. The second
>> quote is just wrong AFAICS.
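
At the syscall level, that per-process default comes down to how
perf_event_open(2) is invoked. Below is a rough, self-contained sketch
(an illustration, not perf's actual source): pid picks the target task,
cpu = -1 means "on whichever CPU it runs", and attr.inherit extends
counting to child threads and processes, which matches the default the
wiki describes. perf stat's -p/--pid and -t/--tid options select the
target explicitly, and --no-inherit switches the child-following
behaviour off.

/*
 * Rough illustration (not perf's actual source): counting CPU cycles
 * for the calling process and everything it spawns, the way the
 * "per process" default described above looks at the syscall level.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(void)
{
        struct perf_event_attr attr;
        long long count;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_CPU_CYCLES;
        attr.disabled = 1;
        attr.inherit = 1;       /* follow child threads and processes */

        /* pid = 0: this task; cpu = -1: on whatever CPU it runs */
        fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) {
                perror("perf_event_open");
                return 1;
        }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        /* ... run the workload to be measured here ... */
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        read(fd, &count, sizeof(count));
        printf("cycles: %lld\n", count);
        return 0;
}

Because of attr.inherit, anything forked or spawned between the ENABLE
and DISABLE ioctls is counted as well, giving the "process plus
children" totals perf stat reports.
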
>>
>>> Then, independently of the answer, I am wondering how perf handles
>>> per-thread or per-process counting with respect to the scheduler and
>>> migrations. I didn't find it stated explicitly in the Intel
>>> documentation, but it seems natural that hardware performance counters
>>> located on a given core can only count events on that core and not on
>>> other cores. Is that true?
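
The per-core nature of the PMU is exactly why the same syscall also has
a CPU-bound mode. A hedged sketch of that complementary mode (again
illustrative only): pid = -1 with cpu = 0 programs CPU 0's counters and
counts every task that runs there, and nothing that runs elsewhere.
This usually requires root or a permissive
/proc/sys/kernel/perf_event_paranoid.

/*
 * Hedged sketch of the complementary mode: a counter bound to one core
 * instead of one task. pid = -1 with cpu = 0 programs CPU 0's PMU and
 * counts every task that runs there, which only makes sense because a
 * core's counters observe that core alone.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(void)
{
        struct perf_event_attr attr;
        long long count;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_CPU_CYCLES;

        /* pid = -1, cpu = 0: all tasks, but only while they run on CPU 0 */
        fd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
        if (fd < 0) {
                perror("perf_event_open (CPU-wide mode)");
                return 1;
        }

        sleep(1);               /* sample whatever CPU 0 executes for ~1s */
        read(fd, &count, sizeof(count));
        printf("cycles on CPU 0: %lld\n", count);
        return 0;
}
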
>>>
>>> Moreover, the wiki page says that "When a thread migrated from one
>>> processor to another, counters are saved on the current processor and
>>> are restored on the new one" (this seems to confirm the answer to my
>>> previous question above). It means that the scheduler is aware of
>>> perf, or that perf is able to register a hook into the scheduler. So I
>>> guess this is done in the kernel part of perf (in the implementation of
>>> the perf_event_open system call) and not in the user-land part. Is
>>> that right?
>>
>> Yes.
>>
>> cheers
>>
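
To see the save/restore behaviour from user space, here is a hedged
demonstration of the effect (not of the kernel code itself): a
per-task, any-CPU event opened on the calling thread keeps accumulating
while the thread is deliberately migrated between cores with
sched_setaffinity(2). In the kernel the hand-off is done from the
context-switch path (around perf_event_task_sched_out() /
perf_event_task_sched_in(), as far as I can tell), i.e. in the kernel
side of perf rather than in the user-land tool, as guessed above.

/*
 * Hedged illustration (the effect, not the kernel implementation):
 * a per-task event opened with cpu = -1 follows the thread across
 * forced migrations, because the kernel saves the counter when the
 * task is switched out and reprograms it on the next CPU.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static void pin_to_cpu(int cpu)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        sched_setaffinity(0, sizeof(set), &set);   /* migrate ourselves */
}

int main(void)
{
        struct perf_event_attr attr;
        volatile unsigned long spin;
        long long count;
        int fd, cpu;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;

        /* per-task, any-CPU: the event travels with this thread */
        fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) {
                perror("perf_event_open");
                return 1;
        }

        for (cpu = 0; cpu < 2; cpu++) {
                pin_to_cpu(cpu);                   /* force a migration */
                for (spin = 0; spin < 100000000UL; spin++)
                        ;                          /* burn instructions */
        }

        read(fd, &count, sizeof(count));
        printf("instructions counted across both CPUs: %lld\n", count);
        return 0;
}

On the two-socket X5650 machine in question, picking the two CPUs from
different sockets would show the count surviving a cross-socket
migration as well.
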


Thread overview: 5+ messages
2013-07-16 15:29 How does perf collect per thread/process events? Manuel Selva
2013-07-22  4:19 ` Michael Ellerman
2013-07-22  6:50   ` Manuel Selva
2013-07-22 13:44     ` David Ahern [this message]
2013-07-22 15:25       ` Arnaldo Carvalho de Melo
