public inbox for linux-kernel@vger.kernel.org
From: Corey Ashford <cjashfor@linux.vnet.ibm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Lin Ming <ming.m.lin@intel.com>, Ingo Molnar <mingo@elte.hu>,
	Frederic Weisbecker <fweisbec@gmail.com>,
	"eranian@gmail.com" <eranian@gmail.com>,
	"Gary.Mohr@Bull.com" <Gary.Mohr@bull.com>,
	"arjan@linux.intel.com" <arjan@linux.intel.com>,
	"Zhang, Yanmin" <yanmin_zhang@linux.intel.com>,
	Paul Mackerras <paulus@samba.org>,
	"David S. Miller" <davem@davemloft.net>,
	Russell King <rmk+kernel@arm.linux.org.uk>,
	Paul Mundt <lethal@linux-sh.org>,
	lkml <linux-kernel@vger.kernel.org>
Subject: Re: [RFC][PATCH 3/9] perf: export registerred pmus via sysfs
Date: Mon, 10 May 2010 16:54:16 -0700	[thread overview]
Message-ID: <4BE89CA8.3020801@linux.vnet.ibm.com> (raw)
In-Reply-To: <1273490824.5605.3379.camel@twins>



On 5/10/2010 4:27 AM, Peter Zijlstra wrote:
> On Mon, 2010-05-10 at 18:26 +0800, Lin Ming wrote:
> 
>>> No, I'm assuming there is only 1 PMU per CPU. Corey is the expert on
>>> crazy hardware though, 

:-)

>>> but I think the sanest way is to extend the CPU
>>> topology if there's more structure to it.
>>
>> But our goal is to support multiple pmus, don't we need to assume there
>> are more than 1 PMU per CPU?
> 
> No, because as I said, then it's ambiguous what pmu you want. If you have
> that, you need to extend your topology information.
> 
> Anyway, I talked with Ingo on this and he'd like to see this somewhat
> extended.
> 
> Instead of a pmu_id field, which we pass into a new
> perf_event_attr::pmu_id field, how about creating an event_source sysfs
> class. Then each class can have an event_source_id and a hierarchy of
> 'generic' events.
> 
> We'd start using the PERF_TYPE_ space for this and express the
> PERF_COUNT_ space in the event attributes found inside that class.
> 
> That way we can include all the existing event enumerations into this as
> well.
> 
> This way we can create:
> 
> /sys/devices/system/cpu/cpuN/cpu_hardware_events 
>                              cpu_hardware_events/event_source_id
>                              cpu_hardware_events/cpu_cycles
>                              cpu_hardware_events/instructions
>                                                 /...
> 
> /sys/devices/system/cpu/cpuN/cpu_raw_events
>                              cpu_raw_events/event_source_id
> 
> 
> These would match the current PERF_TYPE_* values for compatibility
> 
> For new PMUs we can start a dynamic range of PERF_TYPE_ (say at 64k but
> that's not ABI and can be changed at any time, we've got u32 to play
> with).
> 
> For uncore this would result in:
> 
> /sys/devices/system/node/nodeN/node_raw_events
>                                node_raw_events/event_source_id
> 
> and maybe:
> 
> /sys/devices/system/node/nodeN/node_events
>                                node_events/event_source_id
>                                node_events/local_misses
>                                           /local_hits
>                                           /remote_misses
>                                           /remote_hits
>                                           /...


Just to give a concrete example, the IBM Wire-Speed Processor has four A2 "nodes" per chip, each containing four PowerPC cores.

Those four nodes together share a number of nest units (accelerators, I/O devices, buses, etc.), each of which has its own PMU.  Further adding to the structure, some of these units are replicated.  For example, we have two memory controllers, each with a pair of PMUs.

/sys/devices/system/node/node0/mem_ctlr0/
                                        event_source_id
                                        events/
                                              partial_cacheline_read_retried/
                                              partial_cacheline_write_retried/
                                              ...
                               mem_ctlr1/
                                        event_source_id
                                        events/
                                              partial_cacheline_read_retried/
                                              ...

So it's a bit ugly to replicate the event information across identical PMUs, but I assume that can be done via links, without too much memory cost.
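The link-based sharing could be sketched as follows: identical PMU instances point at one shared events directory instead of each carrying a copy. This is done in a scratch directory since the sysfs layout in question is only a proposal; the directory and event names mirror the hypothetical tree above.

```shell
# Mock up the proposed mem_ctlr layout in a scratch directory.
root=$(mktemp -d)
mkdir -p "$root/node0/mem_ctlr0/events"
touch "$root/node0/mem_ctlr0/events/partial_cacheline_read_retried"
touch "$root/node0/mem_ctlr0/events/partial_cacheline_write_retried"

# mem_ctlr1 is identical hardware, so its events directory is just a
# symlink to mem_ctlr0's rather than a duplicated set of files.
mkdir -p "$root/node0/mem_ctlr1"
ln -s ../mem_ctlr0/events "$root/node0/mem_ctlr1/events"

# Both controllers now enumerate the same events.
ls "$root/node0/mem_ctlr1/events"
```

In real sysfs the kernel would create these as sysfs symlinks, but the space behavior is the same: one copy of the event descriptions, N links.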

Does this seem workable?


-- 
Regards,

- Corey



Thread overview: 51+ messages
2010-05-10  9:27 [RFC][PATCH 3/9] perf: export registerred pmus via sysfs Lin Ming
2010-05-10  9:40 ` Peter Zijlstra
2010-05-10 10:11   ` Lin Ming
2010-05-10 10:18     ` Peter Zijlstra
2010-05-10 10:26       ` Lin Ming
2010-05-10 10:35         ` Paul Mundt
2010-05-10 10:58           ` Lin Ming
2010-05-10 11:04             ` Peter Zijlstra
2010-05-10 11:11               ` Lin Ming
2010-05-10 11:18                 ` Lin Ming
2010-05-10 11:27         ` Peter Zijlstra
2010-05-10 11:36           ` Peter Zijlstra
2010-05-10 11:48             ` Ingo Molnar
2010-05-10 11:39           ` Russell King
2010-05-10 11:42           ` Peter Zijlstra
2010-05-10 20:25             ` Will Deacon
2010-05-11  6:34               ` Peter Zijlstra
2010-05-10 11:43           ` Ingo Molnar
2010-05-10 11:49             ` Peter Zijlstra
2010-05-10 11:53               ` Ingo Molnar
2010-05-10 23:13                 ` Corey Ashford
2010-05-11  6:46                   ` Peter Zijlstra
2010-05-11  7:21                     ` Ingo Molnar
2010-05-11  8:20                       ` Lin Ming
2010-05-11  8:50                         ` Peter Zijlstra
2010-05-11  9:03                           ` Lin Ming
2010-05-11  9:05                             ` Lin Ming
2010-05-11  9:12                             ` Peter Zijlstra
2010-05-11  9:18                               ` Ingo Molnar
2010-05-11  9:24                                 ` Peter Zijlstra
2010-05-11  9:31                                   ` Ingo Molnar
2010-05-11 10:28                                     ` Lin Ming
2010-05-13  8:28                                 ` Lin Ming
2010-05-13  8:38                                   ` Ingo Molnar
2010-05-13  9:22                                     ` Lin Ming
2010-05-11  9:40                               ` Lin Ming
2010-05-11  9:48                                 ` Peter Zijlstra
2010-05-11  9:53                                   ` Lin Ming
2010-05-11 15:17                                   ` Greg KH
2010-05-12  5:51                                   ` Paul Mundt
2010-05-12  8:37                                     ` Peter Zijlstra
2010-05-14  7:04                                       ` Paul Mundt
2010-05-11 10:09                   ` stephane eranian
2010-05-11 14:15             ` Borislav Petkov
2010-05-11 14:25               ` Peter Zijlstra
2010-05-11 15:37                 ` Borislav Petkov
2010-05-11 15:46                   ` Peter Zijlstra
2010-05-10 23:54           ` Corey Ashford [this message]
2010-05-11  6:50             ` Peter Zijlstra
2010-05-11  2:43           ` Lin Ming
2010-05-11  6:35             ` Peter Zijlstra
