public inbox for linux-kernel@vger.kernel.org
From: Corey Ashford <cjashfor@linux.vnet.ibm.com>
To: Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <peterz@infradead.org>
Cc: Lin Ming <ming.m.lin@intel.com>,
	Frederic Weisbecker <fweisbec@gmail.com>,
	"eranian@gmail.com" <eranian@gmail.com>,
	"Gary.Mohr@Bull.com" <Gary.Mohr@bull.com>,
	"arjan@linux.intel.com" <arjan@linux.intel.com>,
	"Zhang, Yanmin" <yanmin_zhang@linux.intel.com>,
	Paul Mackerras <paulus@samba.org>,
	"David S. Miller" <davem@davemloft.net>,
	Russell King <rmk+kernel@arm.linux.org.uk>,
	Paul Mundt <lethal@linux-sh.org>,
	lkml <linux-kernel@vger.kernel.org>,
	Arnaldo Carvalho de Melo <acme@redhat.com>,
	Will Deacon <will.deacon@arm.com>,
	Maynard Johnson <mpjohn@us.ibm.com>, Carl Love <carll@us.ibm.com>
Subject: Re: [RFC][PATCH 3/9] perf: export registered pmus via sysfs
Date: Mon, 10 May 2010 16:13:32 -0700	[thread overview]
Message-ID: <4BE8931C.9070106@linux.vnet.ibm.com> (raw)
In-Reply-To: <20100510115344.GA11238@elte.hu>

On 5/10/2010 4:53 AM, Ingo Molnar wrote:
> 
> * Peter Zijlstra <peterz@infradead.org> wrote:
> 
>> On Mon, 2010-05-10 at 13:43 +0200, Ingo Molnar wrote:
>>>
>>> Yeah, we really want a mechanism like this in place instead of continuing with 
>>> the somewhat ad-hoc extensions to the event enumeration space.
>>>
>>> One detail: i think we want one more level. Instead of:
>>>
>>>  /sys/devices/system/node/nodeN/node_events
>>>                                 node_events/event_source_id
>>>                                 node_events/local_misses
>>>                                            /local_hits
>>>                                            /remote_misses
>>>                                            /remote_hits
>>>                                            /...
>>>
>>> We want the individual events to be a directory, containing the event_id:
>>>
>>>  /sys/devices/system/node/nodeN/node_events
>>>                                 node_events/event_source_id
>>>                                 node_events/local_misses/event_id
>>>                                            /local_hits/event_id
>>>                                            /remote_misses/event_id
>>>                                            /remote_hits/event_id
>>>                                            /...
>>>
>>> The reason is that we want to keep our options open to add more attributes to 
>>> individual events. (In fact extended attributes already exist for certain 
>>> event classes - such as the 'format' info for tracepoints.)

Having extra fields for each event would allow us to describe hardware-specific event attributes.  For example:
/sys/devices/system/node/nodeN/node_events
                               node_events/event_source_id
                               node_events/local_misses/event_id
                                          /local_hits/event_id
                                          /crypto_datamover  <- specific node PMU
                                              /marked_crb_rcv_des
                                                  /event_id
                                                  /attrib
                                                         /lpid <- attribute name
                                                         /lpid/type <- type of attribute (boolean, integer, etc.)
                                                         /lpid/min  <- min value of int attribute
                                                         /lpid/max  <- max value of int attribute
                                                         /lpid/bit_offset <- amount to shift attribute value before OR'ing into the raw event code
                                                         /marking_mode <- attribute name
                                                         /marking_mode/type
                                                         /...

Of course, these attribute nodes would need to be replicated under each event that uses them (or other attributes).


>>
>> Sure, sounds like a sensible suggestion.
>>
>> One thing I'd also like to clarify is that !raw events should not be 
>> exhaustive hardware event lists, those are best left for userspace, but 
>> instead are generally useful events that can be expected to be implemented 
>> by any hardware of that particular class.

Why exactly is this?  I had the impression this was something you and Ingo wanted earlier.  As big an impact as it would have, it would be nice to unify the two event spaces (generic and raw) into one space that can be explored by a user-space tool (or even crudely by /bin/ls).

>>
>> So a GPU might have things like 'vsync' and 'cmd_pipeline_stall' or whatever 
>> is a generic GPU feature, but not very implementation specific things that 
>> the next generation of hardware won't ever have.
> 
> Definitely so.
> 
> 	Ingo

Hi Ingo,

In the past, you said that you didn't want user space to enumerate the raw hardware events supported by the kernel.  Does the above represent a rethinking of that position?

We'd like some mechanism, unified or otherwise, for the perf tool to support hardware-specific symbolic event names.  Right now we cannot use the perf tool for the IBM Wire-Speed processor because it lacks symbolic raw event name support.

In the meantime, we are using a pair of demo programs, "task" and "syst", from Stephane Eranian's libpfm4 source tree.  These tools use the symbolic event names provided by libpfm4 together with the kernel's perf_events support.

Regards,

- Corey



Thread overview: 51+ messages
2010-05-10  9:27 [RFC][PATCH 3/9] perf: export registered pmus via sysfs Lin Ming
2010-05-10  9:40 ` Peter Zijlstra
2010-05-10 10:11   ` Lin Ming
2010-05-10 10:18     ` Peter Zijlstra
2010-05-10 10:26       ` Lin Ming
2010-05-10 10:35         ` Paul Mundt
2010-05-10 10:58           ` Lin Ming
2010-05-10 11:04             ` Peter Zijlstra
2010-05-10 11:11               ` Lin Ming
2010-05-10 11:18                 ` Lin Ming
2010-05-10 11:27         ` Peter Zijlstra
2010-05-10 11:36           ` Peter Zijlstra
2010-05-10 11:48             ` Ingo Molnar
2010-05-10 11:39           ` Russell King
2010-05-10 11:42           ` Peter Zijlstra
2010-05-10 20:25             ` Will Deacon
2010-05-11  6:34               ` Peter Zijlstra
2010-05-10 11:43           ` Ingo Molnar
2010-05-10 11:49             ` Peter Zijlstra
2010-05-10 11:53               ` Ingo Molnar
2010-05-10 23:13                 ` Corey Ashford [this message]
2010-05-11  6:46                   ` Peter Zijlstra
2010-05-11  7:21                     ` Ingo Molnar
2010-05-11  8:20                       ` Lin Ming
2010-05-11  8:50                         ` Peter Zijlstra
2010-05-11  9:03                           ` Lin Ming
2010-05-11  9:05                             ` Lin Ming
2010-05-11  9:12                             ` Peter Zijlstra
2010-05-11  9:18                               ` Ingo Molnar
2010-05-11  9:24                                 ` Peter Zijlstra
2010-05-11  9:31                                   ` Ingo Molnar
2010-05-11 10:28                                     ` Lin Ming
2010-05-13  8:28                                 ` Lin Ming
2010-05-13  8:38                                   ` Ingo Molnar
2010-05-13  9:22                                     ` Lin Ming
2010-05-11  9:40                               ` Lin Ming
2010-05-11  9:48                                 ` Peter Zijlstra
2010-05-11  9:53                                   ` Lin Ming
2010-05-11 15:17                                   ` Greg KH
2010-05-12  5:51                                   ` Paul Mundt
2010-05-12  8:37                                     ` Peter Zijlstra
2010-05-14  7:04                                       ` Paul Mundt
2010-05-11 10:09                   ` stephane eranian
2010-05-11 14:15             ` Borislav Petkov
2010-05-11 14:25               ` Peter Zijlstra
2010-05-11 15:37                 ` Borislav Petkov
2010-05-11 15:46                   ` Peter Zijlstra
2010-05-10 23:54           ` Corey Ashford
2010-05-11  6:50             ` Peter Zijlstra
2010-05-11  2:43           ` Lin Ming
2010-05-11  6:35             ` Peter Zijlstra
