public inbox for linux-kernel@vger.kernel.org
From: Corey Ashford <cjashfor@linux.vnet.ibm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@elte.hu>, Paul Mackerras <paulus@samba.org>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 5/6] perf_counter: add more context information
Date: Mon, 06 Apr 2009 13:16:36 -0700	[thread overview]
Message-ID: <49DA6324.9080801@linux.vnet.ibm.com> (raw)
In-Reply-To: <1239044818.798.4775.camel@twins>

Peter Zijlstra wrote:
> On Mon, 2009-04-06 at 11:53 -0700, Corey Ashford wrote:
>> Peter Zijlstra wrote:
>>> On Mon, 2009-04-06 at 13:01 +0200, Peter Zijlstra wrote:
>>>> On Fri, 2009-04-03 at 11:25 -0700, Corey Ashford wrote:
>>>>> Peter Zijlstra wrote:
>>>>>> On Thu, 2009-04-02 at 11:12 +0200, Peter Zijlstra wrote:
>>>>>>> plain text document attachment (perf_counter_callchain_context.patch)
>>>>>>> Put in counts to tell which ips belong to what context.
>>>>>>>
>>>>>>>   -----
>>>>>>>    | |  hv
>>>>>>>    | --
>>>>>>> nr | |  kernel
>>>>>>>    | --
>>>>>>>    | |  user
>>>>>>>   -----
>>>>>> Right, just realized that PERF_RECORD_IP needs something similar if one
>>>>>> is not able to derive the context from the IP itself...
>>>>>>
>>>>> Three individual bits would suffice, or you could use a two-bit code -
>>>>> 00 = user
>>>>> 01 = kernel
>>>>> 10 = hypervisor
>>>>> 11 = reserved (or perhaps unknown)
>>>>>
>>>>> Unfortunately, because of alignment, it would need to take up another
>>>>> 64-bit word, wouldn't it?  Too bad you cannot sneak the bits into the
>>>>> IP in a machine-independent way.
>>>>>
>>>>> And since you probably need a separate word, that effectively doubles 
>>>>> the amount of space taken up by IP samples (if we add a "no event 
>>>>> header" option).  Should we add another bit in the record_type field - 
>>>>> PERF_RECORD_IP_LEVEL (or similar) so that user-space apps don't have to 
>>>>> get this if they don't need it?
>>>> If we limit the event size to 64k (surely enough, right? :-), then we
>>>> have 16 more bits to play with in the header, and we could do something
>>>> like the below.
>>>>
>>>> A further possibility would also be to add an overflow bit in there,
>>>> making the full 32bit PERF_RECORD space available to output events as
>>>> well.
>>>>
>>>> Index: linux-2.6/include/linux/perf_counter.h
>>>> ===================================================================
>>>> --- linux-2.6.orig/include/linux/perf_counter.h
>>>> +++ linux-2.6/include/linux/perf_counter.h
>>>> @@ -201,9 +201,17 @@ struct perf_counter_mmap_page {
>>>>  	__u32   data_head;		/* head in the data section */
>>>>  };
>>>>  
>>>> +enum {
>>>> +	PERF_EVENT_LEVEL_HV	= 0,
>>>> +	PERF_EVENT_LEVEL_KERNEL = 1,
>>>> +	PERF_EVENT_LEVEL_USER	= 2,
>>>> +};
>>>> +
>>>>  struct perf_event_header {
>>>>  	__u32	type;
>>>> -	__u32	size;
>>>> +	__u16	level		:  2,
>>>> +		__reserved	: 14;
>>>> +	__u16	size;
>>>>  };
>>> Except we should probably use masks again instead of bitfields so that
>>> the thing is portable when streamed to disk, such as would be common
>>> with splice().
>> One downside of this approach is that you if you specify "no header" 
>> (currently not possible, but maybe later?), you will not be able to get 
>> the level bits.
> 
> Would this be desirable? I know we've mentioned it before, but it would
> mean one cannot mix various event types (currently that means !mmap and
> callchain with difficulty).

I think it would.  For one use case I'm working on right now, simple 
profiling, all I need are IPs.  If I could omit the header, that would 
reduce the frequency of SIGIOs by a factor of three and make it faster 
to read up the IPs when the SIGIOs occur.

I realize that it makes it impossible to mix record types with the 
header removed, and makes skipping over the call chain data a bit more 
difficult (but not rocket science).

It could be made an error for the caller to specify both "no header" and 
perf_counter_hw_event.mmap|munmap.


> 
> As long as we mandate this header, we can have 16 misc bits.
> 

True.

- Corey



Thread overview: 58+ messages
2009-04-02  9:11 [PATCH 0/6] more perf_counter stuff Peter Zijlstra
2009-04-02  9:11 ` [PATCH 1/6] perf_counter: move the event overflow output bits to record_type Peter Zijlstra
2009-04-02 11:28   ` Ingo Molnar
2009-04-02 11:43   ` Ingo Molnar
2009-04-02 11:47     ` Peter Zijlstra
2009-04-02 12:03   ` [tip:perfcounters/core] " Peter Zijlstra
2009-04-02 22:33   ` [PATCH 1/6] " Corey Ashford
2009-04-02 23:27     ` Corey Ashford
2009-04-03  6:50       ` Peter Zijlstra
2009-04-03  7:30         ` Corey Ashford
2009-04-02  9:12 ` [PATCH 2/6] RFC perf_counter: singleshot support Peter Zijlstra
2009-04-02 10:51   ` Ingo Molnar
2009-04-02 11:48     ` Peter Zijlstra
2009-04-02 12:26       ` Ingo Molnar
2009-04-02 21:23         ` Paul Mackerras
2009-04-02 12:18   ` Peter Zijlstra
2009-04-02 18:10     ` Ingo Molnar
2009-04-02 18:33       ` Peter Zijlstra
2009-04-02  9:12 ` [PATCH 3/6] perf_counter: per event wakeups Peter Zijlstra
2009-04-02 11:32   ` Ingo Molnar
2009-04-02 12:03   ` [tip:perfcounters/core] " Peter Zijlstra
2009-04-02  9:12 ` [PATCH 4/6] perf_counter: kerneltop: update to new ABI Peter Zijlstra
2009-04-02 12:03   ` [tip:perfcounters/core] " Peter Zijlstra
2009-04-02 13:35     ` Jaswinder Singh Rajput
2009-04-02 13:59       ` Jaswinder Singh Rajput
2009-04-02 18:11         ` Ingo Molnar
2009-04-02 18:22           ` Jaswinder Singh Rajput
2009-04-02 18:28             ` Ingo Molnar
2009-04-02 18:38               ` Jaswinder Singh Rajput
2009-04-02 19:20                 ` Ingo Molnar
2009-04-02 18:51               ` Jaswinder Singh Rajput
2009-04-02 18:32             ` Jaswinder Singh Rajput
2009-04-02  9:12 ` [PATCH 5/6] perf_counter: add more context information Peter Zijlstra
2009-04-02 11:36   ` Ingo Molnar
2009-04-02 11:46     ` Peter Zijlstra
2009-04-02 18:16       ` Ingo Molnar
2009-04-02 11:48     ` Peter Zijlstra
2009-04-02 18:18       ` Ingo Molnar
2009-04-02 18:29         ` Peter Zijlstra
2009-04-02 18:34           ` Ingo Molnar
2009-04-02 18:42             ` Peter Zijlstra
2009-04-02 19:19               ` Ingo Molnar
2009-04-02 12:04   ` [tip:perfcounters/core] " Peter Zijlstra
2009-04-03 12:50   ` [PATCH 5/6] " Peter Zijlstra
2009-04-03 18:25     ` Corey Ashford
2009-04-06 11:01       ` Peter Zijlstra
2009-04-06 11:07         ` Peter Zijlstra
2009-04-06 18:53           ` Corey Ashford
2009-04-06 19:06             ` Peter Zijlstra
2009-04-06 20:16               ` Corey Ashford [this message]
2009-04-06 20:46                 ` Peter Zijlstra
2009-04-06 21:15                   ` Corey Ashford
2009-04-06 21:21                     ` Peter Zijlstra
2009-04-06 21:33                       ` Corey Ashford
2009-04-07  7:11                         ` Peter Zijlstra
2009-04-07 16:27                           ` Corey Ashford
2009-04-02  9:12 ` [PATCH 6/6] perf_counter: update mmap() counter read Peter Zijlstra
2009-04-02 12:04   ` [tip:perfcounters/core] " Peter Zijlstra
