From: ashwinc@codeaurora.org (Ashwin Chaugule)
To: linux-arm-kernel@lists.infradead.org
Subject: [RFC] Extending ARM perf-events for multiple PMUs
Date: Mon, 11 Apr 2011 13:29:21 -0400
Message-ID: <4DA33A71.6010804@codeaurora.org>
In-Reply-To: <1302282912.5758.25.camel@e102144-lin.cambridge.arm.com>

Hi Will,
Thanks for starting the discussion here.

On 4/8/2011 1:15 PM, Will Deacon wrote:
> 
>   (1) CPU-aware PMUs
> 
>       This type of PMU is typically per-CPU and accessed via co-processor
>       instructions. Actions may be delivered as PPIs. Events scheduled onto
>       a CPU-aware PMU can be grouped, possibly with events scheduled for other
>       per-CPU PMUs on the same CPU. An action delivered by one of these PMUs
>       can *always* be attributed to a specific CPU but not necessarily a
>       specific task. Accessing a CPU-aware PMU is a synchronous operation.
>

I didn't understand when an action would not be attributable to a task
in this category. If we know which CPU "enabled" the event, this should
be possible, shouldn't it?


>   (2) System PMUs
> 
>       System PMUs are typically outside of the CPU domain. Bus monitors, GPU
>       counters and external L2 cache controller monitors are all system PMUs.
>       Actions delivered by these PMUs cannot be attributed to a particular CPU
>       and certainly cannot be associated with a particular piece of code. They
>       are memory-mapped and cannot be grouped with other PMUs of any type.
>       Accesses to a system PMU may be asynchronous.
> 
>       System PMUs can be further split up into `counting' and `filtering'
>       PMUs:
> 
>       (i) Counting PMUs
> 
>           Counting PMUs increment a counter whenever a particular event occurs
> 	  and can deliver an action periodically (for example, on overflow or
> 	  after a certain amount of time has passed). The event types are
> 	  hardwired as particular, discrete events such as `cycles' or
> 	  `misses'.
> 
>       (ii) Filtering PMUs
> 
>           Filtering PMUs respond to a query. For example, `generate an action
> 	  whenever you see a bus access which fits the following criteria'. The
> 	  action may simply be to increment a counter, in which case this PMU
> 	  can act as a highly configurable counting PMU, where the event types
> 	  are dynamic.
> 
> Now, we currently support the core CPU PMU, which is obviously a CPU-aware PMU
> that generates interrupts as actions. Another example of a CPU-aware PMU is
> the VFP PMU in Qualcomm's Scorpion. The next step (moving outwards from the
> core) is to add support for L2 cache controllers. I expect most of these to be
> Counting System PMUs, although I can envisage them being CPU-aware if built
> into the core with enough extra hardware.

For the Qualcomm L2CC, the PMU can be configured to filter events based
on specific masters. That would make it a CPU-aware PMU, although it is
NOT per-core and it triggers SPIs.

In that case, I found it quite ugly to try to reuse the per-CPU data
structures, especially in the interrupt handler, since the interrupt
can trigger on a CPU where the event wasn't enabled. A cleaner approach
was to use a separate struct pmu, roughly along the lines of the sketch
below. However, I agree that this approach would lead to several PMUs
popping up in arch/arm.
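
Here is a rough sketch of what I mean. This is only an illustration,
not the actual driver: the "l2cc" name, the register offsets, the IRQ
number and the counter count are all made up, and the ioremap of the
controller is omitted. The point is that all bookkeeping lives in the
PMU's own structure, so the interrupt handler works regardless of
which CPU the SPI lands on:

#include <linux/bitops.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/perf_event.h>
#include <linux/spinlock.h>

#define L2CC_NUM_COUNTERS	4			/* made up */
#define L2CC_OVSR		0x400			/* made-up overflow status reg */
#define L2CC_CNT(idx)		(0x420 + 8 * (idx))	/* made-up counter regs */
#define L2CC_SPI_IRQ		200			/* made up; SoC-specific */

struct l2cc_pmu {
	struct pmu		pmu;
	void __iomem		*base;	/* ioremap of the controller (omitted here) */
	spinlock_t		lock;
	/* One slot per hardware counter; no per-CPU indirection. */
	struct perf_event	*events[L2CC_NUM_COUNTERS];
};

static struct l2cc_pmu l2cc_pmu;

static void l2cc_pmu_update(struct l2cc_pmu *l2cc, struct perf_event *event,
			    int idx)
{
	u64 prev, now;

	do {
		prev = local64_read(&event->hw.prev_count);
		now = readl_relaxed(l2cc->base + L2CC_CNT(idx));
	} while (local64_cmpxchg(&event->hw.prev_count, prev, now) != prev);

	local64_add((now - prev) & 0xffffffff, &event->count);
}

/*
 * The SPI can be routed to any CPU, not necessarily the one that enabled
 * the event, so the handler only looks at the PMU's own state and never
 * touches per-CPU data.
 */
static irqreturn_t l2cc_pmu_irq(int irq, void *dev)
{
	struct l2cc_pmu *l2cc = dev;
	unsigned long flags;
	u32 ovsr;
	int idx;

	spin_lock_irqsave(&l2cc->lock, flags);
	ovsr = readl_relaxed(l2cc->base + L2CC_OVSR);
	writel_relaxed(ovsr, l2cc->base + L2CC_OVSR);	/* ack the overflows */

	for (idx = 0; idx < L2CC_NUM_COUNTERS; idx++) {
		struct perf_event *event = l2cc->events[idx];

		if (event && (ovsr & BIT(idx)))
			l2cc_pmu_update(l2cc, event, idx);
	}
	spin_unlock_irqrestore(&l2cc->lock, flags);

	return ovsr ? IRQ_HANDLED : IRQ_NONE;
}

static int __init l2cc_pmu_init(void)
{
	int ret;

	spin_lock_init(&l2cc_pmu.lock);

	ret = request_irq(L2CC_SPI_IRQ, l2cc_pmu_irq, 0, "l2cc_pmu", &l2cc_pmu);
	if (ret)
		return ret;

	/*
	 * .event_init/.add/.del/.start/.stop/.read would be filled in with
	 * callbacks that use l2cc_pmu.events[] and l2cc_pmu.lock instead of
	 * the per-CPU structures in arch/arm/kernel/perf_event.c.
	 */
	return perf_pmu_register(&l2cc_pmu.pmu, "l2cc", -1);
}
device_initcall(l2cc_pmu_init);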

So, I think we could add another category for such highly configurable
PMUs, which are not per-core but have enough extra h/w to make them
CPU-aware. These need to be treated differently by ARM perf, because
they can't really use the per-CPU data structures of the CPU-aware PMUs
and as such can't easily reuse many of the existing functions.

In fact, most of the Qualcomm PMUs (bus, fabric, etc.) will fall under
this new category. At first glance they would appear to fall under the
System PMU (counting) category, but they don't, because of the extra
h/w logic that allows origin filtering of events.

Also, having all this origin-filtering logic helps us track per-process
events on these PMUs, for which we need extra functions to decide how
to allocate and configure counters based on which context (task or CPU)
the event is enabled in.
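
For example (again just a sketch that builds on the one above, with
made-up encodings for the master-ID filter bits), the origin filter
could be chosen from the context the event was opened in, and the
->add() callback would then write it into the counter's filter
register before enabling the counter:

/* Made-up encodings for the master-ID filter bits of a counter. */
#define L2CC_ORIG_ALL		0xffU		/* count transactions from any master */
#define L2CC_ORIG_CPU(cpu)	(1U << (cpu))	/* count only this core's transactions */

static u32 l2cc_origin_filter(struct perf_event *event)
{
	/*
	 * event->cpu >= 0: the event is bound to a single CPU context,
	 * so only that core's transactions should be counted.
	 */
	if (event->cpu >= 0)
		return L2CC_ORIG_CPU(event->cpu);

	/*
	 * Per-task event (event->cpu == -1): the task can migrate, so
	 * either match all masters here, or have the ->add() callback
	 * narrow the filter to smp_processor_id() each time the event
	 * is scheduled onto a counter.
	 */
	return L2CC_ORIG_ALL;
}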

Cheers,
Ashwin


-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.

