From: Lin Ming <ming.m.lin@intel.com>
To: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>,
Ingo Molnar <mingo@elte.hu>, Andi Kleen <andi@firstfloor.org>,
lkml <linux-kernel@vger.kernel.org>,
Frederic Weisbecker <fweisbec@gmail.com>,
Arjan van de Ven <arjan@infradead.org>
Subject: Re: [RFC PATCH 2/3 v2] perf: Implement Nehalem uncore pmu
Date: Wed, 01 Dec 2010 11:28:43 +0800 [thread overview]
Message-ID: <1291174123.2405.228.camel@minggr.sh.intel.com> (raw)
In-Reply-To: <1290771419.2145.137.camel@laptop>

On Fri, 2010-11-26 at 19:36 +0800, Peter Zijlstra wrote:
> On Fri, 2010-11-26 at 12:25 +0100, Stephane Eranian wrote:
> > On Fri, Nov 26, 2010 at 12:24 PM, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
> > > On Fri, 2010-11-26 at 09:18 +0100, Stephane Eranian wrote:
> > >
> > >> In the perf_event model, given that any one of the 4 cores can be used
> > >> to program uncore events, you have no choice but to broadcast to all
> > >> 4 cores. Each has to demultiplex and figure out which of its counters
> > >> have overflowed.
> > >
> > > Not really, you can redirect all these events to the first online cpu of
> > > the node.
> > >
> > > You can re-write event->cpu in pmu::event_init(), and register cpu
> > > hotplug notifiers to migrate the state around.
> > >
> > I am sure you could. But then the user thinks the event is controlled
> > from CPUx when it's actually from CPUz. I am sure it can work but
> > that's confusing, especially interrupt-wise.
>
> Well, it's either that or keeping node-wide state like we do for AMD
> and serializing everything from there.
>
> And I'm not sure which is more expensive, steering the interrupt to one
> core only or broadcasting every interrupt; I'd favour the first
> approach.
>
> The whole thing is a node-wide resource, so the user needs to think in
> nodes anyway, we already do a cpu->node mapping for identifying the
> thing.
How about a new sub-command for node-wide event statistics?
Something like:

perf node -n <node> -e <event>