From: Lin Ming <ming.m.lin@intel.com>
To: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andi Kleen <andi@firstfloor.org>, Ingo Molnar <mingo@elte.hu>,
Stephane Eranian <eranian@google.com>,
lkml <linux-kernel@vger.kernel.org>,
Frederic Weisbecker <fweisbec@gmail.com>,
Arjan van de Ven <arjan@infradead.org>
Subject: Re: [RFC PATCH 2/3 v2] perf: Implement Nehalem uncore pmu
Date: Wed, 24 Nov 2010 17:55:13 +0800 [thread overview]
Message-ID: <1290592513.2405.78.camel@minggr.sh.intel.com> (raw)
In-Reply-To: <1290361473.2153.39.camel@laptop>
On Mon, 2010-11-22 at 01:44 +0800, Peter Zijlstra wrote:
> On Sun, 2010-11-21 at 22:04 +0800, Lin Ming wrote:
> > On Sun, 2010-11-21 at 20:46 +0800, Andi Kleen wrote:
> > > >
> > > > 2. Uncore pmu NMI handling
> > > >
> > > > All 4 cores are programmed to receive the uncore counter overflow
> > > > interrupt. The NMI handler (running on one of the 4 cores) handles
> > > > all counters enabled by all 4 cores.
> > >
> > > Really for uncore monitoring there is no need to use an NMI handler.
> > > You can't profile a core anyways, so you can just delay the reporting
> > > a little bit. It may simplify the code to not use one here
> > > and just use an ordinary handler.
> >
> > OK, I can use an ordinary interrupt handler here.
>
> Does the hardware actually allow using a different interrupt source?
>
> > >
> > > In general, since there is already much trouble with overloaded
> > > NMI events, avoiding new NMIs is a good idea.
> > >
> > >
> > >
> > > > +
> > > > +static struct node_hw_events *uncore_events[MAX_NUMNODES];
> > >
> > > Don't declare static arrays with MAX_NUMNODES; that number can be
> > > very large and cause unnecessary bloat. Better to use per-CPU data or
> > > something similar (e.g. alloc_percpu).
> >
> > What I really need is per-physical-cpu data here; is alloc_percpu enough?
>
> Nah, simply manually allocate bits using kmalloc_node(), that's
> something I still need to fix in Andi's patches as well.
I'm rewriting this to follow the AMD NB events allocation.
Thanks,
Lin Ming
>
> > > > + /*
> > > > + * The hw event starts counting from this event offset,
> > > > +	 * mark it to be able to extract future deltas:
> > > > + */
> > > > + local64_set(&hwc->prev_count, (u64)-left);
> > >
> > > Your use of local* seems dubious. That is only valid if it's really
> > > all on the same CPU. Is that really true?
> >
> > Good catch! That is not true.
> >
> > The interrupt handler runs on one core, while the data
> > (hwc->prev_count) may be accessed on another core.
> >
> > Any idea to set this cross-core data?
>
> IIRC you can steer the uncore interrupts (it has a mask somewhere);
> simply steer everything to the first cpu in the nodemask?
>
>
>