public inbox for linux-kernel@vger.kernel.org
From: Lin Ming <ming.m.lin@intel.com>
To: Andi Kleen <andi@firstfloor.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Ingo Molnar <mingo@elte.hu>,
	Stephane Eranian <eranian@google.com>,
	lkml <linux-kernel@vger.kernel.org>,
	Frederic Weisbecker <fweisbec@gmail.com>,
	Arjan van de Ven <arjan@infradead.org>
Subject: Re: [RFC PATCH 2/3 v2] perf: Implement Nehalem uncore pmu
Date: Sun, 21 Nov 2010 22:04:19 +0800	[thread overview]
Message-ID: <1290348259.2245.172.camel@localhost> (raw)
In-Reply-To: <8e9ff9280b0c4a059bc82b5c4a629897.squirrel@www.firstfloor.org>

On Sun, 2010-11-21 at 20:46 +0800, Andi Kleen wrote:
> >
> > 2. Uncore pmu NMI handling
> >
> > All four cores are programmed to receive the uncore counter overflow
> > interrupt. The NMI handler (running on one of the four cores) handles
> > all counters enabled by all four cores.
> 
> Really for uncore monitoring there is no need to use an NMI handler.
> You can't profile a core anyways, so you can just delay the reporting
> a little bit. It may simplify the code to not use one here
> and just use an ordinary handler.

OK, I can use an ordinary interrupt handler here.

> 
> In general since there is already much trouble with overloaded
> NMI events avoiding new NMIs is a good idea.
> 
> 
> 
> > +
> > +static struct node_hw_events *uncore_events[MAX_NUMNODES];
> 
> Don't declare static arrays with MAX_NUMNODES, that number can be
> very large and cause unnecessary bloat. Better use per CPU data or similar
> (e.g. with  alloc_percpu)

What I really need here is per-physical-CPU (per-package) data; is
alloc_percpu enough for that?

> 
> > +	/*
> > +	 * The hw event starts counting from this event offset,
> > +	 * mark it to be able to extract future deltas:
> > +	 */
> > +	local64_set(&hwc->prev_count, (u64)-left);
> 
> Your use of local* seems dubious. That is only valid if it's really
> all on the same CPU. Is that really true?

Good catch! That is not true.

The interrupt handler may run on one core while the data
(hwc->prev_count) is accessed from another core.

Any idea how to set this cross-core data?
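
For illustration, the cross-core-safe version of this prev_count/delta
pattern would use full atomics (in the kernel, atomic64_t with
atomic64_cmpxchg()) instead of local64_t, whose operations are only
valid when everything runs on the same CPU. A minimal userspace sketch
with C11 atomics; the hw_counter type and names below are illustrative,
not the actual perf structures:

```c
#include <stdatomic.h>
#include <stdint.h>

/*
 * Userspace sketch of the prev_count/delta accounting, made safe for
 * readers and writers on different cores by using real atomics.  In
 * the kernel this would be atomic64_t + atomic64_cmpxchg() instead of
 * local64_t.  "hw_counter" is a hypothetical stand-in for the perf
 * hw_perf_event fields discussed above.
 */
typedef struct {
	_Atomic uint64_t prev_count;	/* last raw hardware value seen */
	_Atomic uint64_t count;		/* accumulated event count */
} hw_counter;

/* Fold the delta since prev_count into count; callable from any core. */
static void counter_update(hw_counter *c, uint64_t new_raw)
{
	uint64_t prev = atomic_load(&c->prev_count);

	/* Claim the range [prev, new_raw); retry if another core won. */
	while (!atomic_compare_exchange_weak(&c->prev_count, &prev, new_raw))
		;
	atomic_fetch_add(&c->count, new_raw - prev);
}
```

If two cores race, compare_exchange makes exactly one of them account
each delta, so no events are counted twice or lost.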

> 
> > +static int uncore_pmu_add(struct perf_event *event, int flags)
> > +{
> > +	int node = numa_node_id();
> 
> this should be still package id

Understood; this is on my TODO list.
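
For reference, in the kernel this would simply be
topology_physical_package_id(cpu). The arithmetic below is only to
illustrate where the package id comes from: it is the high bits of the
APIC ID above the SMT and core fields. The bit widths are assumptions
for a Nehalem-like part (2 threads/core, 4 cores/package), and the
helper name is hypothetical:

```c
/*
 * Illustration only: derive the physical package id from an APIC ID
 * given the SMT and core field widths.  In-kernel code should use
 * topology_physical_package_id(cpu) rather than open-coding this.
 */
static unsigned int package_id(unsigned int apic_id,
			       unsigned int smt_bits,
			       unsigned int core_bits)
{
	return apic_id >> (smt_bits + core_bits);
}
```

With smt_bits = 1 and core_bits = 2 (4 cores, 2 threads each), APIC
IDs 0-7 map to package 0 and 8-15 to package 1, which is exactly the
granularity the uncore needs, unlike numa_node_id().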

> 
> > +	/* Check CPUID signatures: 06_1AH, 06_1EH, 06_1FH */
> > +	model = eax.split.model | (eax.split.ext_model << 4);
> > +	if (eax.split.family != 6 || (model != 0x1A && model != 0x1E && model != 0x1F))
> > +		return;
> 
> You can just get that from boot_cpu_data, no need to call cpuid

Nice, will use it.
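
The check then reduces to comparing boot_cpu_data.x86 and
boot_cpu_data.x86_model. The decode itself is just bit-fiddling on
CPUID leaf 1's EAX; a standalone userspace sketch, with a hypothetical
helper name:

```c
/*
 * Decode family/model from a raw CPUID.1:EAX value and match the
 * Nehalem 06_1Ah/06_1Eh/06_1Fh signatures checked by the patch.
 * The extended-model nibble is combined as model | (ext_model << 4),
 * as the patch does.  In-kernel code gets the same numbers already
 * decoded in boot_cpu_data.x86 and boot_cpu_data.x86_model.
 */
static int is_nehalem(unsigned int eax)
{
	unsigned int family = (eax >> 8) & 0xf;
	unsigned int model  = ((eax >> 4) & 0xf) | (((eax >> 16) & 0xf) << 4);

	return family == 6 &&
	       (model == 0x1A || model == 0x1E || model == 0x1F);
}
```

For example, a Nehalem-EP signature of 0x000106A5 decodes to family 6,
model 0x1A and matches, while a Westmere 0x000206C2 (model 0x2C) does
not.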

> 
> > +#include <linux/perf_event.h>
> > +#include <linux/capability.h>
> > +#include <linux/notifier.h>
> > +#include <linux/hardirq.h>
> > +#include <linux/kprobes.h>
> > +#include <linux/module.h>
> > +#include <linux/kdebug.h>
> > +#include <linux/sched.h>
> > +#include <linux/uaccess.h>
> > +#include <linux/slab.h>
> > +#include <linux/highmem.h>
> > +#include <linux/cpu.h>
> > +#include <linux/bitops.h>
> 
> Do you really need all these includes?

Only

#include <linux/perf_event.h>
#include <linux/kprobes.h>
#include <linux/hardirq.h>
#include <linux/slab.h>

are needed.

Thanks for the comments.
Lin Ming



Thread overview: 33+ messages
2010-11-21 12:01 [RFC PATCH 2/3 v2] perf: Implement Nehalem uncore pmu Lin Ming
2010-11-21 12:46 ` Andi Kleen
2010-11-21 14:04   ` Lin Ming [this message]
2010-11-21 17:00     ` Andi Kleen
2010-11-21 17:44     ` Peter Zijlstra
2010-11-23 10:00       ` Stephane Eranian
2010-11-25  0:24         ` Lin Ming
2010-11-25  6:09           ` Peter Zijlstra
2010-11-25  6:27             ` Lin Ming
2010-11-25  8:48             ` Stephane Eranian
2010-11-25 18:20             ` Andi Kleen
2010-11-25 21:10               ` Stephane Eranian
2010-11-24  9:55       ` Lin Ming
2010-11-23 10:17 ` Stephane Eranian
2010-11-24  1:33   ` Lin Ming
2010-11-26  5:15   ` Lin Ming
2010-11-26  8:18     ` Stephane Eranian
2010-11-26  8:29       ` Lin Ming
2010-11-26  8:33       ` Stephane Eranian
2010-11-26  9:00         ` Lin Ming
2010-11-26 10:06           ` Stephane Eranian
2010-12-01  3:21             ` Lin Ming
2010-12-01 13:04               ` Stephane Eranian
2010-12-02  5:26                 ` Lin Ming
2010-11-26 11:24       ` Peter Zijlstra
2010-11-26 11:25         ` Stephane Eranian
2010-11-26 11:36           ` Peter Zijlstra
2010-11-26 11:41             ` Stephane Eranian
2010-11-26 16:25               ` Lin Ming
2010-12-01  3:28             ` Lin Ming
2010-12-01 11:37               ` Peter Zijlstra
2010-12-01 14:08               ` Andi Kleen
2010-12-01 14:18                 ` Peter Zijlstra
