public inbox for linux-kernel@vger.kernel.org
* [PATCH 1/5] perf: Provide a proper stop action for software events
  2010-06-10  3:49 [PATCH 0/5] perf events finer grained context instrumentation / context exclusion Frederic Weisbecker
@ 2010-06-10  3:49 ` Frederic Weisbecker
  2010-06-10 10:46   ` Peter Zijlstra
  0 siblings, 1 reply; 20+ messages in thread
From: Frederic Weisbecker @ 2010-06-10  3:49 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: LKML, Frederic Weisbecker, Ingo Molnar, Peter Zijlstra,
	Arnaldo Carvalho de Melo, Paul Mackerras, Stephane Eranian,
	Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

In order to introduce new context exclusions, software events will
eventually have to be stopped on demand. We'll want perf_event_stop()
to act on every event.

To achieve this, remove the stub stop/start pmu callbacks of software
and tracepoint events.

This may even optimize the case of hardware and software events
running at the same time: we now stop/start all hardware events
only when we reset a hardware event's period, and no longer when
we reset a software event's period.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Zhang Yanmin <yanmin_zhang@linux.intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/perf_event.c |   29 ++++++++++++++++-------------
 1 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index c772a3d..5c004f7 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1541,11 +1541,23 @@ static void perf_adjust_period(struct perf_event *event, u64 nsec, u64 count)
 	hwc->sample_period = sample_period;
 
 	if (local64_read(&hwc->period_left) > 8*sample_period) {
-		perf_disable();
-		perf_event_stop(event);
+		bool software_event = is_software_event(event);
+
+		/*
+		 * Only hardware events need their irq period to be
+		 * reprogrammed
+		 */
+		if (!software_event) {
+			perf_disable();
+			perf_event_stop(event);
+		}
+
 		local64_set(&hwc->period_left, 0);
-		perf_event_start(event);
-		perf_enable();
+
+		if (!software_event) {
+			perf_event_start(event);
+			perf_enable();
+		}
 	}
 }
 
@@ -4286,16 +4298,9 @@ static void perf_swevent_void(struct perf_event *event)
 {
 }
 
-static int perf_swevent_int(struct perf_event *event)
-{
-	return 0;
-}
-
 static const struct pmu perf_ops_generic = {
 	.enable		= perf_swevent_enable,
 	.disable	= perf_swevent_disable,
-	.start		= perf_swevent_int,
-	.stop		= perf_swevent_void,
 	.read		= perf_swevent_read,
 	.unthrottle	= perf_swevent_void, /* hwc->interrupts already reset */
 };
@@ -4578,8 +4583,6 @@ static int swevent_hlist_get(struct perf_event *event)
 static const struct pmu perf_ops_tracepoint = {
 	.enable		= perf_trace_enable,
 	.disable	= perf_trace_disable,
-	.start		= perf_swevent_int,
-	.stop		= perf_swevent_void,
 	.read		= perf_swevent_read,
 	.unthrottle	= perf_swevent_void,
 };
-- 
1.6.2.3


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/5] perf: Provide a proper stop action for software events
  2010-06-10  3:49 ` [PATCH 1/5] perf: Provide a proper stop action for software events Frederic Weisbecker
@ 2010-06-10 10:46   ` Peter Zijlstra
  2010-06-10 11:10     ` Peter Zijlstra
  2010-06-10 12:06     ` Ingo Molnar
  0 siblings, 2 replies; 20+ messages in thread
From: Peter Zijlstra @ 2010-06-10 10:46 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Ingo Molnar, LKML, Arnaldo Carvalho de Melo, Paul Mackerras,
	Stephane Eranian, Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

On Thu, 2010-06-10 at 05:49 +0200, Frederic Weisbecker wrote:
> In order to introduce new context exclusions, software events will
> have to eventually stop when needed. We'll want perf_event_stop() to
> act on every events.
> 
> To achieve this, remove the stub stop/start pmu callbacks of software
> and tracepoint events.
> 
> This may even optimize the case of hardware and software events
> running at the same time: now we only stop/start all hardware
> events if we reset a hardware event period, not anymore with
> software events.
> 
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Stephane Eranian <eranian@google.com>
> Cc: Cyrill Gorcunov <gorcunov@gmail.com>
> Cc: Zhang Yanmin <yanmin_zhang@linux.intel.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> ---
>  kernel/perf_event.c |   29 ++++++++++++++++-------------
>  1 files changed, 16 insertions(+), 13 deletions(-)
> 
> diff --git a/kernel/perf_event.c b/kernel/perf_event.c
> index c772a3d..5c004f7 100644
> --- a/kernel/perf_event.c
> +++ b/kernel/perf_event.c
> @@ -1541,11 +1541,23 @@ static void perf_adjust_period(struct perf_event *event, u64 nsec, u64 count)
>  	hwc->sample_period = sample_period;
>  
>  	if (local64_read(&hwc->period_left) > 8*sample_period) {
> -		perf_disable();
> -		perf_event_stop(event);
> +		bool software_event = is_software_event(event);
> +
> +		/*
> +		 * Only hardware events need their irq period to be
> +		 * reprogrammed
> +		 */
> +		if (!software_event) {
> +			perf_disable();
> +			perf_event_stop(event);
> +		}
> +
>  		local64_set(&hwc->period_left, 0);
> -		perf_event_start(event);
> -		perf_enable();
> +
> +		if (!software_event) {
> +			perf_event_start(event);
> +			perf_enable();
> +		}
>  	}
>  }
>  
> @@ -4286,16 +4298,9 @@ static void perf_swevent_void(struct perf_event *event)
>  {
>  }
>  
> -static int perf_swevent_int(struct perf_event *event)
> -{
> -	return 0;
> -}
> -
>  static const struct pmu perf_ops_generic = {
>  	.enable		= perf_swevent_enable,
>  	.disable	= perf_swevent_disable,
> -	.start		= perf_swevent_int,
> -	.stop		= perf_swevent_void,
>  	.read		= perf_swevent_read,
>  	.unthrottle	= perf_swevent_void, /* hwc->interrupts already reset */
>  };
> @@ -4578,8 +4583,6 @@ static int swevent_hlist_get(struct perf_event *event)
>  static const struct pmu perf_ops_tracepoint = {
>  	.enable		= perf_trace_enable,
>  	.disable	= perf_trace_disable,
> -	.start		= perf_swevent_int,
> -	.stop		= perf_swevent_void,
>  	.read		= perf_swevent_read,
>  	.unthrottle	= perf_swevent_void,
>  };

I really don't like this.. we should be removing differences between
software and hardware pmu implementations, not add more :/

Something like the below would work, the only 'problem' is that it grows
hw_perf_event.

---
 include/linux/perf_event.h |    1 +
 kernel/perf_event.c        |   27 ++++++++++++++++++---------
 2 files changed, 19 insertions(+), 9 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 9073bde..2292659 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -531,6 +531,7 @@ struct hw_perf_event {
 		struct { /* software */
 			s64		remaining;
 			struct hrtimer	hrtimer;
+			int		stopped;
 		};
 #ifdef CONFIG_HAVE_HW_BREAKPOINT
 		/* breakpoint */
diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 403d180..14b691e 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -4113,6 +4113,9 @@ static int perf_swevent_match(struct perf_event *event,
 				struct perf_sample_data *data,
 				struct pt_regs *regs)
 {
+	if (event->hw.stopped)
+		return 0;
+
 	if (event->attr.type != type)
 		return 0;
 
@@ -4282,22 +4285,28 @@ static void perf_swevent_disable(struct perf_event *event)
 	hlist_del_rcu(&event->hlist_entry);
 }
 
-static void perf_swevent_void(struct perf_event *event)
+static void perf_swevent_throttle(struct perf_event *event)
 {
+	/* hwc->interrupts already reset */
 }
 
-static int perf_swevent_int(struct perf_event *event)
+static int perf_swevent_start(struct perf_event *event)
 {
+	event->hw.stopped = 0;
 	return 0;
 }
+
+static void perf_swevent_stop(struct perf_event *event)
+{
+	event->hw.stopped = 1;
+}
 
 static const struct pmu perf_ops_generic = {
 	.enable		= perf_swevent_enable,
 	.disable	= perf_swevent_disable,
-	.start		= perf_swevent_int,
-	.stop		= perf_swevent_void,
+	.start		= perf_swevent_start,
+	.stop		= perf_swevent_stop,
 	.read		= perf_swevent_read,
-	.unthrottle	= perf_swevent_void, /* hwc->interrupts already reset */
+	.unthrottle	= perf_swevent_throttle,
 };
 
 /*
@@ -4578,10 +4587,10 @@ static int swevent_hlist_get(struct perf_event *event)
 static const struct pmu perf_ops_tracepoint = {
 	.enable		= perf_trace_enable,
 	.disable	= perf_trace_disable,
-	.start		= perf_swevent_int,
-	.stop		= perf_swevent_void,
+	.start		= perf_swevent_start,
+	.stop		= perf_swevent_stop,
 	.read		= perf_swevent_read,
-	.unthrottle	= perf_swevent_void,
+	.unthrottle	= perf_swevent_throttle,
 };
 
 static int perf_tp_filter_match(struct perf_event *event,




* Re: [PATCH 1/5] perf: Provide a proper stop action for software events
  2010-06-10 10:46   ` Peter Zijlstra
@ 2010-06-10 11:10     ` Peter Zijlstra
  2010-06-10 16:12       ` Frederic Weisbecker
  2010-06-10 12:06     ` Ingo Molnar
  1 sibling, 1 reply; 20+ messages in thread
From: Peter Zijlstra @ 2010-06-10 11:10 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Ingo Molnar, LKML, Arnaldo Carvalho de Melo, Paul Mackerras,
	Stephane Eranian, Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

On Thu, 2010-06-10 at 12:46 +0200, Peter Zijlstra wrote:
> 
> Something like the below would work, the only 'problem' is that it grows
> hw_perf_event.

If we do the whole PAUSEd thing right, we'd not need this I think.


* Re: [PATCH 1/5] perf: Provide a proper stop action for software events
  2010-06-10 10:46   ` Peter Zijlstra
  2010-06-10 11:10     ` Peter Zijlstra
@ 2010-06-10 12:06     ` Ingo Molnar
  1 sibling, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2010-06-10 12:06 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Frederic Weisbecker, LKML, Arnaldo Carvalho de Melo,
	Paul Mackerras, Stephane Eranian, Cyrill Gorcunov, Zhang Yanmin,
	Steven Rostedt


* Peter Zijlstra <peterz@infradead.org> wrote:

> Something like the below would work, the only 'problem' is that it grows
> hw_perf_event.

> @@ -531,6 +531,7 @@ struct hw_perf_event {
>  		struct { /* software */
>  			s64		remaining;
>  			struct hrtimer	hrtimer;
> +			int		stopped;

IMO that's ok.

	Ingo


* Re: [PATCH 1/5] perf: Provide a proper stop action for software events
  2010-06-10 11:10     ` Peter Zijlstra
@ 2010-06-10 16:12       ` Frederic Weisbecker
  2010-06-10 16:16         ` Peter Zijlstra
  0 siblings, 1 reply; 20+ messages in thread
From: Frederic Weisbecker @ 2010-06-10 16:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, LKML, Arnaldo Carvalho de Melo, Paul Mackerras,
	Stephane Eranian, Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

On Thu, Jun 10, 2010 at 01:10:42PM +0200, Peter Zijlstra wrote:
> On Thu, 2010-06-10 at 12:46 +0200, Peter Zijlstra wrote:
> > 
> > Something like the below would work, the only 'problem' is that it grows
> > hw_perf_event.
> 
> If we do the whole PAUSEd thing right, we'd not need this I think.


It's not needed; moreover, software_pmu:stop/start() can be the same
as software_pmu:disable/enable(), without the need to add another check
in the fast path.

But we need perf_event_stop/start() to work on software events. And in
fact, now that we use hlist_del_init, it's safe, but a bit wasteful in
the period reset path. That's another problem and not a critical one, but
if you want to solve it by ripping out the differences between software
and hardware (which I agree with), we need a ->reset_period callback.



* Re: [PATCH 1/5] perf: Provide a proper stop action for software events
  2010-06-10 16:12       ` Frederic Weisbecker
@ 2010-06-10 16:16         ` Peter Zijlstra
  2010-06-10 16:29           ` Frederic Weisbecker
  2010-06-10 19:54           ` Frederic Weisbecker
  0 siblings, 2 replies; 20+ messages in thread
From: Peter Zijlstra @ 2010-06-10 16:16 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Ingo Molnar, LKML, Arnaldo Carvalho de Melo, Paul Mackerras,
	Stephane Eranian, Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

On Thu, 2010-06-10 at 18:12 +0200, Frederic Weisbecker wrote:
> On Thu, Jun 10, 2010 at 01:10:42PM +0200, Peter Zijlstra wrote:
> > On Thu, 2010-06-10 at 12:46 +0200, Peter Zijlstra wrote:
> > > 
> > > Something like the below would work, the only 'problem' is that it grows
> > > hw_perf_event.
> > 
> > If we do the whole PAUSEd thing right, we'd not need this I think.
> 
> 
> It's not needed, and moreover software_pmu:stop/start() can be the same
> than software:pmu:disable/enable() without the need to add another check
> in the fast path.
> 
> But we need perf_event_stop/start() to work on software events. And in fact
> now that we use the hlist_del_init, it's safe, but a bit wasteful in
> the period reset path. That's another problem that is not critical, but
> if you want to solve this by ripping the differences between software and
> hardware (which I agree with), we need a ->reset_period callback.
> 
Why? ->start() should reprogram the hardware, so a
->stop()/poke-at-state/->start() cycle is much more flexible.


* Re: [PATCH 1/5] perf: Provide a proper stop action for software events
  2010-06-10 16:16         ` Peter Zijlstra
@ 2010-06-10 16:29           ` Frederic Weisbecker
  2010-06-10 16:38             ` Peter Zijlstra
  2010-06-10 19:54           ` Frederic Weisbecker
  1 sibling, 1 reply; 20+ messages in thread
From: Frederic Weisbecker @ 2010-06-10 16:29 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, LKML, Arnaldo Carvalho de Melo, Paul Mackerras,
	Stephane Eranian, Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

On Thu, Jun 10, 2010 at 06:16:16PM +0200, Peter Zijlstra wrote:
> On Thu, 2010-06-10 at 18:12 +0200, Frederic Weisbecker wrote:
> > On Thu, Jun 10, 2010 at 01:10:42PM +0200, Peter Zijlstra wrote:
> > > On Thu, 2010-06-10 at 12:46 +0200, Peter Zijlstra wrote:
> > > > 
> > > > Something like the below would work, the only 'problem' is that it grows
> > > > hw_perf_event.
> > > 
> > > If we do the whole PAUSEd thing right, we'd not need this I think.
> > 
> > 
> > It's not needed, and moreover software_pmu:stop/start() can be the same
> > than software:pmu:disable/enable() without the need to add another check
> > in the fast path.
> > 
> > But we need perf_event_stop/start() to work on software events. And in fact
> > now that we use the hlist_del_init, it's safe, but a bit wasteful in
> > the period reset path. That's another problem that is not critical, but
> > if you want to solve this by ripping the differences between software and
> > hardware (which I agree with), we need a ->reset_period callback.
>
>
> Why? ->start() should reprogram the hardware, so a
> ->stop()/poke-at-state/->start() cycle is much more flexible.


Imagine you have several software and hardware events running on the
same cpu. Each time you reset the period for a software event, you do
a hw_pmu_disable()/hw_pmu_enable(), which writes/reads the hardware
registers for each hardware event, among other wasteful things.



* Re: [PATCH 1/5] perf: Provide a proper stop action for software events
  2010-06-10 16:29           ` Frederic Weisbecker
@ 2010-06-10 16:38             ` Peter Zijlstra
  2010-06-10 17:04               ` Frederic Weisbecker
  0 siblings, 1 reply; 20+ messages in thread
From: Peter Zijlstra @ 2010-06-10 16:38 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Ingo Molnar, LKML, Arnaldo Carvalho de Melo, Paul Mackerras,
	Stephane Eranian, Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

On Thu, 2010-06-10 at 18:29 +0200, Frederic Weisbecker wrote:

> Imagine you have several software and hardware events running on the
> same cpu. Each time you reset this period for a software event, you do
> a hw_pmu_disable() / hw_pmu_enable(), which writes/read the hardware
> register for each hardware events, amongst other wasteful things.

hw_perf_disable/enable() are on their way out. They should be replaced
with a struct pmu callback. We must remove all these weak functions if
we want to support multiple pmus.




* Re: [PATCH 1/5] perf: Provide a proper stop action for software events
  2010-06-10 16:38             ` Peter Zijlstra
@ 2010-06-10 17:04               ` Frederic Weisbecker
  0 siblings, 0 replies; 20+ messages in thread
From: Frederic Weisbecker @ 2010-06-10 17:04 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, LKML, Arnaldo Carvalho de Melo, Paul Mackerras,
	Stephane Eranian, Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

On Thu, Jun 10, 2010 at 06:38:34PM +0200, Peter Zijlstra wrote:
> On Thu, 2010-06-10 at 18:29 +0200, Frederic Weisbecker wrote:
> 
> > Imagine you have several software and hardware events running on the
> > same cpu. Each time you reset this period for a software event, you do
> > a hw_pmu_disable() / hw_pmu_enable(), which writes/read the hardware
> > register for each hardware events, amongst other wasteful things.
> 
> hw_perf_disable/enable() are on their way out. They should be replaced
> with a struct pmu callback. We must remove all these weak functions if
> we want to support multiple pmus.


Ok, that's good news.



* Re: [PATCH 1/5] perf: Provide a proper stop action for software events
  2010-06-10 16:16         ` Peter Zijlstra
  2010-06-10 16:29           ` Frederic Weisbecker
@ 2010-06-10 19:54           ` Frederic Weisbecker
  1 sibling, 0 replies; 20+ messages in thread
From: Frederic Weisbecker @ 2010-06-10 19:54 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, LKML, Arnaldo Carvalho de Melo, Paul Mackerras,
	Stephane Eranian, Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

On Thu, Jun 10, 2010 at 06:16:16PM +0200, Peter Zijlstra wrote:
> On Thu, 2010-06-10 at 18:12 +0200, Frederic Weisbecker wrote:
> > On Thu, Jun 10, 2010 at 01:10:42PM +0200, Peter Zijlstra wrote:
> > > On Thu, 2010-06-10 at 12:46 +0200, Peter Zijlstra wrote:
> > > > 
> > > > Something like the below would work, the only 'problem' is that it grows
> > > > hw_perf_event.
> > > 
> > > If we do the whole PAUSEd thing right, we'd not need this I think.
> > 
> > 
> > It's not needed, and moreover software_pmu:stop/start() can be the same
> > than software:pmu:disable/enable() without the need to add another check
> > in the fast path.
> > 
> > But we need perf_event_stop/start() to work on software events. And in fact
> > now that we use the hlist_del_init, it's safe, but a bit wasteful in
> > the period reset path. That's another problem that is not critical, but
> > if you want to solve this by ripping the differences between software and
> > hardware (which I agree with), we need a ->reset_period callback.
> > 
> Why? ->start() should reprogram the hardware, so a
> ->stop()/poke-at-state/->start() cycle is much more flexible.


Reconsidering the situation after remembering the race with software
events on period adjustment:

In fact, if we want to support start/stop on software events, we still
need the if (!software_event) check in perf_adjust_period(); otherwise,
start and stop may race with the hlist ops on a software event.

So stopping/starting software events there is both useless and dangerous.

What about keeping this software event check for now?
Once we have a pmu->disable_all()/enable_all(), this
can serve as a more appropriate check later.



* [PATCH 0/5 v3] perf events finer grained context instrumentation / context exclusion
@ 2010-06-12  7:34 Frederic Weisbecker
  2010-06-12  7:34 ` [PATCH 1/5] perf: Provide a proper stop action for software events Frederic Weisbecker
                   ` (4 more replies)
  0 siblings, 5 replies; 20+ messages in thread
From: Frederic Weisbecker @ 2010-06-12  7:34 UTC (permalink / raw)
  To: LKML
  Cc: LKML, Frederic Weisbecker, Ingo Molnar, Peter Zijlstra,
	Arnaldo Carvalho de Melo, Paul Mackerras, Stephane Eranian,
	Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

Hi,

In this new version, the weird hangs have been fixed. They were
due to some ACTIVE state checks that didn't handle the paused
mode.

Also new is a pmu->reserve callback to schedule an event
on the cpu without actually starting it (as if ->stop() had
just been called on it).
On x86 it is the same as the enable() callback, the decision
to activate the event being eventually handled by checking
PERF_EVENT_STATE_PAUSED. For software events it is a stub, as
they can be activated anytime in a lightweight fashion, without
the need to fight over a finite resource.

BTW, there is a quite handy way to make a diff between task
context and task + irq context profiling.

Just run:

sudo ./perf stat -r 10 -e task-clock -e task-clock:t -e cs -e cs:t \
	-e migrations -e migrations:t -e faults -e faults:t -e cycles \
	-e cycles:t -e instructions -e instructions:t -e branches \
	-e branches:t -e branch-misses -e branch-misses:t taskset 1 hackbench 1

(Did I just say handy?)

Example of result:

 Performance counter stats for 'taskset 1 hackbench 1' (10 runs):

         604,727182  task-clock-msecs         #      0,969 CPUs    ( +-   6,176% )
         604,727182  task-clock-msecs         #      0,969 CPUs    ( +-   6,176% )
              11584  context-switches         #      0,019 M/sec   ( +-  26,945% )
              11593  context-switches         #      0,019 M/sec   ( +-  26,909% )
                  1  CPU-migrations           #      0,000 M/sec   ( +-  61,464% )
                  0  CPU-migrations           #      0,000 M/sec   ( +- 100,000% )
               1844  page-faults              #      0,003 M/sec   ( +-   1,425% )
               1847  page-faults              #      0,003 M/sec   ( +-   1,423% )
          917442262  cycles                   #   1517,118 M/sec   ( +-   6,814% )  (scaled from 69,40%)
          908980892  cycles                   #   1503,126 M/sec   ( +-   5,807% )  (scaled from 68,51%)
          335812687  instructions             #      0,368 IPC     ( +-   6,977% )  (scaled from 73,77%)
          321284628  instructions             #      0,352 IPC     ( +-   6,377% )  (scaled from 20,59%)
           48956776  branches                 #     80,957 M/sec   ( +-   5,936% )  (scaled from 22,67%)
           48144741  branches                 #     79,614 M/sec   ( +-   6,480% )  (scaled from 21,68%)
            2310259  branch-misses            #      4,758 %       ( +-   9,698% )  (scaled from 15,11%)
            2200507  branch-misses            #      4,532 %       ( +-   9,294% )  (scaled from 15,35%)

        0,624082951  seconds time elapsed   ( +-   5,939% )


Most of the time, the instruction counter diff shows that irqs account
for about 0.01% with hackbench, which is within the noise. Something
that does more IO would probably be a more interesting example.

The thing is pullable there:

git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing.git
	perf/exclusion-4

Thanks.


Frederic Weisbecker (5):
  perf: Provide a proper stop action for software events
  perf: Support disable() after stop() on software events
  perf: Ability to enable in a paused mode
  perf: Introduce task, softirq and hardirq contexts exclusion
  perf: Support for task/softirq/hardirq exclusion on tools

 arch/x86/kernel/cpu/perf_event.c |    7 +-
 include/linux/perf_event.h       |   52 ++++++++-
 kernel/hw_breakpoint.c           |    1 +
 kernel/perf_event.c              |  257 +++++++++++++++++++++++++++++++-------
 kernel/softirq.c                 |    6 +
 kernel/trace/trace_event_perf.c  |    2 +-
 tools/perf/util/parse-events.c   |   37 ++++--
 7 files changed, 302 insertions(+), 60 deletions(-)



* [PATCH 1/5] perf: Provide a proper stop action for software events
  2010-06-12  7:34 [PATCH 0/5 v3] perf events finer grained context instrumentation / context exclusion Frederic Weisbecker
@ 2010-06-12  7:34 ` Frederic Weisbecker
  2010-06-12  9:43   ` Peter Zijlstra
  2010-06-12  7:34 ` [PATCH 2/5] perf: Support disable() after stop() on " Frederic Weisbecker
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 20+ messages in thread
From: Frederic Weisbecker @ 2010-06-12  7:34 UTC (permalink / raw)
  To: LKML
  Cc: LKML, Frederic Weisbecker, Ingo Molnar, Peter Zijlstra,
	Arnaldo Carvalho de Melo, Paul Mackerras, Stephane Eranian,
	Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

In order to introduce new context exclusions, software events will
eventually have to be stopped on demand. We'll want perf_event_stop()
to act on every event.

To achieve this, remove the stub stop/start pmu callbacks of software
and tracepoint events, which were there to fix a race in
perf_adjust_period, and add an explicit check so that only hardware
events are reset through the start/stop callbacks.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Zhang Yanmin <yanmin_zhang@linux.intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/perf_event.c |   30 +++++++++++++++++-------------
 1 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index c772a3d..95a56ed 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1541,11 +1541,24 @@ static void perf_adjust_period(struct perf_event *event, u64 nsec, u64 count)
 	hwc->sample_period = sample_period;
 
 	if (local64_read(&hwc->period_left) > 8*sample_period) {
-		perf_disable();
-		perf_event_stop(event);
+		bool software_event = is_software_event(event);
+
+		/*
+		 * Only hardware events need their irq period to be
+		 * reprogrammed. And stopping and restarting software
+		 * events here would be dangerously racy.
+		 */
+		if (!software_event) {
+			perf_disable();
+			perf_event_stop(event);
+		}
+
 		local64_set(&hwc->period_left, 0);
-		perf_event_start(event);
-		perf_enable();
+
+		if (!software_event) {
+			perf_event_start(event);
+			perf_enable();
+		}
 	}
 }
 
@@ -4286,16 +4299,9 @@ static void perf_swevent_void(struct perf_event *event)
 {
 }
 
-static int perf_swevent_int(struct perf_event *event)
-{
-	return 0;
-}
-
 static const struct pmu perf_ops_generic = {
 	.enable		= perf_swevent_enable,
 	.disable	= perf_swevent_disable,
-	.start		= perf_swevent_int,
-	.stop		= perf_swevent_void,
 	.read		= perf_swevent_read,
 	.unthrottle	= perf_swevent_void, /* hwc->interrupts already reset */
 };
@@ -4578,8 +4584,6 @@ static int swevent_hlist_get(struct perf_event *event)
 static const struct pmu perf_ops_tracepoint = {
 	.enable		= perf_trace_enable,
 	.disable	= perf_trace_disable,
-	.start		= perf_swevent_int,
-	.stop		= perf_swevent_void,
 	.read		= perf_swevent_read,
 	.unthrottle	= perf_swevent_void,
 };
-- 
1.6.2.3



* [PATCH 2/5] perf: Support disable() after stop() on software events
  2010-06-12  7:34 [PATCH 0/5 v3] perf events finer grained context instrumentation / context exclusion Frederic Weisbecker
  2010-06-12  7:34 ` [PATCH 1/5] perf: Provide a proper stop action for software events Frederic Weisbecker
@ 2010-06-12  7:34 ` Frederic Weisbecker
  2010-06-12  7:34 ` [PATCH 3/5] perf: Ability to enable in a paused mode Frederic Weisbecker
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 20+ messages in thread
From: Frederic Weisbecker @ 2010-06-12  7:34 UTC (permalink / raw)
  To: LKML
  Cc: LKML, Frederic Weisbecker, Ingo Molnar, Peter Zijlstra,
	Arnaldo Carvalho de Melo, Paul Mackerras, Stephane Eranian,
	Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

If we call perf_event_stop() on a software event and then the
disable() pmu callback on it after that, we'll call hlist_del_rcu()
twice on the same hlist node and crash by dereferencing LIST_POISON2.

Just use hlist_del_init_rcu() instead to fix this problem.

This prepares for the new context exclusions.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Zhang Yanmin <yanmin_zhang@linux.intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/perf_event.c             |    2 +-
 kernel/trace/trace_event_perf.c |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 95a56ed..c5f2306 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -4292,7 +4292,7 @@ static int perf_swevent_enable(struct perf_event *event)
 
 static void perf_swevent_disable(struct perf_event *event)
 {
-	hlist_del_rcu(&event->hlist_entry);
+	hlist_del_init_rcu(&event->hlist_entry);
 }
 
 static void perf_swevent_void(struct perf_event *event)
diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
index 4799d70..7bc1f26 100644
--- a/kernel/trace/trace_event_perf.c
+++ b/kernel/trace/trace_event_perf.c
@@ -122,7 +122,7 @@ int perf_trace_enable(struct perf_event *p_event)
 
 void perf_trace_disable(struct perf_event *p_event)
 {
-	hlist_del_rcu(&p_event->hlist_entry);
+	hlist_del_init_rcu(&p_event->hlist_entry);
 }
 
 void perf_trace_destroy(struct perf_event *p_event)
-- 
1.6.2.3



* [PATCH 3/5] perf: Ability to enable in a paused mode
  2010-06-12  7:34 [PATCH 0/5 v3] perf events finer grained context instrumentation / context exclusion Frederic Weisbecker
  2010-06-12  7:34 ` [PATCH 1/5] perf: Provide a proper stop action for software events Frederic Weisbecker
  2010-06-12  7:34 ` [PATCH 2/5] perf: Support disable() after stop() on " Frederic Weisbecker
@ 2010-06-12  7:34 ` Frederic Weisbecker
  2010-06-12  9:44   ` Peter Zijlstra
  2010-06-12  7:34 ` [PATCH 4/5] perf: Introduce task, softirq and hardirq contexts exclusion Frederic Weisbecker
  2010-06-12  7:34 ` [PATCH 5/5] perf: Support for task/softirq/hardirq exclusion on tools Frederic Weisbecker
  4 siblings, 1 reply; 20+ messages in thread
From: Frederic Weisbecker @ 2010-06-12  7:34 UTC (permalink / raw)
  To: LKML
  Cc: LKML, Frederic Weisbecker, Ingo Molnar, Peter Zijlstra,
	Arnaldo Carvalho de Melo, Paul Mackerras, Stephane Eranian,
	Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

In order to provide task context exclusion, we need to be able
to schedule an event in a "paused" mode. This is what the new
pmu->reserve callback does. It means the event must have its place
reserved on the cpu, but it won't actually start until an explicit
call to the pmu->start() callback.

To maintain this paused state, we also introduce a new
PERF_EVENT_STATE_PAUSED internal state.

PMUs that don't implement the reserve callback won't fully support
task context exclusion.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Zhang Yanmin <yanmin_zhang@linux.intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 arch/x86/kernel/cpu/perf_event.c |    7 +++++--
 include/linux/perf_event.h       |   10 +++++++++-
 kernel/hw_breakpoint.c           |    1 +
 kernel/perf_event.c              |   34 ++++++++++++++++++++++------------
 4 files changed, 37 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index f2da20f..7ee299f 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -839,7 +839,8 @@ void hw_perf_enable(void)
 			    match_prev_assignment(hwc, cpuc, i))
 				continue;
 
-			x86_pmu_stop(event);
+			if (event->state != PERF_EVENT_STATE_PAUSED)
+				x86_pmu_stop(event);
 		}
 
 		for (i = 0; i < cpuc->n_events; i++) {
@@ -851,7 +852,8 @@ void hw_perf_enable(void)
 			else if (i < n_running)
 				continue;
 
-			x86_pmu_start(event);
+			if (event->state != PERF_EVENT_STATE_PAUSED)
+				x86_pmu_start(event);
 		}
 		cpuc->n_added = 0;
 		perf_events_lapic_init();
@@ -1452,6 +1454,7 @@ static int x86_pmu_commit_txn(const struct pmu *pmu)
 
 static const struct pmu pmu = {
 	.enable		= x86_pmu_enable,
+	.reserve	= x86_pmu_enable,
 	.disable	= x86_pmu_disable,
 	.start		= x86_pmu_start,
 	.stop		= x86_pmu_stop,
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 63b5aa5..cea69c9 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -560,6 +560,12 @@ struct perf_event;
  */
 struct pmu {
 	int (*enable)			(struct perf_event *event);
+	/*
+	 * Reserve acts like enable, except the event must go into a
+	 * "paused" state, i.e. it is scheduled but waits to be started
+	 * via the ->start() callback.
+	 */
+	int (*reserve)			(struct perf_event *event);
 	void (*disable)			(struct perf_event *event);
 	int (*start)			(struct perf_event *event);
 	void (*stop)			(struct perf_event *event);
@@ -598,7 +604,8 @@ enum perf_event_active_state {
 	PERF_EVENT_STATE_ERROR		= -2,
 	PERF_EVENT_STATE_OFF		= -1,
 	PERF_EVENT_STATE_INACTIVE	=  0,
-	PERF_EVENT_STATE_ACTIVE		=  1,
+	PERF_EVENT_STATE_PAUSED		=  1,
+	PERF_EVENT_STATE_ACTIVE		=  2,
 };
 
 struct file;
@@ -931,6 +938,7 @@ static inline int is_software_event(struct perf_event *event)
 extern atomic_t perf_swevent_enabled[PERF_COUNT_SW_MAX];
 
 extern void __perf_sw_event(u32, u64, int, struct pt_regs *, u64);
+extern int perf_swevent_int(struct perf_event *event);
 
 #ifndef perf_arch_fetch_caller_regs
 static inline void
diff --git a/kernel/hw_breakpoint.c b/kernel/hw_breakpoint.c
index 7a56b22..739a8e6 100644
--- a/kernel/hw_breakpoint.c
+++ b/kernel/hw_breakpoint.c
@@ -587,6 +587,7 @@ core_initcall(init_hw_breakpoint);
 
 struct pmu perf_ops_bp = {
 	.enable		= arch_install_hw_breakpoint,
+	.reserve	= perf_swevent_int,
 	.disable	= arch_uninstall_hw_breakpoint,
 	.read		= hw_breakpoint_pmu_read,
 };
diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index c5f2306..e440f21 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -407,7 +407,7 @@ event_sched_out(struct perf_event *event,
 		  struct perf_cpu_context *cpuctx,
 		  struct perf_event_context *ctx)
 {
-	if (event->state != PERF_EVENT_STATE_ACTIVE)
+	if (event->state < PERF_EVENT_STATE_PAUSED)
 		return;
 
 	event->state = PERF_EVENT_STATE_INACTIVE;
@@ -433,7 +433,7 @@ group_sched_out(struct perf_event *group_event,
 {
 	struct perf_event *event;
 
-	if (group_event->state != PERF_EVENT_STATE_ACTIVE)
+	if (group_event->state < PERF_EVENT_STATE_PAUSED)
 		return;
 
 	event_sched_out(group_event, cpuctx, ctx);
@@ -617,7 +617,7 @@ void perf_event_disable(struct perf_event *event)
 	/*
 	 * If the event is still active, we need to retry the cross-call.
 	 */
-	if (event->state == PERF_EVENT_STATE_ACTIVE) {
+	if (event->state >= PERF_EVENT_STATE_PAUSED) {
 		raw_spin_unlock_irq(&ctx->lock);
 		goto retry;
 	}
@@ -810,7 +810,7 @@ static void __perf_install_in_context(void *info)
 	 * it is in a group and the group isn't on.
 	 */
 	if (event->state != PERF_EVENT_STATE_INACTIVE ||
-	    (leader != event && leader->state != PERF_EVENT_STATE_ACTIVE))
+	    (leader != event && leader->state < PERF_EVENT_STATE_PAUSED))
 		goto unlock;
 
 	/*
@@ -955,7 +955,7 @@ static void __perf_event_enable(void *info)
 	 * If the event is in a group and isn't the group leader,
 	 * then don't put it on unless the group is on.
 	 */
-	if (leader != event && leader->state != PERF_EVENT_STATE_ACTIVE)
+	if (leader != event && leader->state < PERF_EVENT_STATE_PAUSED)
 		goto unlock;
 
 	if (!group_can_go_on(event, cpuctx, 1)) {
@@ -1135,7 +1135,7 @@ static void __perf_event_sync_stat(struct perf_event *event,
 	case PERF_EVENT_STATE_ACTIVE:
 		event->pmu->read(event);
 		/* fall-through */
-
+	case PERF_EVENT_STATE_PAUSED:
 	case PERF_EVENT_STATE_INACTIVE:
 		update_event_times(event);
 		break;
@@ -1541,21 +1541,22 @@ static void perf_adjust_period(struct perf_event *event, u64 nsec, u64 count)
 	hwc->sample_period = sample_period;
 
 	if (local64_read(&hwc->period_left) > 8*sample_period) {
-		bool software_event = is_software_event(event);
+		bool reprogram = !is_software_event(event) &&
+				 event->state != PERF_EVENT_STATE_PAUSED;
 
 		/*
 		 * Only hardware events need their irq period to be
 		 * reprogrammed. And stopping and restarting software
 		 * events here would be dangerously racy.
 		 */
-		if (!software_event) {
+		if (reprogram) {
 			perf_disable();
 			perf_event_stop(event);
 		}
 
 		local64_set(&hwc->period_left, 0);
 
-		if (!software_event) {
+		if (reprogram) {
 			perf_event_start(event);
 			perf_enable();
 		}
@@ -1763,7 +1764,7 @@ static u64 perf_event_read(struct perf_event *event)
 	if (event->state == PERF_EVENT_STATE_ACTIVE) {
 		smp_call_function_single(event->oncpu,
 					 __perf_event_read, event, 1);
-	} else if (event->state == PERF_EVENT_STATE_INACTIVE) {
+	} else if (event->state >= PERF_EVENT_STATE_INACTIVE) {
 		struct perf_event_context *ctx = event->ctx;
 		unsigned long flags;
 
@@ -2339,7 +2340,7 @@ int perf_event_task_disable(void)
 
 static int perf_event_index(struct perf_event *event)
 {
-	if (event->state != PERF_EVENT_STATE_ACTIVE)
+	if (event->state < PERF_EVENT_STATE_PAUSED)
 		return 0;
 
 	return event->hw.idx + 1 - PERF_EVENT_INDEX_OFFSET;
@@ -2371,7 +2372,7 @@ void perf_event_update_userpage(struct perf_event *event)
 	barrier();
 	userpg->index = perf_event_index(event);
 	userpg->offset = perf_event_count(event);
-	if (event->state == PERF_EVENT_STATE_ACTIVE)
+	if (event->state >= PERF_EVENT_STATE_PAUSED)
 		userpg->offset -= local64_read(&event->hw.prev_count);
 
 	userpg->time_enabled = event->total_time_enabled +
@@ -4299,8 +4300,14 @@ static void perf_swevent_void(struct perf_event *event)
 {
 }
 
+int perf_swevent_int(struct perf_event *event)
+{
+	return 0;
+}
+
 static const struct pmu perf_ops_generic = {
 	.enable		= perf_swevent_enable,
+	.reserve	= perf_swevent_int,
 	.disable	= perf_swevent_disable,
 	.read		= perf_swevent_read,
 	.unthrottle	= perf_swevent_void, /* hwc->interrupts already reset */
@@ -4412,6 +4419,7 @@ static void cpu_clock_perf_event_read(struct perf_event *event)
 
 static const struct pmu perf_ops_cpu_clock = {
 	.enable		= cpu_clock_perf_event_enable,
+	.reserve	= perf_swevent_int,
 	.disable	= cpu_clock_perf_event_disable,
 	.read		= cpu_clock_perf_event_read,
 };
@@ -4469,6 +4477,7 @@ static void task_clock_perf_event_read(struct perf_event *event)
 
 static const struct pmu perf_ops_task_clock = {
 	.enable		= task_clock_perf_event_enable,
+	.reserve	= perf_swevent_int,
 	.disable	= task_clock_perf_event_disable,
 	.read		= task_clock_perf_event_read,
 };
@@ -4583,6 +4592,7 @@ static int swevent_hlist_get(struct perf_event *event)
 
 static const struct pmu perf_ops_tracepoint = {
 	.enable		= perf_trace_enable,
+	.reserve	= perf_swevent_int,
 	.disable	= perf_trace_disable,
 	.read		= perf_swevent_read,
 	.unthrottle	= perf_swevent_void,
-- 
1.6.2.3


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 4/5] perf: Introduce task, softirq and hardirq contexts exclusion
  2010-06-12  7:34 [PATCH 0/5 v3] perf events finer grained context instrumentation / context exclusion Frederic Weisbecker
                   ` (2 preceding siblings ...)
  2010-06-12  7:34 ` [PATCH 3/5] perf: Ability to enable in a paused mode Frederic Weisbecker
@ 2010-06-12  7:34 ` Frederic Weisbecker
  2010-06-12  7:34 ` [PATCH 5/5] perf: Support for task/softirq/hardirq exclusion on tools Frederic Weisbecker
  4 siblings, 0 replies; 20+ messages in thread
From: Frederic Weisbecker @ 2010-06-12  7:34 UTC (permalink / raw)
  To: LKML
  Cc: LKML, Frederic Weisbecker, Ingo Molnar, Peter Zijlstra,
	Arnaldo Carvalho de Melo, Paul Mackerras, Stephane Eranian,
	Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

This brings the possibility to exclude task and irq contexts from
the instrumentation, so that one can either filter out any kind of
context or confine the profiling to a single one.

To achieve that, we hook into irq_enter(), irq_exit() and the
softirq paths. Each time we enter or exit a new non-nested context,
we determine the events that need to be paused or resumed.

Here we use the ->stop() and ->start() callbacks that provide
lightweight pause/resume modes to the events.

The off-case (no running events have these new exclude properties
set) only adds a single atomic_read() in each hook: two in the irq
path and two in the softirq path.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Zhang Yanmin <yanmin_zhang@linux.intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/perf_event.h |   42 +++++++++-
 kernel/perf_event.c        |  205 ++++++++++++++++++++++++++++++++++++++------
 kernel/softirq.c           |    6 ++
 3 files changed, 227 insertions(+), 26 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index cea69c9..185a295 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -215,8 +215,11 @@ struct perf_event_attr {
 				 */
 				precise_ip     :  2, /* skid constraint       */
 				mmap_data      :  1, /* non-exec mmap data    */
+				exclude_task   :  1, /* exclude task contexts */
+				exclude_softirq:  1, /* exclude softirq contexts */
+				exclude_hardirq:  1, /* exclude hardirq contexts */
 
-				__reserved_1   : 46;
+				__reserved_1   : 43;
 
 	union {
 		__u32		wakeup_events;	  /* wakeup every n events */
@@ -936,10 +939,16 @@ static inline int is_software_event(struct perf_event *event)
 }
 
 extern atomic_t perf_swevent_enabled[PERF_COUNT_SW_MAX];
+extern atomic_t nr_excluded_events;
 
 extern void __perf_sw_event(u32, u64, int, struct pt_regs *, u64);
 extern int perf_swevent_int(struct perf_event *event);
 
+extern void __perf_event_hardirq_enter(void);
+extern void __perf_event_hardirq_exit(void);
+extern void __perf_event_softirq_enter(void);
+extern void __perf_event_softirq_exit(void);
+
 #ifndef perf_arch_fetch_caller_regs
 static inline void
 perf_arch_fetch_caller_regs(struct pt_regs *regs, unsigned long ip) { }
@@ -975,6 +984,31 @@ perf_sw_event(u32 event_id, u64 nr, int nmi, struct pt_regs *regs, u64 addr)
 }
 
 extern void perf_event_mmap(struct vm_area_struct *vma);
+
+static inline void perf_event_hardirq_enter(void)
+{
+	if (atomic_read(&nr_excluded_events))
+		__perf_event_hardirq_enter();
+}
+
+static inline void perf_event_hardirq_exit(void)
+{
+	if (atomic_read(&nr_excluded_events))
+		__perf_event_hardirq_exit();
+}
+
+static inline void perf_event_softirq_enter(void)
+{
+	if (atomic_read(&nr_excluded_events))
+		__perf_event_softirq_enter();
+}
+
+static inline void perf_event_softirq_exit(void)
+{
+	if (atomic_read(&nr_excluded_events))
+		__perf_event_softirq_exit();
+}
+
 extern struct perf_guest_info_callbacks *perf_guest_cbs;
 extern int perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *callbacks);
 extern int perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *callbacks);
@@ -1046,6 +1080,12 @@ static inline int perf_event_task_enable(void)				{ return -EINVAL; }
 static inline void
 perf_sw_event(u32 event_id, u64 nr, int nmi,
 		     struct pt_regs *regs, u64 addr)			{ }
+
+static inline void perf_event_hardirq_enter(void)			{ }
+static inline void perf_event_hardirq_exit(void)			{ }
+static inline void perf_event_softirq_enter(void)			{ }
+static inline void perf_event_softirq_exit(void)			{ }
+
 static inline void
 perf_bp_event(struct perf_event *event, void *data)			{ }
 
diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index e440f21..cb8c3f6 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -48,6 +48,7 @@ static atomic_t nr_events __read_mostly;
 static atomic_t nr_mmap_events __read_mostly;
 static atomic_t nr_comm_events __read_mostly;
 static atomic_t nr_task_events __read_mostly;
+atomic_t nr_excluded_events __read_mostly;
 
 /*
  * perf event paranoia level:
@@ -642,17 +643,23 @@ event_sched_in(struct perf_event *event,
 	if (event->state <= PERF_EVENT_STATE_OFF)
 		return 0;
 
-	event->state = PERF_EVENT_STATE_ACTIVE;
 	event->oncpu = smp_processor_id();
-	/*
-	 * The new state must be visible before we turn it on in the hardware:
-	 */
-	smp_wmb();
 
-	if (event->pmu->enable(event)) {
-		event->state = PERF_EVENT_STATE_INACTIVE;
-		event->oncpu = -1;
-		return -EAGAIN;
+
+	if (event->attr.exclude_task && event->pmu->reserve) {
+		event->state = PERF_EVENT_STATE_PAUSED;
+		/*
+		 * The new state must be visible before we turn it on in
+		 * the hardware:
+		 */
+		smp_wmb();
+		if (event->pmu->reserve(event))
+			goto failed;
+	} else {
+		event->state = PERF_EVENT_STATE_ACTIVE;
+		smp_wmb();
+		if (event->pmu->enable(event))
+			goto failed;
 	}
 
 	event->tstamp_running += ctx->time - event->tstamp_stopped;
@@ -665,6 +672,11 @@ event_sched_in(struct perf_event *event,
 		cpuctx->exclusive = 1;
 
 	return 0;
+
+ failed:
+	event->state = PERF_EVENT_STATE_INACTIVE;
+	event->oncpu = -1;
+	return -EAGAIN;
 }
 
 static int
@@ -1191,6 +1203,159 @@ static void perf_event_sync_stat(struct perf_event_context *ctx,
 	}
 }
 
+static void perf_event_stop(struct perf_event *event)
+{
+	if (!event->pmu->stop)
+		return event->pmu->disable(event);
+
+	return event->pmu->stop(event);
+}
+
+static int perf_event_start(struct perf_event *event)
+{
+	if (!event->pmu->start)
+		return event->pmu->enable(event);
+
+	return event->pmu->start(event);
+}
+
+enum enter_context_t {
+	CONTEXT_HARDIRQ,
+	CONTEXT_SOFTIRQ,
+	CONTEXT_TASK,
+};
+
+static int event_enter_context(enum enter_context_t context,
+			       struct perf_event *event)
+{
+	int exclude;
+	int ret = 0;
+
+	switch (context) {
+	case CONTEXT_HARDIRQ:
+		exclude = event->attr.exclude_hardirq;
+		break;
+	case CONTEXT_SOFTIRQ:
+		exclude = event->attr.exclude_softirq;
+		break;
+	case CONTEXT_TASK:
+		exclude = event->attr.exclude_task;
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		return -EINVAL;
+	}
+
+	if (exclude && event->state == PERF_EVENT_STATE_ACTIVE) {
+		event->state = PERF_EVENT_STATE_PAUSED;
+		perf_event_stop(event);
+	} else if (!exclude && event->state == PERF_EVENT_STATE_PAUSED) {
+		event->state = PERF_EVENT_STATE_ACTIVE;
+		ret = perf_event_start(event);
+	}
+
+	return ret;
+}
+
+static void
+group_enter_context(enum enter_context_t context,
+		    struct perf_event *group_event,
+		    struct perf_cpu_context *cpuctx,
+		    struct perf_event_context *ctx)
+{
+	struct perf_event *event;
+
+	if (group_event->state < PERF_EVENT_STATE_PAUSED)
+		return;
+
+	/*
+	 * We probably want to make the exclude_* things all the same in a
+	 * group, to enforce the group instrumentation and to optimize this
+	 * path.
+	 */
+	if (event_enter_context(context, group_event))
+		goto fail;
+
+	list_for_each_entry(event, &group_event->sibling_list, group_entry) {
+		if (event_enter_context(context, event))
+			goto fail;
+	}
+
+	return;
+
+ fail:
+	group_sched_out(group_event, cpuctx, ctx);
+	group_event->state = PERF_EVENT_STATE_ERROR;
+}
+
+static void
+ctx_enter_context(enum enter_context_t context,
+		  struct perf_cpu_context *cpuctx,
+		  struct perf_event_context *ctx)
+{
+	struct perf_event *group_event;
+
+	raw_spin_lock(&ctx->lock);
+
+	list_for_each_entry(group_event, &ctx->pinned_groups, group_entry)
+		group_enter_context(context, group_event, cpuctx, ctx);
+
+	list_for_each_entry(group_event, &ctx->flexible_groups, group_entry)
+		group_enter_context(context, group_event, cpuctx, ctx);
+
+	raw_spin_unlock(&ctx->lock);
+}
+
+static void enter_context(enum enter_context_t context)
+{
+	struct perf_cpu_context *cpuctx = &__get_cpu_var(perf_cpu_context);
+	struct perf_event_context *ctx = current->perf_event_ctxp;
+	unsigned long flags;
+
+	local_irq_save(flags);
+
+	perf_disable();
+
+	ctx_enter_context(context, cpuctx, &cpuctx->ctx);
+	if (ctx)
+		ctx_enter_context(context, cpuctx, ctx);
+
+	perf_enable();
+
+	local_irq_restore(flags);
+}
+
+void __perf_event_hardirq_enter(void)
+{
+	/* Don't account nested cases */
+	if (!hardirq_count())
+		enter_context(CONTEXT_HARDIRQ);
+}
+
+void __perf_event_hardirq_exit(void)
+{
+	/* We are not truly leaving the irq if we nested */
+	if (hardirq_count())
+		return;
+
+	if (softirq_count())
+		enter_context(CONTEXT_SOFTIRQ);
+	else
+		enter_context(CONTEXT_TASK);
+}
+
+void __perf_event_softirq_enter(void)
+{
+	/* Softirqs can't nest */
+	enter_context(CONTEXT_SOFTIRQ);
+}
+
+void __perf_event_softirq_exit(void)
+{
+	/* Softirqs could have only interrupted a task context */
+	enter_context(CONTEXT_TASK);
+}
+
 /*
  * Called from scheduler to remove the events of the current task,
  * with interrupts disabled.
@@ -1506,22 +1671,6 @@ do {					\
 	return div64_u64(dividend, divisor);
 }
 
-static void perf_event_stop(struct perf_event *event)
-{
-	if (!event->pmu->stop)
-		return event->pmu->disable(event);
-
-	return event->pmu->stop(event);
-}
-
-static int perf_event_start(struct perf_event *event)
-{
-	if (!event->pmu->start)
-		return event->pmu->enable(event);
-
-	return event->pmu->start(event);
-}
-
 static void perf_adjust_period(struct perf_event *event, u64 nsec, u64 count)
 {
 	struct hw_perf_event *hwc = &event->hw;
@@ -1909,6 +2058,9 @@ static void free_event(struct perf_event *event)
 			atomic_dec(&nr_comm_events);
 		if (event->attr.task)
 			atomic_dec(&nr_task_events);
+		if (event->attr.exclude_task || event->attr.exclude_softirq ||
+		    event->attr.exclude_hardirq)
+			atomic_dec(&nr_excluded_events);
 	}
 
 	if (event->buffer) {
@@ -4943,6 +5095,9 @@ done:
 			atomic_inc(&nr_comm_events);
 		if (event->attr.task)
 			atomic_inc(&nr_task_events);
+		if (event->attr.exclude_task || event->attr.exclude_softirq ||
+		    event->attr.exclude_hardirq)
+			atomic_inc(&nr_excluded_events);
 	}
 
 	return event;
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 825e112..bb31457 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -198,6 +198,8 @@ asmlinkage void __do_softirq(void)
 	pending = local_softirq_pending();
 	account_system_vtime(current);
 
+	perf_event_softirq_enter();
+
 	__local_bh_disable((unsigned long)__builtin_return_address(0));
 	lockdep_softirq_enter();
 
@@ -246,6 +248,8 @@ restart:
 
 	account_system_vtime(current);
 	_local_bh_enable();
+
+	perf_event_softirq_exit();
 }
 
 #ifndef __ARCH_HAS_DO_SOFTIRQ
@@ -277,6 +281,7 @@ void irq_enter(void)
 {
 	int cpu = smp_processor_id();
 
+	perf_event_hardirq_enter();
 	rcu_irq_enter();
 	if (idle_cpu(cpu) && !in_interrupt()) {
 		__irq_enter();
@@ -302,6 +307,7 @@ void irq_exit(void)
 	if (!in_interrupt() && local_softirq_pending())
 		invoke_softirq();
 
+	perf_event_hardirq_exit();
 	rcu_irq_exit();
 #ifdef CONFIG_NO_HZ
 	/* Make sure that timer wheel updates are propagated */
-- 
1.6.2.3


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 5/5] perf: Support for task/softirq/hardirq exclusion on tools
  2010-06-12  7:34 [PATCH 0/5 v3] perf events finer grained context instrumentation / context exclusion Frederic Weisbecker
                   ` (3 preceding siblings ...)
  2010-06-12  7:34 ` [PATCH 4/5] perf: Introduce task, softirq and hardirq contexts exclusion Frederic Weisbecker
@ 2010-06-12  7:34 ` Frederic Weisbecker
  4 siblings, 0 replies; 20+ messages in thread
From: Frederic Weisbecker @ 2010-06-12  7:34 UTC (permalink / raw)
  To: LKML
  Cc: LKML, Frederic Weisbecker, Ingo Molnar, Peter Zijlstra,
	Arnaldo Carvalho de Melo, Paul Mackerras, Cyrill Gorcunov,
	Zhang Yanmin, Steven Rostedt

Bring the following new modifier flags to perf events:

- t = Profile task context
- s = Profile softirq context
- i = Profile hardirq context

Example:

	perf record -a -g -e cycles:i ls -R /usr > /dev/null

         3.11%           ls  [kernel.kallsyms]  [k] __lock_acquire
                         |
                         --- __lock_acquire
                            |
                            |--95.83%-- lock_acquire
                            |          _raw_spin_lock
                            |          |
                            |          |--30.43%-- perf_ctx_adjust_freq
                            |          |          perf_event_task_tick
                            |          |          scheduler_tick
                            |          |          update_process_times
                            |          |          tick_sched_timer
                            |          |          __run_hrtimer
                            |          |          hrtimer_interrupt
                            |          |          smp_apic_timer_interrupt
                            |          |          apic_timer_interrupt

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Zhang Yanmin <yanmin_zhang@linux.intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 tools/perf/util/parse-events.c |   37 ++++++++++++++++++++++++++-----------
 1 files changed, 26 insertions(+), 11 deletions(-)

diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 9bf0f40..7a18e71 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -688,24 +688,36 @@ static enum event_result
 parse_event_modifier(const char **strp, struct perf_event_attr *attr)
 {
 	const char *str = *strp;
-	int exclude = 0;
-	int eu = 0, ek = 0, eh = 0, precise = 0;
+	int exclude_ring = 0, exclude_context = 0;
+	int eu = 0, ek = 0, eh = 0, et = 0, es = 0, ei = 0, precise = 0;
 
 	if (*str++ != ':')
 		return 0;
 	while (*str) {
 		if (*str == 'u') {
-			if (!exclude)
-				exclude = eu = ek = eh = 1;
+			if (!exclude_ring)
+				exclude_ring = eu = ek = eh = 1;
 			eu = 0;
 		} else if (*str == 'k') {
-			if (!exclude)
-				exclude = eu = ek = eh = 1;
+			if (!exclude_ring)
+				exclude_ring = eu = ek = eh = 1;
 			ek = 0;
 		} else if (*str == 'h') {
-			if (!exclude)
-				exclude = eu = ek = eh = 1;
+			if (!exclude_ring)
+				exclude_ring = eu = ek = eh = 1;
 			eh = 0;
+		} else if (*str == 't') {
+			if (!exclude_context)
+				exclude_context = et = es = ei = 1;
+			et = 0;
+		} else if (*str == 's') {
+			if (!exclude_context)
+				exclude_context = et = es = ei = 1;
+			es = 0;
+		} else if (*str == 'i') {
+			if (!exclude_context)
+				exclude_context = et = es = ei = 1;
+			ei = 0;
 		} else if (*str == 'p') {
 			precise++;
 		} else
@@ -715,9 +727,12 @@ parse_event_modifier(const char **strp, struct perf_event_attr *attr)
 	}
 	if (str >= *strp + 2) {
 		*strp = str;
-		attr->exclude_user   = eu;
-		attr->exclude_kernel = ek;
-		attr->exclude_hv     = eh;
+		attr->exclude_user	= eu;
+		attr->exclude_kernel	= ek;
+		attr->exclude_hv	= eh;
+		attr->exclude_task	= et;
+		attr->exclude_softirq	= es;
+		attr->exclude_hardirq	= ei;
 		attr->precise_ip     = precise;
 		return 1;
 	}
-- 
1.6.2.3


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/5] perf: Provide a proper stop action for software events
  2010-06-12  7:34 ` [PATCH 1/5] perf: Provide a proper stop action for software events Frederic Weisbecker
@ 2010-06-12  9:43   ` Peter Zijlstra
  2010-06-12 16:25     ` Frederic Weisbecker
  0 siblings, 1 reply; 20+ messages in thread
From: Peter Zijlstra @ 2010-06-12  9:43 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Ingo Molnar, Arnaldo Carvalho de Melo, Paul Mackerras,
	Stephane Eranian, Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

On Sat, 2010-06-12 at 09:34 +0200, Frederic Weisbecker wrote:
> In order to introduce new context exclusions, software events will
> have to eventually stop when needed. We'll want perf_event_stop() to
> act on every events.
> 
> To achieve this, remove the stub stop/start pmu callbacks of software
> and tracepoint events that fixed a race in perf_adjust_period, and do
> an explicit check to only reset the hardware event using the
> start/stop callbacks.

I really object to this, it's just too ugly to live.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/5] perf: Ability to enable in a paused mode
  2010-06-12  7:34 ` [PATCH 3/5] perf: Ability to enable in a paused mode Frederic Weisbecker
@ 2010-06-12  9:44   ` Peter Zijlstra
  2010-06-12 16:44     ` Frederic Weisbecker
  0 siblings, 1 reply; 20+ messages in thread
From: Peter Zijlstra @ 2010-06-12  9:44 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Ingo Molnar, Arnaldo Carvalho de Melo, Paul Mackerras,
	Stephane Eranian, Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

On Sat, 2010-06-12 at 09:34 +0200, Frederic Weisbecker wrote:
>  struct pmu {
>         int (*enable)                   (struct perf_event *event);
> +       /*
> +        * Reserve acts like enable, except the event must go in a "pause"
> +        * state. Ie: it is scheduled but waiting to be started
> +        * with the ->start() callback.
> +        */
> +       int (*reserve)                  (struct perf_event *event);
>         void (*disable)                 (struct perf_event *event); 

Urgh, so then we have, enable(), reserve() and start(), that's just too
much. Also, you need to visit all pmu implementations if you touch
struct pmu like that.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/5] perf: Provide a proper stop action for software events
  2010-06-12  9:43   ` Peter Zijlstra
@ 2010-06-12 16:25     ` Frederic Weisbecker
  0 siblings, 0 replies; 20+ messages in thread
From: Frederic Weisbecker @ 2010-06-12 16:25 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Ingo Molnar, Arnaldo Carvalho de Melo, Paul Mackerras,
	Stephane Eranian, Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

On Sat, Jun 12, 2010 at 11:43:11AM +0200, Peter Zijlstra wrote:
> On Sat, 2010-06-12 at 09:34 +0200, Frederic Weisbecker wrote:
> > In order to introduce new context exclusions, software events will
> > have to eventually stop when needed. We'll want perf_event_stop() to
> > act on every events.
> > 
> > To achieve this, remove the stub stop/start pmu callbacks of software
> > and tracepoint events that fixed a race in perf_adjust_period, and do
> > an explicit check to only reset the hardware event using the
> > start/stop callbacks.
> 
> I really object to this,. its just too ugly to live.


Several alternative proposals, then; please tell me if one
looks more palatable to you:

- Having a reset_period argument on the ->stop() and ->start()
  callbacks. If reset_period is true, then the event knows the
  goal is to reprogram the interrupt period.

  FWIW, that's my preferred solution: software pmus can just check
  this and return immediately. This avoids all the races between
  concurrent start and stop, plus the wasteful/useless hlist
  manipulation.

- Having a nesting level counter on stop and start, so that
  we only call stop/start at nesting level 0. That solves the race.

- Having a flag in the pmu that tells whether it wants to reprogram
  on period reset. Then we check this flag to know if the start/stop
  is needed. I just propose this one for fun, I already
  know it sucks :)

Thanks.


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/5] perf: Ability to enable in a paused mode
  2010-06-12  9:44   ` Peter Zijlstra
@ 2010-06-12 16:44     ` Frederic Weisbecker
  0 siblings, 0 replies; 20+ messages in thread
From: Frederic Weisbecker @ 2010-06-12 16:44 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Ingo Molnar, Arnaldo Carvalho de Melo, Paul Mackerras,
	Stephane Eranian, Cyrill Gorcunov, Zhang Yanmin, Steven Rostedt

On Sat, Jun 12, 2010 at 11:44:26AM +0200, Peter Zijlstra wrote:
> On Sat, 2010-06-12 at 09:34 +0200, Frederic Weisbecker wrote:
> >  struct pmu {
> >         int (*enable)                   (struct perf_event *event);
> > +       /*
> > +        * Reserve acts like enable, except the event must go in a "pause"
> > +        * state. Ie: it is scheduled but waiting to be started
> > +        * with the ->start() callback.
> > +        */
> > +       int (*reserve)                  (struct perf_event *event);
> >         void (*disable)                 (struct perf_event *event); 
> 
> Urgh, so then we have, enable(), reserve() and start(), that's just too
> much. Also, you need to visit all pmu implementations if you touch
> struct pmu like that.


No, in fact that's a convenient way to avoid having to check every
pmu. Only those I know and could test got this reserve implemented.
Those that don't have a reserve will fall back to enable and won't
enter this PAUSED state; task exclusion will then miss the first
slice between schedule and the next interrupt, but it's not
dangerous. It's just that whoever wants to support a given pmu
needs to check that pmu and provide a relevant implementation.

It was the most convenient way as I don't need to check every pmu,
and I suspect I'm going to be quite useless on sparc, powerpc or arm
as I can't even test these archs (I no longer have access to a sparc box).

But yeah I understand your worries about adding a new callback for that.

If you prefer I can provide a flag on enable() and return -ENOSYS for
those I can't test or audit. In fact this is probably saner, and
we could even skip the fallback to a normal enable() when the paused
mode isn't supported, and put the event in an error state instead.
The user requested task exclusion anyway; it's better to tell him we
can't rather than half doing it.


On software events I can just return immediately. On x86 I could maintain
an internal state that gets checked on hw_perf_*able(), but the generic
state is convenient enough for that, so I will likely reuse it unless
there are objections, in which case I can maintain a duplicate internal
paused state for x86 events.


Please tell me what you think about this.

Thanks.


^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2010-06-12 16:44 UTC | newest]

Thread overview: 20+ messages
-- links below jump to the message on this page --
2010-06-12  7:34 [PATCH 0/5 v3] perf events finer grained context instrumentation / context exclusion Frederic Weisbecker
2010-06-12  7:34 ` [PATCH 1/5] perf: Provide a proper stop action for software events Frederic Weisbecker
2010-06-12  9:43   ` Peter Zijlstra
2010-06-12 16:25     ` Frederic Weisbecker
2010-06-12  7:34 ` [PATCH 2/5] perf: Support disable() after stop() on " Frederic Weisbecker
2010-06-12  7:34 ` [PATCH 3/5] perf: Ability to enable in a paused mode Frederic Weisbecker
2010-06-12  9:44   ` Peter Zijlstra
2010-06-12 16:44     ` Frederic Weisbecker
2010-06-12  7:34 ` [PATCH 4/5] perf: Introduce task, softirq and hardirq contexts exclusion Frederic Weisbecker
2010-06-12  7:34 ` [PATCH 5/5] perf: Support for task/softirq/hardirq exclusion on tools Frederic Weisbecker
  -- strict thread matches above, loose matches on Subject: below --
2010-06-10  3:49 [PATCH 0/5] perf events finer grained context instrumentation / context exclusion Frederic Weisbecker
2010-06-10  3:49 ` [PATCH 1/5] perf: Provide a proper stop action for software events Frederic Weisbecker
2010-06-10 10:46   ` Peter Zijlstra
2010-06-10 11:10     ` Peter Zijlstra
2010-06-10 16:12       ` Frederic Weisbecker
2010-06-10 16:16         ` Peter Zijlstra
2010-06-10 16:29           ` Frederic Weisbecker
2010-06-10 16:38             ` Peter Zijlstra
2010-06-10 17:04               ` Frederic Weisbecker
2010-06-10 19:54           ` Frederic Weisbecker
2010-06-10 12:06     ` Ingo Molnar
