linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/4] perf_counter bits
@ 2009-05-01 10:23 Peter Zijlstra
  0 siblings, 0 replies; 12+ messages in thread
From: Peter Zijlstra @ 2009-05-01 10:23 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Paul Mackerras, Corey Ashford, linux-kernel, Peter Zijlstra

 - fixes a race in the output code
 - x86: fixes a hang in nmi_watchdog=2 vs perf_counters
 - teaches perf-report to handle 0-length files
 - updates the documentation
-- 



* [PATCH 0/4] perf counter bits
@ 2009-05-20 10:21 Peter Zijlstra
  2009-05-20 10:21 ` [PATCH 1/4] perf_counter: solve the rotate_ctx vs inherit race differently Peter Zijlstra
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Peter Zijlstra @ 2009-05-20 10:21 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Paul Mackerras, Corey Ashford, linux-kernel, Peter Zijlstra,
	Arnaldo Carvalho de Melo, John Kacur

-- 



* [PATCH 1/4] perf_counter: solve the rotate_ctx vs inherit race differently
  2009-05-20 10:21 [PATCH 0/4] perf counter bits Peter Zijlstra
@ 2009-05-20 10:21 ` Peter Zijlstra
  2009-05-20 17:18   ` [tip:perfcounters/core] perf_counter: Solve " tip-bot for Peter Zijlstra
  2009-05-20 10:21 ` [PATCH 2/4] perf_counter: log irq_period changes Peter Zijlstra
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Peter Zijlstra @ 2009-05-20 10:21 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Paul Mackerras, Corey Ashford, linux-kernel, Peter Zijlstra,
	Arnaldo Carvalho de Melo, John Kacur

[-- Attachment #1: perf_counter-fixup-inherit.patch --]
[-- Type: text/plain, Size: 2126 bytes --]

Instead of disabling RR scheduling of the counters, iterate the counters
on inheritance via a different list, one that never gets rotated.
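
To illustrate, a simplified stand-alone sketch (not the kernel code; the
types and helpers below are made up for the example): rotation only ever
reorders one list, while inheritance walks a second list whose order never
changes, so the rr_allowed handshake becomes unnecessary.

/*
 * Hypothetical two-list model: rotation reorders only ->counter_list,
 * inheritance reads ->event_list, whose order never changes.
 */
struct counter {
	struct counter *list_next;	/* ->counter_list linkage (circular, rotated) */
	struct counter *event_next;	/* ->event_list linkage (NULL-terminated, stable) */
	struct counter *group_leader;
	int inherit;
};

struct context {
	struct counter *counter_list;	/* head moves on every rotation */
	struct counter *event_list;	/* head and order never change */
};

/* RR rotation: advance the (circular) counter_list head. */
static void rotate_ctx(struct context *ctx)
{
	if (ctx->counter_list)
		ctx->counter_list = ctx->counter_list->list_next;
}

/* Inheritance: walk the stable list; a concurrent rotate_ctx() is harmless. */
static void inherit_counters(struct context *parent, struct context *child)
{
	struct counter *c;

	for (c = parent->event_list; c; c = c->event_next) {
		if (c != c->group_leader)	/* only group leaders inherit */
			continue;
		if (!c->inherit)
			continue;
		/* ... clone c into the child context ... */
		(void)child;
	}
}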

LKML-Reference: <new-submission>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 include/linux/perf_counter.h |    1 -
 kernel/perf_counter.c        |   15 +++++----------
 2 files changed, 5 insertions(+), 11 deletions(-)

Index: linux-2.6/include/linux/perf_counter.h
===================================================================
--- linux-2.6.orig/include/linux/perf_counter.h
+++ linux-2.6/include/linux/perf_counter.h
@@ -508,7 +508,6 @@ struct perf_counter_context {
 	int			nr_counters;
 	int			nr_active;
 	int			is_active;
-	int			rr_allowed;
 	struct task_struct	*task;
 
 	/*
Index: linux-2.6/kernel/perf_counter.c
===================================================================
--- linux-2.6.orig/kernel/perf_counter.c
+++ linux-2.6/kernel/perf_counter.c
@@ -1120,8 +1120,7 @@ void perf_counter_task_tick(struct task_
 	__perf_counter_task_sched_out(ctx);
 
 	rotate_ctx(&cpuctx->ctx);
-	if (ctx->rr_allowed)
-		rotate_ctx(ctx);
+	rotate_ctx(ctx);
 
 	perf_counter_cpu_sched_in(cpuctx, cpu);
 	perf_counter_task_sched_in(curr, cpu);
@@ -3109,7 +3108,6 @@ __perf_counter_init_context(struct perf_
 	mutex_init(&ctx->mutex);
 	INIT_LIST_HEAD(&ctx->counter_list);
 	INIT_LIST_HEAD(&ctx->event_list);
-	ctx->rr_allowed = 1;
 	ctx->task = task;
 }
 
@@ -3350,14 +3348,14 @@ void perf_counter_init_task(struct task_
 	 */
 	mutex_lock(&parent_ctx->mutex);
 
-	parent_ctx->rr_allowed = 0;
-	barrier(); /* irqs */
-
 	/*
 	 * We dont have to disable NMIs - we are only looking at
 	 * the list, not manipulating it:
 	 */
-	list_for_each_entry(counter, &parent_ctx->counter_list, list_entry) {
+	list_for_each_entry_rcu(counter, &parent_ctx->event_list, event_entry) {
+		if (counter != counter->group_leader)
+			continue;
+
 		if (!counter->hw_event.inherit)
 			continue;
 
@@ -3366,9 +3364,6 @@ void perf_counter_init_task(struct task_
 			break;
 	}
 
-	barrier(); /* irqs */
-	parent_ctx->rr_allowed = 1;
-
 	mutex_unlock(&parent_ctx->mutex);
 }
 

-- 



* [PATCH 2/4] perf_counter: log irq_period changes
  2009-05-20 10:21 [PATCH 0/4] perf counter bits Peter Zijlstra
  2009-05-20 10:21 ` [PATCH 1/4] perf_counter: solve the rotate_ctx vs inherit race differently Peter Zijlstra
@ 2009-05-20 10:21 ` Peter Zijlstra
  2009-05-20 17:18   ` [tip:perfcounters/core] perf_counter: Log " tip-bot for Peter Zijlstra
  2009-05-20 10:21 ` [PATCH 3/4] perf_counter: optimize disable of time based sw counters Peter Zijlstra
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Peter Zijlstra @ 2009-05-20 10:21 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Paul Mackerras, Corey Ashford, linux-kernel, Peter Zijlstra,
	Arnaldo Carvalho de Melo, John Kacur

[-- Attachment #1: perf_counter-freq-notification.patch --]
[-- Type: text/plain, Size: 2329 bytes --]

For the dynamic irq_period code, log the period whenever we change it, so
that analysis tools can normalize the event flow.
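
For a hypothetical consumer, normalization then amounts to weighting each
overflow sample by the period in effect when it was taken. A rough sketch
(the pre-decoded record stream below is illustrative, not the actual mmap
ABI):

#include <stddef.h>
#include <stdint.h>

enum rec_type { REC_SAMPLE, REC_PERIOD };

struct record {
	enum rec_type type;
	uint64_t period;	/* REC_PERIOD: the new irq_period */
};

/*
 * Each overflow sample stands for 'period' events; PERF_EVENT_PERIOD
 * records update 'period' as the kernel adjusts the sampling rate.
 */
static uint64_t normalized_events(const struct record *recs, size_t n)
{
	uint64_t period = 1, events = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (recs[i].type == REC_PERIOD)
			period = recs[i].period;
		else
			events += period;
	}
	return events;
}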

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 include/linux/perf_counter.h |    8 ++++++++
 kernel/perf_counter.c        |   40 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 47 insertions(+), 1 deletion(-)

Index: linux-2.6/include/linux/perf_counter.h
===================================================================
--- linux-2.6.orig/include/linux/perf_counter.h
+++ linux-2.6/include/linux/perf_counter.h
@@ -258,6 +258,14 @@ enum perf_event_type {
 	PERF_EVENT_COMM			= 3,
 
 	/*
+	 * struct {
+	 * 	struct perf_event_header	header;
+	 * 	u64				irq_period;
+	 * };
+	 */
+	PERF_EVENT_PERIOD		= 4,
+
+	/*
 	 * When header.misc & PERF_EVENT_MISC_OVERFLOW the event_type field
 	 * will be PERF_RECORD_*
 	 *
Index: linux-2.6/kernel/perf_counter.c
===================================================================
--- linux-2.6.orig/kernel/perf_counter.c
+++ linux-2.6/kernel/perf_counter.c
@@ -1046,7 +1046,9 @@ int perf_counter_task_enable(void)
 	return 0;
 }
 
-void perf_adjust_freq(struct perf_counter_context *ctx)
+static void perf_log_period(struct perf_counter *counter, u64 period);
+
+static void perf_adjust_freq(struct perf_counter_context *ctx)
 {
 	struct perf_counter *counter;
 	u64 irq_period;
@@ -1072,6 +1074,8 @@ void perf_adjust_freq(struct perf_counte
 		if (!irq_period)
 			irq_period = 1;
 
+		perf_log_period(counter, irq_period);
+
 		counter->hw.irq_period = irq_period;
 		counter->hw.interrupts = 0;
 	}
@@ -2407,6 +2411,40 @@ void perf_counter_munmap(unsigned long a
 }
 
 /*
+ *
+ */
+
+static void perf_log_period(struct perf_counter *counter, u64 period)
+{
+	struct perf_output_handle handle;
+	int ret;
+
+	struct {
+		struct perf_event_header	header;
+		u64				time;
+		u64				period;
+	} freq_event = {
+		.header = {
+			.type = PERF_EVENT_PERIOD,
+			.misc = 0,
+			.size = sizeof(freq_event),
+		},
+		.time = sched_clock(),
+		.period = period,
+	};
+
+	if (counter->hw.irq_period == period)
+		return;
+
+	ret = perf_output_begin(&handle, counter, sizeof(freq_event), 0, 0);
+	if (ret)
+		return;
+
+	perf_output_put(&handle, freq_event);
+	perf_output_end(&handle);
+}
+
+/*
  * Generic counter overflow handling.
  */
 

-- 



* [PATCH 3/4] perf_counter: optimize disable of time based sw counters
  2009-05-20 10:21 [PATCH 0/4] perf counter bits Peter Zijlstra
  2009-05-20 10:21 ` [PATCH 1/4] perf_counter: solve the rotate_ctx vs inherit race differently Peter Zijlstra
  2009-05-20 10:21 ` [PATCH 2/4] perf_counter: log irq_period changes Peter Zijlstra
@ 2009-05-20 10:21 ` Peter Zijlstra
  2009-05-20 17:19   ` [tip:perfcounters/core] perf_counter: Optimize " tip-bot for Peter Zijlstra
  2009-05-20 10:21 ` [PATCH 4/4] perf_counter: optimize sched in/out of counters Peter Zijlstra
  2009-05-20 10:48 ` [PATCH 0/4] perf counter bits Ingo Molnar
  4 siblings, 1 reply; 12+ messages in thread
From: Peter Zijlstra @ 2009-05-20 10:21 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Paul Mackerras, Corey Ashford, linux-kernel, Peter Zijlstra,
	Arnaldo Carvalho de Melo, John Kacur

[-- Attachment #1: perf_counter-opt-swcounter-disable-hrtimer.patch --]
[-- Type: text/plain, Size: 1075 bytes --]

Currently we call hrtimer_cancel() unconditionally when disabling time-based
software counters. Avoid the call when possible.
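
Why the test suffices, roughly: the hrtimer is only armed on enable when
hw.irq_period is non-zero, so counting-only counters have nothing to
cancel. A condensed model of that pairing, with stub types standing in
for the kernel objects:

#include <stdbool.h>
#include <stdint.h>

struct hw_counter {
	uint64_t irq_period;	/* 0 => pure counting, no sampling timer */
	bool timer_armed;
};

static void timer_arm(struct hw_counter *hwc)    { hwc->timer_armed = true; }
static void timer_cancel(struct hw_counter *hwc) { hwc->timer_armed = false; }

static void clock_counter_enable(struct hw_counter *hwc)
{
	if (hwc->irq_period)	/* only sampling counters arm the timer */
		timer_arm(hwc);
}

static void clock_counter_disable(struct hw_counter *hwc)
{
	/*
	 * enable() armed the timer iff irq_period != 0; the same test
	 * here skips a pointless (and not free) cancel.
	 */
	if (hwc->irq_period)
		timer_cancel(hwc);
}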

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/perf_counter.c |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

Index: linux-2.6/kernel/perf_counter.c
===================================================================
--- linux-2.6.orig/kernel/perf_counter.c
+++ linux-2.6/kernel/perf_counter.c
@@ -2716,7 +2716,8 @@ static int cpu_clock_perf_counter_enable
 
 static void cpu_clock_perf_counter_disable(struct perf_counter *counter)
 {
-	hrtimer_cancel(&counter->hw.hrtimer);
+	if (counter->hw.irq_period)
+		hrtimer_cancel(&counter->hw.hrtimer);
 	cpu_clock_perf_counter_update(counter);
 }
 
@@ -2767,7 +2768,8 @@ static int task_clock_perf_counter_enabl
 
 static void task_clock_perf_counter_disable(struct perf_counter *counter)
 {
-	hrtimer_cancel(&counter->hw.hrtimer);
+	if (counter->hw.irq_period)
+		hrtimer_cancel(&counter->hw.hrtimer);
 	task_clock_perf_counter_update(counter, counter->ctx->time);
 
 }

-- 



* [PATCH 4/4] perf_counter: optimize sched in/out of counters
  2009-05-20 10:21 [PATCH 0/4] perf counter bits Peter Zijlstra
                   ` (2 preceding siblings ...)
  2009-05-20 10:21 ` [PATCH 3/4] perf_counter: optimize disable of time based sw counters Peter Zijlstra
@ 2009-05-20 10:21 ` Peter Zijlstra
  2009-05-20 17:19   ` [tip:perfcounters/core] perf_counter: Optimize " tip-bot for Peter Zijlstra
  2009-05-20 10:48 ` [PATCH 0/4] perf counter bits Ingo Molnar
  4 siblings, 1 reply; 12+ messages in thread
From: Peter Zijlstra @ 2009-05-20 10:21 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Paul Mackerras, Corey Ashford, linux-kernel, Peter Zijlstra,
	Arnaldo Carvalho de Melo, John Kacur

[-- Attachment #1: perf_counter-opt-sched_out.patch --]
[-- Type: text/plain, Size: 1894 bytes --]

Avoid a function call for !group counters by calling the per-counter
scheduling function directly instead of going through the group helper.
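
Condensed, the dispatch looks like the sketch below (stub types and
helpers, made up for the example; the real functions also take
cpuctx/ctx/cpu arguments): counters that are not their own group leader
are scheduled directly, only leaders go through the group helpers and
the group_can_go_on() check.

struct perf_counter {
	struct perf_counter *group_leader;
};

/* Stubs standing in for the real scheduling helpers. */
static int counter_sched_in(struct perf_counter *c)  { (void)c; return 0; }
static int group_sched_in(struct perf_counter *c)    { (void)c; return 0; }
static int group_can_go_on(struct perf_counter *c, int can_add_hw)
{
	(void)c;
	return can_add_hw;
}

static int sched_in_one(struct perf_counter *counter, int can_add_hw)
{
	if (counter != counter->group_leader)
		/* not a leader: schedule just this one counter */
		return counter_sched_in(counter);

	if (group_can_go_on(counter, can_add_hw))
		/* a leader: the group helper handles leader + siblings */
		return group_sched_in(counter);

	return 0;
}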

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/perf_counter.c |   25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

Index: linux-2.6/kernel/perf_counter.c
===================================================================
--- linux-2.6.orig/kernel/perf_counter.c
+++ linux-2.6/kernel/perf_counter.c
@@ -826,8 +826,12 @@ void __perf_counter_sched_out(struct per
 
 	perf_disable();
 	if (ctx->nr_active) {
-		list_for_each_entry(counter, &ctx->counter_list, list_entry)
-			group_sched_out(counter, cpuctx, ctx);
+		list_for_each_entry(counter, &ctx->counter_list, list_entry) {
+			if (counter != counter->group_leader)
+				counter_sched_out(counter, cpuctx, ctx);
+			else
+				group_sched_out(counter, cpuctx, ctx);
+		}
 	}
 	perf_enable();
  out:
@@ -903,8 +907,12 @@ __perf_counter_sched_in(struct perf_coun
 		if (counter->cpu != -1 && counter->cpu != cpu)
 			continue;
 
-		if (group_can_go_on(counter, cpuctx, 1))
-			group_sched_in(counter, cpuctx, ctx, cpu);
+		if (counter != counter->group_leader)
+			counter_sched_in(counter, cpuctx, ctx, cpu);
+		else {
+			if (group_can_go_on(counter, cpuctx, 1))
+				group_sched_in(counter, cpuctx, ctx, cpu);
+		}
 
 		/*
 		 * If this pinned group hasn't been scheduled,
@@ -932,9 +940,14 @@ __perf_counter_sched_in(struct perf_coun
 		if (counter->cpu != -1 && counter->cpu != cpu)
 			continue;
 
-		if (group_can_go_on(counter, cpuctx, can_add_hw)) {
-			if (group_sched_in(counter, cpuctx, ctx, cpu))
+		if (counter != counter->group_leader) {
+			if (counter_sched_in(counter, cpuctx, ctx, cpu))
 				can_add_hw = 0;
+		} else {
+			if (group_can_go_on(counter, cpuctx, can_add_hw)) {
+				if (group_sched_in(counter, cpuctx, ctx, cpu))
+					can_add_hw = 0;
+			}
 		}
 	}
 	perf_enable();

-- 



* Re: [PATCH 0/4] perf counter bits
  2009-05-20 10:21 [PATCH 0/4] perf counter bits Peter Zijlstra
                   ` (3 preceding siblings ...)
  2009-05-20 10:21 ` [PATCH 4/4] perf_counter: optimize sched in/out of counters Peter Zijlstra
@ 2009-05-20 10:48 ` Ingo Molnar
  4 siblings, 0 replies; 12+ messages in thread
From: Ingo Molnar @ 2009-05-20 10:48 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Paul Mackerras, Corey Ashford, linux-kernel,
	Arnaldo Carvalho de Melo, John Kacur


Applied to tip:perfcounters/core, thanks Peter!

	Ingo


* [tip:perfcounters/core] perf_counter: Solve the rotate_ctx vs inherit race differently
  2009-05-20 10:21 ` [PATCH 1/4] perf_counter: solve the rotate_ctx vs inherit race differently Peter Zijlstra
@ 2009-05-20 17:18   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 12+ messages in thread
From: tip-bot for Peter Zijlstra @ 2009-05-20 17:18 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, acme, paulus, hpa, mingo, jkacur, a.p.zijlstra,
	mtosatti, tglx, cjashfor, mingo

Commit-ID:  d7b629a34fc4134a43c730b5f0197855dc4948d0
Gitweb:     http://git.kernel.org/tip/d7b629a34fc4134a43c730b5f0197855dc4948d0
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Wed, 20 May 2009 12:21:19 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Wed, 20 May 2009 12:43:32 +0200

perf_counter: Solve the rotate_ctx vs inherit race differently

Instead of disabling RR scheduling of the counters, iterate the counters
on inheritance via a different list, one that never gets rotated.

[ Impact: cleanup, optimization ]

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <20090520102553.237504544@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


---
 include/linux/perf_counter.h |    1 -
 kernel/perf_counter.c        |   15 +++++----------
 2 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/include/linux/perf_counter.h b/include/linux/perf_counter.h
index 13cb2fb..c8c1dfc 100644
--- a/include/linux/perf_counter.h
+++ b/include/linux/perf_counter.h
@@ -508,7 +508,6 @@ struct perf_counter_context {
 	int			nr_counters;
 	int			nr_active;
 	int			is_active;
-	int			rr_allowed;
 	struct task_struct	*task;
 
 	/*
diff --git a/kernel/perf_counter.c b/kernel/perf_counter.c
index 4d8f973..64113e6 100644
--- a/kernel/perf_counter.c
+++ b/kernel/perf_counter.c
@@ -1120,8 +1120,7 @@ void perf_counter_task_tick(struct task_struct *curr, int cpu)
 	__perf_counter_task_sched_out(ctx);
 
 	rotate_ctx(&cpuctx->ctx);
-	if (ctx->rr_allowed)
-		rotate_ctx(ctx);
+	rotate_ctx(ctx);
 
 	perf_counter_cpu_sched_in(cpuctx, cpu);
 	perf_counter_task_sched_in(curr, cpu);
@@ -3109,7 +3108,6 @@ __perf_counter_init_context(struct perf_counter_context *ctx,
 	mutex_init(&ctx->mutex);
 	INIT_LIST_HEAD(&ctx->counter_list);
 	INIT_LIST_HEAD(&ctx->event_list);
-	ctx->rr_allowed = 1;
 	ctx->task = task;
 }
 
@@ -3350,14 +3348,14 @@ void perf_counter_init_task(struct task_struct *child)
 	 */
 	mutex_lock(&parent_ctx->mutex);
 
-	parent_ctx->rr_allowed = 0;
-	barrier(); /* irqs */
-
 	/*
 	 * We dont have to disable NMIs - we are only looking at
 	 * the list, not manipulating it:
 	 */
-	list_for_each_entry(counter, &parent_ctx->counter_list, list_entry) {
+	list_for_each_entry_rcu(counter, &parent_ctx->event_list, event_entry) {
+		if (counter != counter->group_leader)
+			continue;
+
 		if (!counter->hw_event.inherit)
 			continue;
 
@@ -3366,9 +3364,6 @@ void perf_counter_init_task(struct task_struct *child)
 			break;
 	}
 
-	barrier(); /* irqs */
-	parent_ctx->rr_allowed = 1;
-
 	mutex_unlock(&parent_ctx->mutex);
 }
 


* [tip:perfcounters/core] perf_counter: Log irq_period changes
  2009-05-20 10:21 ` [PATCH 2/4] perf_counter: log irq_period changes Peter Zijlstra
@ 2009-05-20 17:18   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 12+ messages in thread
From: tip-bot for Peter Zijlstra @ 2009-05-20 17:18 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, acme, paulus, hpa, mingo, jkacur, a.p.zijlstra,
	mtosatti, tglx, cjashfor, mingo

Commit-ID:  26b119bc811a73bac6ecf95bdf284bf31c7955f0
Gitweb:     http://git.kernel.org/tip/26b119bc811a73bac6ecf95bdf284bf31c7955f0
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Wed, 20 May 2009 12:21:20 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Wed, 20 May 2009 12:43:33 +0200

perf_counter: Log irq_period changes

For the dynamic irq_period code, log the period whenever we change it, so
that analysis tools can normalize the event flow.

[ Impact: add new feature to allow more precise profiling ]

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <20090520102553.298769743@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


---
 include/linux/perf_counter.h |    8 ++++++++
 kernel/perf_counter.c        |   40 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 47 insertions(+), 1 deletions(-)

diff --git a/include/linux/perf_counter.h b/include/linux/perf_counter.h
index c8c1dfc..f612941 100644
--- a/include/linux/perf_counter.h
+++ b/include/linux/perf_counter.h
@@ -258,6 +258,14 @@ enum perf_event_type {
 	PERF_EVENT_COMM			= 3,
 
 	/*
+	 * struct {
+	 * 	struct perf_event_header	header;
+	 * 	u64				irq_period;
+	 * };
+	 */
+	PERF_EVENT_PERIOD		= 4,
+
+	/*
 	 * When header.misc & PERF_EVENT_MISC_OVERFLOW the event_type field
 	 * will be PERF_RECORD_*
 	 *
diff --git a/kernel/perf_counter.c b/kernel/perf_counter.c
index 64113e6..db02eb1 100644
--- a/kernel/perf_counter.c
+++ b/kernel/perf_counter.c
@@ -1046,7 +1046,9 @@ int perf_counter_task_enable(void)
 	return 0;
 }
 
-void perf_adjust_freq(struct perf_counter_context *ctx)
+static void perf_log_period(struct perf_counter *counter, u64 period);
+
+static void perf_adjust_freq(struct perf_counter_context *ctx)
 {
 	struct perf_counter *counter;
 	u64 irq_period;
@@ -1072,6 +1074,8 @@ void perf_adjust_freq(struct perf_counter_context *ctx)
 		if (!irq_period)
 			irq_period = 1;
 
+		perf_log_period(counter, irq_period);
+
 		counter->hw.irq_period = irq_period;
 		counter->hw.interrupts = 0;
 	}
@@ -2407,6 +2411,40 @@ void perf_counter_munmap(unsigned long addr, unsigned long len,
 }
 
 /*
+ *
+ */
+
+static void perf_log_period(struct perf_counter *counter, u64 period)
+{
+	struct perf_output_handle handle;
+	int ret;
+
+	struct {
+		struct perf_event_header	header;
+		u64				time;
+		u64				period;
+	} freq_event = {
+		.header = {
+			.type = PERF_EVENT_PERIOD,
+			.misc = 0,
+			.size = sizeof(freq_event),
+		},
+		.time = sched_clock(),
+		.period = period,
+	};
+
+	if (counter->hw.irq_period == period)
+		return;
+
+	ret = perf_output_begin(&handle, counter, sizeof(freq_event), 0, 0);
+	if (ret)
+		return;
+
+	perf_output_put(&handle, freq_event);
+	perf_output_end(&handle);
+}
+
+/*
  * Generic counter overflow handling.
  */
 


* [tip:perfcounters/core] perf_counter: Optimize disable of time based sw counters
  2009-05-20 10:21 ` [PATCH 3/4] perf_counter: optimize disable of time based sw counters Peter Zijlstra
@ 2009-05-20 17:19   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 12+ messages in thread
From: tip-bot for Peter Zijlstra @ 2009-05-20 17:19 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, acme, paulus, hpa, mingo, jkacur, a.p.zijlstra,
	mtosatti, tglx, cjashfor, mingo

Commit-ID:  b986d7ec0f8b7ea3cc7366d80a137fbe839df227
Gitweb:     http://git.kernel.org/tip/b986d7ec0f8b7ea3cc7366d80a137fbe839df227
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Wed, 20 May 2009 12:21:21 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Wed, 20 May 2009 12:43:33 +0200

perf_counter: Optimize disable of time based sw counters

Currently we call hrtimer_cancel() unconditionally when disabling time-based
software counters. Avoid the call when possible.

[ Impact: micro-optimize the code ]

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <20090520102553.388185031@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


---
 kernel/perf_counter.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/perf_counter.c b/kernel/perf_counter.c
index db02eb1..473ed2c 100644
--- a/kernel/perf_counter.c
+++ b/kernel/perf_counter.c
@@ -2716,7 +2716,8 @@ static int cpu_clock_perf_counter_enable(struct perf_counter *counter)
 
 static void cpu_clock_perf_counter_disable(struct perf_counter *counter)
 {
-	hrtimer_cancel(&counter->hw.hrtimer);
+	if (counter->hw.irq_period)
+		hrtimer_cancel(&counter->hw.hrtimer);
 	cpu_clock_perf_counter_update(counter);
 }
 
@@ -2767,7 +2768,8 @@ static int task_clock_perf_counter_enable(struct perf_counter *counter)
 
 static void task_clock_perf_counter_disable(struct perf_counter *counter)
 {
-	hrtimer_cancel(&counter->hw.hrtimer);
+	if (counter->hw.irq_period)
+		hrtimer_cancel(&counter->hw.hrtimer);
 	task_clock_perf_counter_update(counter, counter->ctx->time);
 
 }


* [tip:perfcounters/core] perf_counter: Optimize sched in/out of counters
  2009-05-20 10:21 ` [PATCH 4/4] perf_counter: optimize sched in/out of counters Peter Zijlstra
@ 2009-05-20 17:19   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 12+ messages in thread
From: tip-bot for Peter Zijlstra @ 2009-05-20 17:19 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, acme, paulus, hpa, mingo, jkacur, a.p.zijlstra,
	mtosatti, tglx, cjashfor, mingo

Commit-ID:  afedadf23a2c90f3ba0d963282cbe6a6be129494
Gitweb:     http://git.kernel.org/tip/afedadf23a2c90f3ba0d963282cbe6a6be129494
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Wed, 20 May 2009 12:21:22 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Wed, 20 May 2009 12:43:34 +0200

perf_counter: Optimize sched in/out of counters

Avoid a function call for !group counters by calling the per-counter
scheduling function directly instead of going through the group helper.

[ Impact: micro-optimize the code ]

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <20090520102553.511933670@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


---
 kernel/perf_counter.c |   25 +++++++++++++++++++------
 1 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/kernel/perf_counter.c b/kernel/perf_counter.c
index 473ed2c..69d4de8 100644
--- a/kernel/perf_counter.c
+++ b/kernel/perf_counter.c
@@ -826,8 +826,12 @@ void __perf_counter_sched_out(struct perf_counter_context *ctx,
 
 	perf_disable();
 	if (ctx->nr_active) {
-		list_for_each_entry(counter, &ctx->counter_list, list_entry)
-			group_sched_out(counter, cpuctx, ctx);
+		list_for_each_entry(counter, &ctx->counter_list, list_entry) {
+			if (counter != counter->group_leader)
+				counter_sched_out(counter, cpuctx, ctx);
+			else
+				group_sched_out(counter, cpuctx, ctx);
+		}
 	}
 	perf_enable();
  out:
@@ -903,8 +907,12 @@ __perf_counter_sched_in(struct perf_counter_context *ctx,
 		if (counter->cpu != -1 && counter->cpu != cpu)
 			continue;
 
-		if (group_can_go_on(counter, cpuctx, 1))
-			group_sched_in(counter, cpuctx, ctx, cpu);
+		if (counter != counter->group_leader)
+			counter_sched_in(counter, cpuctx, ctx, cpu);
+		else {
+			if (group_can_go_on(counter, cpuctx, 1))
+				group_sched_in(counter, cpuctx, ctx, cpu);
+		}
 
 		/*
 		 * If this pinned group hasn't been scheduled,
@@ -932,9 +940,14 @@ __perf_counter_sched_in(struct perf_counter_context *ctx,
 		if (counter->cpu != -1 && counter->cpu != cpu)
 			continue;
 
-		if (group_can_go_on(counter, cpuctx, can_add_hw)) {
-			if (group_sched_in(counter, cpuctx, ctx, cpu))
+		if (counter != counter->group_leader) {
+			if (counter_sched_in(counter, cpuctx, ctx, cpu))
 				can_add_hw = 0;
+		} else {
+			if (group_can_go_on(counter, cpuctx, can_add_hw)) {
+				if (group_sched_in(counter, cpuctx, ctx, cpu))
+					can_add_hw = 0;
+			}
 		}
 	}
 	perf_enable();


* [PATCH 0/4] perf counter bits
@ 2009-08-19  9:18 Peter Zijlstra
  0 siblings, 0 replies; 12+ messages in thread
From: Peter Zijlstra @ 2009-08-19  9:18 UTC (permalink / raw)
  To: Ingo Molnar, Paul Mackerras
  Cc: Arnaldo Carvalho de Melo, Frederic Weisbecker, Mike Galbraith,
	linux-kernel, Peter Zijlstra

Some perf counter patches for your consideration ;-)
-- 



Thread overview: 12+ messages
2009-05-20 10:21 [PATCH 0/4] perf counter bits Peter Zijlstra
2009-05-20 10:21 ` [PATCH 1/4] perf_counter: solve the rotate_ctx vs inherit race differently Peter Zijlstra
2009-05-20 17:18   ` [tip:perfcounters/core] perf_counter: Solve " tip-bot for Peter Zijlstra
2009-05-20 10:21 ` [PATCH 2/4] perf_counter: log irq_period changes Peter Zijlstra
2009-05-20 17:18   ` [tip:perfcounters/core] perf_counter: Log " tip-bot for Peter Zijlstra
2009-05-20 10:21 ` [PATCH 3/4] perf_counter: optimize disable of time based sw counters Peter Zijlstra
2009-05-20 17:19   ` [tip:perfcounters/core] perf_counter: Optimize " tip-bot for Peter Zijlstra
2009-05-20 10:21 ` [PATCH 4/4] perf_counter: optimize sched in/out of counters Peter Zijlstra
2009-05-20 17:19   ` [tip:perfcounters/core] perf_counter: Optimize " tip-bot for Peter Zijlstra
2009-05-20 10:48 ` [PATCH 0/4] perf counter bits Ingo Molnar
  -- strict thread matches above, loose matches on Subject: below --
2009-08-19  9:18 Peter Zijlstra
2009-05-01 10:23 [PATCH 0/4] perf_counter bits Peter Zijlstra
