* [PATCH] perf_events: improve task_sched_in()
@ 2010-03-11 6:26 eranian
2010-03-11 8:35 ` Peter Zijlstra
2010-03-11 14:42 ` [tip:perf/core] perf_events: Improve task_sched_in() tip-bot for eranian@google.com
0 siblings, 2 replies; 3+ messages in thread
From: eranian @ 2010-03-11 6:26 UTC (permalink / raw)
To: linux-kernel
Cc: peterz, mingo, paulus, fweisbec, robert.richter, davem,
perfmon2-devel
This patch is an optimization in perf_event_task_sched_in() to avoid scheduling
the events twice in a row. Without it, the perf_disable()/perf_enable() pair
is invoked twice, so pinned events are counted while the flexible events are
being scheduled, and we go through hw_perf_enable() twice. By encapsulating the
whole sequence in a single perf_disable()/perf_enable() pair, we ensure that
hw_perf_enable() is invoked only once, thanks to the refcount protection.
Signed-off-by: Stephane Eranian <eranian@google.com>
--
perf_event.c | 4 ++++
1 file changed, 4 insertions(+)
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1382,6 +1382,8 @@ void perf_event_task_sched_in(struct task_struct *task)
if (cpuctx->task_ctx == ctx)
return;
+ perf_disable();
+
/*
* We want to keep the following priority order:
* cpu pinned (that don't need to move), task pinned,
@@ -1394,6 +1396,8 @@ void perf_event_task_sched_in(struct task_struct *task)
ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE);
cpuctx->task_ctx = ctx;
+
+ perf_enable();
}
#define MAX_INTERRUPTS (~0ULL)
* Re: [PATCH] perf_events: improve task_sched_in()
2010-03-11 6:26 [PATCH] perf_events: improve task_sched_in() eranian
@ 2010-03-11 8:35 ` Peter Zijlstra
2010-03-11 14:42 ` [tip:perf/core] perf_events: Improve task_sched_in() tip-bot for eranian@google.com
1 sibling, 0 replies; 3+ messages in thread
From: Peter Zijlstra @ 2010-03-11 8:35 UTC (permalink / raw)
To: eranian
Cc: linux-kernel, mingo, paulus, fweisbec, robert.richter, davem,
perfmon2-devel
On Wed, 2010-03-10 at 22:26 -0800, eranian@google.com wrote:
> This patch is an optimization in perf_event_task_sched_in() to avoid scheduling
> the events twice in a row. Without it, the perf_disable()/perf_enable() pair
> is invoked twice, so pinned events are counted while the flexible events are
> being scheduled, and we go through hw_perf_enable() twice. By encapsulating the
> whole sequence in a single perf_disable()/perf_enable() pair, we ensure that
> hw_perf_enable() is invoked only once, thanks to the refcount protection.
Agreed, this makes perfect sense.
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Signed-off-by: Stephane Eranian <eranian@google.com>
> --
> perf_event.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> --- a/kernel/perf_event.c
> +++ b/kernel/perf_event.c
> @@ -1382,6 +1382,8 @@ void perf_event_task_sched_in(struct task_struct *task)
> if (cpuctx->task_ctx == ctx)
> return;
>
> + perf_disable();
> +
> /*
> * We want to keep the following priority order:
> * cpu pinned (that don't need to move), task pinned,
> @@ -1394,6 +1396,8 @@ void perf_event_task_sched_in(struct task_struct *task)
> ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE);
>
> cpuctx->task_ctx = ctx;
> +
> + perf_enable();
> }
>
> #define MAX_INTERRUPTS (~0ULL)
* [tip:perf/core] perf_events: Improve task_sched_in()
2010-03-11 6:26 [PATCH] perf_events: improve task_sched_in() eranian
2010-03-11 8:35 ` Peter Zijlstra
@ 2010-03-11 14:42 ` tip-bot for eranian@google.com
1 sibling, 0 replies; 3+ messages in thread
From: tip-bot for eranian@google.com @ 2010-03-11 14:42 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, eranian, hpa, mingo, a.p.zijlstra, tglx, mingo
Commit-ID: 9b33fa6ba0e2f90fdf407501db801c2511121564
Gitweb: http://git.kernel.org/tip/9b33fa6ba0e2f90fdf407501db801c2511121564
Author: eranian@google.com <eranian@google.com>
AuthorDate: Wed, 10 Mar 2010 22:26:05 -0800
Committer: Ingo Molnar <mingo@elte.hu>
CommitDate: Thu, 11 Mar 2010 15:23:28 +0100
perf_events: Improve task_sched_in()
This patch is an optimization in perf_event_task_sched_in() to avoid
scheduling the events twice in a row.
Without it, the perf_disable()/perf_enable() pair is invoked twice, so
pinned events are counted while the flexible events are being scheduled,
and we go through hw_perf_enable() twice.
By encapsulating the whole sequence in a single perf_disable()/perf_enable()
pair, we ensure that hw_perf_enable() is invoked only once, thanks to the
refcount protection.
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1268288765-5326-1-git-send-email-eranian@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
kernel/perf_event.c | 4 ++++
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 52c69a3..3853d49 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1368,6 +1368,8 @@ void perf_event_task_sched_in(struct task_struct *task)
if (cpuctx->task_ctx == ctx)
return;
+ perf_disable();
+
/*
* We want to keep the following priority order:
* cpu pinned (that don't need to move), task pinned,
@@ -1380,6 +1382,8 @@ void perf_event_task_sched_in(struct task_struct *task)
ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE);
cpuctx->task_ctx = ctx;
+
+ perf_enable();
}
#define MAX_INTERRUPTS (~0ULL)