From: Stephane Eranian <eranian@google.com>
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, mingo@elte.hu, paulus@samba.org,
perfmon2-devel@lists.sourceforge.net, eranian@google.com,
eranian@gmail.com
Subject: [PATCH] perf_events: fix read() bogus counts when in error state
Date: Thu, 26 Nov 2009 09:24:30 -0800 (PST)
Message-ID: <4b0eb9ce.0508d00a.573b.ffffeab6@mx.google.com>

When a pinned group cannot be scheduled, it goes into error state.
Normally a group cannot leave error state without being explicitly
re-enabled or disabled. There was a bug in per-thread mode whereby,
upon termination of the monitored thread, the group would transition
from error to off, causing read() to return bogus counts and timing
information.
It is important to realize that the current perf_events implementation
assigns higher priority to system-wide events than to per-thread
events, regardless of whether the per-thread events are pinned.
It is not clear to me whether this is by design of the API or just a
side effect of the implementation. I believe it is desirable that a
system-wide tool gets priority access to the PMU, but this causes
issues with per-thread events, especially when they request pinning.
A pinned per-thread event can thus be evicted, and it remains evicted
until enough PMU resources are freed by system-wide events. With this
patch it is now possible to detect this situation when counting, but
it remains unclear how it could be detected when sampling, as it
potentially incurs large blind spots and thus bias, degrading the
quality of the data collected.
The API is missing a clear definition of what it means to be pinned
for a per-thread event vs. a system-wide event, just as it does not
clearly state that system-wide events have higher priority than
per-thread events.
Signed-off-by: Stephane Eranian <eranian@google.com>
---
perf_event.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 0b0d5f7..7a8bb5b 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -333,7 +333,16 @@ list_del_event(struct perf_event *event, struct perf_event_context *ctx)
 		event->group_leader->nr_siblings--;
 
 	update_event_times(event);
-	event->state = PERF_EVENT_STATE_OFF;
+
+	/*
+	 * If event was in error state, then keep it
+	 * that way, otherwise bogus counts will be
+	 * returned on read(). The only way to get out
+	 * of error state is by explicit re-enabling
+	 * of the event
+	 */
+	if (event->state > PERF_EVENT_STATE_OFF)
+		event->state = PERF_EVENT_STATE_OFF;
 
 	/*
 	 * If this was a group event with sibling events then
Thread overview: 4+ messages
2009-11-26 17:24 Stephane Eranian [this message]
2009-11-26 17:36 ` [PATCH] perf_events: fix read() bogus counts when in error state Peter Zijlstra
2009-11-26 17:48 ` Stephane Eranian
2009-11-26 17:51 ` [tip:perf/core] perf_events: Fix " tip-bot for Stephane Eranian