From: David Ahern <dsahern@gmail.com>
To: Stephane Eranian <eranian@google.com>
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org,
mingo@elte.hu, acme@redhat.com, namhyung.kim@lge.com
Subject: Re: [PATCH 2/5] perf tools: fix piped mode read code
Date: Thu, 10 May 2012 18:11:23 -0600
Message-ID: <4FAC592B.90904@gmail.com>
In-Reply-To: <1334134889-20312-3-git-send-email-eranian@google.com>
Hi Stephane:
This patch no longer applies cleanly. Can you update the series?
David
On 4/11/12 3:01 AM, Stephane Eranian wrote:
> In __perf_session__process_pipe_events(), there was a risk
> we would read more than what a union perf_event struct can
> hold. This could happen when perf is reading a file which
> contains new record types it does not know about and which are
> larger than anything it knows about.
>
> In general, perf is supposed to skip records it does not
> understand, but in pipe mode, those have to be read and ignored.
> The fixed-size header contains the size of the record, but that
> size may be larger than union perf_event, yet the union was used
> as the destination buffer for the read:
>
> 	union perf_event event;
> 	void *p;
>
> 	size = event.header.size;
>
> 	p = &event;
> 	p += sizeof(struct perf_event_header);
> 	if (size - sizeof(struct perf_event_header)) {
> 		err = readn(self->fd, p, size - sizeof(struct perf_event_header));
>
> We fix this by allocating a buffer based on the size reported in
> the header. We reuse the buffer across records and realloc it
> if it becomes too small. In the common case, the performance
> impact is negligible.
>
> Signed-off-by: Stephane Eranian <eranian@google.com>
> ---
> tools/perf/util/session.c | 35 +++++++++++++++++++++++++++--------
> 1 files changed, 27 insertions(+), 8 deletions(-)
>
> diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
> index 9412e3b..d13e915 100644
> --- a/tools/perf/util/session.c
> +++ b/tools/perf/util/session.c
> @@ -1056,8 +1056,9 @@ volatile int session_done;
> static int __perf_session__process_pipe_events(struct perf_session *self,
> struct perf_tool *tool)
> {
> - union perf_event event;
> - uint32_t size;
> + union perf_event *event;
> + uint32_t size, cur_size = 0;
> + void *buf = NULL;
> int skip = 0;
> u64 head;
> int err;
> @@ -1066,8 +1067,14 @@ static int __perf_session__process_pipe_events(struct perf_session *self,
> perf_tool__fill_defaults(tool);
>
> head = 0;
> + cur_size = sizeof(union perf_event);
> +
> + buf = malloc(cur_size);
> + if (!buf)
> + return -errno;
> more:
> - err = readn(self->fd, &event, sizeof(struct perf_event_header));
> + event = buf;
> + err = readn(self->fd, event, sizeof(struct perf_event_header));
> if (err <= 0) {
> if (err == 0)
> goto done;
> @@ -1077,13 +1084,23 @@ static int __perf_session__process_pipe_events(struct perf_session *self,
> }
>
> if (self->header.needs_swap)
> - perf_event_header__bswap(&event.header);
> + perf_event_header__bswap(&event->header);
>
> - size = event.header.size;
> + size = event->header.size;
> if (size == 0)
> size = 8;
>
> - p = &event;
> + if (size > cur_size) {
> + void *new = realloc(buf, size);
> + if (!new) {
> + pr_err("failed to allocate memory to read event\n");
> + goto out_err;
> + }
> + buf = new;
> + cur_size = size;
> + event = buf;
> + }
> + p = event;
> p += sizeof(struct perf_event_header);
>
> if (size - sizeof(struct perf_event_header)) {
> @@ -1099,9 +1116,10 @@ static int __perf_session__process_pipe_events(struct perf_session *self,
> }
> }
>
> - if ((skip = perf_session__process_event(self, &event, tool, head)) < 0) {
> + skip = perf_session__process_event(self, event, tool, head);
> + if (skip < 0) {
> dump_printf("%#" PRIx64 " [%#x]: skipping unknown header type: %d\n",
> - head, event.header.size, event.header.type);
> + head, event->header.size, event->header.type);
> /*
> * assume we lost track of the stream, check alignment, and
> * increment a single u64 in the hope to catch on again 'soon'.
> @@ -1122,6 +1140,7 @@ static int __perf_session__process_pipe_events(struct perf_session *self,
> done:
> err = 0;
> out_err:
> + free(buf);
> perf_session__warn_about_errors(self, tool);
> perf_session_free_sample_buffers(self);
> return err;
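
The pattern the patch above implements -- read a fixed-size header from the pipe, grow a heap buffer on demand when a record is larger than anything seen so far, then read the remaining payload even for record types the tool does not understand -- can be sketched in isolation. The following is a minimal illustration only, not the perf code: record_header, read_full and process_pipe are hypothetical stand-ins for struct perf_event_header, readn() and __perf_session__process_pipe_events(), and error handling is reduced to the essentials.

#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

/* Illustrative stand-in for struct perf_event_header. */
struct record_header {
	uint32_t type;
	uint16_t misc;
	uint16_t size;		/* total record size, header included */
};

/* Hypothetical helper standing in for perf's readn(): read exactly n bytes. */
static ssize_t read_full(int fd, void *buf, size_t n)
{
	char *p = buf;
	size_t left = n;

	while (left) {
		ssize_t ret = read(fd, p, left);

		if (ret <= 0)
			return ret;	/* 0 = EOF, < 0 = error */
		left -= ret;
		p += ret;
	}
	return n;
}

static int process_pipe(int fd)
{
	size_t cur_size = 4096;		/* initial guess, grown on demand */
	void *buf = malloc(cur_size);
	int err = -1;

	if (!buf)
		return -1;

	for (;;) {
		struct record_header *hdr = buf;
		ssize_t n = read_full(fd, hdr, sizeof(*hdr));
		size_t size;

		if (n == 0) {		/* clean end of stream */
			err = 0;
			break;
		}
		if (n < 0)
			goto out;

		/* Mirror the patch's "if (size == 0) size = 8" fallback. */
		size = hdr->size ? hdr->size : sizeof(*hdr);

		/* Grow the buffer when a record is larger than any seen before. */
		if (size > cur_size) {
			void *new = realloc(buf, size);

			if (!new)
				goto out;
			buf = new;
			cur_size = size;
			hdr = buf;
		}

		/* Read the payload that follows the header, even for unknown
		 * record types, so the stream stays in sync. */
		if (size > sizeof(*hdr) &&
		    read_full(fd, (char *)buf + sizeof(*hdr),
			      size - sizeof(*hdr)) <= 0)
			goto out;

		/* ... dispatch or skip the record based on hdr->type ... */
	}
out:
	free(buf);
	return err;
}

Reusing one heap buffer and only growing it monotonically keeps the common case to a single malloc, which is why the commit message can claim the performance impact is negligible.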
Thread overview: 9+ messages
2012-04-11 9:01 [PATCH 0/5] perf tools: add meta-data header support in pipe mode Stephane Eranian
2012-04-11 9:01 ` [PATCH 1/5] perf inject: fix broken perf inject -b Stephane Eranian
2012-04-11 9:01 ` [PATCH 2/5] perf tools: fix piped mode read code Stephane Eranian
2012-05-11 0:11 ` David Ahern [this message]
2012-05-14 20:24 ` Stephane Eranian
2012-04-11 9:01 ` [PATCH 3/5] perf tools: rename HEADER_TRACE_INFO to HEADER_TRACING_DATA Stephane Eranian
2012-04-11 9:01 ` [PATCH 4/5] perf record: add meta-data support for pipe-mode Stephane Eranian
2012-04-11 9:01 ` [PATCH 5/5] perf: make perf buildid-list work better with pipe mode Stephane Eranian
2012-04-26 15:14 ` [PATCH 0/5] perf tools: add meta-data header support in " Stephane Eranian