From: Frederic Weisbecker <fweisbec@gmail.com>
To: David Ahern <dsahern@gmail.com>
Cc: acme@ghostprotocols.net, linux-kernel@vger.kernel.org,
Ingo Molnar <mingo@kernel.org>, Jiri Olsa <jolsa@redhat.com>,
Mike Galbraith <efault@gmx.de>,
Namhyung Kim <namhyung@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Stephane Eranian <eranian@google.com>
Subject: Re: [PATCH] perf session: Add option to copy events when queueing
Date: Mon, 16 Sep 2013 18:40:24 +0200
Message-ID: <20130916164022.GA29373@localhost.localdomain>
In-Reply-To: <52349C14.2070508@gmail.com>

On Sat, Sep 14, 2013 at 11:25:40AM -0600, David Ahern wrote:
> On 9/14/13 10:16 AM, Frederic Weisbecker wrote:
> >>@@ -676,7 +682,12 @@ int perf_session_queue_event(struct perf_session *s, union perf_event *event,
> >>
> >> 	new->timestamp = timestamp;
> >> 	new->file_offset = file_offset;
> >>-	new->event = event;
> >>+
> >>+	if (s->copy_on_queue) {
> >>+		new->event = malloc(event->header.size);
> >>+		memcpy(new->event, event, event->header.size);
> >>+	} else
> >>+		new->event = event;
>
> ---8<---
>
> >So do you think it should stay optional? This looks like a global problem; I mean,
> >the event can be unmapped at any time for any builtin tool mapping it, right?
>
> Yes. I could make it the default behavior; there is just the overhead of
> doing that (a malloc and copy for each event).
Are there any tools that don't suffer from this bug somehow? If not, then it must
be applied unconditionally.
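
As a side note, the copy path in the hunk above doesn't check the malloc()
return value. An untested sketch of what I mean, in the same context as the
patch:

	if (s->copy_on_queue) {
		/* Copy the event so it stays valid if the mmap window moves on. */
		new->event = malloc(event->header.size);
		if (new->event == NULL)
			return -ENOMEM;
		memcpy(new->event, event, event->header.size);
	} else
		new->event = event;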
>
> >
> >Also, we already allocate the sample list node (struct sample_queue) from
> >os->sample_buffer, i.e. we have our own allocator there.
> >
> >Probably we should reuse that and include the copied event space in "struct sample_queue"?
>
>
> Right, that's where I put the malloc and copy - I kept the relevant
> change above. I take it you are thinking of something different, but
> I am not following you. You definitely do NOT want to change struct
> sample_queue to include an event - like this:
>
> diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
> index 51f5edf..866944a 100644
> --- a/tools/perf/util/session.c
> +++ b/tools/perf/util/session.c
> @@ -491,7 +491,7 @@ static perf_event__swap_op perf_event__swap_ops[] = {
>  struct sample_queue {
>  	u64			timestamp;
>  	u64			file_offset;
> -	union perf_event	*event;
> +	union perf_event	event;
Right, that's roughly what I thought.
>  	struct list_head	list;
>  };
>
> The size of the event union is determined by mmap_event (mmap2_event in the
> latest code), which is > 4096 bytes because of the filename member. Including
> the event directly in sample_queue would mean every queued node costs over
> 4KB even for small samples, ballooning memory usage (I learned this the hard
> way!).
Ah, then perhaps we can allocate with the dynamic size of the event?
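
A rough sketch of what I have in mind, using a flexible array member so the
node and the copied event come from a single allocation (untested; note the
list member has to move up, since the flexible array must come last):

	struct sample_queue {
		u64			timestamp;
		u64			file_offset;
		struct list_head	list;
		union perf_event	event[];	/* sized at allocation time */
	};

	new = malloc(sizeof(*new) + event->header.size);
	if (new == NULL)
		return -ENOMEM;
	memcpy(new->event, event, event->header.size);

That would mean giving up the fixed-size os->sample_buffer pool, though,
since the nodes would no longer have a uniform size.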
>
> >
> >Also, looking at it now, it seems we have a bug in the existing code:
> >
> >
> >	if (!list_empty(sc)) {
> >		new = list_entry(sc->next, struct sample_queue, list);
> >		list_del(&new->list);
> >	} else if (os->sample_buffer) {
> >		new = os->sample_buffer + os->sample_buffer_idx;
> >		if (++os->sample_buffer_idx == MAX_SAMPLE_BUFFER)
> >			os->sample_buffer = NULL;
> >	} else {
> >		os->sample_buffer = malloc(MAX_SAMPLE_BUFFER * sizeof(*new));
> >		if (!os->sample_buffer)
> >			return -ENOMEM;
> >		list_add(&os->sample_buffer->list, &os->to_free);
> >		os->sample_buffer_idx = 2;
> >		new = os->sample_buffer + 1;
> >	}
> >
> >If we actually run out of buffer room, we should reallocate right away and
> >not wait for the next entry; otherwise we lose an event:
> >
> >	if (!list_empty(sc)) {
> >		new = list_entry(sc->next, struct sample_queue, list);
> >		list_del(&new->list);
> >	} else {
> >		if (os->sample_buffer) {
> >			new = os->sample_buffer + os->sample_buffer_idx;
> >			if (++os->sample_buffer_idx == MAX_SAMPLE_BUFFER)
> >				os->sample_buffer = NULL;
> >		}
> >
> >		if (!os->sample_buffer) {
> >			os->sample_buffer = malloc(MAX_SAMPLE_BUFFER * sizeof(*new));
> >			if (!os->sample_buffer)
> >				return -ENOMEM;
> >			list_add(&os->sample_buffer->list, &os->to_free);
> >			os->sample_buffer_idx = 2;
> >			new = os->sample_buffer + 1;
> >		}
> >	}
> >
> >
> >The mirrored os->sample_buffer condition check is a bit ugly, though, and
> >should move to a helper function. But the idea is there.
>
> Ok. That should be a separate patch. Are you going to submit that one?
Yeah, unless you beat me to it :)
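
For the record, roughly what I have in mind for the helper (untested sketch;
the helper name is made up, and it is rearranged slightly so a node taken
from the old buffer isn't overwritten by the refill):

static struct sample_queue *os_sample_queue__next(struct ordered_samples *os)
{
	struct sample_queue *new = NULL;

	if (os->sample_buffer) {
		new = os->sample_buffer + os->sample_buffer_idx;
		if (++os->sample_buffer_idx == MAX_SAMPLE_BUFFER)
			os->sample_buffer = NULL;
	}

	if (!os->sample_buffer) {
		/* Refill right away instead of waiting for the next entry. */
		os->sample_buffer = malloc(MAX_SAMPLE_BUFFER * sizeof(*new));
		if (!os->sample_buffer)
			return NULL;
		/* Slot 0's list member chains the buffer on the to_free list. */
		list_add(&os->sample_buffer->list, &os->to_free);
		if (!new) {
			os->sample_buffer_idx = 2;
			new = os->sample_buffer + 1;
		} else {
			os->sample_buffer_idx = 1;
		}
	}

	return new;
}

perf_session_queue_event() would then check the sc free list first, fall back
to os_sample_queue__next(), and return -ENOMEM on NULL.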