From: David Ahern <dsahern@gmail.com>
To: Arnaldo Melo <acme@ghostprotocols.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>,
LKML <linux-kernel@vger.kernel.org>,
Ingo Molnar <mingo@kernel.org>, Jiri Olsa <jolsa@redhat.com>,
Mike Galbraith <efault@gmx.de>,
Namhyung Kim <namhyung@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Stephane Eranian <eranian@google.com>
Subject: Re: [PATCH] perf session: Add option to copy events when queueing
Date: Thu, 24 Oct 2013 14:12:29 +0100 [thread overview]
Message-ID: <52691CBD.7050606@gmail.com> (raw)
In-Reply-To: <20131024122748.GC6539@ghostprotocols.net>
On 10/24/13 1:27 PM, Arnaldo Melo wrote:
> Em Thu, Oct 24, 2013 at 11:23:32AM +0100, David Ahern escreveu:
>> On 10/24/13 10:30 AM, Frederic Weisbecker wrote:
>>> Bah, checking that again, there doesn't seem to be a bug there. Actually
>>> the sample buffer is reset after we pick the last entry, so it looks
>>> all fine. I got confused as usual. Never mind.
>>
>> Ok. I had not come back to this thread since I decided on a
>> different route for the event copying. I'll take it out of my to-do
>> list.
>
> Can you elaborate on that?
The driving use case is my perf-daemon:
https://github.com/dsahern/linux/blob/perf-sched-timehist-3.11/tools/perf/schedmon.c,
line 271.
Rather than have the perf infrastructure manage the allocation and
copies, I decided to have the daemon do it. The session infrastructure is
used only for time sorting.
This works better: the event that pops out of the session ordering is
put onto another list, which gives the daemon control over when memory
is allocated and freed, and the event only has to be copied once.
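Roughly, the daemon side follows the pattern sketched below. This is only
an illustration of the idea, not the code in schedmon.c; daemon_event and
daemon_queue_event are made-up names, and the only perf type assumed is
union perf_event from util/event.h in the perf tree.

#include <errno.h>
#include <stdlib.h>
#include <string.h>

#include "util/event.h"		/* union perf_event (perf tree assumed) */

/* daemon-owned copy of one event, kept on a simple singly-linked list */
struct daemon_event {
	struct daemon_event	*next;
	union perf_event	*event;
};

static struct daemon_event *de_head, **de_tail = &de_head;

/* called once the session's time ordering hands the event back */
static int daemon_queue_event(union perf_event *event)
{
	struct daemon_event *de = malloc(sizeof(*de));

	if (de == NULL)
		return -ENOMEM;

	/* the one and only copy: out of perf's buffer, into daemon memory */
	de->event = malloc(event->header.size);
	if (de->event == NULL) {
		free(de);
		return -ENOMEM;
	}
	memcpy(de->event, event, event->header.size);

	de->next = NULL;
	*de_tail = de;
	de_tail = &de->next;
	return 0;
}

The daemon then walks and frees the list at its own pace, so no event
outlives the memory the daemon allocated for it.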
>
> I had this feeling that perhaps we could defer copying the event till it
> would be overwritten, something like making a range read only and then
> when the event would be _really_ consumed the tooling would mark it as
> so... Have to think about it more tho :-\
The above does not solve the problem for live tools, where the event
actually lives in the ring buffer and can be overwritten while it sits
in the session's ordered-samples queue. The kvm-stat-live tool with
nested virtualization is a stress test for this case. I need to come
back to that problem; it just has not bubbled up to the top of my list.
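For reference, the general shape of the copy-when-queueing idea from the
original patch is sketched below; copy_on_queue and __queue_event are
placeholders here, not the actual perf_session API.

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#include "util/event.h"		/* union perf_event (perf tree assumed) */

/* stand-in for the real ordered-samples insertion, not perf's API */
int __queue_event(union perf_event *event, uint64_t timestamp);

static int queue_event(union perf_event *event, uint64_t timestamp,
		       bool copy_on_queue)
{
	union perf_event *ev = event;

	if (copy_on_queue) {
		/*
		 * In live mode the event still sits in the mmap'ed ring
		 * buffer and can be overwritten before the ordered queue
		 * is flushed, so take a private copy first.
		 */
		ev = malloc(event->header.size);
		if (ev == NULL)
			return -ENOMEM;
		memcpy(ev, event, event->header.size);
	}

	return __queue_event(ev, timestamp);
}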
David