From: Arnaldo Carvalho de Melo <acme@kernel.org>
To: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Qu Wenruo <quwenruo@cn.fujitsu.com>,
linux-perf-users@vger.kernel.org,
Namhyung Kim <namhyung@gmail.com>
Subject: Re: Is it possible to trace events and its call stack?
Date: Mon, 16 Jan 2017 16:32:59 -0300
Message-ID: <20170116193259.GB14872@kernel.org>
In-Reply-To: <20170116212629.610b7575657f0ac9b1f563e1@kernel.org>
On Mon, Jan 16, 2017 at 09:26:29PM +0900, Masami Hiramatsu wrote:
> On Mon, 16 Jan 2017 16:48:57 +0800
> Qu Wenruo <quwenruo@cn.fujitsu.com> wrote:
>
> >
> >
> > At 01/16/2017 10:55 AM, Masami Hiramatsu wrote:
> > > On Thu, 12 Jan 2017 15:49:08 +0800
> > > Qu Wenruo <quwenruo@cn.fujitsu.com> wrote:
> > >
> > >> Hi,
> > >>
> > >> Is it possible to use perf/ftrace to trace events and their call stacks?
> > >>
> > >> [Background]
> > >> The structure I'm tracing is btrfs_bio, in btrfs.
> > >> That structure is allocated and freed somewhat frequently, and its
> > >> size is not fixed, so no SLAB/SLUB cache is used.
> > >>
> > >> I added trace events (or tracepoints; anyway, just in
> > >> include/trace/events/btrfs.h) to trace the allocation and freeing.
> > >> They output the pointer address of that structure, so I can pair
> > >> the two, along with other info.
> > >>
> > >> Things went well until I found that some structures are allocated
> > >> but never freed (no corresponding tracepoint is triggered for a
> > >> given address).
> > >>
> > >> It's possible that btrfs just forgot to free it, or that btrfs is
> > >> holding it for some purpose.
> > >> So the kernel memleak detector won't catch the latter case.
> > >>
> > >> That is to say, along with the tracepoint data, I still need the call
> > >> stack of each call, to determine which code leaks or holds the pointer.
> > >>
> > >> Is it possible to do it using perf or ftrace?
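On the perf side, recording the tracepoints with callchains gets you the
stack for every hit. A minimal sketch (assuming your events show up
under "btrfs:" in 'perf list'; btrfs_bio_alloc/btrfs_bio_free are
hypothetical names standing in for whatever you called yours):

  perf record -g -a -e btrfs:btrfs_bio_alloc -e btrfs:btrfs_bio_free -- sleep 10
  perf script    # prints each event followed by its callchain

Pairing the two sides can then be done over the 'perf script' output,
e.g. (again a sketch, assuming the bbio pointer is the last field your
tracepoints print):

  perf script | awk '
      /btrfs:btrfs_bio_alloc:/ { alloc[$NF] = 1 }    # remember each allocated pointer
      /btrfs:btrfs_bio_free:/  { delete alloc[$NF] } # drop the ones that got freed
      END { for (p in alloc) print "never freed:", p }'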
> > >
> > > If you are using ftrace, yes, you can enable a stacktrace for each
> > > event by setting the stacktrace event trigger as below:
> > >
> > > echo stacktrace > events/btrfs/<your event>/trigger
> > >
> > > Then ftrace will show the stacktrace data.
> > > See /sys/kernel/debug/tracing/README for more details. :)
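For reference, the complete tracefs sequence looks something like this
(a sketch; btrfs_bio_alloc is a stand-in for your event name, and
tracefs is assumed to be mounted at the usual debugfs location):

  cd /sys/kernel/debug/tracing
  echo stacktrace > events/btrfs/btrfs_bio_alloc/trigger  # dump a stack on every hit
  echo 1 > events/btrfs/btrfs_bio_alloc/enable            # also log the event record itself
  cat trace_pipe                                          # stream events + stacktraces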
> >
> > That's great!
> >
> > The purest ftrace method!
> >
> > I also found that perf makes life easier compared to the pure ftrace
> > method :)
>
> Should be :)
>
> > Although after some searching, I didn't find any equivalent of the
> > "function_graph" tracer, which is quite handy for handling a small
> > amount of call-timing data.
> >
> > Does that mean the perf tool just doesn't support it?
>
> Namhyung made one. I'm not sure why it was not merged.
>
> https://lwn.net/Articles/570503/
/me trying to revive that patchset...
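In the meantime, the function_graph tracer can be pointed at a single
function directly via tracefs, which covers part of that use case (a
sketch; btrfs_map_bio is just an illustrative function to graph):

  cd /sys/kernel/debug/tracing
  echo btrfs_map_bio > set_graph_function  # graph only calls below this function
  echo function_graph > current_tracer
  cat trace_pipe
  echo nop > current_tracer                # restore the default tracer when done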
- Arnaldo
Thread overview: 8+ messages
2017-01-12 7:49 Is it possible to trace events and its call stack? Qu Wenruo
2017-01-12 10:16 ` Naveen N. Rao
2017-01-12 20:41 ` Arnaldo Carvalho de Melo
2017-01-16 8:54 ` Qu Wenruo
2017-01-16 2:55 ` Masami Hiramatsu
2017-01-16 8:48 ` Qu Wenruo
2017-01-16 12:26 ` Masami Hiramatsu
2017-01-16 19:32 ` Arnaldo Carvalho de Melo [this message]