From: Arnaldo Carvalho de Melo <acme@kernel.org>
To: Mark Wielaard <mark@klomp.org>
Cc: Milian Wolff <milian.wolff@kdab.com>,
Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>,
Paolo Bonzini <pbonzini@redhat.com>,
linux-kernel@vger.kernel.org,
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>,
linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH v2] perf: libdw support for powerpc [ping]
Date: Tue, 20 Jun 2017 22:06:35 -0300
Message-ID: <20170621010635.GK13640@kernel.org>
In-Reply-To: <1497525392.3755.307.camel@klomp.org>
On Thu, Jun 15, 2017 at 01:16:32PM +0200, Mark Wielaard wrote:
> On Thu, 2017-06-15 at 10:46 +0200, Milian Wolff wrote:
> > Just a quick question: Have you guys applied my recent patch:
> >
> > commit 5ea0416f51cc93436bbe497c62ab49fd9cb245b6
> > Author: Milian Wolff <milian.wolff@kdab.com>
> > Date: Thu Jun 1 23:00:21 2017 +0200
> >
> > perf report: Include partial stacks unwound with libdw
> >
> > So far the whole stack was thrown away when any error occurred before
> > the maximum stack depth was unwound. This is actually a very common
> > scenario though. The stacks that got unwound so far are still
> > interesting. This removes a large chunk of differences when comparing
> > perf script output for libunwind and libdw perf unwinding.
> >
> > If not, then this could explain the issue you are seeing.
>
> Thanks! No, I didn't have that patch (*) yet. It makes a huge
> difference. With that, Paolo's patch and the elfutils libdw powerpc64
> fallback unwinder patch, it looks like I get user stack traces for
> everything now on ppc64le.
Can I take that as a Tested-by: from you?
- Arnaldo
> Cheers,
>
> Mark
>
> (*) It's just this one-liner, but what a difference it makes:
>
> --- a/tools/perf/util/unwind-libdw.c
> +++ b/tools/perf/util/unwind-libdw.c
> @@ -224,7 +224,7 @@ int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
>
> err = dwfl_getthread_frames(ui->dwfl, thread->tid, frame_callback, ui);
>
> - if (err && !ui->max_stack)
> + if (err && ui->max_stack != max_stack)
> err = 0;
>
> /*
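
For anyone reading along, here is a minimal standalone sketch of why that
comparison matters. It is not the perf source: struct unwind_info and
simulate_unwind() below are simplified stand-ins, and it assumes (as the
frame callback in unwind-libdw.c of that era did) that ui->max_stack is
decremented once per successfully unwound frame. Under that assumption,
"ui->max_stack != max_stack" means at least one frame was recovered, whereas
the old "!ui->max_stack" check only forgave the error once the full depth
had been consumed.

/*
 * Illustrative sketch only, not the perf source.  Assumes ui->max_stack
 * counts down by one for every frame that was successfully unwound.
 */
#include <stdbool.h>
#include <stdio.h>

struct unwind_info {
	int max_stack;	/* frames still allowed; decremented per frame */
};

/* Pretend libdw unwound 'frames' frames and then hit an error. */
static int simulate_unwind(struct unwind_info *ui, int frames)
{
	while (frames-- > 0 && ui->max_stack > 0)
		ui->max_stack--;
	return -1;	/* error reported mid-unwind */
}

int main(void)
{
	const int max_stack = 127;
	struct unwind_info ui = { .max_stack = max_stack };
	int err = simulate_unwind(&ui, 5);	/* partial unwind: 5 frames */

	/* Old check: only forgives the error when the depth was exhausted. */
	bool old_keeps_partial_stack = err && !ui.max_stack;

	/* New check: forgives the error whenever any frame was unwound. */
	bool new_keeps_partial_stack = err && ui.max_stack != max_stack;

	printf("old condition keeps the partial stack: %s\n",
	       old_keeps_partial_stack ? "yes" : "no");	/* prints "no"  */
	printf("new condition keeps the partial stack: %s\n",
	       new_keeps_partial_stack ? "yes" : "no");	/* prints "yes" */
	return 0;
}

Compiling and running this prints "no" for the old condition and "yes" for
the new one for a 5-frame partial unwind, which is exactly the behavior
change Milian's one-liner introduces.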