From: Arnaldo Carvalho de Melo <acme@kernel.org>
To: Namhyung Kim <namhyung@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>, Ingo Molnar <mingo@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
LKML <linux-kernel@vger.kernel.org>,
Ian Rogers <irogers@google.com>,
linux-perf-users <linux-perf-users@vger.kernel.org>,
Song Liu <songliubraving@fb.com>, bpf <bpf@vger.kernel.org>
Subject: Re: [PATCH 0/4] perf lock contention: Improve call stack handling (v1)
Date: Tue, 20 Sep 2022 21:22:48 +0100
Message-ID: <YyohGGdnX88YOXtR@kernel.org>
In-Reply-To: <CAM9d7cgOPUoGr96yc=M=bBTQG-jkW269Lc7-uEYTWGURiCAjyQ@mail.gmail.com>

On Thu, Sep 08, 2022 at 04:44:15PM -0700, Namhyung Kim wrote:
> Hi Arnaldo,
>
> On Thu, Sep 8, 2022 at 11:43 AM Arnaldo Carvalho de Melo
> <acme@kernel.org> wrote:
> >
> > On Wed, Sep 07, 2022 at 11:37:50PM -0700, Namhyung Kim wrote:
> > > Hello,
> > >
> > > I found that the call stack from the lock tracepoints (captured with
> > > bpf_get_stackid) can differ from one configuration to another. For
> > > example, it is very different when I run it on a VM than on a real
> > > machine.
> > >
> > > perf lock contention relies on the stack trace to get the lock caller
> > > names, so this kind of difference can be annoying. Ideally we would
> > > skip the stack trace entries for internal BPF and locking functions
> > > and land on the correct caller, but that is not the case today: the
> > > stack trace behavior for the lock contention tracepoints is currently
> > > hard-coded.
> > >
> > > To handle those differences, add two new options to control how many
> > > stack entries to collect and how many of them to skip. The defaults
> > > worked well on my VM setup, but I had to use --stack-skip=5 on real
> > > machines.
> > >
> > > You can get it from 'perf/lock-stack-v1' branch in
> > >
> > > git://git.kernel.org/pub/scm/linux/kernel/git/namhyung/linux-perf.git
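
Just to check my understanding of the mechanism before you resend: the
skip count ends up in the low bits of the bpf_get_stackid() flags,
right? A minimal sketch of my reading (illustrative only, not the
actual patch; the section name, map sizing and the STACK_SKIP value are
assumptions on my part):

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>

	#define MAX_STACK_DEPTH	8	/* illustrative depth */
	#define STACK_SKIP		5	/* the --stack-skip=5 value above */

	struct {
		__uint(type, BPF_MAP_TYPE_STACK_TRACE);
		__uint(key_size, sizeof(__u32));
		__uint(value_size, MAX_STACK_DEPTH * sizeof(__u64));
		__uint(max_entries, 16384);
	} stacks SEC(".maps");

	SEC("tp_btf/contention_begin")
	int contention_begin(u64 *ctx)
	{
		/*
		 * The low 8 bits of the flags argument are the number of
		 * frames to skip, so BPF and locking internals do not show
		 * up as the "caller".
		 */
		long id = bpf_get_stackid(ctx, &stacks,
					  BPF_F_FAST_STACK_CMP | STACK_SKIP);

		if (id < 0)
			return 0;
		/* ... remember id, resolve symbols in userspace later ... */
		return 0;
	}

	char LICENSE[] SEC("license") = "GPL";

And on the tooling side I suppose one then runs something like the
below, with --stack-skip=5 being the value you mention and -b/--use-bpf
the existing switch (I'm guessing the depth knob is spelled --max-stack,
correct me if the final name differs):

	$ sudo perf lock contention -b --max-stack=8 --stack-skip=5 -- sleep 3
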
> >
> > This clashed with a patch you Acked earlier, so let's see if someone
> > has extra review comments and a v2 becomes needed for some other
> > reason, at which point you can refresh it, ok?
>
> Sounds good!

Have you resubmitted this? /me goes on the backlog...
- Arnaldo

Thread overview: 10+ messages
2022-09-08 6:37 [PATCH 0/4] perf lock contention: Improve call stack handling (v1) Namhyung Kim
2022-09-08 6:37 ` [PATCH 1/4] perf lock contention: Factor out get_symbol_name_offset() Namhyung Kim
2022-09-08 6:37 ` [PATCH 2/4] perf lock contention: Show full callstack with -v option Namhyung Kim
2022-09-08 6:37 ` [PATCH 3/4] perf lock contention: Allow to change stack depth and skip Namhyung Kim
2022-09-08 6:37 ` [PATCH 4/4] perf lock contention: Skip stack trace from BPF Namhyung Kim
2022-09-08 18:43 ` [PATCH 0/4] perf lock contention: Improve call stack handling (v1) Arnaldo Carvalho de Melo
2022-09-08 23:44 ` Namhyung Kim
2022-09-20 20:22 ` Arnaldo Carvalho de Melo [this message]
2022-09-20 21:04 ` Namhyung Kim
2022-09-21 14:09 ` Arnaldo Carvalho de Melo