From: Howard Chu <howardchu95@gmail.com>
To: Namhyung Kim <namhyung@kernel.org>
Cc: Ian Rogers <irogers@google.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	peterz@infradead.org,  mingo@redhat.com, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com,  jolsa@kernel.org,
	adrian.hunter@intel.com, kan.liang@linux.intel.com,
	 zegao2021@gmail.com, leo.yan@linux.dev, ravi.bangoria@amd.com,
	 linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
	 bpf@vger.kernel.org
Subject: Re: [PATCH v2 0/4] Dump off-cpu samples directly
Date: Thu, 16 May 2024 12:24:35 +0800	[thread overview]
Message-ID: <CAH0uvohPg7LtSOLDNaPwnC5ePwjwg0NtKzLZ_oJcAz7zOwdwdw@mail.gmail.com> (raw)
In-Reply-To: <CAM9d7cggak7qZcX7tFZvJ69H3cwEnWvNOnBsQrkFQkQVf+bUjQ@mail.gmail.com>

Hello,

Here is a little update on --off-cpu.

> > It would be nice to start landing this work so I'm wondering what the
> > minimal way to do that is. It seems putting behavior behind a flag is
> > a first step.

The flag to control the off-cpu output threshold (--off-cpu-threshold)
has been implemented. If the accumulated off-cpu time exceeds this
threshold, the sample is dumped directly; otherwise it is kept in the
BPF map and written out by off_cpu_write() at the end of the session.
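
To illustrate the idea, here is a rough sketch of what the BPF side
could look like. The struct, map and function names below are made up
for illustration and do not match the actual patches; only the BPF
helpers and map types are real. Off-cpu time is accumulated per stack
as before, but once it crosses the threshold the entry is emitted
through a BPF_MAP_TYPE_PERF_EVENT_ARRAY with bpf_perf_event_output(),
so the sample lands in the ring buffer close to when it happened:

/* Rough sketch only -- names and data layout are made up and do not
 * match the actual patches; only the helpers and map types are real. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct offcpu_key {
	__u32 pid;
	__u32 tgid;
	__u64 stack_id;
};

struct offcpu_val {
	__u64 total_ns;			/* accumulated off-cpu time */
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 16384);
	__type(key, struct offcpu_key);
	__type(value, struct offcpu_val);
} offcpu_map SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
} offcpu_output SEC(".maps");

/* filled in by perf record from --off-cpu-threshold (nanoseconds) */
const volatile __u64 offcpu_threshold_ns = 1000000000;

/* would be called from the sched_switch handler when a task is
 * scheduled back in; delta_ns is how long it was off-cpu */
int account_offcpu(void *ctx, struct offcpu_key *key, __u64 delta_ns)
{
	struct offcpu_val zero = {}, *val;

	val = bpf_map_lookup_elem(&offcpu_map, key);
	if (!val) {
		bpf_map_update_elem(&offcpu_map, key, &zero, BPF_NOEXIST);
		val = bpf_map_lookup_elem(&offcpu_map, key);
		if (!val)
			return 0;
	}

	val->total_ns += delta_ns;

	if (val->total_ns >= offcpu_threshold_ns) {
		/* long wait: dump the sample into the ring buffer now */
		bpf_perf_event_output(ctx, &offcpu_output, BPF_F_CURRENT_CPU,
				      val, sizeof(*val));
		val->total_ns = 0;
	}
	/* short waits stay in the map and are flushed by off_cpu_write()
	 * when perf record exits */
	return 0;
}

char LICENSE[] SEC("license") = "GPL";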

However, the extra pass needed to handle off-cpu samples introduces a
performance cost. Here are the processing rates of --off-cpu sampling
with the extra pass (which extracts the raw sample data) and without
it. The --off-cpu-threshold value is in nanoseconds.

+-----------------------------------------------------+---------------------------------------+----------------------+
| comm                                                | type                                  | process rate         |
+-----------------------------------------------------+---------------------------------------+----------------------+
| -F 4999 -a                                          | regular samples (w/o extra pass)      | 13128.675 samples/ms |
+-----------------------------------------------------+---------------------------------------+----------------------+
| -F 1 -a --off-cpu --off-cpu-threshold 100           | offcpu samples (extra pass)           |  2843.247 samples/ms |
+-----------------------------------------------------+---------------------------------------+----------------------+
| -F 4999 -a --off-cpu --off-cpu-threshold 100        | offcpu & regular samples (extra pass) |  3910.686 samples/ms |
+-----------------------------------------------------+---------------------------------------+----------------------+
| -F 4999 -a --off-cpu --off-cpu-threshold 1000000000 | few offcpu & regular (extra pass)     |  4661.229 samples/ms |
+-----------------------------------------------------+---------------------------------------+----------------------+

It's not ideal. I will look for a way to reduce the overhead, for
example by processing the samples at save time, as Ian mentioned.
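
To make the save-time idea a bit more concrete: the processing would
basically decode the raw_data payload written by bpf_perf_event_output()
and turn it into an off-cpu sample before it is written to perf.data.
A minimal user-space sketch follows; the layout and names here are
hypothetical, and the real code would go through perf's sample/evsel
machinery rather than standalone structs:

/* Illustrative only: the layout must match whatever the BPF program
 * wrote; the struct and function names here are made up. */
#include <stdint.h>
#include <string.h>

struct offcpu_raw {			/* what the BPF side emitted */
	uint32_t pid;
	uint32_t tgid;
	uint64_t stack_id;
	uint64_t total_ns;
};

struct offcpu_sample {			/* what we want in perf.data */
	uint32_t pid, tid;
	uint64_t period;		/* off-cpu time becomes the period */
	uint64_t stack_id;		/* resolved to a callchain later */
};

/* convert one bpf-output record's raw payload into an off-cpu sample */
static int offcpu_from_raw(const void *raw, size_t size,
			   struct offcpu_sample *out)
{
	struct offcpu_raw r;

	if (size < sizeof(r))
		return -1;		/* truncated record */
	memcpy(&r, raw, sizeof(r));

	out->pid = r.tgid;		/* perf's "pid" is the kernel tgid */
	out->tid = r.pid;
	out->period = r.total_ns;
	out->stack_id = r.stack_id;
	return 0;
}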

> > To turn the bpf-output samples into off-cpu events there is a pass
> > added to the saving. I wonder if that can be more generic, like a save
> > time perf inject.

I will also look for a good default value for the threshold, based on
performance and common use cases.

> Sounds good.  We might add an option to specify the threshold to
> determine whether to dump the data or to save it for later.  But ideally
> it should be able to find a good default.

These will be done before the GSoC kick-off on May 27.

Thanks,
Howard

On Thu, Apr 25, 2024 at 6:57 AM Namhyung Kim <namhyung@kernel.org> wrote:
>
> On Wed, Apr 24, 2024 at 3:19 PM Ian Rogers <irogers@google.com> wrote:
> >
> > On Wed, Apr 24, 2024 at 2:11 PM Arnaldo Carvalho de Melo
> > <acme@kernel.org> wrote:
> > >
> > > On Wed, Apr 24, 2024 at 12:12:26PM -0700, Namhyung Kim wrote:
> > > > Hello,
> > > >
> > > > On Tue, Apr 23, 2024 at 7:46 PM Howard Chu <howardchu95@gmail.com> wrote:
> > > > >
> > > > > As mentioned in: https://bugzilla.kernel.org/show_bug.cgi?id=207323
> > > > >
> > > > > Currently, off-cpu samples are dumped when perf record is exiting. This
> > > > > results in off-cpu samples being after the regular samples. Also, samples
> > > > > are stored in large BPF maps which contain all the stack traces and
> > > > > accumulated off-cpu time, but they are eventually going to fill up after
> > > > > running for an extensive period. This patch fixes those problems by dumping
> > > > > samples directly into perf ring buffer, and dispatching those samples to the
> > > > > correct format.
> > > >
> > > > Thanks for working on this.
> > > >
> > > > But the problem of dumping all sched-switch events is that it can be
> > > > too frequent on loaded machines.  Copying many events to the buffer
> > > > can result in losing other records.  As perf report doesn't care about
> > > > timing much, I decided to aggregate the result in a BPF map and dump
> > > > them at the end of the profiling session.
> > >
> > > Should we try to adapt when there are too many context switches, i.e.
> > > the BPF program can notice that the interval from the last context
> > > switch is too small and then avoid adding samples, while if the interval
> > > is a long one then indeed this is a problem where the workload is
> > > waiting for a long time for something and we want to know what is that,
> > > and in that case capturing callchains is both desirable and not costly,
> > > no?
>
> Sounds interesting.  Yeah we could make it adaptive based on the
> off-cpu time at the moment.
>
> > >
> > > The tool could then at the end produce one of two outputs: the most
> > > common reasons for being off cpu, or some sort of counter stating that
> > > there are way too many context switches?
> > >
> > > And perhaps we should think about what is best to have as a default, not
> > > to present just plain old cycles, but point out that the workload is
> > > most of the time waiting for IO, etc, i.e. the default should give
> > > interesting clues instead of expecting that the tool user knows all the
> > > possible knobs and try them in all sorts of combinations to then reach
> > > some conclusion.
> > >
> > > The default should use stuff that isn't that costly, thus not getting in
> > > the way of what is being observed, but at the same time look for common
> > > patterns, etc.
> > >
> > > - Arnaldo
> >
> > I really appreciate Howard doing this work!
> >
> > I wonder if there are other cases where we want to synthesize events in
> > BPF, for example, we may have fast and slow memory on a system, we
> > could turn memory events on a system into either fast or slow ones in
> > BPF based on the memory accessed, so that fast/slow memory systems can
> > be simulated without access to hardware. This also feels like a perf
> > script type problem. Perhaps we can add something to the bpf-output
> > event so it can have multiple uses and not just off-cpu.
> >
> >
> > I worry that by dropping short samples we can create a property that
> > off-cpu time + on-cpu time != wall clock time. Perhaps such short
> > things can get pushed into Namhyung's "at the end" approach while
> > longer things get samples. Perhaps we only do that when the frequency
> > is too great.
>
> Sounds good.  We might add an option to specify the threshold to
> determine whether to dump the data or to save it for later.  But ideally
> it should be able to find a good default.
>
> >

>
> Agreed!
>
> Thanks,
> Namhyung

Thread overview: 13+ messages
2024-04-24  2:48 [PATCH v2 0/4] Dump off-cpu samples directly Howard Chu
2024-04-24  2:48 ` [PATCH v2 1/4] perf record off-cpu: Parse off-cpu event, change config location Howard Chu
2024-04-24  2:48 ` [PATCH v2 2/4] perf record off-cpu: BPF perf_event_output on sched_switch Howard Chu
2024-04-24  2:48 ` [PATCH v2 3/4] perf record off-cpu: extract off-cpu sample data from raw_data Howard Chu
2024-04-24  2:48 ` [PATCH v2 4/4] perf record off-cpu: delete bound-to-fail test Howard Chu
2024-04-24 19:12 ` [PATCH v2 0/4] Dump off-cpu samples directly Namhyung Kim
2024-04-24 21:11   ` Arnaldo Carvalho de Melo
2024-04-24 22:19     ` Ian Rogers
2024-04-24 22:57       ` Namhyung Kim
2024-05-16  4:24         ` Howard Chu [this message]
2024-05-16  4:56           ` Ian Rogers
2024-05-23  4:34             ` Namhyung Kim
2024-05-23 16:34               ` Howard Chu
