From: Howard Chu <howardchu95@gmail.com>
To: Ian Rogers <irogers@google.com>
Cc: acme@kernel.org, peterz@infradead.org, namhyung@kernel.org,
mingo@redhat.com, mark.rutland@arm.com, james.clark@linaro.org,
alexander.shishkin@linux.intel.com, jolsa@kernel.org,
adrian.hunter@intel.com, kan.liang@linux.intel.com,
linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v7 09/10] perf record --off-cpu: Dump the remaining samples in BPF's stack trace map
Date: Tue, 12 Nov 2024 13:40:30 -0800
Message-ID: <CAH0uvoh76QH8OjBfrs_nRYHEirwDb+=hxbvJTRJQhY5DZKvLEw@mail.gmail.com>
In-Reply-To: <CAP-5=fX-+bcovrhgW585xZZEZv=bx+=w-Aw5Y8ONR3Q0dORA5Q@mail.gmail.com>

Hello Ian,

On Mon, Nov 11, 2024 at 10:07 AM Ian Rogers <irogers@google.com> wrote:
>
> On Fri, Nov 8, 2024 at 12:41 PM Howard Chu <howardchu95@gmail.com> wrote:
> >
> > Dump the remaining samples as if they were direct samples.
> >
> > Put the stack trace, tid, off-cpu time and cgroup id into the raw_data
> > section, just like a direct off-cpu sample coming from BPF's
> > bpf_perf_event_output().
> >
> > This ensures that evsel__parse_sample() correctly parses both direct
> > samples and accumulated samples.
>
> Nice, hopefully this should mean we can get rid of the logic holding
> all threads live for the sake of off-CPU, as off-CPU events were being
> "sampled" after threads had terminated
> (symbol_conf.keep_exited_threads). An example of this logic is:
> https://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/builtin-report.c?h=perf-tools-next#n1484
> Perhaps there is a follow-up to dump exiting processes with a BPF
> program on `tp/sched/sched_process_exit`. If the exited process is
> already "dumped" then its map entry can be removed, which may lower
> overhead if off-CPU is running for a long time system-wide.
Thanks for the suggestion, that's a great idea. IIUC: when a task
exits, dump its accumulated off-cpu time regardless of the threshold,
then delete its map entry to free space for other tasks.
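
Something like the below on the BPF side, maybe? Completely untested
sketch -- the off_cpu map and struct offcpu_key are just my
recollection of what off_cpu.bpf.c already defines, and the actual
dump is elided since it would reuse whatever the direct dump path
does:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

SEC("tp_btf/sched_process_exit")
int on_task_exit(u64 *ctx)
{
	struct task_struct *p = (struct task_struct *)ctx[0];
	struct offcpu_key key = {
		.pid  = BPF_CORE_READ(p, pid),
		.tgid = BPF_CORE_READ(p, tgid),
		/* stack_id/state/cgroup_id would have to come from
		 * the per-task data we already track, since they are
		 * part of the map key */
	};
	u64 *delta = bpf_map_lookup_elem(&off_cpu, &key);

	if (delta) {
		/* emit a direct sample here, ignoring off_cpu_thresh,
		 * then drop the entry so the map does not fill up */
		bpf_map_delete_elem(&off_cpu, &key);
	}
	return 0;
}
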
>
>
> > Suggested-by: Namhyung Kim <namhyung@kernel.org>
> > Signed-off-by: Howard Chu <howardchu95@gmail.com>
>
> Reviewed-by: Ian Rogers <irogers@google.com>
>
> Thanks,
> Ian
>
> > ---
> > tools/perf/util/bpf_off_cpu.c | 62 +++++++++++++++++++++--------------
> > 1 file changed, 38 insertions(+), 24 deletions(-)
> >
> > diff --git a/tools/perf/util/bpf_off_cpu.c b/tools/perf/util/bpf_off_cpu.c
> > index cfe5b17393e9..f1be354e2fe7 100644
> > --- a/tools/perf/util/bpf_off_cpu.c
> > +++ b/tools/perf/util/bpf_off_cpu.c
> > @@ -37,6 +37,8 @@ union off_cpu_data {
> > u64 array[1024 / sizeof(u64)];
> > };
> >
> > +u64 off_cpu_raw[MAX_STACKS + 5];
> > +
> > static int off_cpu_config(struct evlist *evlist)
> > {
> > char off_cpu_event[64];
> > @@ -61,6 +63,9 @@ static int off_cpu_config(struct evlist *evlist)
> > static void off_cpu_start(void *arg)
> > {
> > struct evlist *evlist = arg;
> > + struct evsel *evsel;
> > + struct perf_cpu pcpu;
> > + int i, err;
> >
> > /* update task filter for the given workload */
> > if (skel->rodata->has_task && skel->rodata->uses_tgid &&
> > @@ -304,6 +309,7 @@ int off_cpu_write(struct perf_session *session)
> > {
> > int bytes = 0, size;
> > int fd, stack;
> > + u32 raw_size;
> > u64 sample_type, val, sid = 0;
> > struct evsel *evsel;
> > struct perf_data_file *file = &session->data->file;
> > @@ -343,46 +349,54 @@ int off_cpu_write(struct perf_session *session)
> >
> > while (!bpf_map_get_next_key(fd, &prev, &key)) {
> > int n = 1; /* start from perf_event_header */
> > - int ip_pos = -1;
> >
> > bpf_map_lookup_elem(fd, &key, &val);
> >
> > + /* zero-fill some of the fields, will be overwritten by raw_data when parsing */
> > if (sample_type & PERF_SAMPLE_IDENTIFIER)
> > data.array[n++] = sid;
> > - if (sample_type & PERF_SAMPLE_IP) {
> > - ip_pos = n;
> > + if (sample_type & PERF_SAMPLE_IP)
> > data.array[n++] = 0; /* will be updated */
> > - }
> > if (sample_type & PERF_SAMPLE_TID)
> > - data.array[n++] = (u64)key.pid << 32 | key.tgid;
> > + data.array[n++] = 0;
> > if (sample_type & PERF_SAMPLE_TIME)
> > data.array[n++] = tstamp;
> > - if (sample_type & PERF_SAMPLE_ID)
> > - data.array[n++] = sid;
> > if (sample_type & PERF_SAMPLE_CPU)
> > data.array[n++] = 0;
> > if (sample_type & PERF_SAMPLE_PERIOD)
> > - data.array[n++] = val;
> > - if (sample_type & PERF_SAMPLE_CALLCHAIN) {
> > - int len = 0;
> > -
> > - /* data.array[n] is callchain->nr (updated later) */
> > - data.array[n + 1] = PERF_CONTEXT_USER;
> > - data.array[n + 2] = 0;
> > -
> > - bpf_map_lookup_elem(stack, &key.stack_id, &data.array[n + 2]);
> > - while (data.array[n + 2 + len])
> > + data.array[n++] = 0;
> > + if (sample_type & PERF_SAMPLE_RAW) {
> > + /*
> > + * [ size ][ data ]
> > + * [ data ]
> > + * [ data ]
> > + * [ data ]
> > + * [ data ][ empty]
> > + */
> > + int len = 0, i = 0;
> > + void *raw_data = (void *)data.array + n * sizeof(u64);
> > +
> > + off_cpu_raw[i++] = (u64)key.pid << 32 | key.tgid;
> > + off_cpu_raw[i++] = val;
> > +
> > + /* off_cpu_raw[i] is callchain->nr (updated later) */
> > + off_cpu_raw[i + 1] = PERF_CONTEXT_USER;
> > + off_cpu_raw[i + 2] = 0;
> > +
> > + bpf_map_lookup_elem(stack, &key.stack_id, &off_cpu_raw[i + 2]);
> > + while (off_cpu_raw[i + 2 + len])
> > len++;
> >
> > - /* update length of callchain */
> > - data.array[n] = len + 1;
> > + off_cpu_raw[i] = len + 1;
> > + i += len + 2;
> > +
> > + off_cpu_raw[i++] = key.cgroup_id;
> >
> > - /* update sample ip with the first callchain entry */
> > - if (ip_pos >= 0)
> > - data.array[ip_pos] = data.array[n + 2];
> > + raw_size = i * sizeof(u64) + sizeof(u32); /* 4 bytes for alignment */
> > + memcpy(raw_data, &raw_size, sizeof(raw_size));
> > + memcpy(raw_data + sizeof(u32), off_cpu_raw, i * sizeof(u64));
> >
> > - /* calculate sample callchain data array length */
> > - n += len + 2;
> > + n += i + 1;
> > }
> > if (sample_type & PERF_SAMPLE_CGROUP)
> > data.array[n++] = key.cgroup_id;
> > --
> > 2.43.0
> >
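
For reference, my reading of the raw_data layout this now produces
(same order as the direct samples from BPF, which is what the parsing
side expects) -- not a real struct since the callchain is variable
length, just to show the word order:

/* illustration only, not an actual struct in the tree */
struct off_cpu_raw_layout {
	__u64 tid;	/* key.pid << 32 | key.tgid */
	__u64 period;	/* accumulated off-cpu time (val) */
	__u64 nr;	/* callchain length, len + 1 */
	__u64 ips[];	/* ips[0] = PERF_CONTEXT_USER, then the user
			 * stack from the stack trace map, and
			 * key.cgroup_id as the last word */
};

So raw_size works out to (len + 5) * sizeof(u64) + sizeof(u32), the
extra 4 bytes being there to keep the u32 size plus the data 8-byte
aligned, as the comment says.
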
Thanks,
Howard