From: Jiri Olsa <olsajiri@gmail.com>
To: Namhyung Kim <namhyung@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>,
Ingo Molnar <mingo@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
LKML <linux-kernel@vger.kernel.org>,
Ian Rogers <irogers@google.com>,
linux-perf-users@vger.kernel.org
Subject: Re: [PATCH 4/4] perf tools: Support reading PERF_FORMAT_LOST
Date: Thu, 18 Aug 2022 14:04:11 +0200
Message-ID: <Yv4qu8xMpfzUQ/8L@krava>
In-Reply-To: <20220816221747.275828-5-namhyung@kernel.org>
On Tue, Aug 16, 2022 at 03:17:47PM -0700, Namhyung Kim wrote:
SNIP
> diff --git a/tools/perf/util/event.h b/tools/perf/util/event.h
> index a7b0931d5137..7753368d70d6 100644
> --- a/tools/perf/util/event.h
> +++ b/tools/perf/util/event.h
> @@ -65,7 +65,8 @@ struct stack_dump {
>
> struct sample_read_value {
> u64 value;
> - u64 id;
> + u64 id; /* only if PERF_FORMAT_ID */
> + u64 lost; /* only if PERF_FORMAT_LOST */
> };
I was wondering why not split this patch into smaller pieces,
but once you change this struct you break all the places that use it anyway.
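(For anyone reading just this excerpt: the sample_read_value_size() and
next_sample_read_value() helpers used in the hunks below are defined
elsewhere in the patch and not quoted here; roughly, they should be along
these lines, paraphrasing rather than quoting the actual patch code:

static inline size_t sample_read_value_size(u64 read_format)
{
	/* 'lost' is the last member and only present with PERF_FORMAT_LOST */
	if (read_format & PERF_FORMAT_LOST)
		return sizeof(struct sample_read_value);
	else
		return offsetof(struct sample_read_value, lost);
}

static inline struct sample_read_value *
next_sample_read_value(struct sample_read_value *v, u64 read_format)
{
	return (void *)v + sample_read_value_size(read_format);
}

so every consumer has to advance by the read_format-dependent size instead
of plain array indexing.)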
SNIP
> --- a/tools/perf/util/evsel.c
> +++ b/tools/perf/util/evsel.c
> @@ -1541,7 +1541,7 @@ static int evsel__read_one(struct evsel *evsel, int cpu_map_idx, int thread)
> }
>
> static void evsel__set_count(struct evsel *counter, int cpu_map_idx, int thread,
> - u64 val, u64 ena, u64 run)
> + u64 val, u64 ena, u64 run, u64 lost)
> {
> struct perf_counts_values *count;
>
> @@ -1550,6 +1550,7 @@ static void evsel__set_count(struct evsel *counter, int cpu_map_idx, int thread,
> count->val = val;
> count->ena = ena;
> count->run = run;
> + count->lost = lost;
>
> perf_counts__set_loaded(counter->counts, cpu_map_idx, thread, true);
> }
> @@ -1558,7 +1559,7 @@ static int evsel__process_group_data(struct evsel *leader, int cpu_map_idx, int
> {
> u64 read_format = leader->core.attr.read_format;
> struct sample_read_value *v;
> - u64 nr, ena = 0, run = 0, i;
> + u64 nr, ena = 0, run = 0, lost = 0, i;
>
> nr = *data++;
>
> @@ -1573,16 +1574,25 @@ static int evsel__process_group_data(struct evsel *leader, int cpu_map_idx, int
>
> v = (struct sample_read_value *) data;
>
> - evsel__set_count(leader, cpu_map_idx, thread, v[0].value, ena, run);
> + if (read_format & PERF_FORMAT_LOST)
> + lost = v->lost;
> +
> + evsel__set_count(leader, cpu_map_idx, thread, v[0].value, ena, run, lost);
> +
> + v = next_sample_read_value(v, read_format);
One way of making this simpler here, and sharing it with the other places,
could be adding something like:

for_each_group_data(v, i, nr, read_format) {
}

but I'm not sure how that would turn out, thoughts?
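Something along these lines maybe (completely untested sketch, the name is
just a placeholder):

#define for_each_group_data(v, i, nr, read_format)			\
	for ((i) = 0; (i) < (nr);					\
	     (i)++, (v) = next_sample_read_value((v), (read_format)))

then evsel__process_group_data() could use a single loop and pick the
leader for i == 0 instead of open coding the first entry.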
>
> for (i = 1; i < nr; i++) {
> struct evsel *counter;
>
> - counter = evlist__id2evsel(leader->evlist, v[i].id);
> + counter = evlist__id2evsel(leader->evlist, v->id);
> if (!counter)
> return -EINVAL;
>
> - evsel__set_count(counter, cpu_map_idx, thread, v[i].value, ena, run);
> + if (read_format & PERF_FORMAT_LOST)
> + lost = v->lost;
> +
> + evsel__set_count(counter, cpu_map_idx, thread, v->value, ena, run, lost);
> + v = next_sample_read_value(v, read_format);
> }
>
> return 0;
> @@ -2475,16 +2485,21 @@ int evsel__parse_sample(struct evsel *evsel, union perf_event *event,
>
> if (data->read.group.nr > max_group_nr)
> return -EFAULT;
> - sz = data->read.group.nr *
> - sizeof(struct sample_read_value);
> +
> + sz = data->read.group.nr * sample_read_value_size(read_format);
> OVERFLOW_CHECK(array, sz, max_size);
> - data->read.group.values =
> - (struct sample_read_value *)array;
> + data->read.group.values = (void *)array;
nit, is this cast to void * needed?
thanks,
jirka
> array = (void *)array + sz;
> } else {
> OVERFLOW_CHECK_u64(array);
> data->read.one.id = *array;
> array++;
> +
> + if (read_format & PERF_FORMAT_LOST) {
> + OVERFLOW_CHECK_u64(array);
> + data->read.one.lost = *array;
> + array++;
> + }
> }
> }
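(For reference, this matches the read_format layout documented in
include/uapi/linux/perf_event.h once PERF_FORMAT_LOST is set, roughly:

struct read_format {
	{ u64 value;
	  { u64 time_enabled; } && PERF_FORMAT_TOTAL_TIME_ENABLED
	  { u64 time_running; } && PERF_FORMAT_TOTAL_TIME_RUNNING
	  { u64 id;           } && PERF_FORMAT_ID
	  { u64 lost;         } && PERF_FORMAT_LOST
	} && !PERF_FORMAT_GROUP

	{ u64 nr;
	  { u64 time_enabled; } && PERF_FORMAT_TOTAL_TIME_ENABLED
	  { u64 time_running; } && PERF_FORMAT_TOTAL_TIME_RUNNING
	  { u64 value;
	    { u64 id;         } && PERF_FORMAT_ID
	    { u64 lost;       } && PERF_FORMAT_LOST
	  } cntr[nr];
	} && PERF_FORMAT_GROUP
};

so the single-event branch above reads id and then the optional lost word.)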
>
SNIP