From: Namhyung Kim <namhyung@gmail.com>
To: Ian Rogers <irogers@google.com>
Cc: Thomas Richter <tmricht@linux.ibm.com>,
linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
acme@kernel.org, sumanthk@linux.ibm.com, dengler@linux.ibm.com,
svens@linux.ibm.com, gor@linux.ibm.com, hca@linux.ibm.com
Subject: Re: [PATCH] perf jevent: fix core dump on software events on s390
Date: Fri, 15 Sep 2023 16:40:41 -0700
Message-ID: <CAM9d7cgbMZzJea_TJJ46AEhLentykmy8r6twC8bkkgXTNAMqgw@mail.gmail.com>
In-Reply-To: <CAP-5=fUiHMRPVYhbQv-YM+EMKyBF6TEopea=PPX2thbtdmhGsg@mail.gmail.com>

Hello,

On Thu, Sep 14, 2023 at 6:14 AM Ian Rogers <irogers@google.com> wrote:
>
> On Wed, Sep 13, 2023 at 5:52 AM Thomas Richter <tmricht@linux.ibm.com> wrote:
> >
> > Running commands such as
> > # ./perf stat -e cs -- true
> > Segmentation fault (core dumped)
> > # ./perf stat -e cpu-clock -- true
> > Segmentation fault (core dumped)
> > #
> >
> > dump core. This should not happen, as both are software events that
> > are defined even when no hardware PMU is available.
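> >
> > For context, "cs" and "cpu-clock" map to the kernel's software events
> > (PERF_TYPE_SOFTWARE), which the kernel implements without any hardware
> > PMU. A minimal standalone sketch counting context switches, as an
> > illustration only (not part of this patch):
> >
> >     #include <linux/perf_event.h>
> >     #include <sys/syscall.h>
> >     #include <unistd.h>
> >     #include <stdio.h>
> >
> >     int main(void)
> >     {
> >             struct perf_event_attr attr = {
> >                     .type = PERF_TYPE_SOFTWARE,  /* software PMU */
> >                     .size = sizeof(attr),
> >                     .config = PERF_COUNT_SW_CONTEXT_SWITCHES, /* "cs" */
> >             };
> >             long long count;
> >             /* perf_event_open() has no glibc wrapper; use syscall(2). */
> >             int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
> >
> >             if (fd < 0)
> >                     return 1;
> >             sleep(1);                     /* let some switches happen */
> >             read(fd, &count, sizeof(count));
> >             printf("context switches: %lld\n", count);
> >             return 0;
> >     }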
> > Debugging this reveals this call chain:
> >
> > perf_pmus__find_by_type(type=1)
> > +--> pmu_read_sysfs(core_only=false)
> > +--> perf_pmu__find2(dirfd=3, name=0x152a113 "software")
> > +--> perf_pmu__lookup(pmus=0x14f0568 <other_pmus>, dirfd=3,
> > lookup_name=0x152a113 "software")
> > +--> perf_pmu__find_events_table (pmu=0x1532130)
> >
> > Now the PMU is "software" and perf tries to find a matching table
> > generated by the pmu-events build process for s390:
> >
> > # cd pmu-events/
> > # ./jevents.py s390 all /root/linux/tools/perf/pmu-events/arch |\
> > grep -E '^const struct pmu_table_entry'
> > const struct pmu_table_entry pmu_events__cf_z10[] = {
> > const struct pmu_table_entry pmu_events__cf_z13[] = {
> > const struct pmu_table_entry pmu_metrics__cf_z13[] = {
> > const struct pmu_table_entry pmu_events__cf_z14[] = {
> > const struct pmu_table_entry pmu_metrics__cf_z14[] = {
> > const struct pmu_table_entry pmu_events__cf_z15[] = {
> > const struct pmu_table_entry pmu_metrics__cf_z15[] = {
> > const struct pmu_table_entry pmu_events__cf_z16[] = {
> > const struct pmu_table_entry pmu_metrics__cf_z16[] = {
> > const struct pmu_table_entry pmu_events__cf_z196[] = {
> > const struct pmu_table_entry pmu_events__cf_zec12[] = {
> > const struct pmu_table_entry pmu_metrics__cf_zec12[] = {
> > const struct pmu_table_entry pmu_events__test_soc_cpu[] = {
> > const struct pmu_table_entry pmu_metrics__test_soc_cpu[] = {
> > const struct pmu_table_entry pmu_events__test_soc_sys[] = {
> > #
> >
> > However, no PMU named "software" is listed, as can also be seen in the
> > generated const struct pmu_events_map pmu_events_map[].
> > So in function perf_pmu__find_events_table(), the variable
> > table is initialized to NULL but never set to a proper
> > value: the function scans all generated pmu_events_map[]
> > entries, but none matches, because the tables are
> > s390 CPU Measurement unit specific:
> >
> >     i = 0;
> >     for (;;) {
> >             const struct pmu_events_map *map = &pmu_events_map[i++];
> >             if (!map->arch)
> >                     break;
> >
> >             --> the maps are there because the build generated them
> >
> >             if (!strcmp_cpuid_str(map->cpuid, cpuid)) {
> >                     table = &map->event_table;
> >                     break;
> >             }
> >             --> since no CPUID string matches, the table variable stays NULL
> >     }
> >     free(cpuid);
> >     if (!pmu)
> >             return table;
> >
> >     --> the PMU "software" exists, so there is no early return here
> >
> >     --> here perf dies because table is still NULL
> >     for (i = 0; i < table->num_pmus; i++) {
> >             ...
> >     }
> >     return NULL;
> >
> > Fix this by not dereferencing the table variable when it is NULL.
> > Instead return NULL, which is the same return value as when the
> > for-loop finds no match.
> >
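> > A minimal sketch of the guard described above, assuming it sits between
> > the cpuid lookup and the scan loop in perf_pmu__find_events_table()
> > (illustrative only; the exact shape of the final patch may differ):
> >
> >     if (!pmu)
> >             return table;
> >     /* No CPUID matched: there is no table to scan. Return NULL,
> >      * exactly as the loop below would when nothing matches. */
> >     if (!table)
> >             return NULL;
> >
> >     for (i = 0; i < table->num_pmus; i++) {
> >             ...
> >     }
> >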
> > Output after:
> > # ./perf stat -e cs -- true
> >
> > Performance counter stats for 'true':
> >
> > 0 cs
> >
> > 0.000853105 seconds time elapsed
> >
> > 0.000061000 seconds user
> > 0.000827000 seconds sys
> >
> > # ./perf stat -e cpu-clock -- true
> >
> > Performance counter stats for 'true':
> >
> > 0.25 msec cpu-clock # 0.341 CPUs utilized
> >
> > 0.000728383 seconds time elapsed
> >
> > 0.000055000 seconds user
> > 0.000706000 seconds sys
> >
> > # ./perf stat -e cycles -- true
> >
> > Performance counter stats for 'true':
> >
> > <not supported> cycles
> >
> > 0.000767298 seconds time elapsed
> >
> > 0.000055000 seconds user
> > 0.000739000 seconds sys
> >
> > #
> >
> > Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
>
> Reviewed-by: Ian Rogers <irogers@google.com>

I'll add this too, ok?

Fixes: 7c52f10c0d4d8 ("perf pmu: Cache JSON events table")

Thanks,
Namhyung