From: Ayush Jain <ayush.jain3@amd.com>
To: Sandipan Das <sandipan.das@amd.com>,
linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org
Cc: peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
mark.rutland@arm.com, alexander.shishkin@linux.intel.com,
jolsa@kernel.org, namhyung@kernel.org, irogers@google.com,
adrian.hunter@intel.com, kjain@linux.ibm.com,
atrajeev@linux.vnet.ibm.com, barnali@linux.ibm.com,
ananth.narayan@amd.com, ravi.bangoria@amd.com,
santosh.shukla@amd.com
Subject: Re: [PATCH] perf test: Retry without grouping for all metrics test
Date: Wed, 14 Jun 2023 17:08:21 +0530 [thread overview]
Message-ID: <1320e6e3-c029-2a8c-e8b7-2cfbb781518a@amd.com> (raw)
In-Reply-To: <20230614090710.680330-1-sandipan.das@amd.com>
Hello Sandipan,
Thank you for this patch.
On 6/14/2023 2:37 PM, Sandipan Das wrote:
> There are cases where a metric uses more events than the number of
> counters. E.g. AMD Zen, Zen 2 and Zen 3 processors have four data fabric
> counters but the "nps1_die_to_dram" metric has eight events. By default,
> the constituent events are placed in a group. Since the events cannot be
> scheduled at the same time, the metric is not computed. The all metrics
> test also fails because of this.
>
> Before announcing failure, the test can try multiple options for each
> available metric. After system-wide mode fails, retry once again with
> the "--metric-no-group" option.
>
> E.g.
>
> $ sudo perf test -v 100
>
> Before:
>
> 100: perf all metrics test :
> --- start ---
> test child forked, pid 672731
> Testing branch_misprediction_ratio
> Testing all_remote_links_outbound
> Testing nps1_die_to_dram
> Metric 'nps1_die_to_dram' not printed in:
> Error:
> Invalid event (dram_channel_data_controller_4) in per-thread mode, enable system wide with '-a'.
> Testing macro_ops_dispatched
> Testing all_l2_cache_accesses
> Testing all_l2_cache_hits
> Testing all_l2_cache_misses
> Testing ic_fetch_miss_ratio
> Testing l2_cache_accesses_from_l2_hwpf
> Testing l2_cache_misses_from_l2_hwpf
> Testing op_cache_fetch_miss_ratio
> Testing l3_read_miss_latency
> Testing l1_itlb_misses
> test child finished with -1
> ---- end ----
> perf all metrics test: FAILED!
>
> After:
>
> 100: perf all metrics test :
> --- start ---
> test child forked, pid 672887
> Testing branch_misprediction_ratio
> Testing all_remote_links_outbound
> Testing nps1_die_to_dram
> Testing macro_ops_dispatched
> Testing all_l2_cache_accesses
> Testing all_l2_cache_hits
> Testing all_l2_cache_misses
> Testing ic_fetch_miss_ratio
> Testing l2_cache_accesses_from_l2_hwpf
> Testing l2_cache_misses_from_l2_hwpf
> Testing op_cache_fetch_miss_ratio
> Testing l3_read_miss_latency
> Testing l1_itlb_misses
> test child finished with 0
> ---- end ----
> perf all metrics test: Ok
>
The issue is resolved after applying this patch:
$ ./perf test 102 -vvv
102: perf all metrics test :
--- start ---
test child forked, pid 244991
Testing branch_misprediction_ratio
Testing all_remote_links_outbound
Testing nps1_die_to_dram
Testing all_l2_cache_accesses
Testing all_l2_cache_hits
Testing all_l2_cache_misses
Testing ic_fetch_miss_ratio
Testing l2_cache_accesses_from_l2_hwpf
Testing l2_cache_misses_from_l2_hwpf
Testing l3_read_miss_latency
Testing l1_itlb_misses
test child finished with 0
---- end ----
perf all metrics test: Ok
> Reported-by: Ayush Jain <ayush.jain3@amd.com>
> Signed-off-by: Sandipan Das <sandipan.das@amd.com>
Tested-by: Ayush Jain <ayush.jain3@amd.com>
> ---
> tools/perf/tests/shell/stat_all_metrics.sh | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/tools/perf/tests/shell/stat_all_metrics.sh b/tools/perf/tests/shell/stat_all_metrics.sh
> index 54774525e18a..1e88ea8c5677 100755
> --- a/tools/perf/tests/shell/stat_all_metrics.sh
> +++ b/tools/perf/tests/shell/stat_all_metrics.sh
> @@ -16,6 +16,13 @@ for m in $(perf list --raw-dump metrics); do
> then
> continue
> fi
> + # Failed again, possibly there are not enough counters so retry system wide
> + # mode but without event grouping.
> + result=$(perf stat -M "$m" --metric-no-group -a sleep 0.01 2>&1)
> + if [[ "$result" =~ ${m:0:50} ]]
> + then
> + continue
> + fi
> # Failed again, possibly the workload was too small so retry with something
> # longer.
> result=$(perf stat -M "$m" perf bench internals synthesize 2>&1)
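For reference, the fallback chain the patch adds can be sketched standalone. In this sketch, `run_perf_stat` is a hypothetical stub standing in for the real `perf stat` invocation (so it runs without perf installed), and `check_metric` mirrors the script's pattern of matching the metric name in the output and retrying with `--metric-no-group` when the grouped attempt fails:

```shell
#!/bin/bash
# Stub for "perf stat -M <metric> ..." (hypothetical, for illustration only):
# pretend nps1_die_to_dram schedules only when grouping is disabled.
run_perf_stat() {
  metric="$1"; shift
  if [ "$metric" = "nps1_die_to_dram" ] && [ "$1" != "--metric-no-group" ]; then
    echo "Error: events cannot be scheduled as a group"
  else
    echo "result for $metric"
  fi
}

check_metric() {
  m="$1"
  # First attempt: events grouped (mirrors the script's earlier step).
  result=$(run_perf_stat "$m")
  if [[ "$result" =~ ${m:0:50} ]]; then echo ok; return 0; fi
  # Retry without event grouping, as the patch adds.
  result=$(run_perf_stat "$m" --metric-no-group)
  if [[ "$result" =~ ${m:0:50} ]]; then echo ok-no-group; return 0; fi
  echo fail; return 1
}

check_metric branch_misprediction_ratio   # prints "ok": first attempt succeeds
check_metric nps1_die_to_dram             # prints "ok-no-group": only the retry succeeds
```

The `${m:0:50}` substring match is the same heuristic the real script uses to tolerate truncated metric names in perf's output.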
Thanks & Regards,
Ayush Jain
Thread overview: 8+ messages
2023-06-14 9:07 [PATCH] perf test: Retry without grouping for all metrics test Sandipan Das
2023-06-14 11:38 ` Ayush Jain [this message]
2023-12-06 13:08 ` Arnaldo Carvalho de Melo
2023-12-06 16:35 ` Ian Rogers
2023-12-06 17:54 ` Arnaldo Carvalho de Melo
2023-12-06 18:50 ` Ian Rogers
2023-06-14 16:40 ` Ian Rogers
2023-06-19 11:46 ` Sandipan Das