* [PATCH 1/4] perf test: Skip dlfilter test for build failures
@ 2025-12-19 1:18 Namhyung Kim
2025-12-19 1:18 ` [PATCH 2/4] perf test: Use shelldir to refer perf source location Namhyung Kim
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: Namhyung Kim @ 2025-12-19 1:18 UTC (permalink / raw)
To: Arnaldo Carvalho de Melo, Ian Rogers, James Clark
Cc: Jiri Olsa, Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users
For some reason, building the dlfilter may fail. Let's skip the test
in that case, as it's not an error in perf itself. This can happen when
you run the perf test without the source code or from a different directory.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
tools/perf/tests/shell/script_dlfilter.sh | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/tools/perf/tests/shell/script_dlfilter.sh b/tools/perf/tests/shell/script_dlfilter.sh
index 45c97d4a7d5f90e8..7895ab0309b29dd5 100755
--- a/tools/perf/tests/shell/script_dlfilter.sh
+++ b/tools/perf/tests/shell/script_dlfilter.sh
@@ -70,15 +70,15 @@ test_dlfilter() {
# Build the dlfilter
if ! cc -c -I tools/perf/include -fpic -x c "${dlfilter_c}" -o "${dlfilter_so}.o"
then
- echo "Basic --dlfilter test [Failed to build dlfilter object]"
- err=1
+ echo "Basic --dlfilter test [Skip - failed to build dlfilter object]"
+ err=2
return
fi
if ! cc -shared -o "${dlfilter_so}" "${dlfilter_so}.o"
then
- echo "Basic --dlfilter test [Failed to link dlfilter shared object]"
- err=1
+ echo "Basic --dlfilter test [Skip - failed to link dlfilter shared object]"
+ err=2
return
fi
--
2.52.0.322.g1dd061c0dc-goog
^ permalink raw reply related [flat|nested] 11+ messages in thread
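[Editor's sketch] Perf's shell tests signal their result through the exit
code: 0 for pass, 1 for fail, 2 for skip. The hypothetical helper below (not
the perf code itself, and `run_build_step` is an invented name) illustrates
the pattern the patch adopts when a preparatory build step cannot run:

```shell
# Hypothetical sketch of the exit-code convention used by perf shell
# tests: 0 = pass, 1 = fail, 2 = skip. A failing build step reports
# "Skip" and returns 2 rather than 1, mirroring the patch above.
run_build_step() {
    # $1: command to run, $2: description for the skip message
    if ! $1 > /dev/null 2>&1
    then
        echo "Basic test [Skip - $2]"
        return 2
    fi
    return 0
}

run_build_step true "step that succeeds"
echo "rc=$?"
run_build_step false "simulated compiler failure"
echo "rc=$?"
```

The test harness then maps that return value to Ok/FAILED/Skip when it
reports the suite result.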
* [PATCH 2/4] perf test: Use shelldir to refer perf source location
2025-12-19 1:18 [PATCH 1/4] perf test: Skip dlfilter test for build failures Namhyung Kim
@ 2025-12-19 1:18 ` Namhyung Kim
2026-01-09 23:17 ` Ian Rogers
2025-12-19 1:18 ` [PATCH 3/4] perf test: Do not skip when some metrics tests succeeded Namhyung Kim
` (2 subsequent siblings)
3 siblings, 1 reply; 11+ messages in thread
From: Namhyung Kim @ 2025-12-19 1:18 UTC (permalink / raw)
To: Arnaldo Carvalho de Melo, Ian Rogers, James Clark
Cc: Jiri Olsa, Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users
The build command uses tools/perf/include, which assumes the test is
running from the root of the Linux kernel source tree. But you can run
perf from other places such as tools/perf, and then the include path
won't match. Use the shelldir variable, which locates the test script
in the tree, to build the include path instead.
$ cd tools/perf
$ ./perf test dlfilter
63: dlfilter C API : Ok
101: perf script --dlfilter tests : Ok
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
tools/perf/tests/shell/script_dlfilter.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/tests/shell/script_dlfilter.sh b/tools/perf/tests/shell/script_dlfilter.sh
index 7895ab0309b29dd5..aaed92bb78285dfd 100755
--- a/tools/perf/tests/shell/script_dlfilter.sh
+++ b/tools/perf/tests/shell/script_dlfilter.sh
@@ -68,7 +68,7 @@ test_dlfilter() {
fi
# Build the dlfilter
- if ! cc -c -I tools/perf/include -fpic -x c "${dlfilter_c}" -o "${dlfilter_so}.o"
+ if ! cc -c -I ${shelldir}/../../include -fpic -x c "${dlfilter_c}" -o "${dlfilter_so}.o"
then
echo "Basic --dlfilter test [Skip - failed to build dlfilter object]"
err=2
--
2.52.0.322.g1dd061c0dc-goog
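[Editor's sketch] The idea can be shown in isolation; `resolve_include_dir`
is a hypothetical helper, not perf code, demonstrating how a script's own
location yields a working include path regardless of the current directory:

```shell
# Hypothetical sketch: derive the include path from the test script's
# own location rather than from the current working directory.
# tests/shell/ sits two levels below tools/perf/, hence ../../include.
resolve_include_dir() {
    # $1: path of the running test script (normally "$0")
    shelldir=$(dirname "$1")
    echo "${shelldir}/../../include"
}

resolve_include_dir "tools/perf/tests/shell/script_dlfilter.sh"
```

Because the path is anchored to the script file, `cd tools/perf && ./perf
test dlfilter` and running from the tree root resolve to the same directory.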
* [PATCH 3/4] perf test: Do not skip when some metrics tests succeeded
2025-12-19 1:18 [PATCH 1/4] perf test: Skip dlfilter test for build failures Namhyung Kim
2025-12-19 1:18 ` [PATCH 2/4] perf test: Use shelldir to refer perf source location Namhyung Kim
@ 2025-12-19 1:18 ` Namhyung Kim
2026-01-09 23:37 ` Ian Rogers
2025-12-19 1:18 ` [PATCH 4/4] perf test: Do not skip when some metric-group tests succeed Namhyung Kim
2026-01-08 23:58 ` [PATCH 1/4] perf test: Skip dlfilter test for build failures Namhyung Kim
3 siblings, 1 reply; 11+ messages in thread
From: Namhyung Kim @ 2025-12-19 1:18 UTC (permalink / raw)
To: Arnaldo Carvalho de Melo, Ian Rogers, James Clark
Cc: Jiri Olsa, Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users
I think the SKIP (2) return value should be used when the entire test
suite was skipped, rather than when only a few of its tests were. FAIL
should be reserved for the case where any test actually failed.
$ perf test -vv 110
110: perf all metrics test:
--- start ---
test child forked, pid 2496399
Testing tma_core_bound
Testing tma_info_core_ilp
Testing tma_info_memory_l2mpki
Testing tma_memory_bound
Testing tma_bottleneck_irregular_overhead
Testing tma_bottleneck_mispredictions
Testing tma_info_bad_spec_branch_misprediction_cost
Testing tma_info_bad_spec_ipmisp_cond_ntaken
Testing tma_info_bad_spec_ipmisp_cond_taken
Testing tma_info_bad_spec_ipmisp_indirect
Testing tma_info_bad_spec_ipmisp_ret
Testing tma_info_bad_spec_ipmispredict
Testing tma_info_branches_callret
Testing tma_info_branches_cond_nt
Testing tma_info_branches_cond_tk
Testing tma_info_branches_jump
Testing tma_info_branches_other_branches
Testing tma_branch_mispredicts
Testing tma_clears_resteers
Testing tma_machine_clears
Testing tma_mispredicts_resteers
Testing tma_bottleneck_big_code
Testing tma_icache_misses
Testing tma_itlb_misses
Testing tma_unknown_branches
Testing tma_info_bad_spec_spec_clears_ratio
Testing tma_other_mispredicts
Testing tma_branch_instructions
Testing tma_info_frontend_tbpc
Testing tma_info_inst_mix_bptkbranch
Testing tma_info_inst_mix_ipbranch
Testing tma_info_inst_mix_ipcall
Testing tma_info_inst_mix_iptb
Testing tma_info_system_ipfarbranch
Testing tma_info_thread_uptb
Testing tma_bottleneck_branching_overhead
Testing tma_nop_instructions
Testing tma_bottleneck_compute_bound_est
Testing tma_divider
Testing tma_ports_utilized_3m
Testing tma_bottleneck_instruction_fetch_bw
Testing tma_frontend_bound
Testing tma_assists
Testing tma_other_nukes
Testing tma_serializing_operation
Testing tma_bottleneck_data_cache_memory_bandwidth
Testing tma_fb_full
Testing tma_mem_bandwidth
Testing tma_sq_full
Testing tma_bottleneck_data_cache_memory_latency
Testing tma_l1_latency_dependency
Testing tma_l2_bound
Testing tma_l3_hit_latency
Testing tma_mem_latency
Testing tma_store_latency
Testing tma_bottleneck_memory_synchronization
Testing tma_contested_accesses
Testing tma_data_sharing
Testing tma_false_sharing
Testing tma_bottleneck_memory_data_tlbs
Testing tma_dtlb_load
Testing tma_dtlb_store
Testing tma_backend_bound
Testing tma_bottleneck_other_bottlenecks
Testing tma_bottleneck_useful_work
Testing tma_retiring
Testing tma_info_memory_fb_hpki
Testing tma_info_memory_l1mpki
Testing tma_info_memory_l1mpki_load
Testing tma_info_memory_l2hpki_all
Testing tma_info_memory_l2hpki_load
Testing tma_info_memory_l2mpki_all
Testing tma_info_memory_l2mpki_load
Testing tma_l1_bound
Testing tma_l3_bound
Testing tma_info_memory_l2mpki_rfo
Testing tma_fp_scalar
Testing tma_fp_vector
Testing tma_fp_vector_128b
Testing tma_fp_vector_256b
Testing tma_fp_vector_512b
Testing tma_port_0
Testing tma_x87_use
Testing tma_info_botlnk_l0_core_bound_likely
Testing tma_info_core_fp_arith_utilization
Testing tma_info_pipeline_execute
Testing tma_info_system_gflops
Testing tma_info_thread_execute_per_issue
Testing tma_dsb
Testing tma_info_botlnk_l2_dsb_bandwidth
Testing tma_info_frontend_dsb_coverage
Testing tma_decoder0_alone
Testing tma_dsb_switches
Testing tma_info_botlnk_l2_dsb_misses
Testing tma_info_frontend_dsb_switch_cost
Testing tma_info_frontend_ipdsb_miss_ret
Testing tma_mite
Testing tma_mite_4wide
Testing CPUs_utilized
Testing backend_cycles_idle
[Ignored backend_cycles_idle] failed but as a Default metric this can be expected
Performance counter stats for 'perf test -w noploop':
     <not counted>   cpu-cycles:u
   <not supported>   stalled-cycles-backend:u
       1.014051473 seconds time elapsed
       1.005718000 seconds user
       0.008013000 seconds sys
Testing branch_frequency
Testing branch_miss_rate
Testing cs_per_second
Testing cycles_frequency
Testing frontend_cycles_idle
[Ignored frontend_cycles_idle] failed but as a Default metric this can be expected
Performance counter stats for 'perf test -w noploop':
     <not counted>   cpu-cycles:u
   <not supported>   stalled-cycles-frontend:u
       1.012813656 seconds time elapsed
       1.004603000 seconds user
       0.008004000 seconds sys
Testing insn_per_cycle
Testing migrations_per_second
Testing page_faults_per_second
Testing stalled_cycles_per_instruction
[Ignored stalled_cycles_per_instruction] failed but as a Default metric this can be expected
Error: No supported events found. The stalled-cycles-backend:u event is not supported.
Testing tma_bad_speculation
Testing l1d_miss_rate
Testing llc_miss_rate
Testing dtlb_miss_rate
Testing itlb_miss_rate
[Ignored itlb_miss_rate] failed but as a Default metric this can be expected
Performance counter stats for 'perf test -w noploop':
   <not supported>   iTLB-loads:u
             3,097   iTLB-load-misses:u
       1.012766732 seconds time elapsed
       1.004318000 seconds user
       0.008002000 seconds sys
Testing l1i_miss_rate
[Ignored l1i_miss_rate] failed but as a Default metric this can be expected
Performance counter stats for 'perf test -w noploop':
     <not counted>   L1-icache-load-misses:u
   <not supported>   L1-icache-loads:u
       1.013606395 seconds time elapsed
       1.001371000 seconds user
       0.011968000 seconds sys
Testing l1_prefetch_miss_rate
[Ignored l1_prefetch_miss_rate] failed but as a Default metric this can be expected
Error: No supported events found. The L1-dcache-prefetches:u event is not supported.
Testing tma_info_botlnk_l2_ic_misses
Testing tma_info_frontend_fetch_upc
Testing tma_info_frontend_icache_miss_latency
Testing tma_info_frontend_ipunknown_branch
Testing tma_info_frontend_lsd_coverage
Testing tma_info_memory_tlb_code_stlb_mpki
Testing tma_info_pipeline_fetch_dsb
Testing tma_info_pipeline_fetch_lsd
Testing tma_info_pipeline_fetch_mite
Testing tma_info_pipeline_fetch_ms
Testing tma_fetch_bandwidth
Testing tma_lsd
Testing tma_branch_resteers
Testing tma_code_l2_hit
Testing tma_code_l2_miss
Testing tma_code_stlb_hit
Testing tma_code_stlb_miss
Testing tma_code_stlb_miss_2m
Testing tma_code_stlb_miss_4k
Testing tma_lcp
Testing tma_ms_switches
Testing tma_info_core_flopc
Testing tma_info_inst_mix_iparith
Testing tma_info_inst_mix_iparith_avx128
Testing tma_info_inst_mix_iparith_avx256
Testing tma_info_inst_mix_iparith_avx512
Testing tma_info_inst_mix_iparith_scalar_dp
Testing tma_info_inst_mix_iparith_scalar_sp
Testing tma_info_inst_mix_ipflop
Testing tma_info_inst_mix_ippause
Testing tma_fetch_latency
Testing tma_fp_arith
Testing tma_fp_assists
Testing tma_info_system_cpu_utilization
Testing tma_info_system_dram_bw_use
[Skipped tma_info_system_dram_bw_use] Not supported events
Performance counter stats for 'perf test -w noploop':
   <not supported>   UNC_ARB_TRK_REQUESTS.ALL:u
   <not supported>   UNC_ARB_COH_TRK_REQUESTS.ALL:u
     1,013,554,749   duration_time
       1.013527265 seconds time elapsed
       1.005417000 seconds user
       0.008011000 seconds sys
Testing tma_info_frontend_l2mpki_code
Testing tma_info_frontend_l2mpki_code_all
Testing tma_info_inst_mix_ipload
Testing tma_info_inst_mix_ipstore
Testing tma_info_memory_latency_load_l2_miss_latency
Testing tma_lock_latency
Testing tma_info_memory_core_l1d_cache_fill_bw_2t
Testing tma_info_memory_core_l2_cache_fill_bw_2t
Testing tma_info_memory_core_l3_cache_access_bw_2t
Testing tma_info_memory_core_l3_cache_fill_bw_2t
Testing tma_info_memory_l1d_cache_fill_bw
Testing tma_info_memory_l2_cache_fill_bw
Testing tma_info_memory_l3_cache_access_bw
Testing tma_info_memory_l3_cache_fill_bw
Testing tma_info_memory_l3mpki
Testing tma_info_memory_load_miss_real_latency
Testing tma_info_memory_mix_bus_lock_pki
Testing tma_info_memory_mix_uc_load_pki
Testing tma_info_memory_mlp
Testing tma_info_memory_tlb_load_stlb_mpki
Testing tma_info_memory_tlb_page_walks_utilization
Testing tma_info_memory_tlb_store_stlb_mpki
Testing tma_info_system_mem_parallel_reads
[Skipped tma_info_system_mem_parallel_reads] Not supported events
Performance counter stats for 'perf test -w noploop':
   <not supported>   UNC_ARB_DAT_OCCUPANCY.RD:u
     <not counted>   UNC_ARB_DAT_OCCUPANCY.RD/cmask=1/
       1.013354884 seconds time elapsed
       1.009239000 seconds user
       0.004004000 seconds sys
Testing tma_info_system_mem_read_latency
[Skipped tma_info_system_mem_read_latency] Not supported events
Performance counter stats for 'perf test -w noploop':
   <not supported>   UNC_ARB_DAT_OCCUPANCY.RD:u
     <not counted>   UNC_ARB_TRK_OCCUPANCY.RD
     <not counted>   UNC_ARB_TRK_REQUESTS.RD
       1.012882143 seconds time elapsed
       1.004600000 seconds user
       0.008036000 seconds sys
Testing tma_info_thread_cpi
Testing tma_streaming_stores
Testing tma_dram_bound
Testing tma_store_bound
Testing tma_l2_hit_latency
Testing tma_load_stlb_hit
Testing tma_load_stlb_miss
Testing tma_load_stlb_miss_1g
Testing tma_load_stlb_miss_2m
Testing tma_load_stlb_miss_4k
Testing tma_store_stlb_hit
Testing tma_store_stlb_miss
Testing tma_store_stlb_miss_1g
Testing tma_store_stlb_miss_2m
Testing tma_store_stlb_miss_4k
Testing tma_info_memory_latency_data_l2_mlp
Testing tma_info_memory_latency_load_l2_mlp
Testing tma_info_pipeline_ipassist
Testing tma_microcode_sequencer
Testing tma_ms
Testing tma_info_system_kernel_cpi
[Failed tma_info_system_kernel_cpi] Metric contains missing events
Error: No supported events found.
Access to performance monitoring and observability operations is limited.
Consider adjusting /proc/sys/kernel/perf_event_paranoid setting to open
access to performance monitoring and observability operations for processes
without CAP_PERFMON, CAP_SYS_PTRACE or CAP_SYS_ADMIN Linux capability.
More information can be found at 'Perf events and tool security' document:
https://www.kernel.org/doc/html/latest/admin-guide/perf-security.html
perf_event_paranoid setting is 2:
  -1: Allow use of (almost) all events by all users
      Ignore mlock limit after perf_event_mlock_kb without CAP_IPC_LOCK
>= 0: Disallow raw and ftrace function tracepoint access
>= 1: Disallow CPU event access
>= 2: Disallow kernel profiling
To make the adjusted perf_event_paranoid setting permanent preserve it in
/etc/sysctl.conf (e.g. kernel.perf_event_paranoid = <setting>)
Testing tma_info_system_kernel_utilization
[Failed tma_info_system_kernel_utilization] Metric contains missing events
Error: No supported events found.
Access to performance monitoring and observability operations is limited.
Consider adjusting /proc/sys/kernel/perf_event_paranoid setting to open
access to performance monitoring and observability operations for processes
without CAP_PERFMON, CAP_SYS_PTRACE or CAP_SYS_ADMIN Linux capability.
More information can be found at 'Perf events and tool security' document:
https://www.kernel.org/doc/html/latest/admin-guide/perf-security.html
perf_event_paranoid setting is 2:
  -1: Allow use of (almost) all events by all users
      Ignore mlock limit after perf_event_mlock_kb without CAP_IPC_LOCK
>= 0: Disallow raw and ftrace function tracepoint access
>= 1: Disallow CPU event access
>= 2: Disallow kernel profiling
To make the adjusted perf_event_paranoid setting permanent preserve it in
/etc/sysctl.conf (e.g. kernel.perf_event_paranoid = <setting>)
Testing tma_info_pipeline_retire
Testing tma_info_thread_clks
Testing tma_info_thread_uoppi
Testing tma_memory_operations
Testing tma_other_light_ops
Testing tma_ports_utilization
Testing tma_ports_utilized_0
Testing tma_ports_utilized_1
Testing tma_ports_utilized_2
Testing C10_Pkg_Residency
[Failed C10_Pkg_Residency] Metric contains missing events
WARNING: grouped events cpus do not match.
Events with CPUs not matching the leader will be removed from the group.
  anon group { cstate_pkg/c10-residency/, msr/tsc/ }
Error: No supported events found.
Invalid event (cstate_pkg/c10-residency/u) in per-thread mode, enable system wide with '-a'.
Testing C2_Pkg_Residency
[Failed C2_Pkg_Residency] Metric contains missing events
WARNING: grouped events cpus do not match.
Events with CPUs not matching the leader will be removed from the group.
  anon group { cstate_pkg/c2-residency/, msr/tsc/ }
Error: No supported events found.
Invalid event (cstate_pkg/c2-residency/u) in per-thread mode, enable system wide with '-a'.
Testing C3_Pkg_Residency
[Failed C3_Pkg_Residency] Metric contains missing events
WARNING: grouped events cpus do not match.
Events with CPUs not matching the leader will be removed from the group.
  anon group { msr/tsc/, cstate_pkg/c3-residency/ }
Error: No supported events found.
Invalid event (msr/tsc/u) in per-thread mode, enable system wide with '-a'.
Testing C6_Core_Residency
[Failed C6_Core_Residency] Metric contains missing events
WARNING: grouped events cpus do not match.
Events with CPUs not matching the leader will be removed from the group.
  anon group { cstate_core/c6-residency/, msr/tsc/ }
Error: No supported events found.
Invalid event (cstate_core/c6-residency/u) in per-thread mode, enable system wide with '-a'.
Testing C6_Pkg_Residency
[Failed C6_Pkg_Residency] Metric contains missing events
WARNING: grouped events cpus do not match.
Events with CPUs not matching the leader will be removed from the group.
  anon group { cstate_pkg/c6-residency/, msr/tsc/ }
Error: No supported events found.
Invalid event (cstate_pkg/c6-residency/u) in per-thread mode, enable system wide with '-a'.
Testing C7_Core_Residency
[Failed C7_Core_Residency] Metric contains missing events
WARNING: grouped events cpus do not match.
Events with CPUs not matching the leader will be removed from the group.
  anon group { cstate_core/c7-residency/, msr/tsc/ }
Error: No supported events found.
Invalid event (cstate_core/c7-residency/u) in per-thread mode, enable system wide with '-a'.
Testing C7_Pkg_Residency
[Failed C7_Pkg_Residency] Metric contains missing events
WARNING: grouped events cpus do not match.
Events with CPUs not matching the leader will be removed from the group.
  anon group { cstate_pkg/c7-residency/, msr/tsc/ }
Error: No supported events found.
Invalid event (cstate_pkg/c7-residency/u) in per-thread mode, enable system wide with '-a'.
Testing C8_Pkg_Residency
[Failed C8_Pkg_Residency] Metric contains missing events
WARNING: grouped events cpus do not match.
Events with CPUs not matching the leader will be removed from the group.
  anon group { cstate_pkg/c8-residency/, msr/tsc/ }
Error: No supported events found.
Invalid event (cstate_pkg/c8-residency/u) in per-thread mode, enable system wide with '-a'.
Testing C9_Pkg_Residency
[Failed C9_Pkg_Residency] Metric contains missing events
WARNING: grouped events cpus do not match.
Events with CPUs not matching the leader will be removed from the group.
  anon group { cstate_pkg/c9-residency/, msr/tsc/ }
Error: No supported events found.
Invalid event (cstate_pkg/c9-residency/u) in per-thread mode, enable system wide with '-a'.
Testing tma_info_core_epc
Testing tma_info_system_core_frequency
Testing tma_info_system_power
[Skipped tma_info_system_power] Not supported events
Performance counter stats for 'perf test -w noploop':
   <not supported> Joules power/energy-pkg/u
     1,013,238,256   duration_time
       1.013223072 seconds time elapsed
       0.995924000 seconds user
       0.011903000 seconds sys
Testing tma_info_system_power_license0_utilization
Testing tma_info_system_power_license1_utilization
Testing tma_info_system_power_license2_utilization
Testing tma_info_system_turbo_utilization
Testing tma_info_inst_mix_ipswpf
Testing tma_info_memory_prefetches_useless_hwpf
Testing tma_info_core_coreipc
Testing tma_info_thread_ipc
Testing tma_heavy_operations
Testing tma_light_operations
Testing tma_info_core_core_clks
Testing tma_info_system_smt_2t_utilization
Testing tma_info_thread_slots_utilization
Testing UNCORE_FREQ
[Skipped UNCORE_FREQ] Not supported events
Performance counter stats for 'perf test -w noploop':
   <not supported>   UNC_CLOCK.SOCKET:u
     1,015,993,466   duration_time
       1.015949387 seconds time elapsed
       1.007676000 seconds user
       0.008029000 seconds sys
Testing tma_info_system_socket_clks
[Failed tma_info_system_socket_clks] Metric contains missing events
Error: No supported events found. Invalid event (UNC_CLOCK.SOCKET:u) in per-thread mode, enable system wide with '-a'.
Testing tma_info_inst_mix_instructions
Testing tma_info_system_cpus_utilized
Testing tma_info_system_mux
Testing tma_info_system_time
Testing tma_info_thread_slots
Testing tma_few_uops_instructions
Testing tma_4k_aliasing
Testing tma_cisc
Testing tma_fp_divider
Testing tma_int_divider
Testing tma_slow_pause
Testing tma_split_loads
Testing tma_split_stores
Testing tma_store_fwd_blk
Testing tma_alu_op_utilization
Testing tma_load_op_utilization
Testing tma_mixing_vectors
Testing tma_store_op_utilization
Testing tma_port_1
Testing tma_port_5
Testing tma_port_6
Testing smi_cycles
[Skipped smi_cycles] Not supported events
Performance counter stats for 'perf test -w noploop':
   <not supported>   msr/smi/u
   <not supported>   msr/aperf/u
     3,965,789,327   cycles:u
       1.012779591 seconds time elapsed
       1.004579000 seconds user
       0.007972000 seconds sys
Testing smi_num
[Failed smi_num] Metric contains missing events
Error: No supported events found. Invalid event (msr/smi/u) in per-thread mode, enable system wide with '-a'.
Testing tsx_aborted_cycles
Testing tsx_cycles_per_elision
Testing tsx_cycles_per_transaction
Testing tsx_transactional_cycles
---- end(-1) ----
110: perf all metrics test : FAILED!
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
tools/perf/tests/shell/stat_all_metrics.sh | 29 ++++++++++++++++------
1 file changed, 22 insertions(+), 7 deletions(-)
diff --git a/tools/perf/tests/shell/stat_all_metrics.sh b/tools/perf/tests/shell/stat_all_metrics.sh
index 3dabb39c7cc8c46a..b582d23f28c9e8e2 100755
--- a/tools/perf/tests/shell/stat_all_metrics.sh
+++ b/tools/perf/tests/shell/stat_all_metrics.sh
@@ -15,7 +15,8 @@ then
test_prog="perf test -w noploop"
fi
-err=0
+skip=0
+err=3
for m in $(perf list --raw-dump metrics); do
echo "Testing $m"
result=$(perf stat -M "$m" $system_wide_flag -- $test_prog 2>&1)
@@ -23,6 +24,10 @@ for m in $(perf list --raw-dump metrics); do
if [[ $result_err -eq 0 && "$result" =~ ${m:0:50} ]]
then
# No error result and metric shown.
+ if [[ "$err" -ne 1 ]]
+ then
+ err=0
+ fi
continue
fi
if [[ "$result" =~ "Cannot resolve IDs for" || "$result" =~ "No supported events found" ]]
@@ -44,7 +49,7 @@ for m in $(perf list --raw-dump metrics); do
echo $result
if [[ $err -eq 0 ]]
then
- err=2 # Skip
+ skip=1
fi
continue
elif [[ "$result" =~ "in per-thread mode, enable system wide" ]]
@@ -53,7 +58,7 @@ for m in $(perf list --raw-dump metrics); do
echo $result
if [[ $err -eq 0 ]]
then
- err=2 # Skip
+ skip=1
fi
continue
elif [[ "$result" =~ "<not supported>" ]]
@@ -68,7 +73,7 @@ for m in $(perf list --raw-dump metrics); do
echo $result
if [[ $err -eq 0 ]]
then
- err=2 # Skip
+ skip=1
fi
continue
elif [[ "$result" =~ "<not counted>" ]]
@@ -77,7 +82,7 @@ for m in $(perf list --raw-dump metrics); do
echo $result
if [[ $err -eq 0 ]]
then
- err=2 # Skip
+ skip=1
fi
continue
elif [[ "$result" =~ "FP_ARITH" || "$result" =~ "AMX" ]]
@@ -86,7 +91,7 @@ for m in $(perf list --raw-dump metrics); do
echo $result
if [[ $err -eq 0 ]]
then
- err=2 # Skip
+ skip=1
fi
continue
elif [[ "$result" =~ "PMM" ]]
@@ -95,7 +100,7 @@ for m in $(perf list --raw-dump metrics); do
echo $result
if [[ $err -eq 0 ]]
then
- err=2 # Skip
+ skip=1
fi
continue
fi
@@ -106,6 +111,10 @@ for m in $(perf list --raw-dump metrics); do
if [[ $result_err -eq 0 && "$result" =~ ${m:0:50} ]]
then
# No error result and metric shown.
+ if [[ "$err" -ne 1 ]]
+ then
+ err=0
+ fi
continue
fi
echo "[Failed $m] has non-zero error '$result_err' or not printed in:"
@@ -113,4 +122,10 @@ for m in $(perf list --raw-dump metrics); do
err=1
done
+# return SKIP only if no success returned
+if [[ "$err" -eq 3 && "$skip" -eq 1 ]]
+then
+ err=2
+fi
+
exit "$err"
--
2.52.0.322.g1dd061c0dc-goog
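[Editor's sketch] The aggregation the patch implements can be summarized in
a standalone sketch (the `aggregate` helper is hypothetical, not the perf
script): err starts at 3 meaning "no result yet", any success lowers it to 0
unless a failure was already recorded, any failure pins it at 1, and SKIP (2)
is returned only when nothing succeeded but something was skipped:

```shell
# Sketch of the pass/skip/fail aggregation described in the patch:
# err starts at 3 ("nothing ran"); a success drops it to 0 unless a
# failure (1) was already seen; SKIP (2) only when no test succeeded.
aggregate() {
    err=3 skip=0
    for outcome in "$@"
    do
        case $outcome in
            pass) [ "$err" -ne 1 ] && err=0 ;;
            skip) skip=1 ;;
            fail) err=1 ;;
        esac
    done
    if [ "$err" -eq 3 ] && [ "$skip" -eq 1 ]
    then
        err=2
    fi
    echo "$err"
}

aggregate pass skip      # some success: overall pass (0)
aggregate skip skip      # everything skipped: overall skip (2)
aggregate pass fail skip # any failure: overall fail (1)
```

Patch 4/4 below applies the same scheme to the metric-group test.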
* [PATCH 4/4] perf test: Do not skip when some metric-group tests succeed
2025-12-19 1:18 [PATCH 1/4] perf test: Skip dlfilter test for build failures Namhyung Kim
2025-12-19 1:18 ` [PATCH 2/4] perf test: Use shelldir to refer perf source location Namhyung Kim
2025-12-19 1:18 ` [PATCH 3/4] perf test: Do not skip when some metrics tests succeeded Namhyung Kim
@ 2025-12-19 1:18 ` Namhyung Kim
2026-01-09 23:39 ` Ian Rogers
2026-01-08 23:58 ` [PATCH 1/4] perf test: Skip dlfilter test for build failures Namhyung Kim
3 siblings, 1 reply; 11+ messages in thread
From: Namhyung Kim @ 2025-12-19 1:18 UTC (permalink / raw)
To: Arnaldo Carvalho de Melo, Ian Rogers, James Clark
Cc: Jiri Olsa, Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users
I think the SKIP (2) return value should be used when the entire test
suite was skipped, rather than when only a few of its tests were. FAIL
should be reserved for the case where any test actually failed.
$ perf test -vv 109
109: perf all metricgroups test:
--- start ---
test child forked, pid 2493003
Testing Backend
Testing Bad
Testing BadSpec
Testing BigFootprint
Testing BrMispredicts
Testing Branches
Testing BvBC
Testing BvBO
Testing BvCB
Testing BvFB
Testing BvIO
Testing BvMB
Testing BvML
Testing BvMP
Testing BvMS
Testing BvMT
Testing BvOB
Testing BvUW
Testing CacheHits
Testing CacheMisses
Testing CodeGen
Testing Compute
Testing Cor
Testing DSB
Testing DSBmiss
Testing DataSharing
Testing Default
Testing Default2
Testing Default3
Testing Default4
Ignoring failures in Default4 that may contain unsupported legacy events
Testing Fed
Testing FetchBW
Testing FetchLat
Testing Flops
Testing FpScalar
Testing FpVector
Testing Frontend
Testing HPC
Testing IcMiss
Testing InsType
Testing LSD
Testing LockCont
Testing MachineClears
Testing Machine_Clears
Testing Mem
Testing MemOffcore
Testing MemoryBW
Testing MemoryBound
Testing MemoryLat
Testing MemoryTLB
Testing Memory_BW
Testing Memory_Lat
Testing MicroSeq
Testing OS
Testing Offcore
Testing PGO
Testing Pipeline
Testing PortsUtil
Testing Power
Testing Prefetches
Testing Ret
Testing Retire
Testing SMT
Testing Snoop
Testing SoC
Testing Summary
Testing TmaL1
Testing TmaL2
Testing TmaL3mem
Testing TopdownL1
Testing TopdownL2
Testing TopdownL3
Testing TopdownL4
Testing TopdownL5
Testing TopdownL6
Testing smi
Testing tma_L1_group
Testing tma_L2_group
Testing tma_L3_group
Testing tma_L4_group
Testing tma_L5_group
Testing tma_L6_group
Testing tma_alu_op_utilization_group
Testing tma_assists_group
Testing tma_backend_bound_group
Testing tma_bad_speculation_group
Testing tma_branch_mispredicts_group
Testing tma_branch_resteers_group
Testing tma_code_stlb_miss_group
Testing tma_core_bound_group
Testing tma_divider_group
Testing tma_dram_bound_group
Testing tma_dtlb_load_group
Testing tma_dtlb_store_group
Testing tma_fetch_bandwidth_group
Testing tma_fetch_latency_group
Testing tma_fp_arith_group
Testing tma_fp_vector_group
Testing tma_frontend_bound_group
Testing tma_heavy_operations_group
Testing tma_icache_misses_group
Testing tma_issue2P
Testing tma_issueBM
Testing tma_issueBW
Testing tma_issueComp
Testing tma_issueD0
Testing tma_issueFB
Testing tma_issueFL
Testing tma_issueL1
Testing tma_issueLat
Testing tma_issueMC
Testing tma_issueMS
Testing tma_issueMV
Testing tma_issueRFO
Testing tma_issueSL
Testing tma_issueSO
Testing tma_issueSmSt
Testing tma_issueSpSt
Testing tma_issueSyncxn
Testing tma_issueTLB
Testing tma_itlb_misses_group
Testing tma_l1_bound_group
Testing tma_l2_bound_group
Testing tma_l3_bound_group
Testing tma_light_operations_group
Testing tma_load_stlb_miss_group
Testing tma_machine_clears_group
Testing tma_memory_bound_group
Testing tma_microcode_sequencer_group
Testing tma_mite_group
Testing tma_other_light_ops_group
Testing tma_ports_utilization_group
Testing tma_ports_utilized_0_group
Testing tma_ports_utilized_3m_group
Testing tma_retiring_group
Testing tma_serializing_operation_group
Testing tma_store_bound_group
Testing tma_store_stlb_miss_group
Testing transaction
---- end(0) ----
109: perf all metricgroups test : Ok
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
.../perf/tests/shell/stat_all_metricgroups.sh | 26 ++++++++++++-------
1 file changed, 16 insertions(+), 10 deletions(-)
diff --git a/tools/perf/tests/shell/stat_all_metricgroups.sh b/tools/perf/tests/shell/stat_all_metricgroups.sh
index 1400880ec01f8267..81bc7070b5ab0d5c 100755
--- a/tools/perf/tests/shell/stat_all_metricgroups.sh
+++ b/tools/perf/tests/shell/stat_all_metricgroups.sh
@@ -12,31 +12,32 @@ if ParanoidAndNotRoot 0
then
system_wide_flag=""
fi
-err=0
+
+err=3
+skip=0
for m in $(perf list --raw-dump metricgroups)
do
echo "Testing $m"
result=$(perf stat -M "$m" $system_wide_flag sleep 0.01 2>&1)
result_err=$?
- if [[ $result_err -gt 0 ]]
+ if [[ $result_err -eq 0 ]]
then
+ if [[ "$err" -ne 1 ]]
+ then
+ err=0
+ fi
+ else
if [[ "$result" =~ \
"Access to performance monitoring and observability operations is limited" ]]
then
echo "Permission failure"
echo $result
- if [[ $err -eq 0 ]]
- then
- err=2 # Skip
- fi
+ skip=1
elif [[ "$result" =~ "in per-thread mode, enable system wide" ]]
then
echo "Permissions - need system wide mode"
echo $result
- if [[ $err -eq 0 ]]
- then
- err=2 # Skip
- fi
+ skip=1
elif [[ "$m" == @(Default2|Default3|Default4) ]]
then
echo "Ignoring failures in $m that may contain unsupported legacy events"
@@ -48,4 +49,9 @@ do
fi
done
+if [[ "$err" -eq 3 && "$skip" -eq 1 ]]
+then
+ err=2
+fi
+
exit $err
--
2.52.0.322.g1dd061c0dc-goog
* Re: [PATCH 1/4] perf test: Skip dlfilter test for build failures
2025-12-19 1:18 [PATCH 1/4] perf test: Skip dlfilter test for build failures Namhyung Kim
` (2 preceding siblings ...)
2025-12-19 1:18 ` [PATCH 4/4] perf test: Do not skip when some metric-group tests succeed Namhyung Kim
@ 2026-01-08 23:58 ` Namhyung Kim
2026-01-09 23:16 ` Ian Rogers
3 siblings, 1 reply; 11+ messages in thread
From: Namhyung Kim @ 2026-01-08 23:58 UTC (permalink / raw)
To: Arnaldo Carvalho de Melo, Ian Rogers, James Clark
Cc: Jiri Olsa, Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users
Ping (for the whole series)!
Thanks,
Namhyung
On Thu, Dec 18, 2025 at 05:18:17PM -0800, Namhyung Kim wrote:
> For some reason, building the dlfilter may fail. Let's skip the test
> in that case, as it's not an error in perf itself. This can happen when
> you run the perf test without the source code or from a different directory.
>
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
> tools/perf/tests/shell/script_dlfilter.sh | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/tools/perf/tests/shell/script_dlfilter.sh b/tools/perf/tests/shell/script_dlfilter.sh
> index 45c97d4a7d5f90e8..7895ab0309b29dd5 100755
> --- a/tools/perf/tests/shell/script_dlfilter.sh
> +++ b/tools/perf/tests/shell/script_dlfilter.sh
> @@ -70,15 +70,15 @@ test_dlfilter() {
> # Build the dlfilter
> if ! cc -c -I tools/perf/include -fpic -x c "${dlfilter_c}" -o "${dlfilter_so}.o"
> then
> - echo "Basic --dlfilter test [Failed to build dlfilter object]"
> - err=1
> + echo "Basic --dlfilter test [Skip - failed to build dlfilter object]"
> + err=2
> return
> fi
>
> if ! cc -shared -o "${dlfilter_so}" "${dlfilter_so}.o"
> then
> - echo "Basic --dlfilter test [Failed to link dlfilter shared object]"
> - err=1
> + echo "Basic --dlfilter test [Skip - failed to link dlfilter shared object]"
> + err=2
> return
> fi
>
> --
> 2.52.0.322.g1dd061c0dc-goog
>
* Re: [PATCH 1/4] perf test: Skip dlfilter test for build failures
2026-01-08 23:58 ` [PATCH 1/4] perf test: Skip dlfilter test for build failures Namhyung Kim
@ 2026-01-09 23:16 ` Ian Rogers
2026-01-13 20:21 ` Arnaldo Carvalho de Melo
0 siblings, 1 reply; 11+ messages in thread
From: Ian Rogers @ 2026-01-09 23:16 UTC (permalink / raw)
To: Namhyung Kim
Cc: Arnaldo Carvalho de Melo, James Clark, Jiri Olsa, Adrian Hunter,
Peter Zijlstra, Ingo Molnar, LKML, linux-perf-users
On Thu, Jan 8, 2026 at 3:58 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> Ping (for the whole series)!
>
> Thanks,
> Namhyung
>
>
> On Thu, Dec 18, 2025 at 05:18:17PM -0800, Namhyung Kim wrote:
> > For some reason, building the dlfilter may fail. Let's skip the test
> > in that case since it's not an error in perf itself. This can happen
> > when perf test is run without the source code or from a different
> > directory.
> >
> > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Reviewed-by: Ian Rogers <irogers@google.com>
Thanks,
Ian
> > ---
> > tools/perf/tests/shell/script_dlfilter.sh | 8 ++++----
> > 1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/tools/perf/tests/shell/script_dlfilter.sh b/tools/perf/tests/shell/script_dlfilter.sh
> > index 45c97d4a7d5f90e8..7895ab0309b29dd5 100755
> > --- a/tools/perf/tests/shell/script_dlfilter.sh
> > +++ b/tools/perf/tests/shell/script_dlfilter.sh
> > @@ -70,15 +70,15 @@ test_dlfilter() {
> > # Build the dlfilter
> > if ! cc -c -I tools/perf/include -fpic -x c "${dlfilter_c}" -o "${dlfilter_so}.o"
> > then
> > - echo "Basic --dlfilter test [Failed to build dlfilter object]"
> > - err=1
> > + echo "Basic --dlfilter test [Skip - failed to build dlfilter object]"
> > + err=2
> > return
> > fi
> >
> > if ! cc -shared -o "${dlfilter_so}" "${dlfilter_so}.o"
> > then
> > - echo "Basic --dlfilter test [Failed to link dlfilter shared object]"
> > - err=1
> > + echo "Basic --dlfilter test [Skip - failed to link dlfilter shared object]"
> > + err=2
> > return
> > fi
> >
> > --
> > 2.52.0.322.g1dd061c0dc-goog
> >
* Re: [PATCH 2/4] perf test: Use shelldir to refer perf source location
2025-12-19 1:18 ` [PATCH 2/4] perf test: Use shelldir to refer perf source location Namhyung Kim
@ 2026-01-09 23:17 ` Ian Rogers
0 siblings, 0 replies; 11+ messages in thread
From: Ian Rogers @ 2026-01-09 23:17 UTC (permalink / raw)
To: Namhyung Kim
Cc: Arnaldo Carvalho de Melo, James Clark, Jiri Olsa, Adrian Hunter,
Peter Zijlstra, Ingo Molnar, LKML, linux-perf-users
On Thu, Dec 18, 2025 at 5:18 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> It uses tools/perf/include, which assumes perf is running from the
> root of the linux kernel source tree. But perf can be run from other
> places like tools/perf, in which case the include path won't match.
> We can use the shelldir variable to locate the test script in the tree.
>
> $ cd tools/perf
>
> $ ./perf test dlfilter
> 63: dlfilter C API : Ok
> 101: perf script --dlfilter tests : Ok
>
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Reviewed-by: Ian Rogers <irogers@google.com>
Thanks,
Ian
> ---
> tools/perf/tests/shell/script_dlfilter.sh | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tools/perf/tests/shell/script_dlfilter.sh b/tools/perf/tests/shell/script_dlfilter.sh
> index 7895ab0309b29dd5..aaed92bb78285dfd 100755
> --- a/tools/perf/tests/shell/script_dlfilter.sh
> +++ b/tools/perf/tests/shell/script_dlfilter.sh
> @@ -68,7 +68,7 @@ test_dlfilter() {
> fi
>
> # Build the dlfilter
> - if ! cc -c -I tools/perf/include -fpic -x c "${dlfilter_c}" -o "${dlfilter_so}.o"
> + if ! cc -c -I ${shelldir}/../../include -fpic -x c "${dlfilter_c}" -o "${dlfilter_so}.o"
> then
> echo "Basic --dlfilter test [Skip - failed to build dlfilter object]"
> err=2
> --
> 2.52.0.322.g1dd061c0dc-goog
>
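The shelldir variable here is the directory containing the test script, so the include path resolves the same way no matter where perf is run from. A generic sketch of the idea (the variable name mirrors the patch; how perf's test harness actually sets shelldir is not shown here):

```shell
# Resolve paths relative to the script's own location instead of the
# caller's current working directory.
shelldir=$(dirname "$0")
include_dir="${shelldir}/../../include"
echo "cc -c -I ${include_dir} -fpic -x c dlfilter.c"
```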
* Re: [PATCH 3/4] perf test: Do not skip when some metrics tests succeeded
2025-12-19 1:18 ` [PATCH 3/4] perf test: Do not skip when some metrics tests succeeded Namhyung Kim
@ 2026-01-09 23:37 ` Ian Rogers
2026-01-10 0:26 ` Namhyung Kim
0 siblings, 1 reply; 11+ messages in thread
From: Ian Rogers @ 2026-01-09 23:37 UTC (permalink / raw)
To: Namhyung Kim
Cc: Arnaldo Carvalho de Melo, James Clark, Jiri Olsa, Adrian Hunter,
Peter Zijlstra, Ingo Molnar, LKML, linux-perf-users
On Thu, Dec 18, 2025 at 5:18 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> I think the return value of SKIP (2) should be used only when the
> entire test suite was skipped, rather than just a few of its tests,
> while FAIL should be reserved for when any test failed.
This doesn't make sense to me. The current behavior is to set err to 0
(success):
https://web.git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/tests/shell/stat_all_metrics.sh?h=perf-tools-next#n18
If a failure condition occurs then err is set to 1 (fail)
unconditionally as you can't be more failing than failing, like:
https://web.git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/tests/shell/stat_all_metrics.sh?h=perf-tools-next#n38
If a skip condition is encountered then we set err to 2 (skip),
conditional on the err state currently being 0 (success), i.e.
don't turn a fail into a skip:
https://web.git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/tests/shell/stat_all_metrics.sh?h=perf-tools-next#n45
All metrics are always tested by the test, so I'm not sure what "entire
test suite" means. The test should report success if nothing skips or
fails, skip if something skips and there are no failures, and
otherwise failure.
I can see the test being simplified by having a failure flag and a
skip flag, then determining the 0, 1 or 2 value from these two flags.
That would avoid testing the err value when setting it to 2
(skip), but the change here isn't doing that. The change introduces a
new value of 3, which seems more complex, and the skip flag is set
depending on the global err value rather than just being a sticky flag
showing a metric was skipped.
Thanks,
Ian
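The flag-based scheme Ian describes, sticky fail and skip flags resolved to a single exit code at the end, could be sketched as follows (hypothetical loop and outcomes, not the actual test script):

```shell
fail=0
skip=0

# Hypothetical per-metric outcomes; in the real test these come from
# running "perf stat -M <metric>" and classifying its output.
for outcome in ok skip ok
do
    case "$outcome" in
        fail) fail=1 ;;   # sticky: any failure makes the suite fail
        skip) skip=1 ;;   # sticky: remember that something was skipped
    esac
done

# Resolve the two flags into perf's 0/1/2 convention: fail beats skip.
if [ "$fail" -eq 1 ]
then
    err=1
elif [ "$skip" -eq 1 ]
then
    err=2
else
    err=0
fi
```

Under these semantics the run above exits 2: one metric skipped and none failed, so the suite reports Skip even though other metrics succeeded.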
> $ perf test -vv 110
> 110: perf all metrics test:
> --- start ---
> test child forked, pid 2496399
> Testing tma_core_bound
> Testing tma_info_core_ilp
> Testing tma_info_memory_l2mpki
> Testing tma_memory_bound
> Testing tma_bottleneck_irregular_overhead
> Testing tma_bottleneck_mispredictions
> Testing tma_info_bad_spec_branch_misprediction_cost
> Testing tma_info_bad_spec_ipmisp_cond_ntaken
> Testing tma_info_bad_spec_ipmisp_cond_taken
> Testing tma_info_bad_spec_ipmisp_indirect
> Testing tma_info_bad_spec_ipmisp_ret
> Testing tma_info_bad_spec_ipmispredict
> Testing tma_info_branches_callret
> Testing tma_info_branches_cond_nt
> Testing tma_info_branches_cond_tk
> Testing tma_info_branches_jump
> Testing tma_info_branches_other_branches
> Testing tma_branch_mispredicts
> Testing tma_clears_resteers
> Testing tma_machine_clears
> Testing tma_mispredicts_resteers
> Testing tma_bottleneck_big_code
> Testing tma_icache_misses
> Testing tma_itlb_misses
> Testing tma_unknown_branches
> Testing tma_info_bad_spec_spec_clears_ratio
> Testing tma_other_mispredicts
> Testing tma_branch_instructions
> Testing tma_info_frontend_tbpc
> Testing tma_info_inst_mix_bptkbranch
> Testing tma_info_inst_mix_ipbranch
> Testing tma_info_inst_mix_ipcall
> Testing tma_info_inst_mix_iptb
> Testing tma_info_system_ipfarbranch
> Testing tma_info_thread_uptb
> Testing tma_bottleneck_branching_overhead
> Testing tma_nop_instructions
> Testing tma_bottleneck_compute_bound_est
> Testing tma_divider
> Testing tma_ports_utilized_3m
> Testing tma_bottleneck_instruction_fetch_bw
> Testing tma_frontend_bound
> Testing tma_assists
> Testing tma_other_nukes
> Testing tma_serializing_operation
> Testing tma_bottleneck_data_cache_memory_bandwidth
> Testing tma_fb_full
> Testing tma_mem_bandwidth
> Testing tma_sq_full
> Testing tma_bottleneck_data_cache_memory_latency
> Testing tma_l1_latency_dependency
> Testing tma_l2_bound
> Testing tma_l3_hit_latency
> Testing tma_mem_latency
> Testing tma_store_latency
> Testing tma_bottleneck_memory_synchronization
> Testing tma_contested_accesses
> Testing tma_data_sharing
> Testing tma_false_sharing
> Testing tma_bottleneck_memory_data_tlbs
> Testing tma_dtlb_load
> Testing tma_dtlb_store
> Testing tma_backend_bound
> Testing tma_bottleneck_other_bottlenecks
> Testing tma_bottleneck_useful_work
> Testing tma_retiring
> Testing tma_info_memory_fb_hpki
> Testing tma_info_memory_l1mpki
> Testing tma_info_memory_l1mpki_load
> Testing tma_info_memory_l2hpki_all
> Testing tma_info_memory_l2hpki_load
> Testing tma_info_memory_l2mpki_all
> Testing tma_info_memory_l2mpki_load
> Testing tma_l1_bound
> Testing tma_l3_bound
> Testing tma_info_memory_l2mpki_rfo
> Testing tma_fp_scalar
> Testing tma_fp_vector
> Testing tma_fp_vector_128b
> Testing tma_fp_vector_256b
> Testing tma_fp_vector_512b
> Testing tma_port_0
> Testing tma_x87_use
> Testing tma_info_botlnk_l0_core_bound_likely
> Testing tma_info_core_fp_arith_utilization
> Testing tma_info_pipeline_execute
> Testing tma_info_system_gflops
> Testing tma_info_thread_execute_per_issue
> Testing tma_dsb
> Testing tma_info_botlnk_l2_dsb_bandwidth
> Testing tma_info_frontend_dsb_coverage
> Testing tma_decoder0_alone
> Testing tma_dsb_switches
> Testing tma_info_botlnk_l2_dsb_misses
> Testing tma_info_frontend_dsb_switch_cost
> Testing tma_info_frontend_ipdsb_miss_ret
> Testing tma_mite
> Testing tma_mite_4wide
> Testing CPUs_utilized
> Testing backend_cycles_idle
> [Ignored backend_cycles_idle] failed but as a Default metric this can be expected
> Performance counter stats for 'perf test -w noploop': <not counted> cpu-cycles:u <not supported> stalled-cycles-backend:u 1.014051473 seconds time elapsed 1.005718000 seconds user 0.008013000 seconds sys
> Testing branch_frequency
> Testing branch_miss_rate
> Testing cs_per_second
> Testing cycles_frequency
> Testing frontend_cycles_idle
> [Ignored frontend_cycles_idle] failed but as a Default metric this can be expected
> Performance counter stats for 'perf test -w noploop': <not counted> cpu-cycles:u <not supported> stalled-cycles-frontend:u 1.012813656 seconds time elapsed 1.004603000 seconds user 0.008004000 seconds sys
> Testing insn_per_cycle
> Testing migrations_per_second
> Testing page_faults_per_second
> Testing stalled_cycles_per_instruction
> [Ignored stalled_cycles_per_instruction] failed but as a Default metric this can be expected
> Error: No supported events found. The stalled-cycles-backend:u event is not supported.
> Testing tma_bad_speculation
> Testing l1d_miss_rate
> Testing llc_miss_rate
> Testing dtlb_miss_rate
> Testing itlb_miss_rate
> [Ignored itlb_miss_rate] failed but as a Default metric this can be expected
> Performance counter stats for 'perf test -w noploop': <not supported> iTLB-loads:u 3,097 iTLB-load-misses:u 1.012766732 seconds time elapsed 1.004318000 seconds user 0.008002000 seconds sys
> Testing l1i_miss_rate
> [Ignored l1i_miss_rate] failed but as a Default metric this can be expected
> Performance counter stats for 'perf test -w noploop': <not counted> L1-icache-load-misses:u <not supported> L1-icache-loads:u 1.013606395 seconds time elapsed 1.001371000 seconds user 0.011968000 seconds sys
> Testing l1_prefetch_miss_rate
> [Ignored l1_prefetch_miss_rate] failed but as a Default metric this can be expected
> Error: No supported events found. The L1-dcache-prefetches:u event is not supported.
> Testing tma_info_botlnk_l2_ic_misses
> Testing tma_info_frontend_fetch_upc
> Testing tma_info_frontend_icache_miss_latency
> Testing tma_info_frontend_ipunknown_branch
> Testing tma_info_frontend_lsd_coverage
> Testing tma_info_memory_tlb_code_stlb_mpki
> Testing tma_info_pipeline_fetch_dsb
> Testing tma_info_pipeline_fetch_lsd
> Testing tma_info_pipeline_fetch_mite
> Testing tma_info_pipeline_fetch_ms
> Testing tma_fetch_bandwidth
> Testing tma_lsd
> Testing tma_branch_resteers
> Testing tma_code_l2_hit
> Testing tma_code_l2_miss
> Testing tma_code_stlb_hit
> Testing tma_code_stlb_miss
> Testing tma_code_stlb_miss_2m
> Testing tma_code_stlb_miss_4k
> Testing tma_lcp
> Testing tma_ms_switches
> Testing tma_info_core_flopc
> Testing tma_info_inst_mix_iparith
> Testing tma_info_inst_mix_iparith_avx128
> Testing tma_info_inst_mix_iparith_avx256
> Testing tma_info_inst_mix_iparith_avx512
> Testing tma_info_inst_mix_iparith_scalar_dp
> Testing tma_info_inst_mix_iparith_scalar_sp
> Testing tma_info_inst_mix_ipflop
> Testing tma_info_inst_mix_ippause
> Testing tma_fetch_latency
> Testing tma_fp_arith
> Testing tma_fp_assists
> Testing tma_info_system_cpu_utilization
> Testing tma_info_system_dram_bw_use
> [Skipped tma_info_system_dram_bw_use] Not supported events
> Performance counter stats for 'perf test -w noploop': <not supported> UNC_ARB_TRK_REQUESTS.ALL:u <not supported> UNC_ARB_COH_TRK_REQUESTS.ALL:u 1,013,554,749 duration_time 1.013527265 seconds time elapsed 1.005417000 seconds user 0.008011000 seconds sys
> Testing tma_info_frontend_l2mpki_code
> Testing tma_info_frontend_l2mpki_code_all
> Testing tma_info_inst_mix_ipload
> Testing tma_info_inst_mix_ipstore
> Testing tma_info_memory_latency_load_l2_miss_latency
> Testing tma_lock_latency
> Testing tma_info_memory_core_l1d_cache_fill_bw_2t
> Testing tma_info_memory_core_l2_cache_fill_bw_2t
> Testing tma_info_memory_core_l3_cache_access_bw_2t
> Testing tma_info_memory_core_l3_cache_fill_bw_2t
> Testing tma_info_memory_l1d_cache_fill_bw
> Testing tma_info_memory_l2_cache_fill_bw
> Testing tma_info_memory_l3_cache_access_bw
> Testing tma_info_memory_l3_cache_fill_bw
> Testing tma_info_memory_l3mpki
> Testing tma_info_memory_load_miss_real_latency
> Testing tma_info_memory_mix_bus_lock_pki
> Testing tma_info_memory_mix_uc_load_pki
> Testing tma_info_memory_mlp
> Testing tma_info_memory_tlb_load_stlb_mpki
> Testing tma_info_memory_tlb_page_walks_utilization
> Testing tma_info_memory_tlb_store_stlb_mpki
> Testing tma_info_system_mem_parallel_reads
> [Skipped tma_info_system_mem_parallel_reads] Not supported events
> Performance counter stats for 'perf test -w noploop': <not supported> UNC_ARB_DAT_OCCUPANCY.RD:u <not counted> UNC_ARB_DAT_OCCUPANCY.RD/cmask=1/ 1.013354884 seconds time elapsed 1.009239000 seconds user 0.004004000 seconds sys
> Testing tma_info_system_mem_read_latency
> [Skipped tma_info_system_mem_read_latency] Not supported events
> Performance counter stats for 'perf test -w noploop': <not supported> UNC_ARB_DAT_OCCUPANCY.RD:u <not counted> UNC_ARB_TRK_OCCUPANCY.RD <not counted> UNC_ARB_TRK_REQUESTS.RD 1.012882143 seconds time elapsed 1.004600000 seconds user 0.008036000 seconds sys
> Testing tma_info_thread_cpi
> Testing tma_streaming_stores
> Testing tma_dram_bound
> Testing tma_store_bound
> Testing tma_l2_hit_latency
> Testing tma_load_stlb_hit
> Testing tma_load_stlb_miss
> Testing tma_load_stlb_miss_1g
> Testing tma_load_stlb_miss_2m
> Testing tma_load_stlb_miss_4k
> Testing tma_store_stlb_hit
> Testing tma_store_stlb_miss
> Testing tma_store_stlb_miss_1g
> Testing tma_store_stlb_miss_2m
> Testing tma_store_stlb_miss_4k
> Testing tma_info_memory_latency_data_l2_mlp
> Testing tma_info_memory_latency_load_l2_mlp
> Testing tma_info_pipeline_ipassist
> Testing tma_microcode_sequencer
> Testing tma_ms
> Testing tma_info_system_kernel_cpi
> [Failed tma_info_system_kernel_cpi] Metric contains missing events
> Error: No supported events found. Access to performance monitoring and observability operations is limited. Consider adjusting /proc/sys/kernel/perf_event_paranoid setting to open access to performance monitoring and observability operations for processes without CAP_PERFMON, CAP_SYS_PTRACE or CAP_SYS_ADMIN Linux capability. More information can be found at 'Perf events and tool security' document: https://www.kernel.org/doc/html/latest/admin-guide/perf-security.html perf_event_paranoid setting is 2: -1: Allow use of (almost) all events by all users Ignore mlock limit after perf_event_mlock_kb without CAP_IPC_LOCK >= 0: Disallow raw and ftrace function tracepoint access >= 1: Disallow CPU event access >= 2: Disallow kernel profiling To make the adjusted perf_event_paranoid setting permanent preserve it in /etc/sysctl.conf (e.g. kernel.perf_event_paranoid = <setting>)
> Testing tma_info_system_kernel_utilization
> [Failed tma_info_system_kernel_utilization] Metric contains missing events
> Error: No supported events found. Access to performance monitoring and observability operations is limited. Consider adjusting /proc/sys/kernel/perf_event_paranoid setting to open access to performance monitoring and observability operations for processes without CAP_PERFMON, CAP_SYS_PTRACE or CAP_SYS_ADMIN Linux capability. More information can be found at 'Perf events and tool security' document: https://www.kernel.org/doc/html/latest/admin-guide/perf-security.html perf_event_paranoid setting is 2: -1: Allow use of (almost) all events by all users Ignore mlock limit after perf_event_mlock_kb without CAP_IPC_LOCK >= 0: Disallow raw and ftrace function tracepoint access >= 1: Disallow CPU event access >= 2: Disallow kernel profiling To make the adjusted perf_event_paranoid setting permanent preserve it in /etc/sysctl.conf (e.g. kernel.perf_event_paranoid = <setting>)
> Testing tma_info_pipeline_retire
> Testing tma_info_thread_clks
> Testing tma_info_thread_uoppi
> Testing tma_memory_operations
> Testing tma_other_light_ops
> Testing tma_ports_utilization
> Testing tma_ports_utilized_0
> Testing tma_ports_utilized_1
> Testing tma_ports_utilized_2
> Testing C10_Pkg_Residency
> [Failed C10_Pkg_Residency] Metric contains missing events
> WARNING: grouped events cpus do not match. Events with CPUs not matching the leader will be removed from the group. anon group { cstate_pkg/c10-residency/, msr/tsc/ } Error: No supported events found. Invalid event (cstate_pkg/c10-residency/u) in per-thread mode, enable system wide with '-a'.
> Testing C2_Pkg_Residency
> [Failed C2_Pkg_Residency] Metric contains missing events
> WARNING: grouped events cpus do not match. Events with CPUs not matching the leader will be removed from the group. anon group { cstate_pkg/c2-residency/, msr/tsc/ } Error: No supported events found. Invalid event (cstate_pkg/c2-residency/u) in per-thread mode, enable system wide with '-a'.
> Testing C3_Pkg_Residency
> [Failed C3_Pkg_Residency] Metric contains missing events
> WARNING: grouped events cpus do not match. Events with CPUs not matching the leader will be removed from the group. anon group { msr/tsc/, cstate_pkg/c3-residency/ } Error: No supported events found. Invalid event (msr/tsc/u) in per-thread mode, enable system wide with '-a'.
> Testing C6_Core_Residency
> [Failed C6_Core_Residency] Metric contains missing events
> WARNING: grouped events cpus do not match. Events with CPUs not matching the leader will be removed from the group. anon group { cstate_core/c6-residency/, msr/tsc/ } Error: No supported events found. Invalid event (cstate_core/c6-residency/u) in per-thread mode, enable system wide with '-a'.
> Testing C6_Pkg_Residency
> [Failed C6_Pkg_Residency] Metric contains missing events
> WARNING: grouped events cpus do not match. Events with CPUs not matching the leader will be removed from the group. anon group { cstate_pkg/c6-residency/, msr/tsc/ } Error: No supported events found. Invalid event (cstate_pkg/c6-residency/u) in per-thread mode, enable system wide with '-a'.
> Testing C7_Core_Residency
> [Failed C7_Core_Residency] Metric contains missing events
> WARNING: grouped events cpus do not match. Events with CPUs not matching the leader will be removed from the group. anon group { cstate_core/c7-residency/, msr/tsc/ } Error: No supported events found. Invalid event (cstate_core/c7-residency/u) in per-thread mode, enable system wide with '-a'.
> Testing C7_Pkg_Residency
> [Failed C7_Pkg_Residency] Metric contains missing events
> WARNING: grouped events cpus do not match. Events with CPUs not matching the leader will be removed from the group. anon group { cstate_pkg/c7-residency/, msr/tsc/ } Error: No supported events found. Invalid event (cstate_pkg/c7-residency/u) in per-thread mode, enable system wide with '-a'.
> Testing C8_Pkg_Residency
> [Failed C8_Pkg_Residency] Metric contains missing events
> WARNING: grouped events cpus do not match. Events with CPUs not matching the leader will be removed from the group. anon group { cstate_pkg/c8-residency/, msr/tsc/ } Error: No supported events found. Invalid event (cstate_pkg/c8-residency/u) in per-thread mode, enable system wide with '-a'.
> Testing C9_Pkg_Residency
> [Failed C9_Pkg_Residency] Metric contains missing events
> WARNING: grouped events cpus do not match. Events with CPUs not matching the leader will be removed from the group. anon group { cstate_pkg/c9-residency/, msr/tsc/ } Error: No supported events found. Invalid event (cstate_pkg/c9-residency/u) in per-thread mode, enable system wide with '-a'.
> Testing tma_info_core_epc
> Testing tma_info_system_core_frequency
> Testing tma_info_system_power
> [Skipped tma_info_system_power] Not supported events
> Performance counter stats for 'perf test -w noploop': <not supported> Joules power/energy-pkg/u 1,013,238,256 duration_time 1.013223072 seconds time elapsed 0.995924000 seconds user 0.011903000 seconds sys
> Testing tma_info_system_power_license0_utilization
> Testing tma_info_system_power_license1_utilization
> Testing tma_info_system_power_license2_utilization
> Testing tma_info_system_turbo_utilization
> Testing tma_info_inst_mix_ipswpf
> Testing tma_info_memory_prefetches_useless_hwpf
> Testing tma_info_core_coreipc
> Testing tma_info_thread_ipc
> Testing tma_heavy_operations
> Testing tma_light_operations
> Testing tma_info_core_core_clks
> Testing tma_info_system_smt_2t_utilization
> Testing tma_info_thread_slots_utilization
> Testing UNCORE_FREQ
> [Skipped UNCORE_FREQ] Not supported events
> Performance counter stats for 'perf test -w noploop': <not supported> UNC_CLOCK.SOCKET:u 1,015,993,466 duration_time 1.015949387 seconds time elapsed 1.007676000 seconds user 0.008029000 seconds sys
> Testing tma_info_system_socket_clks
> [Failed tma_info_system_socket_clks] Metric contains missing events
> Error: No supported events found. Invalid event (UNC_CLOCK.SOCKET:u) in per-thread mode, enable system wide with '-a'.
> Testing tma_info_inst_mix_instructions
> Testing tma_info_system_cpus_utilized
> Testing tma_info_system_mux
> Testing tma_info_system_time
> Testing tma_info_thread_slots
> Testing tma_few_uops_instructions
> Testing tma_4k_aliasing
> Testing tma_cisc
> Testing tma_fp_divider
> Testing tma_int_divider
> Testing tma_slow_pause
> Testing tma_split_loads
> Testing tma_split_stores
> Testing tma_store_fwd_blk
> Testing tma_alu_op_utilization
> Testing tma_load_op_utilization
> Testing tma_mixing_vectors
> Testing tma_store_op_utilization
> Testing tma_port_1
> Testing tma_port_5
> Testing tma_port_6
> Testing smi_cycles
> [Skipped smi_cycles] Not supported events
> Performance counter stats for 'perf test -w noploop': <not supported> msr/smi/u <not supported> msr/aperf/u 3,965,789,327 cycles:u 1.012779591 seconds time elapsed 1.004579000 seconds user 0.007972000 seconds sys
> Testing smi_num
> [Failed smi_num] Metric contains missing events
> Error: No supported events found. Invalid event (msr/smi/u) in per-thread mode, enable system wide with '-a'.
> Testing tsx_aborted_cycles
> Testing tsx_cycles_per_elision
> Testing tsx_cycles_per_transaction
> Testing tsx_transactional_cycles
> ---- end(-1) ----
> 110: perf all metrics test : FAILED!
>
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
> tools/perf/tests/shell/stat_all_metrics.sh | 29 ++++++++++++++++------
> 1 file changed, 22 insertions(+), 7 deletions(-)
>
> diff --git a/tools/perf/tests/shell/stat_all_metrics.sh b/tools/perf/tests/shell/stat_all_metrics.sh
> index 3dabb39c7cc8c46a..b582d23f28c9e8e2 100755
> --- a/tools/perf/tests/shell/stat_all_metrics.sh
> +++ b/tools/perf/tests/shell/stat_all_metrics.sh
> @@ -15,7 +15,8 @@ then
> test_prog="perf test -w noploop"
> fi
>
> -err=0
> +skip=0
> +err=3
> for m in $(perf list --raw-dump metrics); do
> echo "Testing $m"
> result=$(perf stat -M "$m" $system_wide_flag -- $test_prog 2>&1)
> @@ -23,6 +24,10 @@ for m in $(perf list --raw-dump metrics); do
> if [[ $result_err -eq 0 && "$result" =~ ${m:0:50} ]]
> then
> # No error result and metric shown.
> + if [[ "$err" -ne 1 ]]
> + then
> + err=0
> + fi
> continue
> fi
> if [[ "$result" =~ "Cannot resolve IDs for" || "$result" =~ "No supported events found" ]]
> @@ -44,7 +49,7 @@ for m in $(perf list --raw-dump metrics); do
> echo $result
> if [[ $err -eq 0 ]]
> then
> - err=2 # Skip
> + skip=1
> fi
> continue
> elif [[ "$result" =~ "in per-thread mode, enable system wide" ]]
> @@ -53,7 +58,7 @@ for m in $(perf list --raw-dump metrics); do
> echo $result
> if [[ $err -eq 0 ]]
> then
> - err=2 # Skip
> + skip=1
> fi
> continue
> elif [[ "$result" =~ "<not supported>" ]]
> @@ -68,7 +73,7 @@ for m in $(perf list --raw-dump metrics); do
> echo $result
> if [[ $err -eq 0 ]]
> then
> - err=2 # Skip
> + skip=1
> fi
> continue
> elif [[ "$result" =~ "<not counted>" ]]
> @@ -77,7 +82,7 @@ for m in $(perf list --raw-dump metrics); do
> echo $result
> if [[ $err -eq 0 ]]
> then
> - err=2 # Skip
> + skip=1
> fi
> continue
> elif [[ "$result" =~ "FP_ARITH" || "$result" =~ "AMX" ]]
> @@ -86,7 +91,7 @@ for m in $(perf list --raw-dump metrics); do
> echo $result
> if [[ $err -eq 0 ]]
> then
> - err=2 # Skip
> + skip=1
> fi
> continue
> elif [[ "$result" =~ "PMM" ]]
> @@ -95,7 +100,7 @@ for m in $(perf list --raw-dump metrics); do
> echo $result
> if [[ $err -eq 0 ]]
> then
> - err=2 # Skip
> + skip=1
> fi
> continue
> fi
> @@ -106,6 +111,10 @@ for m in $(perf list --raw-dump metrics); do
> if [[ $result_err -eq 0 && "$result" =~ ${m:0:50} ]]
> then
> # No error result and metric shown.
> + if [[ "$err" -ne 1 ]]
> + then
> + err=0
> + fi
> continue
> fi
> echo "[Failed $m] has non-zero error '$result_err' or not printed in:"
> @@ -113,4 +122,10 @@ for m in $(perf list --raw-dump metrics); do
> err=1
> done
>
> +# return SKIP only if no success returned
> +if [[ "$err" -eq 3 && "$skip" -eq 1 ]]
> +then
> + err=2
> +fi
> +
> exit "$err"
> --
> 2.52.0.322.g1dd061c0dc-goog
>
* Re: [PATCH 4/4] perf test: Do not skip when some metric-group tests succeed
2025-12-19 1:18 ` [PATCH 4/4] perf test: Do not skip when some metric-group tests succeed Namhyung Kim
@ 2026-01-09 23:39 ` Ian Rogers
0 siblings, 0 replies; 11+ messages in thread
From: Ian Rogers @ 2026-01-09 23:39 UTC (permalink / raw)
To: Namhyung Kim
Cc: Arnaldo Carvalho de Melo, James Clark, Jiri Olsa, Adrian Hunter,
Peter Zijlstra, Ingo Molnar, LKML, linux-perf-users
On Thu, Dec 18, 2025 at 5:18 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> I think the return value of SKIP (2) should be used only when the
> entire test suite was skipped, rather than just a few of its tests,
> while FAIL should be reserved for when any test failed.
The same feedback as the last patch:
https://lore.kernel.org/lkml/CAP-5=fVDYu+GLspYQNEHk7_X8-FE-OaEF0OaG+vfWrrYH9ZcoQ@mail.gmail.com/
Thanks,
Ian
> $ perf test -vv 109
> 109: perf all metricgroups test:
> --- start ---
> test child forked, pid 2493003
> Testing Backend
> Testing Bad
> Testing BadSpec
> Testing BigFootprint
> Testing BrMispredicts
> Testing Branches
> Testing BvBC
> Testing BvBO
> Testing BvCB
> Testing BvFB
> Testing BvIO
> Testing BvMB
> Testing BvML
> Testing BvMP
> Testing BvMS
> Testing BvMT
> Testing BvOB
> Testing BvUW
> Testing CacheHits
> Testing CacheMisses
> Testing CodeGen
> Testing Compute
> Testing Cor
> Testing DSB
> Testing DSBmiss
> Testing DataSharing
> Testing Default
> Testing Default2
> Testing Default3
> Testing Default4
> Ignoring failures in Default4 that may contain unsupported legacy events
> Testing Fed
> Testing FetchBW
> Testing FetchLat
> Testing Flops
> Testing FpScalar
> Testing FpVector
> Testing Frontend
> Testing HPC
> Testing IcMiss
> Testing InsType
> Testing LSD
> Testing LockCont
> Testing MachineClears
> Testing Machine_Clears
> Testing Mem
> Testing MemOffcore
> Testing MemoryBW
> Testing MemoryBound
> Testing MemoryLat
> Testing MemoryTLB
> Testing Memory_BW
> Testing Memory_Lat
> Testing MicroSeq
> Testing OS
> Testing Offcore
> Testing PGO
> Testing Pipeline
> Testing PortsUtil
> Testing Power
> Testing Prefetches
> Testing Ret
> Testing Retire
> Testing SMT
> Testing Snoop
> Testing SoC
> Testing Summary
> Testing TmaL1
> Testing TmaL2
> Testing TmaL3mem
> Testing TopdownL1
> Testing TopdownL2
> Testing TopdownL3
> Testing TopdownL4
> Testing TopdownL5
> Testing TopdownL6
> Testing smi
> Testing tma_L1_group
> Testing tma_L2_group
> Testing tma_L3_group
> Testing tma_L4_group
> Testing tma_L5_group
> Testing tma_L6_group
> Testing tma_alu_op_utilization_group
> Testing tma_assists_group
> Testing tma_backend_bound_group
> Testing tma_bad_speculation_group
> Testing tma_branch_mispredicts_group
> Testing tma_branch_resteers_group
> Testing tma_code_stlb_miss_group
> Testing tma_core_bound_group
> Testing tma_divider_group
> Testing tma_dram_bound_group
> Testing tma_dtlb_load_group
> Testing tma_dtlb_store_group
> Testing tma_fetch_bandwidth_group
> Testing tma_fetch_latency_group
> Testing tma_fp_arith_group
> Testing tma_fp_vector_group
> Testing tma_frontend_bound_group
> Testing tma_heavy_operations_group
> Testing tma_icache_misses_group
> Testing tma_issue2P
> Testing tma_issueBM
> Testing tma_issueBW
> Testing tma_issueComp
> Testing tma_issueD0
> Testing tma_issueFB
> Testing tma_issueFL
> Testing tma_issueL1
> Testing tma_issueLat
> Testing tma_issueMC
> Testing tma_issueMS
> Testing tma_issueMV
> Testing tma_issueRFO
> Testing tma_issueSL
> Testing tma_issueSO
> Testing tma_issueSmSt
> Testing tma_issueSpSt
> Testing tma_issueSyncxn
> Testing tma_issueTLB
> Testing tma_itlb_misses_group
> Testing tma_l1_bound_group
> Testing tma_l2_bound_group
> Testing tma_l3_bound_group
> Testing tma_light_operations_group
> Testing tma_load_stlb_miss_group
> Testing tma_machine_clears_group
> Testing tma_memory_bound_group
> Testing tma_microcode_sequencer_group
> Testing tma_mite_group
> Testing tma_other_light_ops_group
> Testing tma_ports_utilization_group
> Testing tma_ports_utilized_0_group
> Testing tma_ports_utilized_3m_group
> Testing tma_retiring_group
> Testing tma_serializing_operation_group
> Testing tma_store_bound_group
> Testing tma_store_stlb_miss_group
> Testing transaction
> ---- end(0) ----
> 109: perf all metricgroups test : Ok
>
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
> .../perf/tests/shell/stat_all_metricgroups.sh | 26 ++++++++++++-------
> 1 file changed, 16 insertions(+), 10 deletions(-)
>
> diff --git a/tools/perf/tests/shell/stat_all_metricgroups.sh b/tools/perf/tests/shell/stat_all_metricgroups.sh
> index 1400880ec01f8267..81bc7070b5ab0d5c 100755
> --- a/tools/perf/tests/shell/stat_all_metricgroups.sh
> +++ b/tools/perf/tests/shell/stat_all_metricgroups.sh
> @@ -12,31 +12,32 @@ if ParanoidAndNotRoot 0
> then
> system_wide_flag=""
> fi
> -err=0
> +
> +err=3
> +skip=0
> for m in $(perf list --raw-dump metricgroups)
> do
> echo "Testing $m"
> result=$(perf stat -M "$m" $system_wide_flag sleep 0.01 2>&1)
> result_err=$?
> - if [[ $result_err -gt 0 ]]
> + if [[ $result_err -eq 0 ]]
> then
> + if [[ "$err" -ne 1 ]]
> + then
> + err=0
> + fi
> + else
> if [[ "$result" =~ \
> "Access to performance monitoring and observability operations is limited" ]]
> then
> echo "Permission failure"
> echo $result
> - if [[ $err -eq 0 ]]
> - then
> - err=2 # Skip
> - fi
> + skip=1
> elif [[ "$result" =~ "in per-thread mode, enable system wide" ]]
> then
> echo "Permissions - need system wide mode"
> echo $result
> - if [[ $err -eq 0 ]]
> - then
> - err=2 # Skip
> - fi
> + skip=1
> elif [[ "$m" == @(Default2|Default3|Default4) ]]
> then
> echo "Ignoring failures in $m that may contain unsupported legacy events"
> @@ -48,4 +49,9 @@ do
> fi
> done
>
> +if [[ "$err" -eq 3 && "$skip" -eq 1 ]]
> +then
> + err=2
> +fi
> +
> exit $err
> --
> 2.52.0.322.g1dd061c0dc-goog
>
* Re: [PATCH 3/4] perf test: Do not skip when some metrics tests succeeded
2026-01-09 23:37 ` Ian Rogers
@ 2026-01-10 0:26 ` Namhyung Kim
0 siblings, 0 replies; 11+ messages in thread
From: Namhyung Kim @ 2026-01-10 0:26 UTC (permalink / raw)
To: Ian Rogers
Cc: Arnaldo Carvalho de Melo, James Clark, Jiri Olsa, Adrian Hunter,
Peter Zijlstra, Ingo Molnar, LKML, linux-perf-users
Hi Ian,
On Fri, Jan 09, 2026 at 03:37:01PM -0800, Ian Rogers wrote:
> On Thu, Dec 18, 2025 at 5:18 PM Namhyung Kim <namhyung@kernel.org> wrote:
> >
> > I think the return value of SKIP (2) should be used when the entire
> > test suite was skipped rather than only a few tests, while FAIL should
> > be reserved for when any test fails.
>
> This doesn't make sense to me. The current behavior is to set err to 0
> (success):
> https://web.git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/tests/shell/stat_all_metrics.sh?h=perf-tools-next#n18
> If a failure condition occurs then err is set to 1 (fail)
> unconditionally as you can't be more failing than failing, like:
> https://web.git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/tests/shell/stat_all_metrics.sh?h=perf-tools-next#n38
> If a skip condition is encountered then we set err to 2 (skip)
> conditional on the err state only being currently 0 (success), i.e.
> don't turn a fail into a skip:
> https://web.git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/tests/shell/stat_all_metrics.sh?h=perf-tools-next#n45
>
> All metrics are always tested by the test, so I'm not sure what "entire
> test suite" means. The test should report success if nothing skips or
> fails, skip if something skips and there are no failures, and
> otherwise failure.
We have different ideas for the return values. :)  I think it should
report SKIP if it cannot test anything, FAILURE if anything fails, and
SUCCESS otherwise.
So the gray area is when some subtests succeed and some skip (and there
are no failures). It'd be great if we could agree on one way or the other.
>
> I can see the test being simplified by having a failure flag and a
> skip flag, then determining the 0, 1 or 2 value from these two flags.
> That would avoid testing the err value when setting it to 2 (skip),
> but the change here isn't doing that. The change introduces a new
> sentinel value of 3, which seems more complex, and the skip flag is
> set depending on the global err value rather than just being a sticky
> flag showing a metric skipped.
3 is not supposed to be returned, but I think it'd be nice if we could
do without introducing it.
Thanks,
Namhyung
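For comparison, the two-flag scheme Ian describes could be sketched like this (the `finish` helper is hypothetical, not from any of the patches). The key difference from the sentinel-based patch is that a run mixing passes and skips maps to SKIP here:

```shell
#!/bin/sh
# Hypothetical sketch of the two-flag scheme: each subtest sets a
# sticky "failed" or "skipped" boolean, and the pair is mapped to a
# perf-test exit code once at the end.  A run with both passes and
# skips reports 2 (SKIP) here, which is exactly the gray area where
# the two proposals in this thread disagree.
finish() {
  failed=$1
  skipped=$2
  if [ "$failed" -eq 1 ]; then
    echo 1   # FAIL beats everything else
  elif [ "$skipped" -eq 1 ]; then
    echo 2   # no failures, but at least one subtest skipped
  else
    echo 0   # everything ran and passed
  fi
}

finish 0 0   # prints 0
finish 0 1   # prints 2
finish 1 1   # prints 1
```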
* Re: [PATCH 1/4] perf test: Skip dlfilter test for build failures
2026-01-09 23:16 ` Ian Rogers
@ 2026-01-13 20:21 ` Arnaldo Carvalho de Melo
0 siblings, 0 replies; 11+ messages in thread
From: Arnaldo Carvalho de Melo @ 2026-01-13 20:21 UTC (permalink / raw)
To: Ian Rogers
Cc: Namhyung Kim, James Clark, Jiri Olsa, Adrian Hunter,
Peter Zijlstra, Ingo Molnar, LKML, linux-perf-users
On Fri, Jan 09, 2026 at 03:16:50PM -0800, Ian Rogers wrote:
> On Thu, Jan 8, 2026 at 3:58 PM Namhyung Kim <namhyung@kernel.org> wrote:
> >
> > Ping (for the whole series)!
> >
> > Thanks,
> > Namhyung
> >
> >
> > On Thu, Dec 18, 2025 at 05:18:17PM -0800, Namhyung Kim wrote:
> > > For some reason, it may fail to build the dlfilter. Let's skip the test
> > > as it's not an error in the perf. This can happen when you run the perf
> > > test without source code or in a different directory.
> > >
> > > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
>
> Reviewed-by: Ian Rogers <irogers@google.com>
Thanks, applied to perf-tools-next,
- Arnaldo
end of thread, other threads:[~2026-01-13 20:21 UTC | newest]
Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
2025-12-19 1:18 [PATCH 1/4] perf test: Skip dlfilter test for build failures Namhyung Kim
2025-12-19 1:18 ` [PATCH 2/4] perf test: Use shelldir to refer perf source location Namhyung Kim
2026-01-09 23:17 ` Ian Rogers
2025-12-19 1:18 ` [PATCH 3/4] perf test: Do not skip when some metrics tests succeeded Namhyung Kim
2026-01-09 23:37 ` Ian Rogers
2026-01-10 0:26 ` Namhyung Kim
2025-12-19 1:18 ` [PATCH 4/4] perf test: Do not skip when some metric-group tests succeed Namhyung Kim
2026-01-09 23:39 ` Ian Rogers
2026-01-08 23:58 ` [PATCH 1/4] perf test: Skip dlfilter test for build failures Namhyung Kim
2026-01-09 23:16 ` Ian Rogers
2026-01-13 20:21 ` Arnaldo Carvalho de Melo