* [PATCH v4 0/7] Introduce structure for shell tests
@ 2025-09-30 16:09 Jakub Brnak
2025-09-30 16:09 ` [PATCH v4 1/7] perf test perftool_testsuite: Use absolute paths Jakub Brnak
` (6 more replies)
0 siblings, 7 replies; 14+ messages in thread
From: Jakub Brnak @ 2025-09-30 16:09 UTC (permalink / raw)
To: namhyung; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, vmolnaro
Hi Namhyung,
This series of Veronika's patches, part of the effort to upstream perftool-testsuite, has been rebased on the latest perf-tools-next branch and should now apply cleanly.
Patches 01/10, 02/10, and 05/10 of the previous revision have been dropped as they were already accepted upstream. This revision should address the problems that were previously discussed.
Thanks,
Jakub
Link to v3: https://lore.kernel.org/linux-perf-users/20250721132642.40906-1-jbrnak@redhat.com/#r
Changes since v3:
Patch 1: fix long lines, fix coding style issues, fix skipping of testcases
caused by a missing $DIR_PATH in the paths to scripts used by the testcases
Patch 2: fix coding style issues, fix typo in function name
Patch 5: fix coding style issues
Patch 6: remove the additional buffer and sanitize the log directory name in
place, fix coding style issues
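The log-directory sanitization mentioned for patch 6 is done in C inside the perf test code; as a rough illustration of the idea only (the function name and character set below are assumptions, not taken from the patch), the transformation can be sketched in shell:

```shell
# Hypothetical sketch: derive a filesystem-friendly log directory name
# from a test description by mapping awkward characters to underscores.
sanitize_logdir_name() {
    # keep letters, digits, dot, dash and underscore; replace the rest
    printf '%s' "$1" | tr -c 'A-Za-z0-9._-' '_'
}

sanitize_logdir_name "perf probe: add/remove events"
# -> perf_probe__add_remove_events
```

The actual patch performs the equivalent rewrite in place on the existing name buffer instead of allocating a second one.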
Veronika Molnarova (7):
perf test perftool_testsuite: Use absolute paths
perf tests: Create a structure for shell tests
perf test: Provide setup for the shell test suite
perftool-testsuite: Add empty setup for base_probe
perf test: Introduce storing logs for shell tests
perf test: Format log directories for shell tests
perf test: Remove perftool drivers
tools/perf/tests/builtin-test.c | 129 ++++++++-
tools/perf/tests/shell/base_probe/setup.sh | 13 +
.../base_probe/test_adding_blacklisted.sh | 20 +-
.../shell/base_probe/test_adding_kernel.sh | 97 +++++--
.../perf/tests/shell/base_probe/test_basic.sh | 31 +-
.../shell/base_probe/test_invalid_options.sh | 14 +-
.../shell/base_probe/test_line_semantics.sh | 7 +-
tools/perf/tests/shell/base_report/setup.sh | 10 +-
.../tests/shell/base_report/test_basic.sh | 103 +++++--
tools/perf/tests/shell/common/init.sh | 4 +-
.../tests/shell/perftool-testsuite_probe.sh | 24 --
.../tests/shell/perftool-testsuite_report.sh | 23 --
tools/perf/tests/tests-scripts.c | 267 +++++++++++++++---
tools/perf/tests/tests-scripts.h | 15 +
tools/perf/tests/tests.h | 8 +-
15 files changed, 585 insertions(+), 180 deletions(-)
create mode 100755 tools/perf/tests/shell/base_probe/setup.sh
delete mode 100755 tools/perf/tests/shell/perftool-testsuite_probe.sh
delete mode 100755 tools/perf/tests/shell/perftool-testsuite_report.sh
--
2.50.1
* [PATCH v4 1/7] perf test perftool_testsuite: Use absolute paths
2025-09-30 16:09 [PATCH v4 0/7] Introduce structure for shell tests Jakub Brnak
@ 2025-09-30 16:09 ` Jakub Brnak
2025-09-30 18:28 ` Ian Rogers
2025-09-30 16:09 ` [PATCH v4 2/7] perf tests: Create a structure for shell tests Jakub Brnak
` (5 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: Jakub Brnak @ 2025-09-30 16:09 UTC (permalink / raw)
To: namhyung; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, vmolnaro
From: Veronika Molnarova <vmolnaro@redhat.com>
Test cases from perftool_testsuite are affected by the current
directory where the tests are run. For this reason, the test
driver has to change the directory to the base_dir for relative
references to work correctly.
Utilize absolute paths when sourcing and referencing other scripts so
that the current working directory doesn't impact the test cases.
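The pattern the patch applies can be demonstrated with a self-contained sketch (the paths and helper file below are made up for the demo, not the actual testsuite files): a script resolves its own directory via dirname "$0" and sources helpers relative to that, so it works from any working directory.

```shell
# Build a tiny fake suite in a temp directory.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/suite/common"
echo 'GREETING=hello' > "$tmpdir/suite/common/init.sh"

cat > "$tmpdir/suite/test.sh" <<'EOF'
#!/bin/sh
# Resolve the directory this script lives in, not the caller's cwd.
DIR_PATH="$(dirname "$0")"
. "$DIR_PATH/common/init.sh"
echo "$GREETING"
EOF
chmod +x "$tmpdir/suite/test.sh"

# Run from an unrelated working directory; a relative
# ". ../common/init.sh" would fail here, the absolute form works.
cd /
"$tmpdir/suite/test.sh"   # prints "hello"
```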
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
---
.../base_probe/test_adding_blacklisted.sh | 20 +++-
.../shell/base_probe/test_adding_kernel.sh | 97 ++++++++++++-----
.../perf/tests/shell/base_probe/test_basic.sh | 31 ++++--
.../shell/base_probe/test_invalid_options.sh | 14 ++-
.../shell/base_probe/test_line_semantics.sh | 7 +-
tools/perf/tests/shell/base_report/setup.sh | 10 +-
.../tests/shell/base_report/test_basic.sh | 103 +++++++++++++-----
tools/perf/tests/shell/common/init.sh | 4 +-
8 files changed, 202 insertions(+), 84 deletions(-)
diff --git a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
index 8226449ac5c3..f74aab5c5d7f 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
@@ -13,11 +13,12 @@
# they must be skipped.
#
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
TEST_RESULT=0
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
# skip if not supported
BLACKFUNC_LIST=`head -n 5 /sys/kernel/debug/kprobes/blacklist 2> /dev/null | cut -f2`
if [ -z "$BLACKFUNC_LIST" ]; then
@@ -53,7 +54,8 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
PERF_EXIT_CODE=$?
# check for bad DWARF polluting the result
- ../common/check_all_patterns_found.pl "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
+ "$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
if [ $? -eq 0 ]; then
SKIP_DWARF=1
@@ -73,7 +75,11 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
fi
fi
else
- ../common/check_all_lines_matched.pl "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
+ "$DIR_PATH/../common/check_all_lines_matched.pl" \
+ "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" \
+ "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" \
+ "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" \
+ "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
CHECK_EXIT_CODE=$?
SKIP_DWARF=0
@@ -94,7 +100,9 @@ fi
$CMD_PERF list probe:\* > $LOGS_DIR/adding_blacklisted_list.log
PERF_EXIT_CODE=$?
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_blacklisted_list.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" \
+ "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" \
+ < $LOGS_DIR/adding_blacklisted_list.log
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing blacklisted probe (should NOT be listed)"
diff --git a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
index df288cf90cd6..555a825d55f2 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
@@ -13,13 +13,14 @@
# and removing.
#
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
TEST_RESULT=0
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
# shellcheck source=lib/probe_vfs_getname.sh
-. "$(dirname "$0")/../lib/probe_vfs_getname.sh"
+. "$DIR_PATH/../lib/probe_vfs_getname.sh"
TEST_PROBE=${TEST_PROBE:-"inode_permission"}
@@ -44,7 +45,9 @@ for opt in "" "-a" "--add"; do
$CMD_PERF probe $opt $TEST_PROBE 2> $LOGS_DIR/adding_kernel_add$opt.err
PERF_EXIT_CODE=$?
- ../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_add$opt.err
+ "$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" \
+ < $LOGS_DIR/adding_kernel_add$opt.err
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "adding probe $TEST_PROBE :: $opt"
@@ -58,7 +61,10 @@ done
$CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list.log
PERF_EXIT_CODE=$?
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" \
+ "$RE_LINE_EMPTY" "List of pre-defined events" \
+ "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" \
+ "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list"
@@ -71,7 +77,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list
$CMD_PERF probe -l > $LOGS_DIR/adding_kernel_list-l.log
PERF_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" < $LOGS_DIR/adding_kernel_list-l.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" \
+ < $LOGS_DIR/adding_kernel_list-l.log
CHECK_EXIT_CODE=$?
if [ $NO_DEBUGINFO ] ; then
@@ -93,9 +101,13 @@ REGEX_STAT_VALUES="\s*\d+\s+probe:$TEST_PROBE"
# the value should be greater than 1
REGEX_STAT_VALUE_NONZERO="\s*[1-9][0-9]*\s+probe:$TEST_PROBE"
REGEX_STAT_TIME="\s*$RE_NUMBER\s+seconds (?:time elapsed|user|sys)"
-../common/check_all_lines_matched.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" \
+ "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" \
+ "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
CHECK_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" < $LOGS_DIR/adding_kernel_using_probe.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" \
+ < $LOGS_DIR/adding_kernel_using_probe.log
(( CHECK_EXIT_CODE += $? ))
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
@@ -108,7 +120,8 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
$CMD_PERF probe -d $TEST_PROBE\* 2> $LOGS_DIR/adding_kernel_removing.err
PERF_EXIT_CODE=$?
-../common/check_all_lines_matched.pl "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
+"$DIR_PATH/../common/check_all_lines_matched.pl" \
+ "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
@@ -121,7 +134,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
$CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list_removed.log
PERF_EXIT_CODE=$?
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_kernel_list_removed.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" \
+ "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" \
+ < $LOGS_DIR/adding_kernel_list_removed.log
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing removed probe (should NOT be listed)"
@@ -135,7 +150,9 @@ $CMD_PERF probe -n --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_dryrun.err
PERF_EXIT_CODE=$?
# check for the output (should be the same as usual)
-../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_dryrun.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" \
+ < $LOGS_DIR/adding_kernel_dryrun.err
CHECK_EXIT_CODE=$?
# check that no probe was added in real
@@ -152,7 +169,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "dry run :: adding probe"
$CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_01.err
PERF_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_01.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" \
+ < $LOGS_DIR/adding_kernel_forceadd_01.err
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first probe adding"
@@ -162,7 +181,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first pro
! $CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_02.err
PERF_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "Error: event \"$TEST_PROBE\" already exists." "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "Error: event \"$TEST_PROBE\" already exists." \
+ "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (without force)"
@@ -173,7 +194,9 @@ NO_OF_PROBES=`$CMD_PERF probe -l $TEST_PROBE| wc -l`
$CMD_PERF probe --force --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_03.err
PERF_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" \
+ "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (with force)"
@@ -187,7 +210,9 @@ $CMD_PERF stat -e probe:$TEST_PROBE -e probe:${TEST_PROBE}_${NO_OF_PROBES} -x';'
PERF_EXIT_CODE=$?
REGEX_LINE="$RE_NUMBER;+probe:${TEST_PROBE}_?(?:$NO_OF_PROBES)?;$RE_NUMBER;$RE_NUMBER"
-../common/check_all_lines_matched.pl "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/adding_kernel_using_two.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" \
+ "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" \
+ < $LOGS_DIR/adding_kernel_using_two.log
CHECK_EXIT_CODE=$?
VALUE_1=`grep "$TEST_PROBE;" $LOGS_DIR/adding_kernel_using_two.log | awk -F';' '{print $1}'`
@@ -205,7 +230,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using doubled probe"
$CMD_PERF probe --del \* 2> $LOGS_DIR/adding_kernel_removing_wildcard.err
PERF_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "Removed event: probe:$TEST_PROBE" "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "Removed event: probe:$TEST_PROBE" \
+ "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
@@ -217,7 +244,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
$CMD_PERF probe -nf --max-probes=512 -a 'vfs_* $params' 2> $LOGS_DIR/adding_kernel_adding_wildcard.err
PERF_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" \
+ "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
CHECK_EXIT_CODE=$?
if [ $NO_DEBUGINFO ] ; then
@@ -240,13 +269,22 @@ test $PERF_EXIT_CODE -ne 139 -a $PERF_EXIT_CODE -ne 0
PERF_EXIT_CODE=$?
# check that the error message is reasonable
-../common/check_all_patterns_found.pl "Failed to find" "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "Failed to find" \
+ "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" \
+ < $LOGS_DIR/adding_kernel_nonexisting.err
CHECK_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "in this function|at this address" "Error" "Failed to add events" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "in this function|at this address" "Error" "Failed to add events" \
+ < $LOGS_DIR/adding_kernel_nonexisting.err
(( CHECK_EXIT_CODE += $? ))
-../common/check_all_lines_matched.pl "Failed to find" "Error" "Probe point .+ not found" "optimized out" "Use.+\-\-range option to show.+location range" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_all_lines_matched.pl" \
+ "Failed to find" "Error" "Probe point .+ not found" "optimized out" \
+ "Use.+\-\-range option to show.+location range" \
+ < $LOGS_DIR/adding_kernel_nonexisting.err
(( CHECK_EXIT_CODE += $? ))
-../common/check_no_patterns_found.pl "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_no_patterns_found.pl" \
+ "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
(( CHECK_EXIT_CODE += $? ))
if [ $NO_DEBUGINFO ]; then
@@ -264,7 +302,10 @@ fi
$CMD_PERF probe --add "$TEST_PROBE%return \$retval" 2> $LOGS_DIR/adding_kernel_func_retval_add.err
PERF_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE%return with \\\$retval" < $LOGS_DIR/adding_kernel_func_retval_add.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "Added new events?:" "probe:$TEST_PROBE" \
+ "on $TEST_PROBE%return with \\\$retval" \
+ < $LOGS_DIR/adding_kernel_func_retval_add.err
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
@@ -274,7 +315,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
$CMD_PERF record -e probe:$TEST_PROBE\* -o $CURRENT_TEST_DIR/perf.data -- cat /proc/cpuinfo > /dev/null 2> $LOGS_DIR/adding_kernel_func_retval_record.err
PERF_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/adding_kernel_func_retval_record.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" \
+ < $LOGS_DIR/adding_kernel_func_retval_record.err
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: record"
@@ -285,9 +328,11 @@ $CMD_PERF script -i $CURRENT_TEST_DIR/perf.data > $LOGS_DIR/adding_kernel_func_r
PERF_EXIT_CODE=$?
REGEX_SCRIPT_LINE="\s*cat\s+$RE_NUMBER\s+\[$RE_NUMBER\]\s+$RE_NUMBER:\s+probe:$TEST_PROBE\w*:\s+\($RE_NUMBER_HEX\s+<\-\s+$RE_NUMBER_HEX\)\s+arg1=$RE_NUMBER_HEX"
-../common/check_all_lines_matched.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" \
+ "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
CHECK_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
(( CHECK_EXIT_CODE += $? ))
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function argument probing :: script"
diff --git a/tools/perf/tests/shell/base_probe/test_basic.sh b/tools/perf/tests/shell/base_probe/test_basic.sh
index 9d8b5afbeddd..162838ddc974 100755
--- a/tools/perf/tests/shell/base_probe/test_basic.sh
+++ b/tools/perf/tests/shell/base_probe/test_basic.sh
@@ -12,11 +12,12 @@
# This test tests basic functionality of perf probe command.
#
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
TEST_RESULT=0
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
if ! check_kprobes_available; then
print_overall_skipped
exit 2
@@ -30,15 +31,25 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
$CMD_PERF probe --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
PERF_EXIT_CODE=$?
- ../common/check_all_patterns_found.pl "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" < $LOGS_DIR/basic_helpmsg.log
+ "$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" \
+ "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" \
+ < $LOGS_DIR/basic_helpmsg.log
CHECK_EXIT_CODE=$?
- ../common/check_all_patterns_found.pl "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" < $LOGS_DIR/basic_helpmsg.log
+ "$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" \
+ < $LOGS_DIR/basic_helpmsg.log
(( CHECK_EXIT_CODE += $? ))
- ../common/check_all_patterns_found.pl "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
+ "$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" \
+ "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
(( CHECK_EXIT_CODE += $? ))
- ../common/check_all_patterns_found.pl "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
+ "$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" \
+ "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
(( CHECK_EXIT_CODE += $? ))
- ../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
+ "$DIR_PATH/../common/check_no_patterns_found.pl" \
+ "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
(( CHECK_EXIT_CODE += $? ))
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
@@ -53,7 +64,9 @@ fi
# without any args perf-probe should print usage
$CMD_PERF probe 2> $LOGS_DIR/basic_usage.log > /dev/null
-../common/check_all_patterns_found.pl "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" \
+ "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
CHECK_EXIT_CODE=$?
print_results 0 $CHECK_EXIT_CODE "usage message"
diff --git a/tools/perf/tests/shell/base_probe/test_invalid_options.sh b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
index 92f7254eb32a..44a3ae014bfa 100755
--- a/tools/perf/tests/shell/base_probe/test_invalid_options.sh
+++ b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
@@ -12,11 +12,12 @@
# This test checks whether the invalid and incompatible options are reported
#
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
TEST_RESULT=0
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
if ! check_kprobes_available; then
print_overall_skipped
exit 2
@@ -33,7 +34,9 @@ for opt in '-a' '-d' '-L' '-V'; do
! $CMD_PERF probe $opt 2> $LOGS_DIR/invalid_options_missing_argument$opt.err
PERF_EXIT_CODE=$?
- ../common/check_all_patterns_found.pl "Error: switch .* requires a value" < $LOGS_DIR/invalid_options_missing_argument$opt.err
+ "$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "Error: switch .* requires a value" \
+ < $LOGS_DIR/invalid_options_missing_argument$opt.err
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "missing argument for $opt"
@@ -66,7 +69,8 @@ for opt in '-a xxx -d xxx' '-a xxx -L foo' '-a xxx -V foo' '-a xxx -l' '-a xxx -
! $CMD_PERF probe $opt > /dev/null 2> $LOGS_DIR/aux.log
PERF_EXIT_CODE=$?
- ../common/check_all_patterns_found.pl "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
+ "$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "mutually exclusive options :: $opt"
diff --git a/tools/perf/tests/shell/base_probe/test_line_semantics.sh b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
index 20435b6bf6bc..576442d87a44 100755
--- a/tools/perf/tests/shell/base_probe/test_line_semantics.sh
+++ b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
@@ -13,11 +13,12 @@
# arguments are properly reported.
#
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
TEST_RESULT=0
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
if ! check_kprobes_available; then
print_overall_skipped
exit 2
diff --git a/tools/perf/tests/shell/base_report/setup.sh b/tools/perf/tests/shell/base_report/setup.sh
index 8634e7e0dda6..bb49b0fabb11 100755
--- a/tools/perf/tests/shell/base_report/setup.sh
+++ b/tools/perf/tests/shell/base_report/setup.sh
@@ -12,8 +12,10 @@
#
#
+DIR_PATH="$(dirname $0)"
+
# include working environment
-. ../common/init.sh
+. "$DIR_PATH/../common/init.sh"
TEST_RESULT=0
@@ -24,7 +26,8 @@ SW_EVENT="cpu-clock"
$CMD_PERF record -asdg -e $SW_EVENT -o $CURRENT_TEST_DIR/perf.data -- $CMD_LONGER_SLEEP 2> $LOGS_DIR/setup.log
PERF_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "prepare the perf.data file"
@@ -38,7 +41,8 @@ echo ==================
cat $LOGS_DIR/setup-latency.log
echo ==================
-../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup-latency.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup-latency.log
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "prepare the perf.data.1 file"
diff --git a/tools/perf/tests/shell/base_report/test_basic.sh b/tools/perf/tests/shell/base_report/test_basic.sh
index adfd8713b8f8..0dfe7e5fd1ca 100755
--- a/tools/perf/tests/shell/base_report/test_basic.sh
+++ b/tools/perf/tests/shell/base_report/test_basic.sh
@@ -12,11 +12,12 @@
#
#
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
TEST_RESULT=0
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
### help message
@@ -25,19 +26,37 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
$CMD_PERF report --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
PERF_EXIT_CODE=$?
- ../common/check_all_patterns_found.pl "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
+ "$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" \
+ "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
CHECK_EXIT_CODE=$?
- ../common/check_all_patterns_found.pl "input" "verbose" "show-nr-samples" "show-cpu-utilization" "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" < $LOGS_DIR/basic_helpmsg.log
+ "$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "input" "verbose" "show-nr-samples" "show-cpu-utilization" \
+ "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" \
+ < $LOGS_DIR/basic_helpmsg.log
(( CHECK_EXIT_CODE += $? ))
- ../common/check_all_patterns_found.pl "hide-unresolved" "sort" "fields" "parent" "exclude-other" "column-widths" "field-separator" "dump-raw-trace" "children" < $LOGS_DIR/basic_helpmsg.log
+ "$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "hide-unresolved" "sort" "fields" "parent" "exclude-other" \
+ "column-widths" "field-separator" "dump-raw-trace" "children" \
+ < $LOGS_DIR/basic_helpmsg.log
(( CHECK_EXIT_CODE += $? ))
- ../common/check_all_patterns_found.pl "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" < $LOGS_DIR/basic_helpmsg.log
+ "$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" \
+ "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" \
+ < $LOGS_DIR/basic_helpmsg.log
(( CHECK_EXIT_CODE += $? ))
- ../common/check_all_patterns_found.pl "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" "show-total-period" "show-info" "branch-stack" "group" < $LOGS_DIR/basic_helpmsg.log
+ "$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" \
+ "show-total-period" "show-info" "branch-stack" "group" \
+ < $LOGS_DIR/basic_helpmsg.log
(( CHECK_EXIT_CODE += $? ))
- ../common/check_all_patterns_found.pl "branch-history" "objdump" "demangle" "percent-limit" "percentage" "header" "itrace" "full-source-path" "show-ref-call-graph" < $LOGS_DIR/basic_helpmsg.log
+ "$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "branch-history" "objdump" "demangle" "percent-limit" "percentage" \
+ "header" "itrace" "full-source-path" "show-ref-call-graph" \
+ < $LOGS_DIR/basic_helpmsg.log
(( CHECK_EXIT_CODE += $? ))
- ../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
+ "$DIR_PATH/../common/check_no_patterns_found.pl" \
+ "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
(( CHECK_EXIT_CODE += $? ))
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
@@ -57,9 +76,12 @@ REGEX_LOST_SAMPLES_INFO="#\s*Total Lost Samples:\s+$RE_NUMBER"
REGEX_SAMPLES_INFO="#\s*Samples:\s+(?:$RE_NUMBER)\w?\s+of\s+event\s+'$RE_EVENT_ANY'"
REGEX_LINES_HEADER="#\s*Children\s+Self\s+Command\s+Shared Object\s+Symbol"
REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
-../common/check_all_patterns_found.pl "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" \
+ "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" \
+ "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
(( CHECK_EXIT_CODE += $? ))
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "basic execution"
@@ -74,9 +96,11 @@ PERF_EXIT_CODE=$?
REGEX_LINES_HEADER="#\s*Children\s+Self\s+Samples\s+Command\s+Shared Object\s+Symbol"
REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
-../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" \
+ "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
(( CHECK_EXIT_CODE += $? ))
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "number of samples"
@@ -98,7 +122,10 @@ REGEX_LINE_CPUS_ONLINE="#\s+nrcpus online\s*:\s*$MY_CPUS_ONLINE"
REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$MY_CPUS_AVAILABLE"
# disable precise check for "nrcpus avail" in BASIC runmode
test $PERFTOOL_TESTSUITE_RUNMODE -lt $RUNMODE_STANDARD && REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$RE_NUMBER"
-../common/check_all_patterns_found.pl "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" \
+ "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" \
+ "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "header"
@@ -129,9 +156,11 @@ PERF_EXIT_CODE=$?
REGEX_LINES_HEADER="#\s*Children\s+Self\s+sys\s+usr\s+Command\s+Shared Object\s+Symbol"
REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
-../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" \
+ "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
(( CHECK_EXIT_CODE += $? ))
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
@@ -144,9 +173,11 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
$CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --pid=1 > $LOGS_DIR/basic_pid.log 2> $LOGS_DIR/basic_pid.err
PERF_EXIT_CODE=$?
-grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "systemd|init"
+grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | \
+ "$DIR_PATH/../common/check_all_lines_matched.pl" "systemd|init"
CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" \
+ "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
(( CHECK_EXIT_CODE += $? ))
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
@@ -159,9 +190,11 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
$CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbols=dummynonexistingsymbol > $LOGS_DIR/basic_symbols.log 2> $LOGS_DIR/basic_symbols.err
PERF_EXIT_CODE=$?
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" \
+ "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" \
+ "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
(( CHECK_EXIT_CODE += $? ))
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
@@ -174,9 +207,11 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
$CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbol-filter=map > $LOGS_DIR/basic_symbolfilter.log 2> $LOGS_DIR/basic_symbolfilter.err
PERF_EXIT_CODE=$?
-grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "\[[k\.]\]\s+.*map"
+grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | \
+ "$DIR_PATH/../common/check_all_lines_matched.pl" "\[[k\.]\]\s+.*map"
CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" \
+ "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
(( CHECK_EXIT_CODE += $? ))
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "symbol filter"
@@ -189,7 +224,8 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "symbol filter"
$CMD_PERF report -i $CURRENT_TEST_DIR/perf.data.1 --stdio --header-only > $LOGS_DIR/latency_header.log
PERF_EXIT_CODE=$?
-../common/check_all_patterns_found.pl ", context_switch = 1, " < $LOGS_DIR/latency_header.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ ", context_switch = 1, " < $LOGS_DIR/latency_header.log
CHECK_EXIT_CODE=$?
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "latency header"
@@ -200,9 +236,11 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "latency header"
$CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data.1 > $LOGS_DIR/latency_default.log 2> $LOGS_DIR/latency_default.err
PERF_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "# Overhead Latency Command" < $LOGS_DIR/latency_default.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "# Overhead Latency Command" < $LOGS_DIR/latency_default.log
CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/latency_default.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" \
+ "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/latency_default.err
(( CHECK_EXIT_CODE += $? ))
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "default report for latency profile"
@@ -213,9 +251,11 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "default report for latency profi
$CMD_PERF report --latency --stdio -i $CURRENT_TEST_DIR/perf.data.1 > $LOGS_DIR/latency_latency.log 2> $LOGS_DIR/latency_latency.err
PERF_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "# Latency Overhead Command" < $LOGS_DIR/latency_latency.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "# Latency Overhead Command" < $LOGS_DIR/latency_latency.log
CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/latency_latency.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" \
+ "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/latency_latency.err
(( CHECK_EXIT_CODE += $? ))
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "latency report for latency profile"
@@ -226,9 +266,12 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "latency report for latency profi
$CMD_PERF report --hierarchy --sort latency,parallelism,comm,symbol --parallelism=1,2 --stdio -i $CURRENT_TEST_DIR/perf.data.1 > $LOGS_DIR/parallelism_hierarchy.log 2> $LOGS_DIR/parallelism_hierarchy.err
PERF_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "# Latency Parallelism / Command / Symbol" < $LOGS_DIR/parallelism_hierarchy.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" \
+ "# Latency Parallelism / Command / Symbol" \
+ < $LOGS_DIR/parallelism_hierarchy.log
CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/parallelism_hierarchy.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" \
+ "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/parallelism_hierarchy.err
(( CHECK_EXIT_CODE += $? ))
print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "parallelism histogram"
diff --git a/tools/perf/tests/shell/common/init.sh b/tools/perf/tests/shell/common/init.sh
index 26c7525651e0..cbfc78bec974 100644
--- a/tools/perf/tests/shell/common/init.sh
+++ b/tools/perf/tests/shell/common/init.sh
@@ -11,8 +11,8 @@
#
-. ../common/settings.sh
-. ../common/patterns.sh
+. "$(dirname $0)/../common/settings.sh"
+. "$(dirname $0)/../common/patterns.sh"
THIS_TEST_NAME=`basename $0 .sh`
--
2.50.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v4 2/7] perf tests: Create a structure for shell tests
2025-09-30 16:09 [PATCH v4 0/7] Introduce structure for shell tests Jakub Brnak
2025-09-30 16:09 ` [PATCH v4 1/7] perf test perftool_testsuite: Use absolute paths Jakub Brnak
@ 2025-09-30 16:09 ` Jakub Brnak
2025-09-30 18:49 ` Ian Rogers
2025-09-30 16:09 ` [PATCH v4 3/7] perf test: Provide setup for the shell test suite Jakub Brnak
` (4 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: Jakub Brnak @ 2025-09-30 16:09 UTC (permalink / raw)
To: namhyung; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, vmolnaro
From: Veronika Molnarova <vmolnaro@redhat.com>
The general structure of test suites containing test cases has been
implemented for C tests for some time, while shell tests have simply been
gathered into a flat list with no way to structure them.
Provide the same test suite structure for shell tests. A suite is created
for each subdirectory of the 'perf/tests/shell' directory that contains
at least one test script. Test cases found in deeper levels of
subdirectories are merged into the first-level suite.
The name of a test suite is the name of the subdirectory where its test
cases are located. For every test script that is not in any subdirectory,
a test suite with a single test case is created, as has been done until
now.
The new structure of the shell tests for 'perf test list':
77: build id cache operations
78: coresight
78:1: CoreSight / ASM Pure Loop
78:2: CoreSight / Memcpy 16k 10 Threads
78:3: CoreSight / Thread Loop 10 Threads - Check TID
78:4: CoreSight / Thread Loop 2 Threads - Check TID
78:5: CoreSight / Unroll Loop Thread 10
79: daemon operations
80: perf diff tests
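The directory-to-suite mapping can be sketched in plain shell; all paths
and script names below are hypothetical stand-ins, not the real perf tree:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/coresight/nested" "$tmp/daemon"
touch "$tmp/buildid.sh"                  # top-level script: suite of its own
touch "$tmp/coresight/asm_pure_loop.sh"  # case in the "coresight" suite
touch "$tmp/coresight/nested/deep.sh"    # deeper level, merged into "coresight"
suites=""
for d in "$tmp"/*/; do
    cases=$(find "$d" -name '*.sh' | wc -l)
    [ "$cases" -gt 0 ] || continue       # dirs without scripts form no suite
    suites="$suites$(basename "$d"):$cases "
done
echo "$suites"                           # one entry per first-level subdir
rm -rf "$tmp"
```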
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
---
tools/perf/tests/tests-scripts.c | 229 ++++++++++++++++++++++++++-----
tools/perf/tests/tests-scripts.h | 4 +
2 files changed, 195 insertions(+), 38 deletions(-)
diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index f18c4cd337c8..e47f7eb50a73 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -151,14 +151,47 @@ static char *strdup_check(const char *str)
return newstr;
}
-static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
+/* Free the whole structure of test_suite with its test_cases */
+static void free_suite(struct test_suite *suite)
{
- const char *file = test->priv;
+ if (suite->test_cases) {
+ int num = 0;
+
+ while (suite->test_cases[num].name) { /* Last case has name set to NULL */
+ free((void *) suite->test_cases[num].name);
+ free((void *) suite->test_cases[num].desc);
+ num++;
+ }
+ free(suite->test_cases);
+ }
+ if (suite->desc)
+ free((void *) suite->desc);
+ if (suite->priv) {
+ struct shell_info *test_info = suite->priv;
+
+ free((void *) test_info->base_path);
+ free(test_info);
+ }
+
+ free(suite);
+}
+
+static int shell_test__run(struct test_suite *test, int subtest)
+{
+ const char *file;
int err;
char *cmd = NULL;
+ /* Get absolute file path */
+ if (subtest >= 0) {
+ file = test->test_cases[subtest].name;
+ } else { /* Single test case */
+ file = test->test_cases[0].name;
+ }
+
if (asprintf(&cmd, "%s%s", file, verbose ? " -v" : "") < 0)
return TEST_FAIL;
+
err = system(cmd);
free(cmd);
if (!err)
@@ -167,63 +200,155 @@ static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
return WEXITSTATUS(err) == 2 ? TEST_SKIP : TEST_FAIL;
}
-static void append_script(int dir_fd, const char *name, char *desc,
- struct test_suite ***result,
- size_t *result_sz)
+static struct test_suite *prepare_test_suite(int dir_fd)
{
- char filename[PATH_MAX], link[128];
- struct test_suite *test_suite, **result_tmp;
- struct test_case *tests;
+ char dirpath[PATH_MAX], link[128];
ssize_t len;
- char *exclusive;
+ struct test_suite *test_suite = NULL;
+ struct shell_info *test_info;
+ /* Get dir absolute path */
snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
- len = readlink(link, filename, sizeof(filename));
+ len = readlink(link, dirpath, sizeof(dirpath));
if (len < 0) {
pr_err("Failed to readlink %s", link);
- return;
+ return NULL;
}
- filename[len++] = '/';
- strcpy(&filename[len], name);
+ dirpath[len++] = '/';
+ dirpath[len] = '\0';
- tests = calloc(2, sizeof(*tests));
- if (!tests) {
- pr_err("Out of memory while building script test suite list\n");
- return;
- }
- tests[0].name = strdup_check(name);
- exclusive = strstr(desc, " (exclusive)");
- if (exclusive != NULL) {
- tests[0].exclusive = true;
- exclusive[0] = '\0';
- }
- tests[0].desc = strdup_check(desc);
- tests[0].run_case = shell_test__run;
test_suite = zalloc(sizeof(*test_suite));
if (!test_suite) {
pr_err("Out of memory while building script test suite list\n");
- free(tests);
- return;
+ return NULL;
}
- test_suite->desc = desc;
- test_suite->test_cases = tests;
- test_suite->priv = strdup_check(filename);
+
+ test_info = zalloc(sizeof(*test_info));
+ if (!test_info) {
+ pr_err("Out of memory while building script test suite list\n");
+ return NULL;
+ }
+
+ test_info->base_path = strdup_check(dirpath); /* Absolute path to dir */
+
+ test_suite->priv = test_info;
+ test_suite->desc = NULL;
+ test_suite->test_cases = NULL;
+
+ return test_suite;
+}
+
+static void append_suite(struct test_suite ***result,
+ size_t *result_sz, struct test_suite *test_suite)
+{
+ struct test_suite **result_tmp;
+
/* Realloc is good enough, though we could realloc by chunks, not that
* anyone will ever measure performance here */
result_tmp = realloc(*result, (*result_sz + 1) * sizeof(*result_tmp));
if (result_tmp == NULL) {
pr_err("Out of memory while building script test suite list\n");
- free(tests);
- free(test_suite);
+ free_suite(test_suite);
return;
}
+
/* Add file to end and NULL terminate the struct array */
*result = result_tmp;
(*result)[*result_sz] = test_suite;
(*result_sz)++;
}
-static void append_scripts_in_dir(int dir_fd,
+static void append_script_to_suite(int dir_fd, const char *name, char *desc,
+ struct test_suite *test_suite, size_t *tc_count)
+{
+ char file_name[PATH_MAX], link[128];
+ struct test_case *tests;
+ ssize_t len;
+ char *exclusive;
+
+ if (!test_suite)
+ return;
+
+ /* Requires an empty test case at the end */
+ tests = realloc(test_suite->test_cases, (*tc_count + 2) * sizeof(*tests));
+ if (!tests) {
+ pr_err("Out of memory while building script test suite list\n");
+ return;
+ }
+
+ /* Get path to the test script */
+ snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
+ len = readlink(link, file_name, sizeof(file_name));
+ if (len < 0) {
+ pr_err("Failed to readlink %s", link);
+ return;
+ }
+ file_name[len++] = '/';
+ strcpy(&file_name[len], name);
+
+ /* Get path to the script from base dir */
+ tests[(*tc_count)].name = strdup_check(file_name);
+ tests[(*tc_count)].exclusive = false;
+ exclusive = strstr(desc, " (exclusive)");
+ if (exclusive != NULL) {
+ tests[(*tc_count)].exclusive = true;
+ exclusive[0] = '\0';
+ }
+ tests[(*tc_count)].desc = desc;
+ tests[(*tc_count)].skip_reason = NULL; /* Unused */
+ tests[(*tc_count)++].run_case = shell_test__run;
+
+ tests[(*tc_count)].name = NULL; /* End the test cases */
+
+ test_suite->test_cases = tests;
+}
+
+static void append_scripts_in_subdir(int dir_fd,
+ struct test_suite *suite,
+ size_t *tc_count)
+{
+ struct dirent **entlist;
+ struct dirent *ent;
+ int n_dirs, i;
+
+ /* List files, sorted by alpha */
+ n_dirs = scandirat(dir_fd, ".", &entlist, NULL, alphasort);
+ if (n_dirs == -1)
+ return;
+ for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
+ int fd;
+
+ if (ent->d_name[0] == '.')
+ continue; /* Skip hidden files */
+ if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
+ char *desc = shell_test__description(dir_fd, ent->d_name);
+
+ if (desc) /* It has a desc line - valid script */
+ append_script_to_suite(dir_fd, ent->d_name, desc, suite, tc_count);
+ continue;
+ }
+
+ if (ent->d_type != DT_DIR) {
+ struct stat st;
+
+ if (ent->d_type != DT_UNKNOWN)
+ continue;
+ fstatat(dir_fd, ent->d_name, &st, 0);
+ if (!S_ISDIR(st.st_mode))
+ continue;
+ }
+
+ fd = openat(dir_fd, ent->d_name, O_PATH);
+
+ /* Recurse into the dir */
+ append_scripts_in_subdir(fd, suite, tc_count);
+ }
+ for (i = 0; i < n_dirs; i++) /* Clean up */
+ zfree(&entlist[i]);
+ free(entlist);
+}
+
+static void append_suites_in_dir(int dir_fd,
struct test_suite ***result,
size_t *result_sz)
{
@@ -237,16 +362,29 @@ static void append_scripts_in_dir(int dir_fd,
return;
for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
int fd;
+ struct test_suite *test_suite;
+ size_t cases_count = 0;
if (ent->d_name[0] == '.')
continue; /* Skip hidden files */
if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
char *desc = shell_test__description(dir_fd, ent->d_name);
- if (desc) /* It has a desc line - valid script */
- append_script(dir_fd, ent->d_name, desc, result, result_sz);
+ if (desc) { /* It has a desc line - valid script */
+ /* Create a test suite with a single test case */
+ test_suite = prepare_test_suite(dir_fd);
+ append_script_to_suite(dir_fd, ent->d_name, desc,
+ test_suite, &cases_count);
+ test_suite->desc = strdup_check(desc);
+
+ if (cases_count)
+ append_suite(result, result_sz, test_suite);
+ else /* Wasn't able to create the test case */
+ free_suite(test_suite);
+ }
continue;
}
+
if (ent->d_type != DT_DIR) {
struct stat st;
@@ -258,8 +396,23 @@ static void append_scripts_in_dir(int dir_fd,
}
if (strncmp(ent->d_name, "base_", 5) == 0)
continue; /* Skip scripts that have a separate driver. */
+
+ /* Scan subdir for test cases */
fd = openat(dir_fd, ent->d_name, O_PATH);
- append_scripts_in_dir(fd, result, result_sz);
+ test_suite = prepare_test_suite(fd); /* Prepare a testsuite with its path */
+ if (!test_suite)
+ continue;
+
+ append_scripts_in_subdir(fd, test_suite, &cases_count);
+ if (cases_count == 0) {
+ free_suite(test_suite);
+ continue;
+ }
+
+ /* If no setup, set name to the directory */
+ test_suite->desc = strdup_check(ent->d_name);
+
+ append_suite(result, result_sz, test_suite);
close(fd);
}
for (i = 0; i < n_dirs; i++) /* Clean up */
@@ -278,7 +431,7 @@ struct test_suite **create_script_test_suites(void)
* length array.
*/
if (dir_fd >= 0)
- append_scripts_in_dir(dir_fd, &result, &result_sz);
+ append_suites_in_dir(dir_fd, &result, &result_sz);
result_tmp = realloc(result, (result_sz + 1) * sizeof(*result_tmp));
if (result_tmp == NULL) {
diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
index b553ad26ea17..60a1a19a45c9 100644
--- a/tools/perf/tests/tests-scripts.h
+++ b/tools/perf/tests/tests-scripts.h
@@ -4,6 +4,10 @@
#include "tests.h"
+struct shell_info {
+ const char *base_path;
+};
+
struct test_suite **create_script_test_suites(void);
#endif /* TESTS_SCRIPTS_H */
--
2.50.1
* [PATCH v4 3/7] perf test: Provide setup for the shell test suite
2025-09-30 16:09 [PATCH v4 0/7] Introduce structure for shell tests Jakub Brnak
2025-09-30 16:09 ` [PATCH v4 1/7] perf test perftool_testsuite: Use absolute paths Jakub Brnak
2025-09-30 16:09 ` [PATCH v4 2/7] perf tests: Create a structure for shell tests Jakub Brnak
@ 2025-09-30 16:09 ` Jakub Brnak
2025-09-30 18:51 ` Ian Rogers
2025-09-30 16:09 ` [PATCH v4 4/7] perftool-testsuite: Add empty setup for base_probe Jakub Brnak
` (3 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: Jakub Brnak @ 2025-09-30 16:09 UTC (permalink / raw)
To: namhyung; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, vmolnaro
From: Veronika Molnarova <vmolnaro@redhat.com>
Some of the perftool-testsuite test cases require a setup to be done
beforehand, such as recording data, setting up a cache or restoring the
sample rate. The setup file also provides the possibility to set the name
of the test suite if the name of the directory is not descriptive enough.
Check for the existence of a "setup.sh" script for each shell test suite
and run it before any of the test cases. If the setup fails, skip all of
the test cases of the test suite, as the setup may be required for their
results to be valid.
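The setup handling described above can be sketched in shell; the suite
directory and scripts below are hypothetical stand-ins for the C driver
logic:

```shell
suite=$(mktemp -d)
printf 'exit 1\n' > "$suite/setup.sh"        # simulate a failing setup
printf 'echo ran\n' > "$suite/test_case.sh"  # would run if setup passed
chmod +x "$suite"/*.sh
if sh "$suite/setup.sh"; then
    result=$(sh "$suite/test_case.sh")
else
    result="setup failed: all test cases skipped"
fi
echo "$result"
rm -rf "$suite"
```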
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
---
tools/perf/tests/builtin-test.c | 30 +++++++++++++++++++-----
tools/perf/tests/tests-scripts.c | 39 +++++++++++++++++++++++++++++---
tools/perf/tests/tests-scripts.h | 10 ++++++++
tools/perf/tests/tests.h | 8 ++++---
4 files changed, 75 insertions(+), 12 deletions(-)
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 85142dfb3e01..6fc031ef50ea 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -258,6 +258,22 @@ static test_fnptr test_function(const struct test_suite *t, int test_case)
return t->test_cases[test_case].run_case;
}
+/* If setup fails, skip all test cases */
+static void check_shell_setup(const struct test_suite *t, int ret)
+{
+ struct shell_info *test_info;
+
+ if (!t->priv)
+ return;
+
+ test_info = t->priv;
+
+ if (ret == TEST_SETUP_FAIL)
+ test_info->has_setup = FAILED_SETUP;
+ else if (test_info->has_setup == RUN_SETUP)
+ test_info->has_setup = PASSED_SETUP;
+}
+
static bool test_exclusive(const struct test_suite *t, int test_case)
{
if (test_case <= 0)
@@ -347,10 +363,9 @@ static int run_test_child(struct child_process *process)
return -err;
}
-#define TEST_RUNNING -3
-
-static int print_test_result(struct test_suite *t, int curr_suite, int curr_test_case,
- int result, int width, int running)
+static int print_test_result(struct test_suite *t, int curr_suite,
+ int curr_test_case, int result, int width,
+ int running)
{
if (test_suite__num_test_cases(t) > 1) {
int subw = width > 2 ? width - 2 : width;
@@ -367,7 +382,8 @@ static int print_test_result(struct test_suite *t, int curr_suite, int curr_test
case TEST_OK:
pr_info(" Ok\n");
break;
- case TEST_SKIP: {
+ case TEST_SKIP:
+ case TEST_SETUP_FAIL: {
const char *reason = skip_reason(t, curr_test_case);
if (reason)
@@ -482,6 +498,7 @@ static void finish_test(struct child_test **child_tests, int running_test, int c
}
/* Clean up child process. */
ret = finish_command(&child_test->process);
+ check_shell_setup(t, ret);
if (verbose > 1 || (verbose == 1 && ret == TEST_FAIL))
fprintf(stderr, "%s", err_output.buf);
@@ -504,7 +521,8 @@ static int start_test(struct test_suite *test, int curr_suite, int curr_test_cas
err = test_function(test, curr_test_case)(test, curr_test_case);
pr_debug("---- end ----\n");
print_test_result(test, curr_suite, curr_test_case, err, width,
- /*running=*/0);
+ /*running=*/0);
+ check_shell_setup(test, err);
}
return 0;
}
diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index e47f7eb50a73..10aab7c19ffe 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -138,6 +138,12 @@ static bool is_test_script(int dir_fd, const char *name)
return is_shell_script(dir_fd, name);
}
+/* Filter for scandir */
+static int setup_filter(const struct dirent *entry)
+{
+ return strcmp(entry->d_name, SHELL_SETUP);
+}
+
/* Duplicate a string and fall over and die if we run out of memory */
static char *strdup_check(const char *str)
{
@@ -178,6 +184,7 @@ static void free_suite(struct test_suite *suite)
static int shell_test__run(struct test_suite *test, int subtest)
{
+ struct shell_info *test_info = test->priv;
const char *file;
int err;
char *cmd = NULL;
@@ -189,6 +196,23 @@ static int shell_test__run(struct test_suite *test, int subtest)
file = test->test_cases[0].name;
}
+ /* Run setup if needed */
+ if (test_info->has_setup == RUN_SETUP) {
+ char *setup_script;
+
+ if (asprintf(&setup_script, "%s%s%s", test_info->base_path,
+ SHELL_SETUP, verbose ? " -v" : "") < 0)
+ return TEST_SETUP_FAIL;
+
+ err = system(setup_script);
+ free(setup_script);
+
+ if (err)
+ return TEST_SETUP_FAIL;
+ } else if (test_info->has_setup == FAILED_SETUP) {
+ return TEST_SKIP; /* Skip test suite if setup failed */
+ }
+
if (asprintf(&cmd, "%s%s", file, verbose ? " -v" : "") < 0)
return TEST_FAIL;
@@ -230,6 +254,7 @@ static struct test_suite *prepare_test_suite(int dir_fd)
}
test_info->base_path = strdup_check(dirpath); /* Absolute path to dir */
+ test_info->has_setup = NO_SETUP;
test_suite->priv = test_info;
test_suite->desc = NULL;
@@ -312,7 +337,7 @@ static void append_scripts_in_subdir(int dir_fd,
int n_dirs, i;
/* List files, sorted by alpha */
- n_dirs = scandirat(dir_fd, ".", &entlist, NULL, alphasort);
+ n_dirs = scandirat(dir_fd, ".", &entlist, setup_filter, alphasort);
if (n_dirs == -1)
return;
for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
@@ -409,8 +434,16 @@ static void append_suites_in_dir(int dir_fd,
continue;
}
- /* If no setup, set name to the directory */
- test_suite->desc = strdup_check(ent->d_name);
+ if (is_test_script(fd, SHELL_SETUP)) { /* Check for setup existence */
+ char *desc = shell_test__description(fd, SHELL_SETUP);
+
+ /* Set the suite name by the setup description */
+ test_suite->desc = desc;
+ ((struct shell_info *)(test_suite->priv))->has_setup = RUN_SETUP;
+ } else {
+ /* If no setup, set name to the directory */
+ test_suite->desc = strdup_check(ent->d_name);
+ }
append_suite(result, result_sz, test_suite);
close(fd);
diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
index 60a1a19a45c9..da4dcd26140c 100644
--- a/tools/perf/tests/tests-scripts.h
+++ b/tools/perf/tests/tests-scripts.h
@@ -4,8 +4,18 @@
#include "tests.h"
+#define SHELL_SETUP "setup.sh"
+
+enum shell_setup {
+ NO_SETUP = 0,
+ RUN_SETUP = 1,
+ FAILED_SETUP = 2,
+ PASSED_SETUP = 3,
+};
+
struct shell_info {
const char *base_path;
+ enum shell_setup has_setup;
};
struct test_suite **create_script_test_suites(void);
diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h
index 97e62db8764a..9f3e3b90f1ac 100644
--- a/tools/perf/tests/tests.h
+++ b/tools/perf/tests/tests.h
@@ -6,9 +6,11 @@
#include "util/debug.h"
enum {
- TEST_OK = 0,
- TEST_FAIL = -1,
- TEST_SKIP = -2,
+ TEST_OK = 0,
+ TEST_FAIL = -1,
+ TEST_SKIP = -2,
+ TEST_RUNNING = -3,
+ TEST_SETUP_FAIL = -4,
};
#define TEST_ASSERT_VAL(text, cond) \
--
2.50.1
* [PATCH v4 4/7] perftool-testsuite: Add empty setup for base_probe
2025-09-30 16:09 [PATCH v4 0/7] Introduce structure for shell tests Jakub Brnak
` (2 preceding siblings ...)
2025-09-30 16:09 ` [PATCH v4 3/7] perf test: Provide setup for the shell test suite Jakub Brnak
@ 2025-09-30 16:09 ` Jakub Brnak
2025-09-30 18:52 ` Ian Rogers
2025-09-30 16:09 ` [PATCH v4 5/7] perf test: Introduce storing logs for shell tests Jakub Brnak
` (2 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: Jakub Brnak @ 2025-09-30 16:09 UTC (permalink / raw)
To: namhyung; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, vmolnaro
From: Veronika Molnarova <vmolnaro@redhat.com>
Add an empty setup script to set a proper name for the base_probe test
suite; it can also be utilized for basic test setup in the future.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
---
tools/perf/tests/shell/base_probe/setup.sh | 13 +++++++++++++
1 file changed, 13 insertions(+)
create mode 100755 tools/perf/tests/shell/base_probe/setup.sh
diff --git a/tools/perf/tests/shell/base_probe/setup.sh b/tools/perf/tests/shell/base_probe/setup.sh
new file mode 100755
index 000000000000..fbb99325b555
--- /dev/null
+++ b/tools/perf/tests/shell/base_probe/setup.sh
@@ -0,0 +1,13 @@
+#!/bin/bash
+# perftool-testsuite :: perf_probe
+# SPDX-License-Identifier: GPL-2.0
+
+#
+# setup.sh of perf probe test
+# Author: Michael Petlan <mpetlan@redhat.com>
+#
+# Description:
+#
+# Setting testsuite name, for future use
+#
+#
--
2.50.1
* [PATCH v4 5/7] perf test: Introduce storing logs for shell tests
2025-09-30 16:09 [PATCH v4 0/7] Introduce structure for shell tests Jakub Brnak
` (3 preceding siblings ...)
2025-09-30 16:09 ` [PATCH v4 4/7] perftool-testsuite: Add empty setup for base_probe Jakub Brnak
@ 2025-09-30 16:09 ` Jakub Brnak
2025-09-30 19:00 ` Ian Rogers
2025-09-30 16:09 ` [PATCH v4 6/7] perf test: Format log directories " Jakub Brnak
2025-09-30 16:09 ` [PATCH v4 7/7] perf test: Remove perftool drivers Jakub Brnak
6 siblings, 1 reply; 14+ messages in thread
From: Jakub Brnak @ 2025-09-30 16:09 UTC (permalink / raw)
To: namhyung; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, vmolnaro
From: Veronika Molnarova <vmolnaro@redhat.com>
Create temporary directories for storing log files for shell tests,
which can help while debugging. The log files are also necessary for
the perftool-testsuite test cases. If the variable PERFTEST_KEEP_LOGS
is set to "y", keep the logs; otherwise delete them.
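The intended lifecycle can be sketched in plain shell; the `cleanup`
helper below is a hypothetical stand-in for the C code in this patch:

```shell
# cleanup mirrors the patch: remove the dir unless PERFTEST_KEEP_LOGS=y
cleanup() {
    if [ "$PERFTEST_KEEP_LOGS" = "y" ]; then
        echo "kept"
    else
        rm -rf "$1"
        echo "removed"
    fi
}
logdir=$(mktemp -d /tmp/perf_test_demo.XXXXXX)
export PERFSUITE_RUN_DIR="$logdir"       # what the test cases see
echo "some output" > "$logdir/case.log"  # a test case writes its log
unset PERFSUITE_RUN_DIR
first=$(PERFTEST_KEEP_LOGS=y cleanup "$logdir")   # logs survive
second=$(PERFTEST_KEEP_LOGS='' cleanup "$logdir") # logs deleted
echo "$first $second"
```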
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
---
tools/perf/tests/builtin-test.c | 91 ++++++++++++++++++++++++++++++++
tools/perf/tests/tests-scripts.c | 3 ++
tools/perf/tests/tests-scripts.h | 1 +
3 files changed, 95 insertions(+)
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 6fc031ef50ea..a943f66cbac0 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -6,6 +6,7 @@
*/
#include <ctype.h>
#include <fcntl.h>
+#include <ftw.h>
#include <errno.h>
#ifdef HAVE_BACKTRACE_SUPPORT
#include <execinfo.h>
@@ -282,6 +283,85 @@ static bool test_exclusive(const struct test_suite *t, int test_case)
return t->test_cases[test_case].exclusive;
}
+static int delete_file(const char *fpath, const struct stat *sb __maybe_unused,
+ int typeflag, struct FTW *ftwbuf)
+{
+ int rv = -1;
+
+ /* Stop traversal if going too deep */
+ if (ftwbuf->level > 5) {
+ pr_err("Tree traversal reached level %d, stopping.", ftwbuf->level);
+ return rv;
+ }
+
+ /* Remove only expected directories */
+ if (typeflag == FTW_D || typeflag == FTW_DP) {
+ const char *dirname = fpath + ftwbuf->base;
+
+ if (strcmp(dirname, "logs") && strcmp(dirname, "examples") &&
+ strcmp(dirname, "header_tar") && strncmp(dirname, "perf_", 5)) {
+ pr_err("Unknown directory %s", dirname);
+ return rv;
+ }
+ }
+
+ /* Attempt to remove the file */
+ rv = remove(fpath);
+ if (rv)
+ pr_err("Failed to remove file: %s", fpath);
+
+ return rv;
+}
+
+static bool create_logs(struct test_suite *t, int pass)
+{
+ bool store_logs = t->priv && ((struct shell_info *)(t->priv))->store_logs;
+
+ if (pass == 1 && (!test_exclusive(t, 0) || sequential || dont_fork)) {
+ /* Sequential and non-exclusive tests run on the first pass. */
+ return store_logs;
+ } else if (pass != 1 && test_exclusive(t, 0) && !sequential && !dont_fork) {
+ /* Exclusive tests without sequential run on the second pass. */
+ return store_logs;
+ }
+ return false;
+}
+
+static char *setup_shell_logs(const char *name)
+{
+ char template[PATH_MAX];
+ char *temp_dir;
+
+ if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", name) < 0) {
+ pr_err("Failed to create log dir template");
+ return NULL; /* Skip the testsuite */
+ }
+
+ temp_dir = mkdtemp(template);
+ if (temp_dir) {
+ setenv("PERFSUITE_RUN_DIR", temp_dir, 1);
+ return strdup(temp_dir);
+ }
+
+ pr_err("Failed to create the temporary directory");
+
+ return NULL; /* Skip the testsuite */
+}
+
+static void cleanup_shell_logs(char *dirname)
+{
+ char *keep_logs = getenv("PERFTEST_KEEP_LOGS");
+
+ /* Check if logs should be kept or do cleanup */
+ if (dirname) {
+ if (!keep_logs || strcmp(keep_logs, "y") != 0)
+ nftw(dirname, delete_file, 8, FTW_DEPTH | FTW_PHYS);
+ free(dirname);
+ }
+
+ unsetenv("PERFSUITE_RUN_DIR");
+}
+
static bool perf_test__matches(const char *desc, int suite_num, int argc, const char *argv[])
{
int i;
@@ -628,6 +708,7 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
for (struct test_suite **t = suites; *t; t++, curr_suite++) {
int curr_test_case;
bool suite_matched = false;
+ char *tmpdir = NULL;
if (!perf_test__matches(test_description(*t, -1), curr_suite, argc, argv)) {
/*
@@ -657,6 +738,15 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
}
for (unsigned int run = 0; run < runs_per_test; run++) {
+ /* Setup temporary log directories for shell test suites */
+ if (create_logs(*t, pass)) {
+ tmpdir = setup_shell_logs((*t)->desc);
+
+ /* Couldn't create log dir, skip test suite */
+ if (tmpdir == NULL)
+ ((struct shell_info *)((*t)->priv))->has_setup =
+ FAILED_SETUP;
+ }
test_suite__for_each_test_case(*t, curr_test_case) {
if (!suite_matched &&
!perf_test__matches(test_description(*t, curr_test_case),
@@ -669,6 +759,7 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
goto err_out;
}
}
+ cleanup_shell_logs(tmpdir);
}
if (!sequential) {
/* Parallel mode starts tests but doesn't finish them. Do that now. */
diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index 10aab7c19ffe..9b4782bc1767 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -255,6 +255,7 @@ static struct test_suite *prepare_test_suite(int dir_fd)
test_info->base_path = strdup_check(dirpath); /* Absolute path to dir */
test_info->has_setup = NO_SETUP;
+ test_info->store_logs = false;
test_suite->priv = test_info;
test_suite->desc = NULL;
@@ -434,6 +435,8 @@ static void append_suites_in_dir(int dir_fd,
continue;
}
+ /* Store logs for test suites in sub-directories */
+ ((struct shell_info *)(test_suite->priv))->store_logs = true;
if (is_test_script(fd, SHELL_SETUP)) { /* Check for setup existence */
char *desc = shell_test__description(fd, SHELL_SETUP);
diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
index da4dcd26140c..41da0a175e4e 100644
--- a/tools/perf/tests/tests-scripts.h
+++ b/tools/perf/tests/tests-scripts.h
@@ -16,6 +16,7 @@ enum shell_setup {
struct shell_info {
const char *base_path;
enum shell_setup has_setup;
+ bool store_logs;
};
struct test_suite **create_script_test_suites(void);
--
2.50.1
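For context, the setup/teardown pattern this patch lands on can be sketched as a small self-contained program (illustrative only, not the kernel tree code; the helper names are made up): mkdtemp(3) creates the per-suite run directory that is exported as PERFSUITE_RUN_DIR, and nftw(3) with FTW_DEPTH | FTW_PHYS removes it depth-first without following symlinks.

```c
#define _XOPEN_SOURCE 700
#include <assert.h>
#include <ftw.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>

/* nftw() callback: remove each entry; FTW_DEPTH visits children first,
 * so directories are already empty when we get to them. */
static int delete_file(const char *fpath, const struct stat *sb,
		       int typeflag, struct FTW *ftwbuf)
{
	(void)sb; (void)typeflag; (void)ftwbuf;
	return remove(fpath);
}

/* Create a per-suite run dir and export it, mirroring the patch. */
static char *setup_run_dir(void)
{
	char template[] = "/tmp/perf_test_demo.XXXXXX";
	char *dir = mkdtemp(template);

	if (!dir)
		return NULL;
	setenv("PERFSUITE_RUN_DIR", dir, 1);
	return strdup(dir);
}

/* Depth-first removal without following symlinks, then drop the env var. */
static void cleanup_run_dir(char *dir)
{
	if (dir) {
		nftw(dir, delete_file, 8, FTW_DEPTH | FTW_PHYS);
		free(dir);
	}
	unsetenv("PERFSUITE_RUN_DIR");
}
```

FTW_PHYS matters here: without it a symlink inside the run directory could lead the walk outside the tree being deleted.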
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v4 6/7] perf test: Format log directories for shell tests
2025-09-30 16:09 [PATCH v4 0/7] Introduce structure for shell tests Jakub Brnak
` (4 preceding siblings ...)
2025-09-30 16:09 ` [PATCH v4 5/7] perf test: Introduce storing logs for shell tests Jakub Brnak
@ 2025-09-30 16:09 ` Jakub Brnak
2025-09-30 16:09 ` [PATCH v4 7/7] perf test: Remove perftool drivers Jakub Brnak
6 siblings, 0 replies; 14+ messages in thread
From: Jakub Brnak @ 2025-09-30 16:09 UTC (permalink / raw)
To: namhyung; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, vmolnaro
From: Veronika Molnarova <vmolnaro@redhat.com>
The name of the log directory is derived from the test suite
description, which may contain whitespace and other special characters.
These can cause problems later if the name is not quoted correctly.
Replace the non-alphanumeric characters with underscores to prevent the
issues caused by word splitting.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
---
tools/perf/tests/builtin-test.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index a943f66cbac0..c5f923d6f9fa 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -283,6 +283,7 @@ static bool test_exclusive(const struct test_suite *t, int test_case)
return t->test_cases[test_case].exclusive;
}
+
static int delete_file(const char *fpath, const struct stat *sb __maybe_unused,
int typeflag, struct FTW *ftwbuf)
{
@@ -331,12 +332,19 @@ static char *setup_shell_logs(const char *name)
{
char template[PATH_MAX];
char *temp_dir;
+ size_t i;
if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", name) < 0) {
pr_err("Failed to create log dir template");
return NULL; /* Skip the testsuite */
}
+ /* Replace non-alphanumeric characters with _ in the name part */
+ for (i = 15; template[i] != '.' && template[i] != '\0'; i++) {
+ if (!isalnum((unsigned char)template[i]))
+ template[i] = '_';
+ }
+
temp_dir = mkdtemp(template);
if (temp_dir) {
setenv("PERFSUITE_RUN_DIR", temp_dir, 1);
--
2.50.1
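The sanitization added above can be exercised in isolation. A minimal sketch (hypothetical helper name; the start index is computed from the template prefix rather than hardcoded to 15 as in the patch):

```c
#include <assert.h>
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Map every character outside [A-Za-z0-9] in the name part of the
 * template to '_', stopping at the '.' that precedes the XXXXXX
 * suffix consumed by mkdtemp(3). */
static void sanitize_template(char *template)
{
	size_t i;

	for (i = strlen("/tmp/perf_test_"); template[i] != '.' && template[i] != '\0'; i++) {
		if (!isalnum((unsigned char)template[i]))
			template[i] = '_';
	}
}
```

After this pass the resulting directory name is safe to use unquoted in shell contexts, which is what the commit message is guarding against.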
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v4 7/7] perf test: Remove perftool drivers
2025-09-30 16:09 [PATCH v4 0/7] Introduce structure for shell tests Jakub Brnak
` (5 preceding siblings ...)
2025-09-30 16:09 ` [PATCH v4 6/7] perf test: Format log directories " Jakub Brnak
@ 2025-09-30 16:09 ` Jakub Brnak
6 siblings, 0 replies; 14+ messages in thread
From: Jakub Brnak @ 2025-09-30 16:09 UTC (permalink / raw)
To: namhyung; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, vmolnaro
From: Veronika Molnarova <vmolnaro@redhat.com>
perf now provides all of the features required for running the
perftool test cases, such as creating log directories and running
setup scripts, and the tests are structured by the base_ directories.
Remove the drivers, which are no longer necessary, together with the
condition that skipped the base_ directories, and run the test cases
through the default perf test infrastructure.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
---
.../tests/shell/perftool-testsuite_probe.sh | 24 -------------------
.../tests/shell/perftool-testsuite_report.sh | 23 ------------------
tools/perf/tests/tests-scripts.c | 2 --
3 files changed, 49 deletions(-)
delete mode 100755 tools/perf/tests/shell/perftool-testsuite_probe.sh
delete mode 100755 tools/perf/tests/shell/perftool-testsuite_report.sh
diff --git a/tools/perf/tests/shell/perftool-testsuite_probe.sh b/tools/perf/tests/shell/perftool-testsuite_probe.sh
deleted file mode 100755
index 3863df16c19b..000000000000
--- a/tools/perf/tests/shell/perftool-testsuite_probe.sh
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/bin/bash
-# perftool-testsuite_probe (exclusive)
-# SPDX-License-Identifier: GPL-2.0
-
-[ "$(id -u)" = 0 ] || exit 2
-test -d "$(dirname "$0")/base_probe" || exit 2
-cd "$(dirname "$0")/base_probe" || exit 2
-status=0
-
-PERFSUITE_RUN_DIR=$(mktemp -d /tmp/"$(basename "$0" .sh)".XXX)
-export PERFSUITE_RUN_DIR
-
-for testcase in setup.sh test_*; do # skip setup.sh if not present or not executable
- test -x "$testcase" || continue
- ./"$testcase"
- (( status += $? ))
-done
-
-if ! [ "$PERFTEST_KEEP_LOGS" = "y" ]; then
- rm -rf "$PERFSUITE_RUN_DIR"
-fi
-
-test $status -ne 0 && exit 1
-exit 0
diff --git a/tools/perf/tests/shell/perftool-testsuite_report.sh b/tools/perf/tests/shell/perftool-testsuite_report.sh
deleted file mode 100755
index a8cf75b4e77e..000000000000
--- a/tools/perf/tests/shell/perftool-testsuite_report.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/bash
-# perftool-testsuite_report (exclusive)
-# SPDX-License-Identifier: GPL-2.0
-
-test -d "$(dirname "$0")/base_report" || exit 2
-cd "$(dirname "$0")/base_report" || exit 2
-status=0
-
-PERFSUITE_RUN_DIR=$(mktemp -d /tmp/"$(basename "$0" .sh)".XXX)
-export PERFSUITE_RUN_DIR
-
-for testcase in setup.sh test_*; do # skip setup.sh if not present or not executable
- test -x "$testcase" || continue
- ./"$testcase"
- (( status += $? ))
-done
-
-if ! [ "$PERFTEST_KEEP_LOGS" = "y" ]; then
- rm -rf "$PERFSUITE_RUN_DIR"
-fi
-
-test $status -ne 0 && exit 1
-exit 0
diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index 9b4782bc1767..e5137a89471b 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -420,8 +420,6 @@ static void append_suites_in_dir(int dir_fd,
if (!S_ISDIR(st.st_mode))
continue;
}
- if (strncmp(ent->d_name, "base_", 5) == 0)
- continue; /* Skip scripts that have a separate driver. */
/* Scan subdir for test cases*/
fd = openat(dir_fd, ent->d_name, O_PATH);
--
2.50.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH v4 1/7] perf test perftool_testsuite: Use absolute paths
2025-09-30 16:09 ` [PATCH v4 1/7] perf test perftool_testsuite: Use absolute paths Jakub Brnak
@ 2025-09-30 18:28 ` Ian Rogers
2025-10-01 12:37 ` Arnaldo Carvalho de Melo
0 siblings, 1 reply; 14+ messages in thread
From: Ian Rogers @ 2025-09-30 18:28 UTC (permalink / raw)
To: Jakub Brnak; +Cc: namhyung, acme, acme, linux-perf-users, mpetlan, vmolnaro
On Tue, Sep 30, 2025 at 9:09 AM Jakub Brnak <jbrnak@redhat.com> wrote:
>
> From: Veronika Molnarova <vmolnaro@redhat.com>
>
> Test cases from perftool_testsuite are affected by the current
> directory where the tests are run. For this reason, the test
> driver has to change the directory to the base_dir for references to
> work correctly.
>
> Utilize absolute paths when sourcing and referencing other scripts so
> that the current working directory doesn't impact the test cases.
>
> Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Although not changed here, any chance of making the naming of the file
and variables a better fit for the inclusive language guidelines?
https://www.aswf.io/inclusive-language-guide/
Thanks,
Ian
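As a side note, the DIR_PATH pattern in this patch is easy to demonstrate standalone. A minimal sketch (hypothetical file layout, not the real testsuite tree): resolving helpers via $(dirname "$0") makes the include independent of the caller's working directory.

```shell
# Build a throwaway tree with a helper and a test script that sources it.
tmp=$(mktemp -d)
mkdir -p "$tmp/common" "$tmp/base_demo"
echo 'HELPER_OK=yes' > "$tmp/common/init.sh"

cat > "$tmp/base_demo/test_demo.sh" <<'EOF'
#!/bin/sh
DIR_PATH="$(dirname "$0")"
# absolute include path: works from any current directory
. "$DIR_PATH/../common/init.sh"
echo "$HELPER_OK"
EOF
chmod +x "$tmp/base_demo/test_demo.sh"

cd /tmp                          # run from an unrelated directory
out=$("$tmp/base_demo/test_demo.sh")
echo "$out"                      # prints "yes"
rm -rf "$tmp"
```

With the old relative `. ../common/init.sh` form, the same script would only work when started from inside base_demo, which is why the drivers previously had to cd there first.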
> ---
> .../base_probe/test_adding_blacklisted.sh | 20 +++-
> .../shell/base_probe/test_adding_kernel.sh | 97 ++++++++++++-----
> .../perf/tests/shell/base_probe/test_basic.sh | 31 ++++--
> .../shell/base_probe/test_invalid_options.sh | 14 ++-
> .../shell/base_probe/test_line_semantics.sh | 7 +-
> tools/perf/tests/shell/base_report/setup.sh | 10 +-
> .../tests/shell/base_report/test_basic.sh | 103 +++++++++++++-----
> tools/perf/tests/shell/common/init.sh | 4 +-
> 8 files changed, 202 insertions(+), 84 deletions(-)
>
> diff --git a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
> index 8226449ac5c3..f74aab5c5d7f 100755
> --- a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
> +++ b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
> @@ -13,11 +13,12 @@
> # they must be skipped.
> #
>
> -# include working environment
> -. ../common/init.sh
> -
> +DIR_PATH="$(dirname $0)"
> TEST_RESULT=0
>
> +# include working environment
> +. "$DIR_PATH/../common/init.sh"
> +
> # skip if not supported
> BLACKFUNC_LIST=`head -n 5 /sys/kernel/debug/kprobes/blacklist 2> /dev/null | cut -f2`
> if [ -z "$BLACKFUNC_LIST" ]; then
> @@ -53,7 +54,8 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
> PERF_EXIT_CODE=$?
>
> # check for bad DWARF polluting the result
> - ../common/check_all_patterns_found.pl "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
> + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
>
> if [ $? -eq 0 ]; then
> SKIP_DWARF=1
> @@ -73,7 +75,11 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
> fi
> fi
> else
> - ../common/check_all_lines_matched.pl "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
> + "$DIR_PATH/../common/check_all_lines_matched.pl" \
> + "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" \
> + "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" \
> + "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" \
> + "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
> CHECK_EXIT_CODE=$?
>
> SKIP_DWARF=0
> @@ -94,7 +100,9 @@ fi
> $CMD_PERF list probe:\* > $LOGS_DIR/adding_blacklisted_list.log
> PERF_EXIT_CODE=$?
>
> -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_blacklisted_list.log
> +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> + "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" \
> + < $LOGS_DIR/adding_blacklisted_list.log
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing blacklisted probe (should NOT be listed)"
> diff --git a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
> index df288cf90cd6..555a825d55f2 100755
> --- a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
> +++ b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
> @@ -13,13 +13,14 @@
> # and removing.
> #
>
> -# include working environment
> -. ../common/init.sh
> -
> +DIR_PATH="$(dirname $0)"
> TEST_RESULT=0
>
> +# include working environment
> +. "$DIR_PATH/../common/init.sh"
> +
> # shellcheck source=lib/probe_vfs_getname.sh
> -. "$(dirname "$0")/../lib/probe_vfs_getname.sh"
> +. "$DIR_PATH/../lib/probe_vfs_getname.sh"
>
> TEST_PROBE=${TEST_PROBE:-"inode_permission"}
>
> @@ -44,7 +45,9 @@ for opt in "" "-a" "--add"; do
> $CMD_PERF probe $opt $TEST_PROBE 2> $LOGS_DIR/adding_kernel_add$opt.err
> PERF_EXIT_CODE=$?
>
> - ../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_add$opt.err
> + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" \
> + < $LOGS_DIR/adding_kernel_add$opt.err
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "adding probe $TEST_PROBE :: $opt"
> @@ -58,7 +61,10 @@ done
> $CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list.log
> PERF_EXIT_CODE=$?
>
> -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
> +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> + "$RE_LINE_EMPTY" "List of pre-defined events" \
> + "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" \
> + "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list"
> @@ -71,7 +77,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list
> $CMD_PERF probe -l > $LOGS_DIR/adding_kernel_list-l.log
> PERF_EXIT_CODE=$?
>
> -../common/check_all_patterns_found.pl "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" < $LOGS_DIR/adding_kernel_list-l.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" \
> + < $LOGS_DIR/adding_kernel_list-l.log
> CHECK_EXIT_CODE=$?
>
> if [ $NO_DEBUGINFO ] ; then
> @@ -93,9 +101,13 @@ REGEX_STAT_VALUES="\s*\d+\s+probe:$TEST_PROBE"
> # the value should be greater than 1
> REGEX_STAT_VALUE_NONZERO="\s*[1-9][0-9]*\s+probe:$TEST_PROBE"
> REGEX_STAT_TIME="\s*$RE_NUMBER\s+seconds (?:time elapsed|user|sys)"
> -../common/check_all_lines_matched.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
> +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> + "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" \
> + "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
> CHECK_EXIT_CODE=$?
> -../common/check_all_patterns_found.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" < $LOGS_DIR/adding_kernel_using_probe.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" \
> + < $LOGS_DIR/adding_kernel_using_probe.log
> (( CHECK_EXIT_CODE += $? ))
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
> @@ -108,7 +120,8 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
> $CMD_PERF probe -d $TEST_PROBE\* 2> $LOGS_DIR/adding_kernel_removing.err
> PERF_EXIT_CODE=$?
>
> -../common/check_all_lines_matched.pl "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
> +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> + "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
> @@ -121,7 +134,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
> $CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list_removed.log
> PERF_EXIT_CODE=$?
>
> -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_kernel_list_removed.log
> +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> + "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" \
> + < $LOGS_DIR/adding_kernel_list_removed.log
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing removed probe (should NOT be listed)"
> @@ -135,7 +150,9 @@ $CMD_PERF probe -n --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_dryrun.err
> PERF_EXIT_CODE=$?
>
> # check for the output (should be the same as usual)
> -../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_dryrun.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" \
> + < $LOGS_DIR/adding_kernel_dryrun.err
> CHECK_EXIT_CODE=$?
>
> # check that no probe was added in real
> @@ -152,7 +169,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "dry run :: adding probe"
> $CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_01.err
> PERF_EXIT_CODE=$?
>
> -../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_01.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" \
> + < $LOGS_DIR/adding_kernel_forceadd_01.err
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first probe adding"
> @@ -162,7 +181,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first pro
> ! $CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_02.err
> PERF_EXIT_CODE=$?
>
> -../common/check_all_patterns_found.pl "Error: event \"$TEST_PROBE\" already exists." "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "Error: event \"$TEST_PROBE\" already exists." \
> + "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (without force)"
> @@ -173,7 +194,9 @@ NO_OF_PROBES=`$CMD_PERF probe -l $TEST_PROBE| wc -l`
> $CMD_PERF probe --force --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_03.err
> PERF_EXIT_CODE=$?
>
> -../common/check_all_patterns_found.pl "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" \
> + "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (with force)"
> @@ -187,7 +210,9 @@ $CMD_PERF stat -e probe:$TEST_PROBE -e probe:${TEST_PROBE}_${NO_OF_PROBES} -x';'
> PERF_EXIT_CODE=$?
>
> REGEX_LINE="$RE_NUMBER;+probe:${TEST_PROBE}_?(?:$NO_OF_PROBES)?;$RE_NUMBER;$RE_NUMBER"
> -../common/check_all_lines_matched.pl "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/adding_kernel_using_two.log
> +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> + "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" \
> + < $LOGS_DIR/adding_kernel_using_two.log
> CHECK_EXIT_CODE=$?
>
> VALUE_1=`grep "$TEST_PROBE;" $LOGS_DIR/adding_kernel_using_two.log | awk -F';' '{print $1}'`
> @@ -205,7 +230,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using doubled probe"
> $CMD_PERF probe --del \* 2> $LOGS_DIR/adding_kernel_removing_wildcard.err
> PERF_EXIT_CODE=$?
>
> -../common/check_all_patterns_found.pl "Removed event: probe:$TEST_PROBE" "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
> +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> + "Removed event: probe:$TEST_PROBE" \
> + "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
> @@ -217,7 +244,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
> $CMD_PERF probe -nf --max-probes=512 -a 'vfs_* $params' 2> $LOGS_DIR/adding_kernel_adding_wildcard.err
> PERF_EXIT_CODE=$?
>
> -../common/check_all_patterns_found.pl "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" \
> + "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
> CHECK_EXIT_CODE=$?
>
> if [ $NO_DEBUGINFO ] ; then
> @@ -240,13 +269,22 @@ test $PERF_EXIT_CODE -ne 139 -a $PERF_EXIT_CODE -ne 0
> PERF_EXIT_CODE=$?
>
> # check that the error message is reasonable
> -../common/check_all_patterns_found.pl "Failed to find" "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" < $LOGS_DIR/adding_kernel_nonexisting.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "Failed to find" \
> + "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" \
> + < $LOGS_DIR/adding_kernel_nonexisting.err
> CHECK_EXIT_CODE=$?
> -../common/check_all_patterns_found.pl "in this function|at this address" "Error" "Failed to add events" < $LOGS_DIR/adding_kernel_nonexisting.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "in this function|at this address" "Error" "Failed to add events" \
> + < $LOGS_DIR/adding_kernel_nonexisting.err
> (( CHECK_EXIT_CODE += $? ))
> -../common/check_all_lines_matched.pl "Failed to find" "Error" "Probe point .+ not found" "optimized out" "Use.+\-\-range option to show.+location range" < $LOGS_DIR/adding_kernel_nonexisting.err
> +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> + "Failed to find" "Error" "Probe point .+ not found" "optimized out" \
> + "Use.+\-\-range option to show.+location range" \
> + < $LOGS_DIR/adding_kernel_nonexisting.err
> (( CHECK_EXIT_CODE += $? ))
> -../common/check_no_patterns_found.pl "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
> +"$DIR_PATH/../common/check_no_patterns_found.pl" \
> + "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
> (( CHECK_EXIT_CODE += $? ))
>
> if [ $NO_DEBUGINFO ]; then
> @@ -264,7 +302,10 @@ fi
> $CMD_PERF probe --add "$TEST_PROBE%return \$retval" 2> $LOGS_DIR/adding_kernel_func_retval_add.err
> PERF_EXIT_CODE=$?
>
> -../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE%return with \\\$retval" < $LOGS_DIR/adding_kernel_func_retval_add.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "Added new events?:" "probe:$TEST_PROBE" \
> + "on $TEST_PROBE%return with \\\$retval" \
> + < $LOGS_DIR/adding_kernel_func_retval_add.err
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
> @@ -274,7 +315,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
> $CMD_PERF record -e probe:$TEST_PROBE\* -o $CURRENT_TEST_DIR/perf.data -- cat /proc/cpuinfo > /dev/null 2> $LOGS_DIR/adding_kernel_func_retval_record.err
> PERF_EXIT_CODE=$?
>
> -../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/adding_kernel_func_retval_record.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" \
> + < $LOGS_DIR/adding_kernel_func_retval_record.err
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: record"
> @@ -285,9 +328,11 @@ $CMD_PERF script -i $CURRENT_TEST_DIR/perf.data > $LOGS_DIR/adding_kernel_func_r
> PERF_EXIT_CODE=$?
>
> REGEX_SCRIPT_LINE="\s*cat\s+$RE_NUMBER\s+\[$RE_NUMBER\]\s+$RE_NUMBER:\s+probe:$TEST_PROBE\w*:\s+\($RE_NUMBER_HEX\s+<\-\s+$RE_NUMBER_HEX\)\s+arg1=$RE_NUMBER_HEX"
> -../common/check_all_lines_matched.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
> +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> + "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
> CHECK_EXIT_CODE=$?
> -../common/check_all_patterns_found.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
> (( CHECK_EXIT_CODE += $? ))
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function argument probing :: script"
> diff --git a/tools/perf/tests/shell/base_probe/test_basic.sh b/tools/perf/tests/shell/base_probe/test_basic.sh
> index 9d8b5afbeddd..162838ddc974 100755
> --- a/tools/perf/tests/shell/base_probe/test_basic.sh
> +++ b/tools/perf/tests/shell/base_probe/test_basic.sh
> @@ -12,11 +12,12 @@
> # This test tests basic functionality of perf probe command.
> #
>
> -# include working environment
> -. ../common/init.sh
> -
> +DIR_PATH="$(dirname $0)"
> TEST_RESULT=0
>
> +# include working environment
> +. "$DIR_PATH/../common/init.sh"
> +
> if ! check_kprobes_available; then
> print_overall_skipped
> exit 2
> @@ -30,15 +31,25 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
> $CMD_PERF probe --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
> PERF_EXIT_CODE=$?
>
> - ../common/check_all_patterns_found.pl "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" < $LOGS_DIR/basic_helpmsg.log
> + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" \
> + "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" \
> + < $LOGS_DIR/basic_helpmsg.log
> CHECK_EXIT_CODE=$?
> - ../common/check_all_patterns_found.pl "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" < $LOGS_DIR/basic_helpmsg.log
> + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" \
> + < $LOGS_DIR/basic_helpmsg.log
> (( CHECK_EXIT_CODE += $? ))
> - ../common/check_all_patterns_found.pl "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
> + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" \
> + "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
> (( CHECK_EXIT_CODE += $? ))
> - ../common/check_all_patterns_found.pl "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
> + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" \
> + "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
> (( CHECK_EXIT_CODE += $? ))
> - ../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
> + "$DIR_PATH/../common/check_no_patterns_found.pl" \
> + "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
> (( CHECK_EXIT_CODE += $? ))
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
> @@ -53,7 +64,9 @@ fi
> # without any args perf-probe should print usage
> $CMD_PERF probe 2> $LOGS_DIR/basic_usage.log > /dev/null
>
> -../common/check_all_patterns_found.pl "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" \
> + "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
> CHECK_EXIT_CODE=$?
>
> print_results 0 $CHECK_EXIT_CODE "usage message"
> diff --git a/tools/perf/tests/shell/base_probe/test_invalid_options.sh b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
> index 92f7254eb32a..44a3ae014bfa 100755
> --- a/tools/perf/tests/shell/base_probe/test_invalid_options.sh
> +++ b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
> @@ -12,11 +12,12 @@
> # This test checks whether the invalid and incompatible options are reported
> #
>
> -# include working environment
> -. ../common/init.sh
> -
> +DIR_PATH="$(dirname $0)"
> TEST_RESULT=0
>
> +# include working environment
> +. "$DIR_PATH/../common/init.sh"
> +
> if ! check_kprobes_available; then
> print_overall_skipped
> exit 2
> @@ -33,7 +34,9 @@ for opt in '-a' '-d' '-L' '-V'; do
> ! $CMD_PERF probe $opt 2> $LOGS_DIR/invalid_options_missing_argument$opt.err
> PERF_EXIT_CODE=$?
>
> - ../common/check_all_patterns_found.pl "Error: switch .* requires a value" < $LOGS_DIR/invalid_options_missing_argument$opt.err
> + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "Error: switch .* requires a value" \
> + < $LOGS_DIR/invalid_options_missing_argument$opt.err
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "missing argument for $opt"
> @@ -66,7 +69,8 @@ for opt in '-a xxx -d xxx' '-a xxx -L foo' '-a xxx -V foo' '-a xxx -l' '-a xxx -
> ! $CMD_PERF probe $opt > /dev/null 2> $LOGS_DIR/aux.log
> PERF_EXIT_CODE=$?
>
> - ../common/check_all_patterns_found.pl "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
> + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "mutually exclusive options :: $opt"
> diff --git a/tools/perf/tests/shell/base_probe/test_line_semantics.sh b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
> index 20435b6bf6bc..576442d87a44 100755
> --- a/tools/perf/tests/shell/base_probe/test_line_semantics.sh
> +++ b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
> @@ -13,11 +13,12 @@
> # arguments are properly reported.
> #
>
> -# include working environment
> -. ../common/init.sh
> -
> +DIR_PATH="$(dirname $0)"
> TEST_RESULT=0
>
> +# include working environment
> +. "$DIR_PATH/../common/init.sh"
> +
> if ! check_kprobes_available; then
> print_overall_skipped
> exit 2
> diff --git a/tools/perf/tests/shell/base_report/setup.sh b/tools/perf/tests/shell/base_report/setup.sh
> index 8634e7e0dda6..bb49b0fabb11 100755
> --- a/tools/perf/tests/shell/base_report/setup.sh
> +++ b/tools/perf/tests/shell/base_report/setup.sh
> @@ -12,8 +12,10 @@
> #
> #
>
> +DIR_PATH="$(dirname $0)"
> +
> # include working environment
> -. ../common/init.sh
> +. "$DIR_PATH/../common/init.sh"
>
> TEST_RESULT=0
>
> @@ -24,7 +26,8 @@ SW_EVENT="cpu-clock"
> $CMD_PERF record -asdg -e $SW_EVENT -o $CURRENT_TEST_DIR/perf.data -- $CMD_LONGER_SLEEP 2> $LOGS_DIR/setup.log
> PERF_EXIT_CODE=$?
>
> -../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "prepare the perf.data file"
> @@ -38,7 +41,8 @@ echo ==================
> cat $LOGS_DIR/setup-latency.log
> echo ==================
>
> -../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup-latency.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup-latency.log
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "prepare the perf.data.1 file"
> diff --git a/tools/perf/tests/shell/base_report/test_basic.sh b/tools/perf/tests/shell/base_report/test_basic.sh
> index adfd8713b8f8..0dfe7e5fd1ca 100755
> --- a/tools/perf/tests/shell/base_report/test_basic.sh
> +++ b/tools/perf/tests/shell/base_report/test_basic.sh
> @@ -12,11 +12,12 @@
> #
> #
>
> -# include working environment
> -. ../common/init.sh
> -
> +DIR_PATH="$(dirname $0)"
> TEST_RESULT=0
>
> +# include working environment
> +. "$DIR_PATH/../common/init.sh"
> +
>
> ### help message
>
> @@ -25,19 +26,37 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
> $CMD_PERF report --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
> PERF_EXIT_CODE=$?
>
> - ../common/check_all_patterns_found.pl "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
> + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" \
> + "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
> CHECK_EXIT_CODE=$?
> - ../common/check_all_patterns_found.pl "input" "verbose" "show-nr-samples" "show-cpu-utilization" "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" < $LOGS_DIR/basic_helpmsg.log
> + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "input" "verbose" "show-nr-samples" "show-cpu-utilization" \
> + "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" \
> + < $LOGS_DIR/basic_helpmsg.log
> (( CHECK_EXIT_CODE += $? ))
> - ../common/check_all_patterns_found.pl "hide-unresolved" "sort" "fields" "parent" "exclude-other" "column-widths" "field-separator" "dump-raw-trace" "children" < $LOGS_DIR/basic_helpmsg.log
> + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "hide-unresolved" "sort" "fields" "parent" "exclude-other" \
> + "column-widths" "field-separator" "dump-raw-trace" "children" \
> + < $LOGS_DIR/basic_helpmsg.log
> (( CHECK_EXIT_CODE += $? ))
> - ../common/check_all_patterns_found.pl "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" < $LOGS_DIR/basic_helpmsg.log
> + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" \
> + "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" \
> + < $LOGS_DIR/basic_helpmsg.log
> (( CHECK_EXIT_CODE += $? ))
> - ../common/check_all_patterns_found.pl "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" "show-total-period" "show-info" "branch-stack" "group" < $LOGS_DIR/basic_helpmsg.log
> + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" \
> + "show-total-period" "show-info" "branch-stack" "group" \
> + < $LOGS_DIR/basic_helpmsg.log
> (( CHECK_EXIT_CODE += $? ))
> - ../common/check_all_patterns_found.pl "branch-history" "objdump" "demangle" "percent-limit" "percentage" "header" "itrace" "full-source-path" "show-ref-call-graph" < $LOGS_DIR/basic_helpmsg.log
> + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "branch-history" "objdump" "demangle" "percent-limit" "percentage" \
> + "header" "itrace" "full-source-path" "show-ref-call-graph" \
> + < $LOGS_DIR/basic_helpmsg.log
> (( CHECK_EXIT_CODE += $? ))
> - ../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
> + "$DIR_PATH/../common/check_no_patterns_found.pl" \
> + "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
> (( CHECK_EXIT_CODE += $? ))
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
> @@ -57,9 +76,12 @@ REGEX_LOST_SAMPLES_INFO="#\s*Total Lost Samples:\s+$RE_NUMBER"
> REGEX_SAMPLES_INFO="#\s*Samples:\s+(?:$RE_NUMBER)\w?\s+of\s+event\s+'$RE_EVENT_ANY'"
> REGEX_LINES_HEADER="#\s*Children\s+Self\s+Command\s+Shared Object\s+Symbol"
> REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
> -../common/check_all_patterns_found.pl "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" \
> + "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
> CHECK_EXIT_CODE=$?
> -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
> +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> + "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
> (( CHECK_EXIT_CODE += $? ))
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "basic execution"
> @@ -74,9 +96,11 @@ PERF_EXIT_CODE=$?
>
> REGEX_LINES_HEADER="#\s*Children\s+Self\s+Samples\s+Command\s+Shared Object\s+Symbol"
> REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
> -../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
> CHECK_EXIT_CODE=$?
> -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
> +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> + "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
> (( CHECK_EXIT_CODE += $? ))
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "number of samples"
> @@ -98,7 +122,10 @@ REGEX_LINE_CPUS_ONLINE="#\s+nrcpus online\s*:\s*$MY_CPUS_ONLINE"
> REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$MY_CPUS_AVAILABLE"
> # disable precise check for "nrcpus avail" in BASIC runmode
> test $PERFTOOL_TESTSUITE_RUNMODE -lt $RUNMODE_STANDARD && REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$RE_NUMBER"
> -../common/check_all_patterns_found.pl "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" \
> + "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" \
> + "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "header"
> @@ -129,9 +156,11 @@ PERF_EXIT_CODE=$?
>
> REGEX_LINES_HEADER="#\s*Children\s+Self\s+sys\s+usr\s+Command\s+Shared Object\s+Symbol"
> REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
> -../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
> CHECK_EXIT_CODE=$?
> -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
> +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> + "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
> (( CHECK_EXIT_CODE += $? ))
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
> @@ -144,9 +173,11 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
> $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --pid=1 > $LOGS_DIR/basic_pid.log 2> $LOGS_DIR/basic_pid.err
> PERF_EXIT_CODE=$?
>
> -grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "systemd|init"
> +grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | \
> + "$DIR_PATH/../common/check_all_lines_matched.pl" "systemd|init"
> CHECK_EXIT_CODE=$?
> -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
> +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> + "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
> (( CHECK_EXIT_CODE += $? ))
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
> @@ -159,9 +190,11 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
> $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbols=dummynonexistingsymbol > $LOGS_DIR/basic_symbols.log 2> $LOGS_DIR/basic_symbols.err
> PERF_EXIT_CODE=$?
>
> -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
> +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> + "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
> CHECK_EXIT_CODE=$?
> -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
> +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> + "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
> (( CHECK_EXIT_CODE += $? ))
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
> @@ -174,9 +207,11 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
> $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbol-filter=map > $LOGS_DIR/basic_symbolfilter.log 2> $LOGS_DIR/basic_symbolfilter.err
> PERF_EXIT_CODE=$?
>
> -grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "\[[k\.]\]\s+.*map"
> +grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | \
> + "$DIR_PATH/../common/check_all_lines_matched.pl" "\[[k\.]\]\s+.*map"
> CHECK_EXIT_CODE=$?
> -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
> +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> + "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
> (( CHECK_EXIT_CODE += $? ))
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "symbol filter"
> @@ -189,7 +224,8 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "symbol filter"
> $CMD_PERF report -i $CURRENT_TEST_DIR/perf.data.1 --stdio --header-only > $LOGS_DIR/latency_header.log
> PERF_EXIT_CODE=$?
>
> -../common/check_all_patterns_found.pl ", context_switch = 1, " < $LOGS_DIR/latency_header.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + ", context_switch = 1, " < $LOGS_DIR/latency_header.log
> CHECK_EXIT_CODE=$?
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "latency header"
> @@ -200,9 +236,11 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "latency header"
> $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data.1 > $LOGS_DIR/latency_default.log 2> $LOGS_DIR/latency_default.err
> PERF_EXIT_CODE=$?
>
> -../common/check_all_patterns_found.pl "# Overhead Latency Command" < $LOGS_DIR/latency_default.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "# Overhead Latency Command" < $LOGS_DIR/latency_default.log
> CHECK_EXIT_CODE=$?
> -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/latency_default.err
> +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> + "stderr-whitelist.txt" < $LOGS_DIR/latency_default.err
> (( CHECK_EXIT_CODE += $? ))
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "default report for latency profile"
> @@ -213,9 +251,11 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "default report for latency profi
> $CMD_PERF report --latency --stdio -i $CURRENT_TEST_DIR/perf.data.1 > $LOGS_DIR/latency_latency.log 2> $LOGS_DIR/latency_latency.err
> PERF_EXIT_CODE=$?
>
> -../common/check_all_patterns_found.pl "# Latency Overhead Command" < $LOGS_DIR/latency_latency.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "# Latency Overhead Command" < $LOGS_DIR/latency_latency.log
> CHECK_EXIT_CODE=$?
> -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/latency_latency.err
> +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> + "stderr-whitelist.txt" < $LOGS_DIR/latency_latency.err
> (( CHECK_EXIT_CODE += $? ))
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "latency report for latency profile"
> @@ -226,9 +266,12 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "latency report for latency profi
> $CMD_PERF report --hierarchy --sort latency,parallelism,comm,symbol --parallelism=1,2 --stdio -i $CURRENT_TEST_DIR/perf.data.1 > $LOGS_DIR/parallelism_hierarchy.log 2> $LOGS_DIR/parallelism_hierarchy.err
> PERF_EXIT_CODE=$?
>
> -../common/check_all_patterns_found.pl "# Latency Parallelism / Command / Symbol" < $LOGS_DIR/parallelism_hierarchy.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> + "# Latency Parallelism / Command / Symbol" \
> + < $LOGS_DIR/parallelism_hierarchy.log
> CHECK_EXIT_CODE=$?
> -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/parallelism_hierarchy.err
> +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> + "stderr-whitelist.txt" < $LOGS_DIR/parallelism_hierarchy.err
> (( CHECK_EXIT_CODE += $? ))
>
> print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "parallelism histogram"
> diff --git a/tools/perf/tests/shell/common/init.sh b/tools/perf/tests/shell/common/init.sh
> index 26c7525651e0..cbfc78bec974 100644
> --- a/tools/perf/tests/shell/common/init.sh
> +++ b/tools/perf/tests/shell/common/init.sh
> @@ -11,8 +11,8 @@
> #
>
>
> -. ../common/settings.sh
> -. ../common/patterns.sh
> +. "$(dirname $0)/../common/settings.sh"
> +. "$(dirname $0)/../common/patterns.sh"
>
> THIS_TEST_NAME=`basename $0 .sh`
>
> --
> 2.50.1
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v4 2/7] perf tests: Create a structure for shell tests
2025-09-30 16:09 ` [PATCH v4 2/7] perf tests: Create a structure for shell tests Jakub Brnak
@ 2025-09-30 18:49 ` Ian Rogers
0 siblings, 0 replies; 14+ messages in thread
From: Ian Rogers @ 2025-09-30 18:49 UTC (permalink / raw)
To: Jakub Brnak; +Cc: namhyung, acme, acme, linux-perf-users, mpetlan, vmolnaro
On Tue, Sep 30, 2025 at 9:09 AM Jakub Brnak <jbrnak@redhat.com> wrote:
>
> From: Veronika Molnarova <vmolnaro@redhat.com>
>
> The general structure of test suites with test cases has been implemented
> for C tests for some time, while shell tests were all put into a flat
> list without any possibility of structuring.
>
> Provide the same possibility of test suite structure for shell tests. The
> suite is created for each subdirectory located in the 'perf/tests/shell'
> directory that contains at least one test script. All of the deeper levels
> of subdirectories will be merged with the first level of test cases.
> The name of the test suite is the name of the subdirectory where the
> test cases are located. For all of the test scripts that are not in any
> subdirectory, a test suite with a single test case is created, as has
> been done until now.
>
> The new structure of the shell tests for 'perf test list':
> 77: build id cache operations
> 78: coresight
> 78:1: CoreSight / ASM Pure Loop
> 78:2: CoreSight / Memcpy 16k 10 Threads
> 78:3: CoreSight / Thread Loop 10 Threads - Check TID
> 78:4: CoreSight / Thread Loop 2 Threads - Check TID
> 78:5: CoreSight / Unroll Loop Thread 10
> 79: daemon operations
> 80: perf diff tests
>
> Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
> ---
> tools/perf/tests/tests-scripts.c | 229 ++++++++++++++++++++++++++-----
> tools/perf/tests/tests-scripts.h | 4 +
> 2 files changed, 195 insertions(+), 38 deletions(-)
>
> diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
> index f18c4cd337c8..e47f7eb50a73 100644
> --- a/tools/perf/tests/tests-scripts.c
> +++ b/tools/perf/tests/tests-scripts.c
> @@ -151,14 +151,47 @@ static char *strdup_check(const char *str)
> return newstr;
> }
>
> -static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
> +/* Free the whole structure of test_suite with its test_cases */
> +static void free_suite(struct test_suite *suite)
> {
> - const char *file = test->priv;
> + if (suite->test_cases) {
> + int num = 0;
> +
> + while (suite->test_cases[num].name) { /* Last case has name set to NULL */
> + free((void *) suite->test_cases[num].name);
> + free((void *) suite->test_cases[num].desc);
> + num++;
> + }
> + free(suite->test_cases);
> + }
> + if (suite->desc)
> + free((void *) suite->desc);
> + if (suite->priv) {
> + struct shell_info *test_info = suite->priv;
> +
> + free((void *) test_info->base_path);
> + free(test_info);
> + }
> +
> + free(suite);
> +}
> +
> +static int shell_test__run(struct test_suite *test, int subtest)
> +{
> + const char *file;
> int err;
> char *cmd = NULL;
>
> + /* Get absolute file path */
> + if (subtest >= 0) {
> + file = test->test_cases[subtest].name;
> + } else { /* Single test case */
> + file = test->test_cases[0].name;
> + }
nit: I think style wise the curlies shouldn't be here:
https://www.kernel.org/doc/html/v4.10/process/coding-style.html#placing-braces-and-spaces
> +
> if (asprintf(&cmd, "%s%s", file, verbose ? " -v" : "") < 0)
> return TEST_FAIL;
> +
> err = system(cmd);
> free(cmd);
> if (!err)
> @@ -167,63 +200,155 @@ static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
> return WEXITSTATUS(err) == 2 ? TEST_SKIP : TEST_FAIL;
> }
>
> -static void append_script(int dir_fd, const char *name, char *desc,
> - struct test_suite ***result,
> - size_t *result_sz)
> +static struct test_suite *prepare_test_suite(int dir_fd)
> {
> - char filename[PATH_MAX], link[128];
> - struct test_suite *test_suite, **result_tmp;
> - struct test_case *tests;
> + char dirpath[PATH_MAX], link[128];
> ssize_t len;
> - char *exclusive;
> + struct test_suite *test_suite = NULL;
> + struct shell_info *test_info;
>
> + /* Get dir absolute path */
> snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
> - len = readlink(link, filename, sizeof(filename));
> + len = readlink(link, dirpath, sizeof(dirpath));
> if (len < 0) {
> pr_err("Failed to readlink %s", link);
> - return;
> + return NULL;
> }
> - filename[len++] = '/';
> - strcpy(&filename[len], name);
> + dirpath[len++] = '/';
> + dirpath[len] = '\0';
>
> - tests = calloc(2, sizeof(*tests));
> - if (!tests) {
> - pr_err("Out of memory while building script test suite list\n");
> - return;
> - }
> - tests[0].name = strdup_check(name);
> - exclusive = strstr(desc, " (exclusive)");
> - if (exclusive != NULL) {
> - tests[0].exclusive = true;
> - exclusive[0] = '\0';
> - }
> - tests[0].desc = strdup_check(desc);
> - tests[0].run_case = shell_test__run;
> test_suite = zalloc(sizeof(*test_suite));
> if (!test_suite) {
> pr_err("Out of memory while building script test suite list\n");
> - free(tests);
> - return;
> + return NULL;
> }
> - test_suite->desc = desc;
> - test_suite->test_cases = tests;
> - test_suite->priv = strdup_check(filename);
> +
> + test_info = zalloc(sizeof(*test_info));
> + if (!test_info) {
> + pr_err("Out of memory while building script test suite list\n");
> + return NULL;
> + }
> +
> + test_info->base_path = strdup_check(dirpath); /* Absolute path to dir */
> +
> + test_suite->priv = test_info;
> + test_suite->desc = NULL;
> + test_suite->test_cases = NULL;
> +
> + return test_suite;
> +}
> +
> +static void append_suite(struct test_suite ***result,
> + size_t *result_sz, struct test_suite *test_suite)
> +{
> + struct test_suite **result_tmp;
> +
> /* Realloc is good enough, though we could realloc by chunks, not that
> * anyone will ever measure performance here */
> result_tmp = realloc(*result, (*result_sz + 1) * sizeof(*result_tmp));
> if (result_tmp == NULL) {
> pr_err("Out of memory while building script test suite list\n");
> - free(tests);
> - free(test_suite);
> + free_suite(test_suite);
> return;
> }
> +
> /* Add file to end and NULL terminate the struct array */
> *result = result_tmp;
> (*result)[*result_sz] = test_suite;
> (*result_sz)++;
> }
>
> -static void append_scripts_in_dir(int dir_fd,
> +static void append_script_to_suite(int dir_fd, const char *name, char *desc,
> + struct test_suite *test_suite, size_t *tc_count)
> +{
> + char file_name[PATH_MAX], link[128];
> + struct test_case *tests;
> + size_t len;
> + char *exclusive;
> +
> + if (!test_suite)
> + return;
> +
> + /* Requires an empty test case at the end */
> + tests = realloc(test_suite->test_cases, (*tc_count + 2) * sizeof(*tests));
> + if (!tests) {
> + pr_err("Out of memory while building script test suite list\n");
> + return;
> + }
> +
> + /* Get path to the test script */
> + snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
> + len = readlink(link, file_name, sizeof(file_name));
> + if (len < 0) {
> + pr_err("Failed to readlink %s", link);
> + return;
> + }
> + file_name[len++] = '/';
> + strcpy(&file_name[len], name);
> +
> + /* Get path to the script from base dir */
> + tests[(*tc_count)].name = strdup_check(file_name);
> + tests[(*tc_count)].exclusive = false;
> + exclusive = strstr(desc, " (exclusive)");
> + if (exclusive != NULL) {
> + tests[(*tc_count)].exclusive = true;
> + exclusive[0] = '\0';
> + }
> + tests[(*tc_count)].desc = desc;
> + tests[(*tc_count)].skip_reason = NULL; /* Unused */
> + tests[(*tc_count)++].run_case = shell_test__run;
> +
> + tests[(*tc_count)].name = NULL; /* End the test cases */
> +
> + test_suite->test_cases = tests;
> +}
> +
> +static void append_scripts_in_subdir(int dir_fd,
> + struct test_suite *suite,
> + size_t *tc_count)
> +{
> + struct dirent **entlist;
> + struct dirent *ent;
> + int n_dirs, i;
> +
> + /* List files, sorted by alpha */
> + n_dirs = scandirat(dir_fd, ".", &entlist, NULL, alphasort);
> + if (n_dirs == -1)
> + return;
> + for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
> + int fd;
> +
> + if (ent->d_name[0] == '.')
> + continue; /* Skip hidden files */
> + if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
> + char *desc = shell_test__description(dir_fd, ent->d_name);
> +
> + if (desc) /* It has a desc line - valid script */
> + append_script_to_suite(dir_fd, ent->d_name, desc, suite, tc_count);
> + continue;
> + }
> +
> + if (ent->d_type != DT_DIR) {
> + struct stat st;
> +
> + if (ent->d_type != DT_UNKNOWN)
> + continue;
> + fstatat(dir_fd, ent->d_name, &st, 0);
> + if (!S_ISDIR(st.st_mode))
> + continue;
Fwiw, there is already logic doing something like this here:
https://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/lib/api/io_dir.h?h=perf-tools-next#n89
but that is io_dir rather than scandir, so the entries are not suitably
sorted. I wonder if we should add a helper, as this check seems to be
common in this code?
> + }
> +
> + fd = openat(dir_fd, ent->d_name, O_PATH);
> +
> + /* Recurse into the dir */
> + append_scripts_in_subdir(fd, suite, tc_count);
> + }
> + for (i = 0; i < n_dirs; i++) /* Clean up */
> + zfree(&entlist[i]);
> + free(entlist);
> +}
> +
> +static void append_suites_in_dir(int dir_fd,
> struct test_suite ***result,
> size_t *result_sz)
> {
> @@ -237,16 +362,29 @@ static void append_scripts_in_dir(int dir_fd,
> return;
> for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
> int fd;
> + struct test_suite *test_suite;
> + size_t cases_count = 0;
>
> if (ent->d_name[0] == '.')
> continue; /* Skip hidden files */
> if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
> char *desc = shell_test__description(dir_fd, ent->d_name);
>
> - if (desc) /* It has a desc line - valid script */
> - append_script(dir_fd, ent->d_name, desc, result, result_sz);
> + if (desc) { /* It has a desc line - valid script */
> + /* Create a test suite with a single test case */
> + test_suite = prepare_test_suite(dir_fd);
> + append_script_to_suite(dir_fd, ent->d_name, desc,
> + test_suite, &cases_count);
> + test_suite->desc = strdup_check(desc);
> +
> + if (cases_count)
> + append_suite(result, result_sz, test_suite);
> + else /* Wasn't able to create the test case */
> + free_suite(test_suite);
> + }
> continue;
> }
> +
> if (ent->d_type != DT_DIR) {
> struct stat st;
>
> @@ -258,8 +396,23 @@ static void append_scripts_in_dir(int dir_fd,
> }
> if (strncmp(ent->d_name, "base_", 5) == 0)
> continue; /* Skip scripts that have a separate driver. */
> +
> + /* Scan subdir for test cases*/
> fd = openat(dir_fd, ent->d_name, O_PATH);
> - append_scripts_in_dir(fd, result, result_sz);
> + test_suite = prepare_test_suite(fd); /* Prepare a testsuite with its path */
> + if (!test_suite)
> + continue;
> +
> + append_scripts_in_subdir(fd, test_suite, &cases_count);
> + if (cases_count == 0) {
> + free_suite(test_suite);
> + continue;
> + }
> +
> + /* If no setup, set name to the directory */
> + test_suite->desc = strdup_check(ent->d_name);
> +
> + append_suite(result, result_sz, test_suite);
> close(fd);
> }
> for (i = 0; i < n_dirs; i++) /* Clean up */
> @@ -278,7 +431,7 @@ struct test_suite **create_script_test_suites(void)
> * length array.
> */
> if (dir_fd >= 0)
> - append_scripts_in_dir(dir_fd, &result, &result_sz);
> + append_suites_in_dir(dir_fd, &result, &result_sz);
>
> result_tmp = realloc(result, (result_sz + 1) * sizeof(*result_tmp));
> if (result_tmp == NULL) {
> diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
> index b553ad26ea17..60a1a19a45c9 100644
> --- a/tools/perf/tests/tests-scripts.h
> +++ b/tools/perf/tests/tests-scripts.h
> @@ -4,6 +4,10 @@
>
> #include "tests.h"
>
> +struct shell_info {
Maybe shell_test_info with a comment saying that this is the
additional information attached to shell tests?
Thanks,
Ian
> + const char *base_path;
> +};
> +
> struct test_suite **create_script_test_suites(void);
>
> #endif /* TESTS_SCRIPTS_H */
> --
> 2.50.1
>
* Re: [PATCH v4 3/7] perf test: Provide setup for the shell test suite
2025-09-30 16:09 ` [PATCH v4 3/7] perf test: Provide setup for the shell test suite Jakub Brnak
@ 2025-09-30 18:51 ` Ian Rogers
0 siblings, 0 replies; 14+ messages in thread
From: Ian Rogers @ 2025-09-30 18:51 UTC (permalink / raw)
To: Jakub Brnak; +Cc: namhyung, acme, acme, linux-perf-users, mpetlan, vmolnaro
On Tue, Sep 30, 2025 at 9:09 AM Jakub Brnak <jbrnak@redhat.com> wrote:
>
> From: Veronika Molnarova <vmolnaro@redhat.com>
>
> Some of the perftool-testsuite test cases require a setup to be done
> beforehand, such as recording data, setting up a cache, or restoring the
> sample rate. The setup file also provides the possibility to set the
> name of the test suite if the name of the directory is not good enough.
>
> Check for the existence of the "setup.sh" script for the shell test
> suites and run it before any of the test cases. If the setup fails,
> skip all of the test cases of the test suite, as the setup may be
> required for the results to be valid.
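To make the mechanism concrete, a minimal hypothetical setup.sh under this scheme could look like the sketch below. The file names are made up; only the exit-status contract matters — a non-zero exit makes the harness skip the whole suite:

```shell
#!/bin/sh
# Hypothetical setup.sh: prepare state shared by the suite's test cases.
# Any non-zero exit tells the harness to skip every case in this suite.

WORK_DIR="$(mktemp -d)" || exit 1

# Stand-in for a real preparation step, e.g. recording a perf.data file
echo "sample" > "$WORK_DIR/perf.data.stub" || exit 1

echo "setup done"
exit 0
```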
>
> Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Thanks,
Ian
> ---
> tools/perf/tests/builtin-test.c | 30 +++++++++++++++++++-----
> tools/perf/tests/tests-scripts.c | 39 +++++++++++++++++++++++++++++---
> tools/perf/tests/tests-scripts.h | 10 ++++++++
> tools/perf/tests/tests.h | 8 ++++---
> 4 files changed, 75 insertions(+), 12 deletions(-)
>
> diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
> index 85142dfb3e01..6fc031ef50ea 100644
> --- a/tools/perf/tests/builtin-test.c
> +++ b/tools/perf/tests/builtin-test.c
> @@ -258,6 +258,22 @@ static test_fnptr test_function(const struct test_suite *t, int test_case)
> return t->test_cases[test_case].run_case;
> }
>
> +/* If setup fails, skip all test cases */
> +static void check_shell_setup(const struct test_suite *t, int ret)
> +{
> + struct shell_info *test_info;
> +
> + if (!t->priv)
> + return;
> +
> + test_info = t->priv;
> +
> + if (ret == TEST_SETUP_FAIL)
> + test_info->has_setup = FAILED_SETUP;
> + else if (test_info->has_setup == RUN_SETUP)
> + test_info->has_setup = PASSED_SETUP;
> +}
> +
> static bool test_exclusive(const struct test_suite *t, int test_case)
> {
> if (test_case <= 0)
> @@ -347,10 +363,9 @@ static int run_test_child(struct child_process *process)
> return -err;
> }
>
> -#define TEST_RUNNING -3
> -
> -static int print_test_result(struct test_suite *t, int curr_suite, int curr_test_case,
> - int result, int width, int running)
> +static int print_test_result(struct test_suite *t, int curr_suite,
> + int curr_test_case, int result, int width,
> + int running)
> {
> if (test_suite__num_test_cases(t) > 1) {
> int subw = width > 2 ? width - 2 : width;
> @@ -367,7 +382,8 @@ static int print_test_result(struct test_suite *t, int curr_suite, int curr_test
> case TEST_OK:
> pr_info(" Ok\n");
> break;
> - case TEST_SKIP: {
> + case TEST_SKIP:
> + case TEST_SETUP_FAIL:{
> const char *reason = skip_reason(t, curr_test_case);
>
> if (reason)
> @@ -482,6 +498,7 @@ static void finish_test(struct child_test **child_tests, int running_test, int c
> }
> /* Clean up child process. */
> ret = finish_command(&child_test->process);
> + check_shell_setup(t, ret);
> if (verbose > 1 || (verbose == 1 && ret == TEST_FAIL))
> fprintf(stderr, "%s", err_output.buf);
>
> @@ -504,7 +521,8 @@ static int start_test(struct test_suite *test, int curr_suite, int curr_test_cas
> err = test_function(test, curr_test_case)(test, curr_test_case);
> pr_debug("---- end ----\n");
> print_test_result(test, curr_suite, curr_test_case, err, width,
> - /*running=*/0);
> + /*running=*/0);
> + check_shell_setup(test, err);
> }
> return 0;
> }
> diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
> index e47f7eb50a73..10aab7c19ffe 100644
> --- a/tools/perf/tests/tests-scripts.c
> +++ b/tools/perf/tests/tests-scripts.c
> @@ -138,6 +138,12 @@ static bool is_test_script(int dir_fd, const char *name)
> return is_shell_script(dir_fd, name);
> }
>
> +/* Filter for scandir */
> +static int setup_filter(const struct dirent *entry)
> +{
> + return strcmp(entry->d_name, SHELL_SETUP);
> +}
> +
> /* Duplicate a string and fall over and die if we run out of memory */
> static char *strdup_check(const char *str)
> {
> @@ -178,6 +184,7 @@ static void free_suite(struct test_suite *suite)
>
> static int shell_test__run(struct test_suite *test, int subtest)
> {
> + struct shell_info *test_info = test->priv;
> const char *file;
> int err;
> char *cmd = NULL;
> @@ -189,6 +196,23 @@ static int shell_test__run(struct test_suite *test, int subtest)
> file = test->test_cases[0].name;
> }
>
> + /* Run setup if needed */
> + if (test_info->has_setup == RUN_SETUP) {
> + char *setup_script;
> +
> + if (asprintf(&setup_script, "%s%s%s", test_info->base_path,
> + SHELL_SETUP, verbose ? " -v" : "") < 0)
> + return TEST_SETUP_FAIL;
> +
> + err = system(setup_script);
> + free(setup_script);
> +
> + if (err)
> + return TEST_SETUP_FAIL;
> + } else if (test_info->has_setup == FAILED_SETUP) {
> + return TEST_SKIP; /* Skip test suite if setup failed */
> + }
> +
> if (asprintf(&cmd, "%s%s", file, verbose ? " -v" : "") < 0)
> return TEST_FAIL;
>
> @@ -230,6 +254,7 @@ static struct test_suite *prepare_test_suite(int dir_fd)
> }
>
> test_info->base_path = strdup_check(dirpath); /* Absolute path to dir */
> + test_info->has_setup = NO_SETUP;
>
> test_suite->priv = test_info;
> test_suite->desc = NULL;
> @@ -312,7 +337,7 @@ static void append_scripts_in_subdir(int dir_fd,
> int n_dirs, i;
>
> /* List files, sorted by alpha */
> - n_dirs = scandirat(dir_fd, ".", &entlist, NULL, alphasort);
> + n_dirs = scandirat(dir_fd, ".", &entlist, setup_filter, alphasort);
> if (n_dirs == -1)
> return;
> for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
> @@ -409,8 +434,16 @@ static void append_suites_in_dir(int dir_fd,
> continue;
> }
>
> - /* If no setup, set name to the directory */
> - test_suite->desc = strdup_check(ent->d_name);
> + if (is_test_script(fd, SHELL_SETUP)) { /* Check for setup existence */
> + char *desc = shell_test__description(fd, SHELL_SETUP);
> +
> + /* Set the suite name by the setup description */
> + test_suite->desc = desc;
> + ((struct shell_info *)(test_suite->priv))->has_setup = RUN_SETUP;
> + } else {
> + /* If no setup, set name to the directory */
> + test_suite->desc = strdup_check(ent->d_name);
> + }
>
> append_suite(result, result_sz, test_suite);
> close(fd);
> diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
> index 60a1a19a45c9..da4dcd26140c 100644
> --- a/tools/perf/tests/tests-scripts.h
> +++ b/tools/perf/tests/tests-scripts.h
> @@ -4,8 +4,18 @@
>
> #include "tests.h"
>
> +#define SHELL_SETUP "setup.sh"
> +
> +enum shell_setup {
> + NO_SETUP = 0,
> + RUN_SETUP = 1,
> + FAILED_SETUP = 2,
> + PASSED_SETUP = 3,
> +};
> +
> struct shell_info {
> const char *base_path;
> + enum shell_setup has_setup;
> };
>
> struct test_suite **create_script_test_suites(void);
> diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h
> index 97e62db8764a..9f3e3b90f1ac 100644
> --- a/tools/perf/tests/tests.h
> +++ b/tools/perf/tests/tests.h
> @@ -6,9 +6,11 @@
> #include "util/debug.h"
>
> enum {
> - TEST_OK = 0,
> - TEST_FAIL = -1,
> - TEST_SKIP = -2,
> + TEST_OK = 0,
> + TEST_FAIL = -1,
> + TEST_SKIP = -2,
> + TEST_RUNNING = -3,
> + TEST_SETUP_FAIL = -4,
> };
>
> #define TEST_ASSERT_VAL(text, cond) \
> --
> 2.50.1
>
* Re: [PATCH v4 4/7] perftool-testsuite: Add empty setup for base_probe
2025-09-30 16:09 ` [PATCH v4 4/7] perftool-testsuite: Add empty setup for base_probe Jakub Brnak
@ 2025-09-30 18:52 ` Ian Rogers
0 siblings, 0 replies; 14+ messages in thread
From: Ian Rogers @ 2025-09-30 18:52 UTC (permalink / raw)
To: Jakub Brnak; +Cc: namhyung, acme, acme, linux-perf-users, mpetlan, vmolnaro
On Tue, Sep 30, 2025 at 9:09 AM Jakub Brnak <jbrnak@redhat.com> wrote:
>
> From: Veronika Molnarova <vmolnaro@redhat.com>
>
> Add an empty setup script to set a proper name for the base_probe test
> suite; it can be utilized for basic test setup in the future.
>
> Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Thanks,
Ian
> ---
> tools/perf/tests/shell/base_probe/setup.sh | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
> create mode 100755 tools/perf/tests/shell/base_probe/setup.sh
>
> diff --git a/tools/perf/tests/shell/base_probe/setup.sh b/tools/perf/tests/shell/base_probe/setup.sh
> new file mode 100755
> index 000000000000..fbb99325b555
> --- /dev/null
> +++ b/tools/perf/tests/shell/base_probe/setup.sh
> @@ -0,0 +1,13 @@
> +#!/bin/bash
> +# perftool-testsuite :: perf_probe
> +# SPDX-License-Identifier: GPL-2.0
> +
> +#
> +# setup.sh of perf probe test
> +# Author: Michael Petlan <mpetlan@redhat.com>
> +#
> +# Description:
> +#
> +# Setting testsuite name, for future use
> +#
> +#
> --
> 2.50.1
>
^ permalink raw reply [flat|nested] 14+ messages in thread
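The header of the setup.sh above doubles as the suite description that the test harness picks up. A minimal shell sketch of that convention (a hypothetical standalone demo that assumes the description is the comment on the second line; the actual parsing in perf is done in C and may differ):

```shell
# Create a file shaped like the setup.sh above, then extract the
# description from its second-line comment (hypothetical demo).
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
#!/bin/bash
# perftool-testsuite :: perf_probe
# SPDX-License-Identifier: GPL-2.0
EOF
desc=$(sed -n '2s/^# *//p' "$tmp")
echo "$desc"
rm -f "$tmp"
```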
* Re: [PATCH v4 5/7] perf test: Introduce storing logs for shell tests
2025-09-30 16:09 ` [PATCH v4 5/7] perf test: Introduce storing logs for shell tests Jakub Brnak
@ 2025-09-30 19:00 ` Ian Rogers
0 siblings, 0 replies; 14+ messages in thread
From: Ian Rogers @ 2025-09-30 19:00 UTC (permalink / raw)
To: Jakub Brnak; +Cc: namhyung, acme, acme, linux-perf-users, mpetlan, vmolnaro
On Tue, Sep 30, 2025 at 9:09 AM Jakub Brnak <jbrnak@redhat.com> wrote:
>
> From: Veronika Molnarova <vmolnaro@redhat.com>
>
> Create temporary directories for storing log files for shell tests,
> which can help while debugging. The log files are also necessary for
> the perftool-testsuite test cases. If the variable PERFTEST_KEEP_LOGS
> is set to "y", keep the logs; otherwise delete them.
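The keep-or-delete behavior described in the commit message can be sketched in plain shell (a hypothetical standalone demo of the intended semantics, not the C implementation in the patch below):

```shell
#!/bin/bash
# Demo of the PERFTEST_KEEP_LOGS semantics: logs are kept only when
# the variable is set to exactly "y"; otherwise the log dir is removed.
tmpdir=$(mktemp -d /tmp/perf_test_demo.XXXXXX)
echo "some test output" > "$tmpdir/run.log"

if [ "${PERFTEST_KEEP_LOGS:-}" = "y" ]; then
    echo "logs kept in $tmpdir"
else
    rm -rf "$tmpdir"
    echo "logs removed"
fi
```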
I think log files for tests are a good idea. We run tests in parallel
but read the stdout/stderr sequentially. For a failing test we may
fail to read stdout/stderr in some situations as the process has long
since died. Rather than add log files specifically for shell tests
perhaps update start_test to pass the file descriptors to the child
test processes and then clean up in finish_test?
https://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/tests/builtin-test.c?h=perf-tools-next#n528
Thanks,
Ian
> Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
> ---
> tools/perf/tests/builtin-test.c | 91 ++++++++++++++++++++++++++++++++
> tools/perf/tests/tests-scripts.c | 3 ++
> tools/perf/tests/tests-scripts.h | 1 +
> 3 files changed, 95 insertions(+)
>
> diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
> index 6fc031ef50ea..a943f66cbac0 100644
> --- a/tools/perf/tests/builtin-test.c
> +++ b/tools/perf/tests/builtin-test.c
> @@ -6,6 +6,7 @@
> */
> #include <ctype.h>
> #include <fcntl.h>
> +#include <ftw.h>
> #include <errno.h>
> #ifdef HAVE_BACKTRACE_SUPPORT
> #include <execinfo.h>
> @@ -282,6 +283,85 @@ static bool test_exclusive(const struct test_suite *t, int test_case)
> return t->test_cases[test_case].exclusive;
> }
>
> +static int delete_file(const char *fpath, const struct stat *sb __maybe_unused,
> + int typeflag, struct FTW *ftwbuf)
> +{
> + int rv = -1;
> +
> + /* Stop traversal if going too deep */
> + if (ftwbuf->level > 5) {
> + pr_err("Tree traversal reached level %d, stopping.", ftwbuf->level);
> + return rv;
> + }
> +
> + /* Remove only expected directories */
> + if (typeflag == FTW_D || typeflag == FTW_DP) {
> + const char *dirname = fpath + ftwbuf->base;
> +
> + if (strcmp(dirname, "logs") && strcmp(dirname, "examples") &&
> + strcmp(dirname, "header_tar") && strncmp(dirname, "perf_", 5)) {
> + pr_err("Unknown directory %s", dirname);
> + return rv;
> + }
> + }
> +
> + /* Attempt to remove the file */
> + rv = remove(fpath);
> + if (rv)
> + pr_err("Failed to remove file: %s", fpath);
> +
> + return rv;
> +}
> +
> +static bool create_logs(struct test_suite *t, int pass)
> +{
> + bool store_logs = t->priv && ((struct shell_info *)(t->priv))->store_logs;
> +
> + if (pass == 1 && (!test_exclusive(t, 0) || sequential || dont_fork)) {
> + /* Sequential and non-exclusive tests run on the first pass. */
> + return store_logs;
> + } else if (pass != 1 && test_exclusive(t, 0) && !sequential && !dont_fork) {
> + /* Exclusive tests without sequential run on the second pass. */
> + return store_logs;
> + }
> + return false;
> +}
> +
> +static char *setup_shell_logs(const char *name)
> +{
> + char template[PATH_MAX];
> + char *temp_dir;
> +
> + if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", name) < 0) {
> + pr_err("Failed to create log dir template");
> + return NULL; /* Skip the testsuite */
> + }
> +
> + temp_dir = mkdtemp(template);
> + if (temp_dir) {
> + setenv("PERFSUITE_RUN_DIR", temp_dir, 1);
> + return strdup(temp_dir);
> + }
> +
> + pr_err("Failed to create the temporary directory");
> +
> + return NULL; /* Skip the testsuite */
> +}
> +
> +static void cleanup_shell_logs(char *dirname)
> +{
> + char *keep_logs = getenv("PERFTEST_KEEP_LOGS");
> +
> + /* Check if logs should be kept or do cleanup */
> + if (dirname) {
> + if (!keep_logs || strcmp(keep_logs, "y") != 0)
> + nftw(dirname, delete_file, 8, FTW_DEPTH | FTW_PHYS);
> + free(dirname);
> + }
> +
> + unsetenv("PERFSUITE_RUN_DIR");
> +}
> +
> static bool perf_test__matches(const char *desc, int suite_num, int argc, const char *argv[])
> {
> int i;
> @@ -628,6 +708,7 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
> for (struct test_suite **t = suites; *t; t++, curr_suite++) {
> int curr_test_case;
> bool suite_matched = false;
> + char *tmpdir = NULL;
>
> if (!perf_test__matches(test_description(*t, -1), curr_suite, argc, argv)) {
> /*
> @@ -657,6 +738,15 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
> }
>
> for (unsigned int run = 0; run < runs_per_test; run++) {
> + /* Setup temporary log directories for shell test suites */
> + if (create_logs(*t, pass)) {
> + tmpdir = setup_shell_logs((*t)->desc);
> +
> + /* Couldn't create log dir, skip test suite */
> + if (tmpdir == NULL)
> + ((struct shell_info *)((*t)->priv))->has_setup =
> + FAILED_SETUP;
> + }
> test_suite__for_each_test_case(*t, curr_test_case) {
> if (!suite_matched &&
> !perf_test__matches(test_description(*t, curr_test_case),
> @@ -669,6 +759,7 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
> goto err_out;
> }
> }
> + cleanup_shell_logs(tmpdir);
> }
> if (!sequential) {
> /* Parallel mode starts tests but doesn't finish them. Do that now. */
> diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
> index 10aab7c19ffe..9b4782bc1767 100644
> --- a/tools/perf/tests/tests-scripts.c
> +++ b/tools/perf/tests/tests-scripts.c
> @@ -255,6 +255,7 @@ static struct test_suite *prepare_test_suite(int dir_fd)
>
> test_info->base_path = strdup_check(dirpath); /* Absolute path to dir */
> test_info->has_setup = NO_SETUP;
> + test_info->store_logs = false;
>
> test_suite->priv = test_info;
> test_suite->desc = NULL;
> @@ -434,6 +435,8 @@ static void append_suites_in_dir(int dir_fd,
> continue;
> }
>
> + /* Store logs for testsuites in sub-directories */
> + ((struct shell_info *)(test_suite->priv))->store_logs = true;
> if (is_test_script(fd, SHELL_SETUP)) { /* Check for setup existence */
> char *desc = shell_test__description(fd, SHELL_SETUP);
>
> diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
> index da4dcd26140c..41da0a175e4e 100644
> --- a/tools/perf/tests/tests-scripts.h
> +++ b/tools/perf/tests/tests-scripts.h
> @@ -16,6 +16,7 @@ enum shell_setup {
> struct shell_info {
> const char *base_path;
> enum shell_setup has_setup;
> + bool store_logs;
> };
>
> struct test_suite **create_script_test_suites(void);
> --
> 2.50.1
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v4 1/7] perf test perftool_testsuite: Use absolute paths
2025-09-30 18:28 ` Ian Rogers
@ 2025-10-01 12:37 ` Arnaldo Carvalho de Melo
0 siblings, 0 replies; 14+ messages in thread
From: Arnaldo Carvalho de Melo @ 2025-10-01 12:37 UTC (permalink / raw)
To: Ian Rogers
Cc: Jakub Brnak, namhyung, acme, linux-perf-users, mpetlan, vmolnaro
On Tue, Sep 30, 2025 at 11:28:33AM -0700, Ian Rogers wrote:
> On Tue, Sep 30, 2025 at 9:09 AM Jakub Brnak <jbrnak@redhat.com> wrote:
> >
> > From: Veronika Molnarova <vmolnaro@redhat.com>
> >
> > Test cases from perftool_testsuite are affected by the current
> > directory where the tests are run. For this reason, the test
> > driver has to change the directory to the base_dir for references to
> > work correctly.
> >
> > Utilize absolute paths when sourcing and referencing other scripts so
> > that the current working directory doesn't impact the test cases.
> >
> > Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> > Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> > Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
>
> Reviewed-by: Ian Rogers <irogers@google.com>
Thanks, applied to perf-tools-next,
- Arnaldo
> Although not changed here, any chance of making the naming of the file
> and variables better fitting to inclusive language guidelines?
> https://www.aswf.io/inclusive-language-guide/
>
> Thanks,
> Ian
>
> > ---
> > .../base_probe/test_adding_blacklisted.sh | 20 +++-
> > .../shell/base_probe/test_adding_kernel.sh | 97 ++++++++++++-----
> > .../perf/tests/shell/base_probe/test_basic.sh | 31 ++++--
> > .../shell/base_probe/test_invalid_options.sh | 14 ++-
> > .../shell/base_probe/test_line_semantics.sh | 7 +-
> > tools/perf/tests/shell/base_report/setup.sh | 10 +-
> > .../tests/shell/base_report/test_basic.sh | 103 +++++++++++++-----
> > tools/perf/tests/shell/common/init.sh | 4 +-
> > 8 files changed, 202 insertions(+), 84 deletions(-)
> >
> > diff --git a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
> > index 8226449ac5c3..f74aab5c5d7f 100755
> > --- a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
> > +++ b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
> > @@ -13,11 +13,12 @@
> > # they must be skipped.
> > #
> >
> > -# include working environment
> > -. ../common/init.sh
> > -
> > +DIR_PATH="$(dirname $0)"
> > TEST_RESULT=0
> >
> > +# include working environment
> > +. "$DIR_PATH/../common/init.sh"
> > +
> > # skip if not supported
> > BLACKFUNC_LIST=`head -n 5 /sys/kernel/debug/kprobes/blacklist 2> /dev/null | cut -f2`
> > if [ -z "$BLACKFUNC_LIST" ]; then
> > @@ -53,7 +54,8 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
> > PERF_EXIT_CODE=$?
> >
> > # check for bad DWARF polluting the result
> > - ../common/check_all_patterns_found.pl "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
> > + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
> >
> > if [ $? -eq 0 ]; then
> > SKIP_DWARF=1
> > @@ -73,7 +75,11 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
> > fi
> > fi
> > else
> > - ../common/check_all_lines_matched.pl "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
> > + "$DIR_PATH/../common/check_all_lines_matched.pl" \
> > + "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" \
> > + "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" \
> > + "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" \
> > + "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
> > CHECK_EXIT_CODE=$?
> >
> > SKIP_DWARF=0
> > @@ -94,7 +100,9 @@ fi
> > $CMD_PERF list probe:\* > $LOGS_DIR/adding_blacklisted_list.log
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_blacklisted_list.log
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> > + "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" \
> > + < $LOGS_DIR/adding_blacklisted_list.log
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing blacklisted probe (should NOT be listed)"
> > diff --git a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
> > index df288cf90cd6..555a825d55f2 100755
> > --- a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
> > +++ b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
> > @@ -13,13 +13,14 @@
> > # and removing.
> > #
> >
> > -# include working environment
> > -. ../common/init.sh
> > -
> > +DIR_PATH="$(dirname $0)"
> > TEST_RESULT=0
> >
> > +# include working environment
> > +. "$DIR_PATH/../common/init.sh"
> > +
> > # shellcheck source=lib/probe_vfs_getname.sh
> > -. "$(dirname "$0")/../lib/probe_vfs_getname.sh"
> > +. "$DIR_PATH/../lib/probe_vfs_getname.sh"
> >
> > TEST_PROBE=${TEST_PROBE:-"inode_permission"}
> >
> > @@ -44,7 +45,9 @@ for opt in "" "-a" "--add"; do
> > $CMD_PERF probe $opt $TEST_PROBE 2> $LOGS_DIR/adding_kernel_add$opt.err
> > PERF_EXIT_CODE=$?
> >
> > - ../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_add$opt.err
> > + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" \
> > + < $LOGS_DIR/adding_kernel_add$opt.err
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "adding probe $TEST_PROBE :: $opt"
> > @@ -58,7 +61,10 @@ done
> > $CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list.log
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> > + "$RE_LINE_EMPTY" "List of pre-defined events" \
> > + "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" \
> > + "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list"
> > @@ -71,7 +77,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list
> > $CMD_PERF probe -l > $LOGS_DIR/adding_kernel_list-l.log
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_patterns_found.pl "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" < $LOGS_DIR/adding_kernel_list-l.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" \
> > + < $LOGS_DIR/adding_kernel_list-l.log
> > CHECK_EXIT_CODE=$?
> >
> > if [ $NO_DEBUGINFO ] ; then
> > @@ -93,9 +101,13 @@ REGEX_STAT_VALUES="\s*\d+\s+probe:$TEST_PROBE"
> > # the value should be greater than 1
> > REGEX_STAT_VALUE_NONZERO="\s*[1-9][0-9]*\s+probe:$TEST_PROBE"
> > REGEX_STAT_TIME="\s*$RE_NUMBER\s+seconds (?:time elapsed|user|sys)"
> > -../common/check_all_lines_matched.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> > + "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" \
> > + "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
> > CHECK_EXIT_CODE=$?
> > -../common/check_all_patterns_found.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" < $LOGS_DIR/adding_kernel_using_probe.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" \
> > + < $LOGS_DIR/adding_kernel_using_probe.log
> > (( CHECK_EXIT_CODE += $? ))
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
> > @@ -108,7 +120,8 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
> > $CMD_PERF probe -d $TEST_PROBE\* 2> $LOGS_DIR/adding_kernel_removing.err
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_lines_matched.pl "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> > + "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
> > @@ -121,7 +134,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
> > $CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list_removed.log
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_kernel_list_removed.log
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> > + "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" \
> > + < $LOGS_DIR/adding_kernel_list_removed.log
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing removed probe (should NOT be listed)"
> > @@ -135,7 +150,9 @@ $CMD_PERF probe -n --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_dryrun.err
> > PERF_EXIT_CODE=$?
> >
> > # check for the output (should be the same as usual)
> > -../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_dryrun.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" \
> > + < $LOGS_DIR/adding_kernel_dryrun.err
> > CHECK_EXIT_CODE=$?
> >
> > # check that no probe was added in real
> > @@ -152,7 +169,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "dry run :: adding probe"
> > $CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_01.err
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_01.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" \
> > + < $LOGS_DIR/adding_kernel_forceadd_01.err
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first probe adding"
> > @@ -162,7 +181,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first pro
> > ! $CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_02.err
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_patterns_found.pl "Error: event \"$TEST_PROBE\" already exists." "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "Error: event \"$TEST_PROBE\" already exists." \
> > + "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (without force)"
> > @@ -173,7 +194,9 @@ NO_OF_PROBES=`$CMD_PERF probe -l $TEST_PROBE| wc -l`
> > $CMD_PERF probe --force --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_03.err
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_patterns_found.pl "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" \
> > + "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (with force)"
> > @@ -187,7 +210,9 @@ $CMD_PERF stat -e probe:$TEST_PROBE -e probe:${TEST_PROBE}_${NO_OF_PROBES} -x';'
> > PERF_EXIT_CODE=$?
> >
> > REGEX_LINE="$RE_NUMBER;+probe:${TEST_PROBE}_?(?:$NO_OF_PROBES)?;$RE_NUMBER;$RE_NUMBER"
> > -../common/check_all_lines_matched.pl "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/adding_kernel_using_two.log
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> > + "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" \
> > + < $LOGS_DIR/adding_kernel_using_two.log
> > CHECK_EXIT_CODE=$?
> >
> > VALUE_1=`grep "$TEST_PROBE;" $LOGS_DIR/adding_kernel_using_two.log | awk -F';' '{print $1}'`
> > @@ -205,7 +230,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using doubled probe"
> > $CMD_PERF probe --del \* 2> $LOGS_DIR/adding_kernel_removing_wildcard.err
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_patterns_found.pl "Removed event: probe:$TEST_PROBE" "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> > + "Removed event: probe:$TEST_PROBE" \
> > + "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
> > @@ -217,7 +244,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
> > $CMD_PERF probe -nf --max-probes=512 -a 'vfs_* $params' 2> $LOGS_DIR/adding_kernel_adding_wildcard.err
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_patterns_found.pl "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" \
> > + "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
> > CHECK_EXIT_CODE=$?
> >
> > if [ $NO_DEBUGINFO ] ; then
> > @@ -240,13 +269,22 @@ test $PERF_EXIT_CODE -ne 139 -a $PERF_EXIT_CODE -ne 0
> > PERF_EXIT_CODE=$?
> >
> > # check that the error message is reasonable
> > -../common/check_all_patterns_found.pl "Failed to find" "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" < $LOGS_DIR/adding_kernel_nonexisting.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "Failed to find" \
> > + "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" \
> > + < $LOGS_DIR/adding_kernel_nonexisting.err
> > CHECK_EXIT_CODE=$?
> > -../common/check_all_patterns_found.pl "in this function|at this address" "Error" "Failed to add events" < $LOGS_DIR/adding_kernel_nonexisting.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "in this function|at this address" "Error" "Failed to add events" \
> > + < $LOGS_DIR/adding_kernel_nonexisting.err
> > (( CHECK_EXIT_CODE += $? ))
> > -../common/check_all_lines_matched.pl "Failed to find" "Error" "Probe point .+ not found" "optimized out" "Use.+\-\-range option to show.+location range" < $LOGS_DIR/adding_kernel_nonexisting.err
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> > + "Failed to find" "Error" "Probe point .+ not found" "optimized out" \
> > + "Use.+\-\-range option to show.+location range" \
> > + < $LOGS_DIR/adding_kernel_nonexisting.err
> > (( CHECK_EXIT_CODE += $? ))
> > -../common/check_no_patterns_found.pl "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
> > +"$DIR_PATH/../common/check_no_patterns_found.pl" \
> > + "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
> > (( CHECK_EXIT_CODE += $? ))
> >
> > if [ $NO_DEBUGINFO ]; then
> > @@ -264,7 +302,10 @@ fi
> > $CMD_PERF probe --add "$TEST_PROBE%return \$retval" 2> $LOGS_DIR/adding_kernel_func_retval_add.err
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE%return with \\\$retval" < $LOGS_DIR/adding_kernel_func_retval_add.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "Added new events?:" "probe:$TEST_PROBE" \
> > + "on $TEST_PROBE%return with \\\$retval" \
> > + < $LOGS_DIR/adding_kernel_func_retval_add.err
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
> > @@ -274,7 +315,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
> > $CMD_PERF record -e probe:$TEST_PROBE\* -o $CURRENT_TEST_DIR/perf.data -- cat /proc/cpuinfo > /dev/null 2> $LOGS_DIR/adding_kernel_func_retval_record.err
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/adding_kernel_func_retval_record.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" \
> > + < $LOGS_DIR/adding_kernel_func_retval_record.err
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: record"
> > @@ -285,9 +328,11 @@ $CMD_PERF script -i $CURRENT_TEST_DIR/perf.data > $LOGS_DIR/adding_kernel_func_r
> > PERF_EXIT_CODE=$?
> >
> > REGEX_SCRIPT_LINE="\s*cat\s+$RE_NUMBER\s+\[$RE_NUMBER\]\s+$RE_NUMBER:\s+probe:$TEST_PROBE\w*:\s+\($RE_NUMBER_HEX\s+<\-\s+$RE_NUMBER_HEX\)\s+arg1=$RE_NUMBER_HEX"
> > -../common/check_all_lines_matched.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> > + "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
> > CHECK_EXIT_CODE=$?
> > -../common/check_all_patterns_found.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
> > (( CHECK_EXIT_CODE += $? ))
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function argument probing :: script"
> > diff --git a/tools/perf/tests/shell/base_probe/test_basic.sh b/tools/perf/tests/shell/base_probe/test_basic.sh
> > index 9d8b5afbeddd..162838ddc974 100755
> > --- a/tools/perf/tests/shell/base_probe/test_basic.sh
> > +++ b/tools/perf/tests/shell/base_probe/test_basic.sh
> > @@ -12,11 +12,12 @@
> > # This test tests basic functionality of perf probe command.
> > #
> >
> > -# include working environment
> > -. ../common/init.sh
> > -
> > +DIR_PATH="$(dirname $0)"
> > TEST_RESULT=0
> >
> > +# include working environment
> > +. "$DIR_PATH/../common/init.sh"
> > +
> > if ! check_kprobes_available; then
> > print_overall_skipped
> > exit 2
> > @@ -30,15 +31,25 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
> > $CMD_PERF probe --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
> > PERF_EXIT_CODE=$?
> >
> > - ../common/check_all_patterns_found.pl "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" < $LOGS_DIR/basic_helpmsg.log
> > + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" \
> > + "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" \
> > + < $LOGS_DIR/basic_helpmsg.log
> > CHECK_EXIT_CODE=$?
> > - ../common/check_all_patterns_found.pl "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" < $LOGS_DIR/basic_helpmsg.log
> > + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" \
> > + < $LOGS_DIR/basic_helpmsg.log
> > (( CHECK_EXIT_CODE += $? ))
> > - ../common/check_all_patterns_found.pl "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
> > + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" \
> > + "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
> > (( CHECK_EXIT_CODE += $? ))
> > - ../common/check_all_patterns_found.pl "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
> > + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" \
> > + "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
> > (( CHECK_EXIT_CODE += $? ))
> > - ../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
> > + "$DIR_PATH/../common/check_no_patterns_found.pl" \
> > + "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
> > (( CHECK_EXIT_CODE += $? ))
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
> > @@ -53,7 +64,9 @@ fi
> > # without any args perf-probe should print usage
> > $CMD_PERF probe 2> $LOGS_DIR/basic_usage.log > /dev/null
> >
> > -../common/check_all_patterns_found.pl "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" \
> > + "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
> > CHECK_EXIT_CODE=$?
> >
> > print_results 0 $CHECK_EXIT_CODE "usage message"
> > diff --git a/tools/perf/tests/shell/base_probe/test_invalid_options.sh b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
> > index 92f7254eb32a..44a3ae014bfa 100755
> > --- a/tools/perf/tests/shell/base_probe/test_invalid_options.sh
> > +++ b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
> > @@ -12,11 +12,12 @@
> > # This test checks whether the invalid and incompatible options are reported
> > #
> >
> > -# include working environment
> > -. ../common/init.sh
> > -
> > +DIR_PATH="$(dirname $0)"
> > TEST_RESULT=0
> >
> > +# include working environment
> > +. "$DIR_PATH/../common/init.sh"
> > +
> > if ! check_kprobes_available; then
> > print_overall_skipped
> > exit 2
> > @@ -33,7 +34,9 @@ for opt in '-a' '-d' '-L' '-V'; do
> > ! $CMD_PERF probe $opt 2> $LOGS_DIR/invalid_options_missing_argument$opt.err
> > PERF_EXIT_CODE=$?
> >
> > - ../common/check_all_patterns_found.pl "Error: switch .* requires a value" < $LOGS_DIR/invalid_options_missing_argument$opt.err
> > + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "Error: switch .* requires a value" \
> > + < $LOGS_DIR/invalid_options_missing_argument$opt.err
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "missing argument for $opt"
> > @@ -66,7 +69,8 @@ for opt in '-a xxx -d xxx' '-a xxx -L foo' '-a xxx -V foo' '-a xxx -l' '-a xxx -
> > ! $CMD_PERF probe $opt > /dev/null 2> $LOGS_DIR/aux.log
> > PERF_EXIT_CODE=$?
> >
> > - ../common/check_all_patterns_found.pl "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
> > + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "mutually exclusive options :: $opt"
> > diff --git a/tools/perf/tests/shell/base_probe/test_line_semantics.sh b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
> > index 20435b6bf6bc..576442d87a44 100755
> > --- a/tools/perf/tests/shell/base_probe/test_line_semantics.sh
> > +++ b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
> > @@ -13,11 +13,12 @@
> > # arguments are properly reported.
> > #
> >
> > -# include working environment
> > -. ../common/init.sh
> > -
> > +DIR_PATH="$(dirname $0)"
> > TEST_RESULT=0
> >
> > +# include working environment
> > +. "$DIR_PATH/../common/init.sh"
> > +
> > if ! check_kprobes_available; then
> > print_overall_skipped
> > exit 2
> > diff --git a/tools/perf/tests/shell/base_report/setup.sh b/tools/perf/tests/shell/base_report/setup.sh
> > index 8634e7e0dda6..bb49b0fabb11 100755
> > --- a/tools/perf/tests/shell/base_report/setup.sh
> > +++ b/tools/perf/tests/shell/base_report/setup.sh
> > @@ -12,8 +12,10 @@
> > #
> > #
> >
> > +DIR_PATH="$(dirname $0)"
> > +
> > # include working environment
> > -. ../common/init.sh
> > +. "$DIR_PATH/../common/init.sh"
> >
> > TEST_RESULT=0
> >
> > @@ -24,7 +26,8 @@ SW_EVENT="cpu-clock"
> > $CMD_PERF record -asdg -e $SW_EVENT -o $CURRENT_TEST_DIR/perf.data -- $CMD_LONGER_SLEEP 2> $LOGS_DIR/setup.log
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "prepare the perf.data file"
> > @@ -38,7 +41,8 @@ echo ==================
> > cat $LOGS_DIR/setup-latency.log
> > echo ==================
> >
> > -../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup-latency.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup-latency.log
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "prepare the perf.data.1 file"
> > diff --git a/tools/perf/tests/shell/base_report/test_basic.sh b/tools/perf/tests/shell/base_report/test_basic.sh
> > index adfd8713b8f8..0dfe7e5fd1ca 100755
> > --- a/tools/perf/tests/shell/base_report/test_basic.sh
> > +++ b/tools/perf/tests/shell/base_report/test_basic.sh
> > @@ -12,11 +12,12 @@
> > #
> > #
> >
> > -# include working environment
> > -. ../common/init.sh
> > -
> > +DIR_PATH="$(dirname $0)"
> > TEST_RESULT=0
> >
> > +# include working environment
> > +. "$DIR_PATH/../common/init.sh"
> > +
> >
> > ### help message
> >
> > @@ -25,19 +26,37 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
> > $CMD_PERF report --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
> > PERF_EXIT_CODE=$?
> >
> > - ../common/check_all_patterns_found.pl "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
> > + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" \
> > + "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
> > CHECK_EXIT_CODE=$?
> > - ../common/check_all_patterns_found.pl "input" "verbose" "show-nr-samples" "show-cpu-utilization" "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" < $LOGS_DIR/basic_helpmsg.log
> > + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "input" "verbose" "show-nr-samples" "show-cpu-utilization" \
> > + "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" \
> > + < $LOGS_DIR/basic_helpmsg.log
> > (( CHECK_EXIT_CODE += $? ))
> > - ../common/check_all_patterns_found.pl "hide-unresolved" "sort" "fields" "parent" "exclude-other" "column-widths" "field-separator" "dump-raw-trace" "children" < $LOGS_DIR/basic_helpmsg.log
> > + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "hide-unresolved" "sort" "fields" "parent" "exclude-other" \
> > + "column-widths" "field-separator" "dump-raw-trace" "children" \
> > + < $LOGS_DIR/basic_helpmsg.log
> > (( CHECK_EXIT_CODE += $? ))
> > - ../common/check_all_patterns_found.pl "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" < $LOGS_DIR/basic_helpmsg.log
> > + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" \
> > + "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" \
> > + < $LOGS_DIR/basic_helpmsg.log
> > (( CHECK_EXIT_CODE += $? ))
> > - ../common/check_all_patterns_found.pl "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" "show-total-period" "show-info" "branch-stack" "group" < $LOGS_DIR/basic_helpmsg.log
> > + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" \
> > + "show-total-period" "show-info" "branch-stack" "group" \
> > + < $LOGS_DIR/basic_helpmsg.log
> > (( CHECK_EXIT_CODE += $? ))
> > - ../common/check_all_patterns_found.pl "branch-history" "objdump" "demangle" "percent-limit" "percentage" "header" "itrace" "full-source-path" "show-ref-call-graph" < $LOGS_DIR/basic_helpmsg.log
> > + "$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "branch-history" "objdump" "demangle" "percent-limit" "percentage" \
> > + "header" "itrace" "full-source-path" "show-ref-call-graph" \
> > + < $LOGS_DIR/basic_helpmsg.log
> > (( CHECK_EXIT_CODE += $? ))
> > - ../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
> > + "$DIR_PATH/../common/check_no_patterns_found.pl" \
> > + "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
> > (( CHECK_EXIT_CODE += $? ))
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
> > @@ -57,9 +76,12 @@ REGEX_LOST_SAMPLES_INFO="#\s*Total Lost Samples:\s+$RE_NUMBER"
> > REGEX_SAMPLES_INFO="#\s*Samples:\s+(?:$RE_NUMBER)\w?\s+of\s+event\s+'$RE_EVENT_ANY'"
> > REGEX_LINES_HEADER="#\s*Children\s+Self\s+Command\s+Shared Object\s+Symbol"
> > REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
> > -../common/check_all_patterns_found.pl "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" \
> > + "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
> > CHECK_EXIT_CODE=$?
> > -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
> > +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> > + "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
> > (( CHECK_EXIT_CODE += $? ))
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "basic execution"
> > @@ -74,9 +96,11 @@ PERF_EXIT_CODE=$?
> >
> > REGEX_LINES_HEADER="#\s*Children\s+Self\s+Samples\s+Command\s+Shared Object\s+Symbol"
> > REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
> > -../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
> > CHECK_EXIT_CODE=$?
> > -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
> > +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> > + "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
> > (( CHECK_EXIT_CODE += $? ))
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "number of samples"
> > @@ -98,7 +122,10 @@ REGEX_LINE_CPUS_ONLINE="#\s+nrcpus online\s*:\s*$MY_CPUS_ONLINE"
> > REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$MY_CPUS_AVAILABLE"
> > # disable precise check for "nrcpus avail" in BASIC runmode
> > test $PERFTOOL_TESTSUITE_RUNMODE -lt $RUNMODE_STANDARD && REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$RE_NUMBER"
> > -../common/check_all_patterns_found.pl "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" \
> > + "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" \
> > + "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "header"
> > @@ -129,9 +156,11 @@ PERF_EXIT_CODE=$?
> >
> > REGEX_LINES_HEADER="#\s*Children\s+Self\s+sys\s+usr\s+Command\s+Shared Object\s+Symbol"
> > REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
> > -../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
> > CHECK_EXIT_CODE=$?
> > -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
> > +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> > + "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
> > (( CHECK_EXIT_CODE += $? ))
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
> > @@ -144,9 +173,11 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
> > $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --pid=1 > $LOGS_DIR/basic_pid.log 2> $LOGS_DIR/basic_pid.err
> > PERF_EXIT_CODE=$?
> >
> > -grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "systemd|init"
> > +grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | \
> > + "$DIR_PATH/../common/check_all_lines_matched.pl" "systemd|init"
> > CHECK_EXIT_CODE=$?
> > -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
> > +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> > + "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
> > (( CHECK_EXIT_CODE += $? ))
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
> > @@ -159,9 +190,11 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
> > $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbols=dummynonexistingsymbol > $LOGS_DIR/basic_symbols.log 2> $LOGS_DIR/basic_symbols.err
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" \
> > + "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
> > CHECK_EXIT_CODE=$?
> > -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
> > +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> > + "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
> > (( CHECK_EXIT_CODE += $? ))
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
> > @@ -174,9 +207,11 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
> > $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbol-filter=map > $LOGS_DIR/basic_symbolfilter.log 2> $LOGS_DIR/basic_symbolfilter.err
> > PERF_EXIT_CODE=$?
> >
> > -grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "\[[k\.]\]\s+.*map"
> > +grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | \
> > + "$DIR_PATH/../common/check_all_lines_matched.pl" "\[[k\.]\]\s+.*map"
> > CHECK_EXIT_CODE=$?
> > -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
> > +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> > + "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
> > (( CHECK_EXIT_CODE += $? ))
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "symbol filter"
> > @@ -189,7 +224,8 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "symbol filter"
> > $CMD_PERF report -i $CURRENT_TEST_DIR/perf.data.1 --stdio --header-only > $LOGS_DIR/latency_header.log
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_patterns_found.pl ", context_switch = 1, " < $LOGS_DIR/latency_header.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + ", context_switch = 1, " < $LOGS_DIR/latency_header.log
> > CHECK_EXIT_CODE=$?
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "latency header"
> > @@ -200,9 +236,11 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "latency header"
> > $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data.1 > $LOGS_DIR/latency_default.log 2> $LOGS_DIR/latency_default.err
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_patterns_found.pl "# Overhead Latency Command" < $LOGS_DIR/latency_default.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "# Overhead Latency Command" < $LOGS_DIR/latency_default.log
> > CHECK_EXIT_CODE=$?
> > -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/latency_default.err
> > +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> > +	"$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/latency_default.err
> > (( CHECK_EXIT_CODE += $? ))
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "default report for latency profile"
> > @@ -213,9 +251,11 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "default report for latency profi
> > $CMD_PERF report --latency --stdio -i $CURRENT_TEST_DIR/perf.data.1 > $LOGS_DIR/latency_latency.log 2> $LOGS_DIR/latency_latency.err
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_patterns_found.pl "# Latency Overhead Command" < $LOGS_DIR/latency_latency.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "# Latency Overhead Command" < $LOGS_DIR/latency_latency.log
> > CHECK_EXIT_CODE=$?
> > -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/latency_latency.err
> > +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> > +	"$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/latency_latency.err
> > (( CHECK_EXIT_CODE += $? ))
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "latency report for latency profile"
> > @@ -226,9 +266,12 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "latency report for latency profi
> > $CMD_PERF report --hierarchy --sort latency,parallelism,comm,symbol --parallelism=1,2 --stdio -i $CURRENT_TEST_DIR/perf.data.1 > $LOGS_DIR/parallelism_hierarchy.log 2> $LOGS_DIR/parallelism_hierarchy.err
> > PERF_EXIT_CODE=$?
> >
> > -../common/check_all_patterns_found.pl "# Latency Parallelism / Command / Symbol" < $LOGS_DIR/parallelism_hierarchy.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" \
> > + "# Latency Parallelism / Command / Symbol" \
> > + < $LOGS_DIR/parallelism_hierarchy.log
> > CHECK_EXIT_CODE=$?
> > -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/parallelism_hierarchy.err
> > +"$DIR_PATH/../common/check_errors_whitelisted.pl" \
> > +	"$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/parallelism_hierarchy.err
> > (( CHECK_EXIT_CODE += $? ))
> >
> > print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "parallelism histogram"
> > diff --git a/tools/perf/tests/shell/common/init.sh b/tools/perf/tests/shell/common/init.sh
> > index 26c7525651e0..cbfc78bec974 100644
> > --- a/tools/perf/tests/shell/common/init.sh
> > +++ b/tools/perf/tests/shell/common/init.sh
> > @@ -11,8 +11,8 @@
> > #
> >
> >
> > -. ../common/settings.sh
> > -. ../common/patterns.sh
> > +. "$(dirname $0)/../common/settings.sh"
> > +. "$(dirname $0)/../common/patterns.sh"
> >
> > THIS_TEST_NAME=`basename $0 .sh`
> >
> > --
> > 2.50.1
> >
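
The recurring change throughout the series is the same pattern: replace cwd-relative paths such as `. ../common/init.sh` with paths anchored at the script's own directory via `$(dirname $0)`, so the tests resolve their helpers no matter which directory the test runner invokes them from. A minimal self-contained sketch of why this matters (the directory and file names below are illustrative stand-ins, not the real testsuite files):

```shell
#!/bin/sh
# Build a throwaway layout mirroring base_probe/ next to common/.
tmp=$(mktemp -d)
mkdir -p "$tmp/common" "$tmp/base_probe"
echo 'echo "init sourced"' > "$tmp/common/init.sh"

# The test script anchors all helper paths at its own location,
# exactly like the patched testsuite scripts do.
cat > "$tmp/base_probe/test.sh" <<'EOF'
#!/bin/sh
DIR_PATH="$(dirname "$0")"
. "$DIR_PATH/../common/init.sh"
EOF
chmod +x "$tmp/base_probe/test.sh"

# Run it from an unrelated cwd; a plain ". ../common/init.sh"
# would fail here because /common/init.sh does not exist.
out=$( cd / && "$tmp/base_probe/test.sh" )
echo "$out"   # prints: init sourced
rm -rf "$tmp"
```

Note that `dirname $0` inside a *sourced* file (as in the init.sh hunk above) resolves relative to the sourcing script, which works here only because every test directory sits at the same depth beside common/.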
end of thread, other threads: [~2025-10-01 12:37 UTC | newest]
Thread overview: 14+ messages
2025-09-30 16:09 [PATCH v4 0/7] Introduce structure for shell tests Jakub Brnak
2025-09-30 16:09 ` [PATCH v4 1/7] perf test perftool_testsuite: Use absolute paths Jakub Brnak
2025-09-30 18:28 ` Ian Rogers
2025-10-01 12:37 ` Arnaldo Carvalho de Melo
2025-09-30 16:09 ` [PATCH v4 2/7] perf tests: Create a structure for shell tests Jakub Brnak
2025-09-30 18:49 ` Ian Rogers
2025-09-30 16:09 ` [PATCH v4 3/7] perf test: Provide setup for the shell test suite Jakub Brnak
2025-09-30 18:51 ` Ian Rogers
2025-09-30 16:09 ` [PATCH v4 4/7] perftool-testsuite: Add empty setup for base_probe Jakub Brnak
2025-09-30 18:52 ` Ian Rogers
2025-09-30 16:09 ` [PATCH v4 5/7] perf test: Introduce storing logs for shell tests Jakub Brnak
2025-09-30 19:00 ` Ian Rogers
2025-09-30 16:09 ` [PATCH v4 6/7] perf test: Format log directories " Jakub Brnak
2025-09-30 16:09 ` [PATCH v4 7/7] perf test: Remove perftool drivers Jakub Brnak