* [PATCH 00/10] Introduce structure for shell tests
@ 2024-12-20 22:03 vmolnaro
  2024-12-20 22:03 ` [PATCH 01/10] perf test perftool_testsuite: Add missing description vmolnaro
                   ` (10 more replies)
  0 siblings, 11 replies; 43+ messages in thread
From: vmolnaro @ 2024-12-20 22:03 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, namhyung, mpetlan; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

Hello,

Sending the third patchset from the perftool-testsuite upstreaming effort.
It also adds new capabilities for shell tests: a two-level test hierarchy,
a per-suite setup file, and the ability to store logs.

The patches do not add any new test cases; instead, they provide the
environment that was temporarily covered by the perftool test drivers.

We wanted to make it possible for a shell test to consist of multiple
subtests, as is already done for the C tests. Logical structuring of the
test cases was already a part of the perftool test suite, and we saw this
as an opportunity to introduce the same structured approach for the perf
shell tests.

A directory in the shell directory is represented as a test suite if it
contains at least one executable shell test. With two or more tests, the
subtests are visibly differentiated from the test suite by having a
subtest index. All deeper levels of subdirectories are still searched for
tests but do not create additional levels of hierarchy.
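
For illustration, using the base_probe directory from this series (the
sub/dir path is hypothetical, added only to show that nested directories
do not create extra hierarchy levels):

  tools/perf/tests/shell/base_probe/   # becomes one test suite
      setup.sh                         # optional, explained below
      test_basic.sh                    # each script becomes a subtest (NN:1, NN:2, ...)
      test_invalid_options.sh
      sub/dir/extra_test.sh            # hypothetical; still just a subtest of base_probe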

Some test suites require setup to be done before they are run, such as
recording samples or setting up test probes. This can be done by adding
a "setup.sh" executable file in the test suite directory, which will be
run before all of the tests. If the setup fails, all of the tests are
skipped, as it is assumed that the setup is required for their execution.
The setup file also allows naming the test suite. If there is no setup
file, the name is derived from the name of the directory.
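
A minimal sketch of such a setup.sh, loosely modelled on the base_report
one from this series (the description line on the second line is
presumably what names the suite, as patch 01 adds one to
base_report/setup.sh; the recording command is simplified and only
illustrative):

  #!/bin/bash
  # perftool-testsuite :: perf_report
  # SPDX-License-Identifier: GPL-2.0

  # prepare a perf.data file for the report test cases; a non-zero
  # exit status here makes the runner skip the whole suite
  perf record -a -o perf.data -- sleep 2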

Lastly, we wanted to provide a way to store the test logs after execution
for debugging purposes. The logs of the perftool tests are stored in a
'/tmp/perf_test_*' temporary directory. By default, these logs are cleared
after the test finishes; however, if the environment variable
PERFTEST_KEEP_LOGS is set to "y", they are retained for debugging.
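
For example, to keep the logs from a single run around for inspection
(the test selection string is only illustrative):

  PERFTEST_KEEP_LOGS=y perf test 'perf_probe'
  ls -d /tmp/perf_test_*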

For now, all of the perftool tests are marked as exclusive, which prevents
them from running in parallel. This may change in the future once we
ensure that they do not interfere with other tests running simultaneously.
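
The marking is done through the description line of each script, as added
in patch 01; the " (exclusive)" suffix is stripped from the displayed name
and only tells the runner not to schedule the test in parallel:

  #!/bin/bash
  # perf_probe :: Reject blacklisted probes (exclusive)
  # SPDX-License-Identifier: GPL-2.0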

Thoughts and ideas are welcome.

Thanks and regards,

Veronika

Michael Petlan (1):
  perf testsuite: Fix perf-report tests installation

Veronika Molnarova (9):
  perf test perftool_testsuite: Add missing description
  perf test perftool_testsuite: Return correct value for skipping
  perf test perftool_testsuite: Use absolute paths
  perf tests: Create a structure for shell tests
  perf test: Provide setup for the shell test suite
  perftool-testsuite: Add empty setup for base_probe
  perf test: Introduce storing logs for shell tests
  perf test: Format log directories for shell tests
  perf test: Remove perftool drivers

 tools/perf/Makefile.perf                      |   3 +-
 tools/perf/tests/builtin-test.c               | 151 +++++++++-
 tools/perf/tests/shell/base_probe/setup.sh    |  13 +
 .../base_probe/test_adding_blacklisted.sh     |  17 +-
 .../shell/base_probe/test_adding_kernel.sh    |  57 ++--
 .../perf/tests/shell/base_probe/test_basic.sh |  23 +-
 .../shell/base_probe/test_invalid_options.sh  |  15 +-
 .../shell/base_probe/test_line_semantics.sh   |  11 +-
 tools/perf/tests/shell/base_report/setup.sh   |   8 +-
 .../tests/shell/base_report/test_basic.sh     |  49 ++--
 tools/perf/tests/shell/common/init.sh         |   6 +-
 .../tests/shell/perftool-testsuite_probe.sh   |  23 --
 .../tests/shell/perftool-testsuite_report.sh  |  23 --
 tools/perf/tests/tests-scripts.c              | 258 +++++++++++++++---
 tools/perf/tests/tests-scripts.h              |  15 +
 tools/perf/tests/tests.h                      |   8 +-
 16 files changed, 489 insertions(+), 191 deletions(-)
 create mode 100755 tools/perf/tests/shell/base_probe/setup.sh
 delete mode 100755 tools/perf/tests/shell/perftool-testsuite_probe.sh
 delete mode 100755 tools/perf/tests/shell/perftool-testsuite_report.sh

-- 
2.43.0



* [PATCH 01/10] perf test perftool_testsuite: Add missing description
  2024-12-20 22:03 [PATCH 00/10] Introduce structure for shell tests vmolnaro
@ 2024-12-20 22:03 ` vmolnaro
  2024-12-20 22:03 ` [PATCH 02/10] perf test perftool_testsuite: Return correct value for skipping vmolnaro
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2024-12-20 22:03 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, namhyung, mpetlan; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

Properly name the test cases of perftool_testsuite so that 'perf test' no
longer picks up the SPDX license line as the test name.

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh | 2 +-
 tools/perf/tests/shell/base_probe/test_adding_kernel.sh      | 2 +-
 tools/perf/tests/shell/base_probe/test_basic.sh              | 2 +-
 tools/perf/tests/shell/base_probe/test_invalid_options.sh    | 2 +-
 tools/perf/tests/shell/base_probe/test_line_semantics.sh     | 2 +-
 tools/perf/tests/shell/base_report/setup.sh                  | 2 +-
 tools/perf/tests/shell/base_report/test_basic.sh             | 2 +-
 7 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
index ac5a15c57fb38f14..4204e941fad99269 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-
+# perf_probe :: Reject blacklisted probes (exclusive)
 # SPDX-License-Identifier: GPL-2.0
 
 #
diff --git a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
index d541ffd44a9332b6..c276c2a3fc26ecde 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-# Add 'perf probe's, list and remove them
+# perf_probe :: Add probes, list and remove them (exclusive)
 # SPDX-License-Identifier: GPL-2.0
 
 #
diff --git a/tools/perf/tests/shell/base_probe/test_basic.sh b/tools/perf/tests/shell/base_probe/test_basic.sh
index 09669ec479f23d2f..a69dc1c9f92c1b96 100755
--- a/tools/perf/tests/shell/base_probe/test_basic.sh
+++ b/tools/perf/tests/shell/base_probe/test_basic.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-
+# perf_probe :: Basic perf probe functionality (exclusive)
 # SPDX-License-Identifier: GPL-2.0
 
 #
diff --git a/tools/perf/tests/shell/base_probe/test_invalid_options.sh b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
index 1fedfd8b0d0ddf30..491eae7ba09574b9 100755
--- a/tools/perf/tests/shell/base_probe/test_invalid_options.sh
+++ b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-
+# perf_probe :: Reject invalid options (exclusive)
 # SPDX-License-Identifier: GPL-2.0
 
 #
diff --git a/tools/perf/tests/shell/base_probe/test_line_semantics.sh b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
index d8f4bde0f585ac80..83f2db898d795c5a 100755
--- a/tools/perf/tests/shell/base_probe/test_line_semantics.sh
+++ b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-
+# perf_probe :: Check patterns for line semantics (exclusive)
 # SPDX-License-Identifier: GPL-2.0
 
 #
diff --git a/tools/perf/tests/shell/base_report/setup.sh b/tools/perf/tests/shell/base_report/setup.sh
index 4caa496660c64f5e..b03501b2e8fc5330 100755
--- a/tools/perf/tests/shell/base_report/setup.sh
+++ b/tools/perf/tests/shell/base_report/setup.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-
+# perftool-testsuite :: perf_report
 # SPDX-License-Identifier: GPL-2.0
 
 #
diff --git a/tools/perf/tests/shell/base_report/test_basic.sh b/tools/perf/tests/shell/base_report/test_basic.sh
index 47677cbd4df31f0a..2398eba4d3fdd3db 100755
--- a/tools/perf/tests/shell/base_report/test_basic.sh
+++ b/tools/perf/tests/shell/base_report/test_basic.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-
+# perf_report :: Basic perf report options (exclusive)
 # SPDX-License-Identifier: GPL-2.0
 
 #
-- 
2.43.0



* [PATCH 02/10] perf test perftool_testsuite: Return correct value for skipping
  2024-12-20 22:03 [PATCH 00/10] Introduce structure for shell tests vmolnaro
  2024-12-20 22:03 ` [PATCH 01/10] perf test perftool_testsuite: Add missing description vmolnaro
@ 2024-12-20 22:03 ` vmolnaro
  2024-12-20 22:03 ` [PATCH 03/10] perf test perftool_testsuite: Use absolute paths vmolnaro
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2024-12-20 22:03 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, namhyung, mpetlan; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

In 'perf test', a return value of 2 means that the test case was skipped.
Return this value from the perftool_testsuite test cases so that skipped
tests can be distinguished from passing ones.
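
The resulting pattern in the affected scripts, where 'perf test' now
reports the subtest as Skip instead of Ok:

  if ! check_kprobes_available; then
      print_overall_skipped
      exit 2
  fi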

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh | 2 +-
 tools/perf/tests/shell/base_probe/test_adding_kernel.sh      | 2 +-
 tools/perf/tests/shell/base_probe/test_basic.sh              | 2 +-
 tools/perf/tests/shell/base_probe/test_invalid_options.sh    | 2 +-
 tools/perf/tests/shell/base_probe/test_line_semantics.sh     | 2 +-
 tools/perf/tests/shell/common/init.sh                        | 2 +-
 6 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
index 4204e941fad99269..45c21673643641b3 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
@@ -22,7 +22,7 @@ TEST_RESULT=0
 BLACKFUNC_LIST=`head -n 5 /sys/kernel/debug/kprobes/blacklist 2> /dev/null | cut -f2`
 if [ -z "$BLACKFUNC_LIST" ]; then
 	print_overall_skipped
-	exit 0
+	exit 2
 fi
 
 # try to find vmlinux with DWARF debug info
diff --git a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
index c276c2a3fc26ecde..24fe91550c672cc2 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
@@ -33,7 +33,7 @@ fi
 check_kprobes_available
 if [ $? -ne 0 ]; then
 	print_overall_skipped
-	exit 0
+	exit 2
 fi
 
 
diff --git a/tools/perf/tests/shell/base_probe/test_basic.sh b/tools/perf/tests/shell/base_probe/test_basic.sh
index a69dc1c9f92c1b96..9d8b5afbeddda268 100755
--- a/tools/perf/tests/shell/base_probe/test_basic.sh
+++ b/tools/perf/tests/shell/base_probe/test_basic.sh
@@ -19,7 +19,7 @@ TEST_RESULT=0
 
 if ! check_kprobes_available; then
 	print_overall_skipped
-	exit 0
+	exit 2
 fi
 
 
diff --git a/tools/perf/tests/shell/base_probe/test_invalid_options.sh b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
index 491eae7ba09574b9..59757a00e6d3e40a 100755
--- a/tools/perf/tests/shell/base_probe/test_invalid_options.sh
+++ b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
@@ -19,7 +19,7 @@ TEST_RESULT=0
 
 if ! check_kprobes_available; then
 	print_overall_skipped
-	exit 0
+	exit 2
 fi
 
 
diff --git a/tools/perf/tests/shell/base_probe/test_line_semantics.sh b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
index 83f2db898d795c5a..da8999be4604e9d6 100755
--- a/tools/perf/tests/shell/base_probe/test_line_semantics.sh
+++ b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
@@ -20,7 +20,7 @@ TEST_RESULT=0
 
 if ! check_kprobes_available; then
 	print_overall_skipped
-	exit 0
+	exit 2
 fi
 
 
diff --git a/tools/perf/tests/shell/common/init.sh b/tools/perf/tests/shell/common/init.sh
index 075f17623c8eaad0..13ec38c15c014252 100644
--- a/tools/perf/tests/shell/common/init.sh
+++ b/tools/perf/tests/shell/common/init.sh
@@ -85,7 +85,7 @@ consider_skipping()
 	# the runmode of a testcase needs to be at least the current suite's runmode
 	if [ $PERFTOOL_TESTSUITE_RUNMODE -lt $TESTCASE_RUNMODE ]; then
 		print_overall_skipped
-		exit 0
+		exit 2
 	fi
 }
 
-- 
2.43.0



* [PATCH 03/10] perf test perftool_testsuite: Use absolute paths
  2024-12-20 22:03 [PATCH 00/10] Introduce structure for shell tests vmolnaro
  2024-12-20 22:03 ` [PATCH 01/10] perf test perftool_testsuite: Add missing description vmolnaro
  2024-12-20 22:03 ` [PATCH 02/10] perf test perftool_testsuite: Return correct value for skipping vmolnaro
@ 2024-12-20 22:03 ` vmolnaro
  2024-12-20 22:03 ` [PATCH 04/10] perf tests: Create a structure for shell tests vmolnaro
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2024-12-20 22:03 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, namhyung, mpetlan; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

Test cases from perftool_testsuite are affected by the current directory
where the tests are run. For this reason, the test driver has to change
the directory to the base_dir for the references to work correctly.

Utilize absolute paths when sourcing and referencing other scripts so
that the current working directory doesn't impact the test cases.

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 .../base_probe/test_adding_blacklisted.sh     | 13 ++---
 .../shell/base_probe/test_adding_kernel.sh    | 53 ++++++++++---------
 .../perf/tests/shell/base_probe/test_basic.sh | 19 +++----
 .../shell/base_probe/test_invalid_options.sh  | 11 ++--
 .../shell/base_probe/test_line_semantics.sh   |  7 +--
 tools/perf/tests/shell/base_report/setup.sh   |  6 ++-
 .../tests/shell/base_report/test_basic.sh     | 47 ++++++++--------
 tools/perf/tests/shell/common/init.sh         |  4 +-
 8 files changed, 84 insertions(+), 76 deletions(-)

diff --git a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
index 45c21673643641b3..b60b0a58361d9ebe 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
@@ -13,11 +13,12 @@
 #	they must be skipped.
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 # skip if not supported
 BLACKFUNC_LIST=`head -n 5 /sys/kernel/debug/kprobes/blacklist 2> /dev/null | cut -f2`
 if [ -z "$BLACKFUNC_LIST" ]; then
@@ -53,7 +54,7 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
 	PERF_EXIT_CODE=$?
 
 	# check for bad DWARF polluting the result
-	../common/check_all_patterns_found.pl "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
 
 	if [ $? -eq 0 ]; then
 		SKIP_DWARF=1
@@ -73,7 +74,7 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
 			fi
 		fi
 	else
-		../common/check_all_lines_matched.pl "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
+		"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
 		CHECK_EXIT_CODE=$?
 
 		SKIP_DWARF=0
@@ -94,7 +95,7 @@ fi
 $CMD_PERF list probe:\* > $LOGS_DIR/adding_blacklisted_list.log
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_blacklisted_list.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_blacklisted_list.log
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing blacklisted probe (should NOT be listed)"
diff --git a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
index 24fe91550c672cc2..5e4a3bf3a1cdaee3 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
@@ -13,13 +13,14 @@
 #		and removing.
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 # shellcheck source=lib/probe_vfs_getname.sh
-. "$(dirname "$0")/../lib/probe_vfs_getname.sh"
+. "$DIR_PATH/../lib/probe_vfs_getname.sh"
 
 TEST_PROBE=${TEST_PROBE:-"inode_permission"}
 
@@ -44,7 +45,7 @@ for opt in "" "-a" "--add"; do
 	$CMD_PERF probe $opt $TEST_PROBE 2> $LOGS_DIR/adding_kernel_add$opt.err
 	PERF_EXIT_CODE=$?
 
-	../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_add$opt.err
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_add$opt.err
 	CHECK_EXIT_CODE=$?
 
 	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "adding probe $TEST_PROBE :: $opt"
@@ -58,7 +59,7 @@ done
 $CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list.log
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "List of pre-defined events" "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list"
@@ -71,7 +72,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list
 $CMD_PERF probe -l > $LOGS_DIR/adding_kernel_list-l.log
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" < $LOGS_DIR/adding_kernel_list-l.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" < $LOGS_DIR/adding_kernel_list-l.log
 CHECK_EXIT_CODE=$?
 
 if [ $NO_DEBUGINFO ] ; then
@@ -93,9 +94,9 @@ REGEX_STAT_VALUES="\s*\d+\s+probe:$TEST_PROBE"
 # the value should be greater than 1
 REGEX_STAT_VALUE_NONZERO="\s*[1-9][0-9]*\s+probe:$TEST_PROBE"
 REGEX_STAT_TIME="\s*$RE_NUMBER\s+seconds (?:time elapsed|user|sys)"
-../common/check_all_lines_matched.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
 CHECK_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" < $LOGS_DIR/adding_kernel_using_probe.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" < $LOGS_DIR/adding_kernel_using_probe.log
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
@@ -108,7 +109,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
 $CMD_PERF probe -d $TEST_PROBE\* 2> $LOGS_DIR/adding_kernel_removing.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
+"$DIR_PATH/../common/check_all_lines_matched.pl" "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
@@ -121,7 +122,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
 $CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list_removed.log
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_kernel_list_removed.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_kernel_list_removed.log
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing removed probe (should NOT be listed)"
@@ -135,7 +136,7 @@ $CMD_PERF probe -n --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_dryrun.err
 PERF_EXIT_CODE=$?
 
 # check for the output (should be the same as usual)
-../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_dryrun.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_dryrun.err
 CHECK_EXIT_CODE=$?
 
 # check that no probe was added in real
@@ -152,7 +153,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "dry run :: adding probe"
 $CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_01.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_01.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_01.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first probe adding"
@@ -162,7 +163,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first pro
 ! $CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_02.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "Error: event \"$TEST_PROBE\" already exists." "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Error: event \"$TEST_PROBE\" already exists." "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (without force)"
@@ -173,7 +174,7 @@ NO_OF_PROBES=`$CMD_PERF probe -l | wc -l`
 $CMD_PERF probe --force --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_03.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (with force)"
@@ -187,7 +188,7 @@ $CMD_PERF stat -e probe:$TEST_PROBE -e probe:${TEST_PROBE}_${NO_OF_PROBES} -x';'
 PERF_EXIT_CODE=$?
 
 REGEX_LINE="$RE_NUMBER;+probe:${TEST_PROBE}_?(?:$NO_OF_PROBES)?;$RE_NUMBER;$RE_NUMBER"
-../common/check_all_lines_matched.pl "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/adding_kernel_using_two.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/adding_kernel_using_two.log
 CHECK_EXIT_CODE=$?
 
 VALUE_1=`grep "$TEST_PROBE;" $LOGS_DIR/adding_kernel_using_two.log | awk -F';' '{print $1}'`
@@ -205,7 +206,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using doubled probe"
 $CMD_PERF probe --del \* 2> $LOGS_DIR/adding_kernel_removing_wildcard.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "Removed event: probe:$TEST_PROBE" "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
+"$DIR_PATH/../common/check_all_lines_matched.pl" "Removed event: probe:$TEST_PROBE" "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
@@ -217,7 +218,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
 $CMD_PERF probe -nf --max-probes=512 -a 'vfs_* $params' 2> $LOGS_DIR/adding_kernel_adding_wildcard.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
 CHECK_EXIT_CODE=$?
 
 if [ $NO_DEBUGINFO ] ; then
@@ -240,13 +241,13 @@ test $PERF_EXIT_CODE -ne 139 -a $PERF_EXIT_CODE -ne 0
 PERF_EXIT_CODE=$?
 
 # check that the error message is reasonable
-../common/check_all_patterns_found.pl "Failed to find" "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Failed to find" "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" < $LOGS_DIR/adding_kernel_nonexisting.err
 CHECK_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "in this function|at this address" "Error" "Failed to add events" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "in this function|at this address" "Error" "Failed to add events" < $LOGS_DIR/adding_kernel_nonexisting.err
 (( CHECK_EXIT_CODE += $? ))
-../common/check_all_lines_matched.pl "Failed to find" "Error" "Probe point .+ not found" "optimized out" "Use.+\-\-range option to show.+location range" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_all_lines_matched.pl" "Failed to find" "Error" "Probe point .+ not found" "optimized out" "Use.+\-\-range option to show.+location range" < $LOGS_DIR/adding_kernel_nonexisting.err
 (( CHECK_EXIT_CODE += $? ))
-../common/check_no_patterns_found.pl "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_no_patterns_found.pl" "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
 (( CHECK_EXIT_CODE += $? ))
 
 if [ $NO_DEBUGINFO ]; then
@@ -264,7 +265,7 @@ fi
 $CMD_PERF probe --add "$TEST_PROBE%return \$retval" 2> $LOGS_DIR/adding_kernel_func_retval_add.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE%return with \\\$retval" < $LOGS_DIR/adding_kernel_func_retval_add.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE%return with \\\$retval" < $LOGS_DIR/adding_kernel_func_retval_add.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
@@ -274,7 +275,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
 $CMD_PERF record -e probe:$TEST_PROBE\* -o $CURRENT_TEST_DIR/perf.data -- cat /proc/cpuinfo > /dev/null 2> $LOGS_DIR/adding_kernel_func_retval_record.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/adding_kernel_func_retval_record.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/adding_kernel_func_retval_record.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: record"
@@ -285,9 +286,9 @@ $CMD_PERF script -i $CURRENT_TEST_DIR/perf.data > $LOGS_DIR/adding_kernel_func_r
 PERF_EXIT_CODE=$?
 
 REGEX_SCRIPT_LINE="\s*cat\s+$RE_NUMBER\s+\[$RE_NUMBER\]\s+$RE_NUMBER:\s+probe:$TEST_PROBE\w*:\s+\($RE_NUMBER_HEX\s+<\-\s+$RE_NUMBER_HEX\)\s+arg1=$RE_NUMBER_HEX"
-../common/check_all_lines_matched.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
 CHECK_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function argument probing :: script"
diff --git a/tools/perf/tests/shell/base_probe/test_basic.sh b/tools/perf/tests/shell/base_probe/test_basic.sh
index 9d8b5afbeddda268..e8fed67be9c1a8ee 100755
--- a/tools/perf/tests/shell/base_probe/test_basic.sh
+++ b/tools/perf/tests/shell/base_probe/test_basic.sh
@@ -12,11 +12,12 @@
 #		This test tests basic functionality of perf probe command.
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 if ! check_kprobes_available; then
 	print_overall_skipped
 	exit 2
@@ -30,15 +31,15 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
 	$CMD_PERF probe --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
 	PERF_EXIT_CODE=$?
 
-	../common/check_all_patterns_found.pl "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" < $LOGS_DIR/basic_helpmsg.log
 	CHECK_EXIT_CODE=$?
-	../common/check_all_patterns_found.pl "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
+	"$DIR_PATH/../common/check_no_patterns_found.pl" "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
 	(( CHECK_EXIT_CODE += $? ))
 
 	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
@@ -53,7 +54,7 @@ fi
 # without any args perf-probe should print usage
 $CMD_PERF probe 2> $LOGS_DIR/basic_usage.log > /dev/null
 
-../common/check_all_patterns_found.pl "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
 CHECK_EXIT_CODE=$?
 
 print_results 0 $CHECK_EXIT_CODE "usage message"
diff --git a/tools/perf/tests/shell/base_probe/test_invalid_options.sh b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
index 59757a00e6d3e40a..b56c64a1f3619e4a 100755
--- a/tools/perf/tests/shell/base_probe/test_invalid_options.sh
+++ b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
@@ -12,11 +12,12 @@
 #		This test checks whether the invalid and incompatible options are reported
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 if ! check_kprobes_available; then
 	print_overall_skipped
 	exit 2
@@ -30,7 +31,7 @@ for opt in '-a' '-d' '-L' '-V'; do
 	! $CMD_PERF probe $opt 2> $LOGS_DIR/invalid_options_missing_argument$opt.err
 	PERF_EXIT_CODE=$?
 
-	../common/check_all_patterns_found.pl "Error: switch .* requires a value" < $LOGS_DIR/invalid_options_missing_argument$opt.err
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "Error: switch .* requires a value" < $LOGS_DIR/invalid_options_missing_argument$opt.err
 	CHECK_EXIT_CODE=$?
 
 	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "missing argument for $opt"
@@ -63,7 +64,7 @@ for opt in '-a xxx -d xxx' '-a xxx -L foo' '-a xxx -V foo' '-a xxx -l' '-a xxx -
 	! $CMD_PERF probe $opt > /dev/null 2> $LOGS_DIR/aux.log
 	PERF_EXIT_CODE=$?
 
-	../common/check_all_patterns_found.pl "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
 	CHECK_EXIT_CODE=$?
 
 	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "mutually exclusive options :: $opt"
diff --git a/tools/perf/tests/shell/base_probe/test_line_semantics.sh b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
index da8999be4604e9d6..2a805b7a18b03315 100755
--- a/tools/perf/tests/shell/base_probe/test_line_semantics.sh
+++ b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
@@ -13,11 +13,12 @@
 #		arguments are properly reported.
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 if ! check_kprobes_available; then
 	print_overall_skipped
 	exit 2
diff --git a/tools/perf/tests/shell/base_report/setup.sh b/tools/perf/tests/shell/base_report/setup.sh
index b03501b2e8fc5330..386e947d1c8bcda2 100755
--- a/tools/perf/tests/shell/base_report/setup.sh
+++ b/tools/perf/tests/shell/base_report/setup.sh
@@ -12,8 +12,10 @@
 #
 #
 
+DIR_PATH="$(dirname $0)"
+
 # include working environment
-. ../common/init.sh
+. "$DIR_PATH/../common/init.sh"
 
 test -d "$HEADER_TAR_DIR" || mkdir -p "$HEADER_TAR_DIR"
 
@@ -22,7 +24,7 @@ SW_EVENT="cpu-clock"
 $CMD_PERF record -asdg -e $SW_EVENT -o $CURRENT_TEST_DIR/perf.data -- $CMD_LONGER_SLEEP 2> $LOGS_DIR/setup.log
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "prepare the perf.data file"
diff --git a/tools/perf/tests/shell/base_report/test_basic.sh b/tools/perf/tests/shell/base_report/test_basic.sh
index 2398eba4d3fdd3db..4e931587f6ed9dfa 100755
--- a/tools/perf/tests/shell/base_report/test_basic.sh
+++ b/tools/perf/tests/shell/base_report/test_basic.sh
@@ -12,11 +12,12 @@
 #
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 
 ### help message
 
@@ -25,19 +26,19 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
 	$CMD_PERF report --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
 	PERF_EXIT_CODE=$?
 
-	../common/check_all_patterns_found.pl "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
 	CHECK_EXIT_CODE=$?
-	../common/check_all_patterns_found.pl "input" "verbose" "show-nr-samples" "show-cpu-utilization" "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "input" "verbose" "show-nr-samples" "show-cpu-utilization" "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "hide-unresolved" "sort" "fields" "parent" "exclude-other" "column-widths" "field-separator" "dump-raw-trace" "children" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "hide-unresolved" "sort" "fields" "parent" "exclude-other" "column-widths" "field-separator" "dump-raw-trace" "children" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" "show-total-period" "show-info" "branch-stack" "group" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" "show-total-period" "show-info" "branch-stack" "group" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "branch-history" "objdump" "demangle" "percent-limit" "percentage" "header" "itrace" "full-source-path" "show-ref-call-graph" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "branch-history" "objdump" "demangle" "percent-limit" "percentage" "header" "itrace" "full-source-path" "show-ref-call-graph" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
+	"$DIR_PATH/../common/check_no_patterns_found.pl" "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
 	(( CHECK_EXIT_CODE += $? ))
 
 	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
@@ -57,9 +58,9 @@ REGEX_LOST_SAMPLES_INFO="#\s*Total Lost Samples:\s+$RE_NUMBER"
 REGEX_SAMPLES_INFO="#\s*Samples:\s+(?:$RE_NUMBER)\w?\s+of\s+event\s+'$RE_EVENT_ANY'"
 REGEX_LINES_HEADER="#\s*Children\s+Self\s+Command\s+Shared Object\s+Symbol"
 REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
-../common/check_all_patterns_found.pl "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "basic execution"
@@ -74,9 +75,9 @@ PERF_EXIT_CODE=$?
 
 REGEX_LINES_HEADER="#\s*Children\s+Self\s+Samples\s+Command\s+Shared Object\s+Symbol"
 REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
-../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "number of samples"
@@ -98,7 +99,7 @@ REGEX_LINE_CPUS_ONLINE="#\s+nrcpus online\s*:\s*$MY_CPUS_ONLINE"
 REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$MY_CPUS_AVAILABLE"
 # disable precise check for "nrcpus avail" in BASIC runmode
 test $PERFTOOL_TESTSUITE_RUNMODE -lt $RUNMODE_STANDARD && REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$RE_NUMBER"
-../common/check_all_patterns_found.pl "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "header"
@@ -129,9 +130,9 @@ PERF_EXIT_CODE=$?
 
 REGEX_LINES_HEADER="#\s*Children\s+Self\s+sys\s+usr\s+Command\s+Shared Object\s+Symbol"
 REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
-../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
@@ -144,9 +145,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
 $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --pid=1 > $LOGS_DIR/basic_pid.log 2> $LOGS_DIR/basic_pid.err
 PERF_EXIT_CODE=$?
 
-grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "systemd|init"
+grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | "$DIR_PATH/../common/check_all_lines_matched.pl" "systemd|init"
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
@@ -159,9 +160,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
 $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbols=dummynonexistingsymbol > $LOGS_DIR/basic_symbols.log 2> $LOGS_DIR/basic_symbols.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
@@ -174,9 +175,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
 $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbol-filter=map > $LOGS_DIR/basic_symbolfilter.log 2> $LOGS_DIR/basic_symbolfilter.err
 PERF_EXIT_CODE=$?
 
-grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "\[[k\.]\]\s+.*map"
+grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | "$DIR_PATH/../common/check_all_lines_matched.pl" "\[[k\.]\]\s+.*map"
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "symbol filter"
diff --git a/tools/perf/tests/shell/common/init.sh b/tools/perf/tests/shell/common/init.sh
index 13ec38c15c014252..36dabbd674fb1073 100644
--- a/tools/perf/tests/shell/common/init.sh
+++ b/tools/perf/tests/shell/common/init.sh
@@ -11,8 +11,8 @@
 #
 
 
-. ../common/settings.sh
-. ../common/patterns.sh
+. "$(dirname $0)/../common/settings.sh"
+. "$(dirname $0)/../common/patterns.sh"
 
 THIS_TEST_NAME=`basename $0 .sh`
 
-- 
2.43.0



* [PATCH 04/10] perf tests: Create a structure for shell tests
  2024-12-20 22:03 [PATCH 00/10] Introduce structure for shell tests vmolnaro
                   ` (2 preceding siblings ...)
  2024-12-20 22:03 ` [PATCH 03/10] perf test perftool_testsuite: Use absolute paths vmolnaro
@ 2024-12-20 22:03 ` vmolnaro
  2024-12-20 22:03 ` [PATCH 05/10] perf testsuite: Fix perf-report tests installation vmolnaro
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2024-12-20 22:03 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, namhyung, mpetlan; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

The general structure of test suites with test cases has been implemented
for the C tests for some time, while the shell tests were simply put into
a flat list without any structuring.

Provide the same test suite structure for the shell tests. A suite is
created for each subdirectory of the 'perf/tests/shell' directory that
contains at least one test script. All deeper levels of subdirectories are
merged into the first level of test cases. The name of the test suite is
the name of the subdirectory where the test cases are located. For test
scripts that are not in any subdirectory, a test suite with a single test
case is created, as has been done until now.

The new structure of the shell tests for 'perf test list':
    77: build id cache operations
    78: coresight
    78:1: CoreSight / ASM Pure Loop
    78:2: CoreSight / Memcpy 16k 10 Threads
    78:3: CoreSight / Thread Loop 10 Threads - Check TID
    78:4: CoreSight / Thread Loop 2 Threads - Check TID
    78:5: CoreSight / Unroll Loop Thread 10
    79: daemon operations
    80: perf diff tests

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/tests/tests-scripts.c | 223 +++++++++++++++++++++++++------
 tools/perf/tests/tests-scripts.h |   4 +
 2 files changed, 189 insertions(+), 38 deletions(-)

diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index cf3ae0c1d871742b..97f249ce0a36b03c 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -151,14 +151,45 @@ static char *strdup_check(const char *str)
 	return newstr;
 }
 
-static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
+/* Free the whole structure of test_suite with its test_cases */
+static void free_suite(struct test_suite *suite) {
+	if (suite->test_cases){
+		int num = 0;
+		while (suite->test_cases[num].name){ /* Last case has name set to NULL */
+			free((void*) suite->test_cases[num].name);
+			free((void*) suite->test_cases[num].desc);
+			num++;
+		}
+		free(suite->test_cases);
+	}
+	if (suite->desc)
+		free((void*) suite->desc);
+	if (suite->priv){
+		struct shell_info *test_info = suite->priv;
+		free((void*) test_info->base_path);
+		free(test_info);
+	}
+
+	free(suite);
+}
+
+static int shell_test__run(struct test_suite *test, int subtest)
 {
-	const char *file = test->priv;
+	const char *file;
 	int err;
 	char *cmd = NULL;
 
+	/* Get absolute file path */
+	if (subtest >= 0) {
+		file = test->test_cases[subtest].name;
+	}
+	else {		/* Single test case */
+		file = test->test_cases[0].name;
+	}
+
 	if (asprintf(&cmd, "%s%s", file, verbose ? " -v" : "") < 0)
 		return TEST_FAIL;
+
 	err = system(cmd);
 	free(cmd);
 	if (!err)
@@ -167,63 +198,154 @@ static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
 	return WEXITSTATUS(err) == 2 ? TEST_SKIP : TEST_FAIL;
 }
 
-static void append_script(int dir_fd, const char *name, char *desc,
-			  struct test_suite ***result,
-			  size_t *result_sz)
+static struct test_suite* prepare_test_suite(int dir_fd)
 {
-	char filename[PATH_MAX], link[128];
-	struct test_suite *test_suite, **result_tmp;
-	struct test_case *tests;
+	char dirpath[PATH_MAX], link[128];
 	size_t len;
-	char *exclusive;
+	struct test_suite *test_suite = NULL;
+	struct shell_info *test_info;
 
+	/* Get dir absolute path */
 	snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
-	len = readlink(link, filename, sizeof(filename));
+	len = readlink(link, dirpath, sizeof(dirpath));
 	if (len < 0) {
 		pr_err("Failed to readlink %s", link);
-		return;
+		return NULL;
 	}
-	filename[len++] = '/';
-	strcpy(&filename[len], name);
+	dirpath[len++] = '/';
+	dirpath[len] = '\0';
 
-	tests = calloc(2, sizeof(*tests));
-	if (!tests) {
-		pr_err("Out of memory while building script test suite list\n");
-		return;
-	}
-	tests[0].name = strdup_check(name);
-	exclusive = strstr(desc, " (exclusive)");
-	if (exclusive != NULL) {
-		tests[0].exclusive = true;
-		exclusive[0] = '\0';
-	}
-	tests[0].desc = strdup_check(desc);
-	tests[0].run_case = shell_test__run;
 	test_suite = zalloc(sizeof(*test_suite));
 	if (!test_suite) {
 		pr_err("Out of memory while building script test suite list\n");
-		free(tests);
-		return;
+		return NULL;
 	}
-	test_suite->desc = desc;
-	test_suite->test_cases = tests;
-	test_suite->priv = strdup_check(filename);
+
+	test_info = zalloc(sizeof(*test_info));
+	if (!test_info) {
+		pr_err("Out of memory while building script test suite list\n");
+		return NULL;
+	}
+
+	test_info->base_path = strdup_check(dirpath);		/* Absolute path to dir */
+
+	test_suite->priv = test_info;
+	test_suite->desc = NULL;
+	test_suite->test_cases = NULL;
+
+	return test_suite;
+}
+
+static void append_suite(struct test_suite ***result,
+			  size_t *result_sz, struct test_suite *test_suite)
+{
+	struct test_suite **result_tmp;
+
 	/* Realloc is good enough, though we could realloc by chunks, not that
 	 * anyone will ever measure performance here */
 	result_tmp = realloc(*result, (*result_sz + 1) * sizeof(*result_tmp));
 	if (result_tmp == NULL) {
 		pr_err("Out of memory while building script test suite list\n");
-		free(tests);
-		free(test_suite);
+		free_suite(test_suite);
 		return;
 	}
+
 	/* Add file to end and NULL terminate the struct array */
 	*result = result_tmp;
 	(*result)[*result_sz] = test_suite;
 	(*result_sz)++;
 }
 
-static void append_scripts_in_dir(int dir_fd,
+static void append_script_to_suite(int dir_fd, const char *name, char *desc,
+					struct test_suite *test_suite, size_t *tc_count)
+{
+	char file_name[PATH_MAX], link[128];
+	struct test_case *tests;
+	size_t len;
+	char *exclusive;
+
+	if (!test_suite)
+		return;
+
+	/* Requires an empty test case at the end */
+	tests = realloc(test_suite->test_cases, (*tc_count + 2) * sizeof(*tests));
+	if (!tests) {
+		pr_err("Out of memory while building script test suite list\n");
+		return;
+	}
+
+	/* Get path to the test script */
+	snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
+	len = readlink(link, file_name, sizeof(file_name));
+	if (len < 0) {
+		pr_err("Failed to readlink %s", link);
+		return;
+	}
+	file_name[len++] = '/';
+	strcpy(&file_name[len], name);
+
+	tests[(*tc_count)].name = strdup_check(file_name);	/* Get path to the script from base dir */
+	tests[(*tc_count)].exclusive = false;
+	exclusive = strstr(desc, " (exclusive)");
+	if (exclusive != NULL) {
+		tests[(*tc_count)].exclusive = true;
+		exclusive[0] = '\0';
+	}
+	tests[(*tc_count)].desc = desc;
+	tests[(*tc_count)].skip_reason = NULL;	/* Unused */
+	tests[(*tc_count)++].run_case = shell_test__run;
+
+	tests[(*tc_count)].name = NULL;		/* End the test cases */
+
+	test_suite->test_cases = tests;
+}
+
+static void append_scripts_in_subdir(int dir_fd,
+				  struct test_suite *suite,
+				  size_t *tc_count)
+{
+	struct dirent **entlist;
+	struct dirent *ent;
+	int n_dirs, i;
+
+	/* List files, sorted by alpha */
+	n_dirs = scandirat(dir_fd, ".", &entlist, NULL, alphasort);
+	if (n_dirs == -1)
+		return;
+	for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
+		int fd;
+
+		if (ent->d_name[0] == '.')
+			continue; /* Skip hidden files */
+		if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
+			char *desc = shell_test__description(dir_fd, ent->d_name);
+
+			if (desc) /* It has a desc line - valid script */
+				append_script_to_suite(dir_fd, ent->d_name, desc, suite, tc_count);
+			continue;
+		}
+
+		if (ent->d_type != DT_DIR) {
+			struct stat st;
+
+			if (ent->d_type != DT_UNKNOWN)
+				continue;
+			fstatat(dir_fd, ent->d_name, &st, 0);
+			if (!S_ISDIR(st.st_mode))
+				continue;
+		}
+
+		fd = openat(dir_fd, ent->d_name, O_PATH);
+
+		/* Recurse into the dir */
+		append_scripts_in_subdir(fd, suite, tc_count);
+	}
+	for (i = 0; i < n_dirs; i++) /* Clean up */
+		zfree(&entlist[i]);
+	free(entlist);
+}
+
+static void append_suits_in_dir(int dir_fd,
 				  struct test_suite ***result,
 				  size_t *result_sz)
 {
@@ -237,16 +359,27 @@ static void append_scripts_in_dir(int dir_fd,
 		return;
 	for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
 		int fd;
+		struct test_suite *test_suite;
+		size_t cases_count = 0;
 
 		if (ent->d_name[0] == '.')
 			continue; /* Skip hidden files */
 		if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
 			char *desc = shell_test__description(dir_fd, ent->d_name);
 
-			if (desc) /* It has a desc line - valid script */
-				append_script(dir_fd, ent->d_name, desc, result, result_sz);
+			if (desc) { /* It has a desc line - valid script */
+				test_suite = prepare_test_suite(dir_fd); /* Create a test suite with a single test case */
+				append_script_to_suite(dir_fd, ent->d_name, desc, test_suite, &cases_count);
+				test_suite->desc = strdup_check(desc);
+
+				if (cases_count)
+					append_suite(result, result_sz, test_suite);
+				else /* Wasn't able to create the test case */
+					free_suite(test_suite);
+			}
 			continue;
 		}
+
 		if (ent->d_type != DT_DIR) {
 			struct stat st;
 
@@ -258,8 +391,22 @@ static void append_scripts_in_dir(int dir_fd,
 		}
 		if (strncmp(ent->d_name, "base_", 5) == 0)
 			continue; /* Skip scripts that have a separate driver. */
+
+		/* Scan subdir for test cases*/
 		fd = openat(dir_fd, ent->d_name, O_PATH);
-		append_scripts_in_dir(fd, result, result_sz);
+		test_suite = prepare_test_suite(fd);	/* Prepare a testsuite with its path */
+		if (!test_suite)
+			continue;
+
+		append_scripts_in_subdir(fd, test_suite, &cases_count);
+		if (cases_count == 0){
+			free_suite(test_suite);
+			continue;
+		}
+
+		test_suite->desc = strdup_check(ent->d_name);	/* If no setup, set name to the directory */
+
+		append_suite(result, result_sz, test_suite);
 	}
 	for (i = 0; i < n_dirs; i++) /* Clean up */
 		zfree(&entlist[i]);
@@ -277,7 +424,7 @@ struct test_suite **create_script_test_suites(void)
 	 * length array.
 	 */
 	if (dir_fd >= 0)
-		append_scripts_in_dir(dir_fd, &result, &result_sz);
+		append_suits_in_dir(dir_fd, &result, &result_sz);
 
 	result_tmp = realloc(result, (result_sz + 1) * sizeof(*result_tmp));
 	if (result_tmp == NULL) {
diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
index b553ad26ea17642a..60a1a19a45c999f4 100644
--- a/tools/perf/tests/tests-scripts.h
+++ b/tools/perf/tests/tests-scripts.h
@@ -4,6 +4,10 @@
 
 #include "tests.h"
 
+struct shell_info {
+	const char *base_path;
+};
+
 struct test_suite **create_script_test_suites(void);
 
 #endif /* TESTS_SCRIPTS_H */
-- 
2.43.0



* [PATCH 05/10] perf testsuite: Fix perf-report tests installation
  2024-12-20 22:03 [PATCH 00/10] Introduce structure for shell tests vmolnaro
                   ` (3 preceding siblings ...)
  2024-12-20 22:03 ` [PATCH 04/10] perf tests: Create a structure for shell tests vmolnaro
@ 2024-12-20 22:03 ` vmolnaro
  2024-12-20 22:03 ` [PATCH 06/10] perf test: Provide setup for the shell test suite vmolnaro
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2024-12-20 22:03 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, namhyung, mpetlan; +Cc: irogers

From: Michael Petlan <mpetlan@redhat.com>

There was a copy-paste mistake in the installation commands. Also, we
need to install the stderr-whitelist.txt file, which contains allowed
messages that may be printed on stderr and should not cause a test failure.
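
A quick way to verify the result of "make install-tests" (using shell
placeholders for the Makefile's DESTDIR and perfexec_instdir variables
from the rules below):

  ls "$DESTDIR$perfexec_instdir"/tests/shell/base_report/
  # should now list the *.sh scripts together with stderr-whitelist.txt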

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/Makefile.perf | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
index d74241a151313bd0..50cbc19b002213f5 100644
--- a/tools/perf/Makefile.perf
+++ b/tools/perf/Makefile.perf
@@ -1142,7 +1142,8 @@ install-tests: all install-gtk
 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_probe'; \
 		$(INSTALL) tests/shell/base_probe/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_probe'; \
 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
-		$(INSTALL) tests/shell/base_probe/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
+		$(INSTALL) tests/shell/base_report/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
+		$(INSTALL) tests/shell/base_report/*.txt '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/coresight' ; \
 		$(INSTALL) tests/shell/coresight/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/coresight'
 	$(Q)$(MAKE) -C tests/shell/coresight install-tests
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 06/10] perf test: Provide setup for the shell test suite
  2024-12-20 22:03 [PATCH 00/10] Introduce structure for shell tests vmolnaro
                   ` (4 preceding siblings ...)
  2024-12-20 22:03 ` [PATCH 05/10] perf testsuite: Fix perf-report tests installation vmolnaro
@ 2024-12-20 22:03 ` vmolnaro
  2024-12-20 22:03 ` [PATCH 07/10] perftool-testsuite: Add empty setup for base_probe vmolnaro
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2024-12-20 22:03 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, namhyung, mpetlan; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

Some of the perftool-testsuite test cases require setup to be done
beforehand, such as recording data, setting up a cache or restoring the
sample rate. The setup file also provides the possibility to set the name
of the test suite if the name of the directory is not descriptive enough.

Check for the existence of the "setup.sh" script for the shell test
suites and run it before any of the test cases. If the setup fails,
skip all of the test cases of the test suite, as the setup may be
required for their results to be valid.
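
For illustration, a minimal setup.sh sketch (the suite name and the
recording step are made up for this example). The description comment
after the shebang is what names the suite, and a non-zero exit code
causes all of the suite's test cases to be skipped:

  #!/bin/bash
  # perftool-testsuite :: perf_mytool
  # SPDX-License-Identifier: GPL-2.0

  # Prepare sample data needed by the test cases of this suite.
  perf record -o "${PERFSUITE_RUN_DIR:-/tmp}/perf.data" -- sleep 0.1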

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/tests/builtin-test.c  | 23 ++++++++++++++++++---
 tools/perf/tests/tests-scripts.c | 34 ++++++++++++++++++++++++++++++--
 tools/perf/tests/tests-scripts.h | 10 ++++++++++
 tools/perf/tests/tests.h         |  8 +++++---
 4 files changed, 67 insertions(+), 8 deletions(-)

diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index d2cabaa8ad922d68..585613446d8a3c8d 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -193,6 +193,22 @@ static test_fnptr test_function(const struct test_suite *t, int subtest)
 	return t->test_cases[subtest].run_case;
 }
 
+/* If setup fails, skip all test cases */
+static void check_shell_setup(const struct test_suite *t, int ret)
+{
+	struct shell_info* test_info;
+
+	if (!t->priv)
+		return;
+
+	test_info = t->priv;
+
+	if (ret == TEST_SETUP_FAIL)
+		test_info->has_setup = FAILED_SETUP;
+	else if (test_info->has_setup == RUN_SETUP)
+		test_info->has_setup = PASSED_SETUP;
+}
+
 static bool test_exclusive(const struct test_suite *t, int subtest)
 {
 	if (subtest <= 0)
@@ -269,8 +285,6 @@ static int run_test_child(struct child_process *process)
 	return -err;
 }
 
-#define TEST_RUNNING -3
-
 static int print_test_result(struct test_suite *t, int i, int subtest, int result, int width,
 			     int running)
 {
@@ -288,7 +302,8 @@ static int print_test_result(struct test_suite *t, int i, int subtest, int resul
 	case TEST_OK:
 		pr_info(" Ok\n");
 		break;
-	case TEST_SKIP: {
+	case TEST_SKIP:
+	case TEST_SETUP_FAIL:{
 		const char *reason = skip_reason(t, subtest);
 
 		if (reason)
@@ -401,6 +416,7 @@ static void finish_test(struct child_test **child_tests, int running_test, int c
 	}
 	/* Clean up child process. */
 	ret = finish_command(&child_test->process);
+	check_shell_setup(t, ret);
 	if (verbose > 1 || (verbose == 1 && ret == TEST_FAIL))
 		fprintf(stderr, "%s", err_output.buf);
 
@@ -423,6 +439,7 @@ static int start_test(struct test_suite *test, int i, int subi, struct child_tes
 			err = test_function(test, subi)(test, subi);
 			pr_debug("---- end ----\n");
 			print_test_result(test, i, subi, err, width, /*running=*/0);
+			check_shell_setup(test, err);
 		}
 		return 0;
 	}
diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index 97f249ce0a36b03c..77a6b8d2213e6e74 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -138,6 +138,11 @@ static bool is_test_script(int dir_fd, const char *name)
 	return is_shell_script(dir_fd, name);
 }
 
+/* Filter for scandir */
+static int setup_filter(const struct dirent *entry){
+	return strcmp(entry->d_name, SHELL_SETUP);
+}
+
 /* Duplicate a string and fall over and die if we run out of memory */
 static char *strdup_check(const char *str)
 {
@@ -175,6 +180,7 @@ static void free_suite(struct test_suite *suite) {
 
 static int shell_test__run(struct test_suite *test, int subtest)
 {
+	struct shell_info *test_info = test->priv;
 	const char *file;
 	int err;
 	char *cmd = NULL;
@@ -187,6 +193,22 @@ static int shell_test__run(struct test_suite *test, int subtest)
 		file = test->test_cases[0].name;
 	}
 
+	/* Run setup if needed */
+	if (test_info->has_setup == RUN_SETUP){
+		char *setup_script;
+		if (asprintf(&setup_script, "%s%s%s", test_info->base_path, SHELL_SETUP, verbose ? " -v" : "") < 0)
+			return TEST_SETUP_FAIL;
+
+		err = system(setup_script);
+		free(setup_script);
+
+		if (err)
+			return TEST_SETUP_FAIL;
+	}
+	else if (test_info->has_setup == FAILED_SETUP) {
+		return TEST_SKIP; /* Skip test suite if setup failed */
+	}
+
 	if (asprintf(&cmd, "%s%s", file, verbose ? " -v" : "") < 0)
 		return TEST_FAIL;
 
@@ -228,6 +250,7 @@ static struct test_suite* prepare_test_suite(int dir_fd)
 	}
 
 	test_info->base_path = strdup_check(dirpath);		/* Absolute path to dir */
+	test_info->has_setup = NO_SETUP;
 
 	test_suite->priv = test_info;
 	test_suite->desc = NULL;
@@ -309,7 +332,7 @@ static void append_scripts_in_subdir(int dir_fd,
 	int n_dirs, i;
 
 	/* List files, sorted by alpha */
-	n_dirs = scandirat(dir_fd, ".", &entlist, NULL, alphasort);
+	n_dirs = scandirat(dir_fd, ".", &entlist, setup_filter, alphasort);
 	if (n_dirs == -1)
 		return;
 	for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
@@ -404,7 +427,14 @@ static void append_suits_in_dir(int dir_fd,
 			continue;
 		}
 
-		test_suite->desc = strdup_check(ent->d_name);	/* If no setup, set name to the directory */
+		if (is_test_script(fd, SHELL_SETUP)) {	/* Check for setup existence */
+			char *desc = shell_test__description(fd, SHELL_SETUP);
+			test_suite->desc = desc;	/* Set the suite name by the setup description */
+			((struct shell_info*)(test_suite->priv))->has_setup = RUN_SETUP;
+		}
+		else {
+			test_suite->desc = strdup_check(ent->d_name);	/* If no setup, set name to the directory */
+		}
 
 		append_suite(result, result_sz, test_suite);
 	}
diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
index 60a1a19a45c999f4..da4dcd26140cdfd2 100644
--- a/tools/perf/tests/tests-scripts.h
+++ b/tools/perf/tests/tests-scripts.h
@@ -4,8 +4,18 @@
 
 #include "tests.h"
 
+#define SHELL_SETUP "setup.sh"
+
+enum shell_setup {
+	NO_SETUP     = 0,
+	RUN_SETUP    = 1,
+	FAILED_SETUP = 2,
+	PASSED_SETUP = 3,
+};
+
 struct shell_info {
 	const char *base_path;
+	enum shell_setup has_setup;
 };
 
 struct test_suite **create_script_test_suites(void);
diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h
index af284dd47e5c7855..2c3fb03ad633be61 100644
--- a/tools/perf/tests/tests.h
+++ b/tools/perf/tests/tests.h
@@ -5,9 +5,11 @@
 #include <stdbool.h>
 
 enum {
-	TEST_OK   =  0,
-	TEST_FAIL = -1,
-	TEST_SKIP = -2,
+	TEST_OK         =  0,
+	TEST_FAIL       = -1,
+	TEST_SKIP       = -2,
+	TEST_RUNNING    = -3,
+	TEST_SETUP_FAIL = -4,
 };
 
 #define TEST_ASSERT_VAL(text, cond)					 \
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 07/10] perftool-testsuite: Add empty setup for base_probe
  2024-12-20 22:03 [PATCH 00/10] Introduce structure for shell tests vmolnaro
                   ` (5 preceding siblings ...)
  2024-12-20 22:03 ` [PATCH 06/10] perf test: Provide setup for the shell test suite vmolnaro
@ 2024-12-20 22:03 ` vmolnaro
  2024-12-20 22:03 ` [PATCH 08/10] perf test: Introduce storing logs for shell tests vmolnaro
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2024-12-20 22:03 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, namhyung, mpetlan; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

Add an empty setup script to set a proper name for the base_probe test
suite. It can be utilized for basic test setup in the future.

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/tests/shell/base_probe/setup.sh | 13 +++++++++++++
 1 file changed, 13 insertions(+)
 create mode 100755 tools/perf/tests/shell/base_probe/setup.sh

diff --git a/tools/perf/tests/shell/base_probe/setup.sh b/tools/perf/tests/shell/base_probe/setup.sh
new file mode 100755
index 0000000000000000..fbb99325b555a723
--- /dev/null
+++ b/tools/perf/tests/shell/base_probe/setup.sh
@@ -0,0 +1,13 @@
+#!/bin/bash
+# perftool-testsuite :: perf_probe
+# SPDX-License-Identifier: GPL-2.0
+
+#
+#	setup.sh of perf probe test
+#	Author: Michael Petlan <mpetlan@redhat.com>
+#
+#	Description:
+#
+#		Setting testsuite name, for future use
+#
+#
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 08/10] perf test: Introduce storing logs for shell tests
  2024-12-20 22:03 [PATCH 00/10] Introduce structure for shell tests vmolnaro
                   ` (6 preceding siblings ...)
  2024-12-20 22:03 ` [PATCH 07/10] perftool-testsuite: Add empty setup for base_probe vmolnaro
@ 2024-12-20 22:03 ` vmolnaro
  2024-12-20 22:03 ` [PATCH 09/10] perf test: Format log directories " vmolnaro
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2024-12-20 22:03 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, namhyung, mpetlan; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

Create temporary directories for storing log files of shell tests,
which can help while debugging. The log files are also necessary for
the perftool testsuite test cases. If the environment variable
PERFTEST_KEEP_LOGS is set to "y", keep the logs; otherwise, delete them.
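
For example, to run a suite and keep its logs for later inspection (the
suite name argument is only illustrative):

  PERFTEST_KEEP_LOGS=y perf test perf_probe
  ls -d /tmp/perf_test_*    # the log directories are left in place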

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/tests/builtin-test.c  | 110 ++++++++++++++++++++++++++++---
 tools/perf/tests/tests-scripts.c |   3 +
 tools/perf/tests/tests-scripts.h |   1 +
 3 files changed, 105 insertions(+), 9 deletions(-)

diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 585613446d8a3c8d..3458db9c41e370a5 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -5,6 +5,7 @@
  * Builtin regression testing command: ever growing number of sanity tests
  */
 #include <fcntl.h>
+#include <ftw.h>
 #include <errno.h>
 #include <poll.h>
 #include <unistd.h>
@@ -217,6 +218,86 @@ static bool test_exclusive(const struct test_suite *t, int subtest)
 	return t->test_cases[subtest].exclusive;
 }
 
+static int delete_file(const char *fpath, const struct stat *sb __maybe_unused,
+						 int typeflag, struct FTW *ftwbuf)
+{
+	int rv = -1;
+
+	/* Stop traversal if going too deep */
+	if (ftwbuf->level > 5) {
+		pr_err("Tree traversal reached level %d, stopping.", ftwbuf->level);
+		return rv;
+	}
+
+	/* Remove only expected directories */
+	if (typeflag == FTW_D || typeflag == FTW_DP){
+		const char *dirname = fpath + ftwbuf->base;
+
+		if (strcmp(dirname, "logs") && strcmp(dirname, "examples") &&
+			strcmp(dirname, "header_tar") && strncmp(dirname, "perf_", 5)) {
+				pr_err("Unknown directory %s", dirname);
+				return rv;
+			 }
+	}
+
+	/* Attempt to remove the file */
+	rv = remove(fpath);
+	if (rv)
+		pr_err("Failed to remove file: %s", fpath);
+
+	return rv;
+}
+
+static bool create_logs(struct test_suite *t, int pass){
+	bool store_logs = t->priv && ((struct shell_info*)(t->priv))->store_logs;
+	if (pass == 1 && (!test_exclusive(t, 0) || sequential || dont_fork)) {
+		/* Sequential and non-exclusive tests run on the first pass. */
+		return store_logs;
+	}
+	else if (pass != 1 && test_exclusive(t, 0) && !sequential && !dont_fork) {
+		/* Exclusive tests without sequential run on the second pass. */
+		return store_logs;
+	}
+	return false;
+}
+
+static char *setup_shell_logs(const char *name)
+{
+	char template[PATH_MAX];
+	char *temp_dir;
+
+	if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", name) < 0) {
+		pr_err("Failed to create log dir template");
+		return NULL; /* Skip the testsuite */
+	}
+
+	temp_dir = mkdtemp(template);
+	if (temp_dir) {
+		setenv("PERFSUITE_RUN_DIR", temp_dir, 1);
+		return strdup(temp_dir);
+	}
+	else {
+		pr_err("Failed to create the temporary directory");
+	}
+
+	return NULL; /* Skip the testsuite */
+}
+
+static void cleanup_shell_logs(char *dirname)
+{
+	char *keep_logs = getenv("PERFTEST_KEEP_LOGS");
+
+	/* Check if logs should be kept or do cleanup */
+	if (dirname) {
+		if (!keep_logs || strcmp(keep_logs, "y") != 0) {
+			nftw(dirname, delete_file, 8, FTW_DEPTH | FTW_PHYS);
+		}
+		free(dirname);
+	}
+
+	unsetenv("PERFSUITE_RUN_DIR");
+}
+
 static bool perf_test__matches(const char *desc, int curr, int argc, const char *argv[])
 {
 	int i;
@@ -548,6 +629,7 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
 
 		for (struct test_suite **t = suites; *t; t++) {
 			int curr = i++;
+			char *tmpdir = NULL;
 
 			if (!perf_test__matches(test_description(*t, -1), curr, argc, argv)) {
 				/*
@@ -572,23 +654,33 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
 				continue;
 			}
 
+			/* Setup temporary log directories for shell test suites */
+			if (create_logs(*t, pass)) {
+				tmpdir = setup_shell_logs((*t)->desc);
+
+				if (tmpdir == NULL)  /* Couldn't create log dir, skip test suite */
+					((struct shell_info*)((*t)->priv))->has_setup = FAILED_SETUP;
+			}
+
 			if (!has_subtests(*t)) {
 				err = start_test(*t, curr, -1, &child_tests[child_test_num++],
 						 width, pass);
 				if (err)
 					goto err_out;
-				continue;
 			}
-			for (int subi = 0, subn = num_subtests(*t); subi < subn; subi++) {
-				if (!perf_test__matches(test_description(*t, subi),
-							curr, argc, argv))
-					continue;
+			else {
+				for (int subi = 0, subn = num_subtests(*t); subi < subn; subi++) {
+					if (!perf_test__matches(test_description(*t, subi),
+								curr, argc, argv))
+						continue;
 
-				err = start_test(*t, curr, subi, &child_tests[child_test_num++],
-						 width, pass);
-				if (err)
-					goto err_out;
+					err = start_test(*t, curr, subi, &child_tests[child_test_num++],
+							width, pass);
+					if (err)
+						goto err_out;
+				}
 			}
+			cleanup_shell_logs(tmpdir);
 		}
 		if (!sequential) {
 			/* Parallel mode starts tests but doesn't finish them. Do that now. */
diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index 77a6b8d2213e6e74..2dab7324ed05e7e9 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -251,6 +251,7 @@ static struct test_suite* prepare_test_suite(int dir_fd)
 
 	test_info->base_path = strdup_check(dirpath);		/* Absolute path to dir */
 	test_info->has_setup = NO_SETUP;
+	test_info->store_logs = false;
 
 	test_suite->priv = test_info;
 	test_suite->desc = NULL;
@@ -427,6 +428,8 @@ static void append_suits_in_dir(int dir_fd,
 			continue;
 		}
 
+		/* Store logs for test suites in sub-directories */
+		((struct shell_info*)(test_suite->priv))->store_logs = true;
 		if (is_test_script(fd, SHELL_SETUP)) {	/* Check for setup existence */
 			char *desc = shell_test__description(fd, SHELL_SETUP);
 			test_suite->desc = desc;	/* Set the suite name by the setup description */
diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
index da4dcd26140cdfd2..41da0a175e4e7033 100644
--- a/tools/perf/tests/tests-scripts.h
+++ b/tools/perf/tests/tests-scripts.h
@@ -16,6 +16,7 @@ enum shell_setup {
 struct shell_info {
 	const char *base_path;
 	enum shell_setup has_setup;
+	bool store_logs;
 };
 
 struct test_suite **create_script_test_suites(void);
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 09/10] perf test: Format log directories for shell tests
  2024-12-20 22:03 [PATCH 00/10] Introduce structure for shell tests vmolnaro
                   ` (7 preceding siblings ...)
  2024-12-20 22:03 ` [PATCH 08/10] perf test: Introduce storing logs for shell tests vmolnaro
@ 2024-12-20 22:03 ` vmolnaro
  2024-12-20 22:03 ` [PATCH 10/10] perf test: Remove perftool drivers vmolnaro
  2025-01-13 15:24 ` [PATCH 00/10] Introduce structure for shell tests Arnaldo Carvalho de Melo
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2024-12-20 22:03 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, namhyung, mpetlan; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

The name of the log directory is taken from the test suite
description, which may contain whitespace and other special characters.
This can cause further issues if the name is not quoted correctly.

Replace all non-alphanumeric characters (including whitespace) with an
underscore to prevent possible issues caused by word splitting.
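
The effect on the directory template is roughly equivalent to the
following shell transformation (illustrative only; the real conversion is
done in C by check_dir_name() below):

  desc='perf_probe :: Basic perf probe functionality (exclusive)'
  name=$(printf '%s' "$desc" | tr -c '[:alnum:]' '_')
  printf '/tmp/perf_test_%s.XXXXXX\n' "$name"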

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/tests/builtin-test.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 3458db9c41e370a5..eb32284b3bbed59d 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -4,6 +4,7 @@
  *
  * Builtin regression testing command: ever growing number of sanity tests
  */
+#include <ctype.h>
 #include <fcntl.h>
 #include <ftw.h>
 #include <errno.h>
@@ -218,6 +219,21 @@ static bool test_exclusive(const struct test_suite *t, int subtest)
 	return t->test_cases[subtest].exclusive;
 }
 
+/* Replace non-alphanumeric characters with _ */
+static void check_dir_name(const char *src, char *dst)
+{
+	size_t i;
+	size_t len = strlen(src);
+
+	for (i = 0; i < len; i++) {
+		if (!isalnum(src[i]))
+			dst[i] = '_';
+		else
+			dst[i] = src[i];
+	}
+	dst[i] = '\0';
+}
+
 static int delete_file(const char *fpath, const struct stat *sb __maybe_unused,
 						 int typeflag, struct FTW *ftwbuf)
 {
@@ -263,10 +279,12 @@ static bool create_logs(struct test_suite *t, int pass){
 
 static char *setup_shell_logs(const char *name)
 {
-	char template[PATH_MAX];
+	char template[PATH_MAX], valid_name[strlen(name)+1];
 	char *temp_dir;
 
-	if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", name) < 0) {
+	check_dir_name(name, valid_name);
+
+	if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", valid_name) < 0) {
 		pr_err("Failed to create log dir template");
 		return NULL; /* Skip the testsuite */
 	}
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 10/10] perf test: Remove perftool drivers
  2024-12-20 22:03 [PATCH 00/10] Introduce structure for shell tests vmolnaro
                   ` (8 preceding siblings ...)
  2024-12-20 22:03 ` [PATCH 09/10] perf test: Format log directories " vmolnaro
@ 2024-12-20 22:03 ` vmolnaro
  2025-01-13 15:24 ` [PATCH 00/10] Introduce structure for shell tests Arnaldo Carvalho de Melo
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2024-12-20 22:03 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, namhyung, mpetlan; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

Perf now provides all of the features required for running the
perftool test cases, such as creating log directories and running setup
scripts, and the tests are structured by the base_ directories.

Remove the drivers, as they are no longer necessary, together with the
condition that skipped the base_ directories, and run the test cases
through the default perf test structure.
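
After this change the base_* suites and their test cases show up directly
in the regular test listing, which can be checked with something like:

  perf test list 2>&1 | grep -i probe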

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 .../tests/shell/perftool-testsuite_probe.sh   | 23 -------------------
 .../tests/shell/perftool-testsuite_report.sh  | 23 -------------------
 tools/perf/tests/tests-scripts.c              |  2 --
 3 files changed, 48 deletions(-)
 delete mode 100755 tools/perf/tests/shell/perftool-testsuite_probe.sh
 delete mode 100755 tools/perf/tests/shell/perftool-testsuite_report.sh

diff --git a/tools/perf/tests/shell/perftool-testsuite_probe.sh b/tools/perf/tests/shell/perftool-testsuite_probe.sh
deleted file mode 100755
index a0fec33a0358aeff..0000000000000000
--- a/tools/perf/tests/shell/perftool-testsuite_probe.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/bash
-# perftool-testsuite_probe
-# SPDX-License-Identifier: GPL-2.0
-
-test -d "$(dirname "$0")/base_probe" || exit 2
-cd "$(dirname "$0")/base_probe" || exit 2
-status=0
-
-PERFSUITE_RUN_DIR=$(mktemp -d /tmp/"$(basename "$0" .sh)".XXX)
-export PERFSUITE_RUN_DIR
-
-for testcase in setup.sh test_*; do                  # skip setup.sh if not present or not executable
-     test -x "$testcase" || continue
-     ./"$testcase"
-     (( status += $? ))
-done
-
-if ! [ "$PERFTEST_KEEP_LOGS" = "y" ]; then
-	rm -rf "$PERFSUITE_RUN_DIR"
-fi
-
-test $status -ne 0 && exit 1
-exit 0
diff --git a/tools/perf/tests/shell/perftool-testsuite_report.sh b/tools/perf/tests/shell/perftool-testsuite_report.sh
deleted file mode 100755
index a8cf75b4e77ec1a3..0000000000000000
--- a/tools/perf/tests/shell/perftool-testsuite_report.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/bash
-# perftool-testsuite_report (exclusive)
-# SPDX-License-Identifier: GPL-2.0
-
-test -d "$(dirname "$0")/base_report" || exit 2
-cd "$(dirname "$0")/base_report" || exit 2
-status=0
-
-PERFSUITE_RUN_DIR=$(mktemp -d /tmp/"$(basename "$0" .sh)".XXX)
-export PERFSUITE_RUN_DIR
-
-for testcase in setup.sh test_*; do                  # skip setup.sh if not present or not executable
-     test -x "$testcase" || continue
-     ./"$testcase"
-     (( status += $? ))
-done
-
-if ! [ "$PERFTEST_KEEP_LOGS" = "y" ]; then
-	rm -rf "$PERFSUITE_RUN_DIR"
-fi
-
-test $status -ne 0 && exit 1
-exit 0
diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index 2dab7324ed05e7e9..51d2ffaf31a0103a 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -413,8 +413,6 @@ static void append_suits_in_dir(int dir_fd,
 			if (!S_ISDIR(st.st_mode))
 				continue;
 		}
-		if (strncmp(ent->d_name, "base_", 5) == 0)
-			continue; /* Skip scripts that have a separate driver. */
 
 		/* Scan subdir for test cases */
 		fd = openat(dir_fd, ent->d_name, O_PATH);
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [PATCH 00/10] Introduce structure for shell tests
  2024-12-20 22:03 [PATCH 00/10] Introduce structure for shell tests vmolnaro
                   ` (9 preceding siblings ...)
  2024-12-20 22:03 ` [PATCH 10/10] perf test: Remove perftool drivers vmolnaro
@ 2025-01-13 15:24 ` Arnaldo Carvalho de Melo
  2025-01-13 18:25   ` [PATCH v2 " vmolnaro
                     ` (10 more replies)
  10 siblings, 11 replies; 43+ messages in thread
From: Arnaldo Carvalho de Melo @ 2025-01-13 15:24 UTC (permalink / raw)
  To: vmolnaro; +Cc: linux-perf-users, acme, namhyung, mpetlan, irogers

On Fri, Dec 20, 2024 at 11:03:24PM +0100, vmolnaro@redhat.com wrote:
> From: Veronika Molnarova <vmolnaro@redhat.com>
> 
> Hello,
> 
> Sending the third patchset from perftool-testsuite upstreaming effort,
> which also contains new possibilities for shell tests, such as a two-level
> structured test hierarchy, a setup file for the test suite and ability to
> store logs.
> 
> The patches do not add any new test cases but instead provide the needed
> environment that was temporarily replaced by the perftool test drivers.
> 
> We wanted to provide the possibility to have a shell test consisting 
> of multiple subtests, as is already done for the C tests. The logical 
> structuring of the test cases was a part of the perftool test suite,
> and we saw this as an opportunity to introduce a structured approach for 
> the perf shell tests.
> 
> A directory in the shell directory will be represented as a test suite 
> if it contains at least one executable shell test. In case of two and 
> more tests, the subtests are are visibly differentiated from the test 
> suite by having a subtest index. All deeper levels of subdirectories 
> are still searched for tests but do not create additional levels of 
> hierarchy.
> 
> Some test suites require setup to be done before they are run, such 
> recording samples or setting up test probes. This can be done by 
> adding a "setup.sh" executable file in the test suite directory, which 
> will be run before all of the tests. If the setup fails, all of 
> the tests are skipped, as it is assumed that the setup is required 
> for their execution. The setup file also gives us the possibility to 
> name the test suite. If there is no setup file, the name is derived 
> from the name of the directory.
> 
> Lastly, we wanted to provide a way to store the test logs after execution 
> for debugging purposes, if necessary. The test logs for perftool tests 
> are stored in a '/tmp/perf_test_*' temporary directory. By default,
> these logs are cleared after the test finishes. However, if the env
> variable PERFTEST_KEEP_LOGS is set to "y", the test logs are retained
> for debugging.
> 
> For now, all of the perftool tests are marked as exclusive, preventing 
> from running parallel. This may change in the future if we ensure that 
> they will not interfere with other tests being run simultaneously.

I tried to apply it now to perf-tools-next and didn't manage to :-\

Can you take a look please? I was going to test it.

- Arnaldo
 
> Thoughts and ideas are welcome.
> 
> Thanks and regards,
> 
> Veronika
> 
> Michael Petlan (1):
>   perf testsuite: Fix perf-report tests installation
> 
> Veronika Molnarova (9):
>   perf test perftool_testsuite: Add missing description
>   perf test perftool_testsuite: Return correct value for skipping
>   perf test perftool_testsuite: Use absolute paths
>   perf tests: Create a structure for shell tests
>   perf test: Provide setup for the shell test suite
>   perftool-testsuite: Add empty setup for base_probe
>   perf test: Introduce storing logs for shell tests
>   perf test: Format log directories for shell tests
>   perf test: Remove perftool drivers
> 
>  tools/perf/Makefile.perf                      |   3 +-
>  tools/perf/tests/builtin-test.c               | 151 +++++++++-
>  tools/perf/tests/shell/base_probe/setup.sh    |  13 +
>  .../base_probe/test_adding_blacklisted.sh     |  17 +-
>  .../shell/base_probe/test_adding_kernel.sh    |  57 ++--
>  .../perf/tests/shell/base_probe/test_basic.sh |  23 +-
>  .../shell/base_probe/test_invalid_options.sh  |  15 +-
>  .../shell/base_probe/test_line_semantics.sh   |  11 +-
>  tools/perf/tests/shell/base_report/setup.sh   |   8 +-
>  .../tests/shell/base_report/test_basic.sh     |  49 ++--
>  tools/perf/tests/shell/common/init.sh         |   6 +-
>  .../tests/shell/perftool-testsuite_probe.sh   |  23 --
>  .../tests/shell/perftool-testsuite_report.sh  |  23 --
>  tools/perf/tests/tests-scripts.c              | 258 +++++++++++++++---
>  tools/perf/tests/tests-scripts.h              |  15 +
>  tools/perf/tests/tests.h                      |   8 +-
>  16 files changed, 489 insertions(+), 191 deletions(-)
>  create mode 100755 tools/perf/tests/shell/base_probe/setup.sh
>  delete mode 100755 tools/perf/tests/shell/perftool-testsuite_probe.sh
>  delete mode 100755 tools/perf/tests/shell/perftool-testsuite_report.sh
> 
> -- 
> 2.43.0

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v2 00/10] Introduce structure for shell tests
  2025-01-13 15:24 ` [PATCH 00/10] Introduce structure for shell tests Arnaldo Carvalho de Melo
@ 2025-01-13 18:25   ` vmolnaro
  2025-07-21 13:26     ` [PATCH v3 0/7] " Jakub Brnak
  2025-01-13 18:25   ` [PATCH v2 01/10] perf test perftool_testsuite: Add missing description vmolnaro
                     ` (9 subsequent siblings)
  10 siblings, 1 reply; 43+ messages in thread
From: vmolnaro @ 2025-01-13 18:25 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, mpetlan, namhyung; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

Rebased v2 onto perf-tools-next; it should apply now.

Thanks for letting me know,
Veronika

Michael Petlan (1):
  perf testsuite: Fix perf-report tests installation

Veronika Molnarova (9):
  perf test perftool_testsuite: Add missing description
  perf test perftool_testsuite: Return correct value for skipping
  perf test perftool_testsuite: Use absolute paths
  perf tests: Create a structure for shell tests
  perf test: Provide setup for the shell test suite
  perftool-testsuite: Add empty setup for base_probe
  perf test: Introduce storing logs for shell tests
  perf test: Format log directories for shell tests
  perf test: Remove perftool drivers

 tools/perf/Makefile.perf                      |   3 +-
 tools/perf/tests/builtin-test.c               | 151 +++++++++-
 tools/perf/tests/shell/base_probe/setup.sh    |  13 +
 .../base_probe/test_adding_blacklisted.sh     |  17 +-
 .../shell/base_probe/test_adding_kernel.sh    |  57 ++--
 .../perf/tests/shell/base_probe/test_basic.sh |  23 +-
 .../shell/base_probe/test_invalid_options.sh  |  15 +-
 .../shell/base_probe/test_line_semantics.sh   |  11 +-
 tools/perf/tests/shell/base_report/setup.sh   |   8 +-
 .../tests/shell/base_report/test_basic.sh     |  49 ++--
 tools/perf/tests/shell/common/init.sh         |   6 +-
 .../tests/shell/perftool-testsuite_probe.sh   |  23 --
 .../tests/shell/perftool-testsuite_report.sh  |  23 --
 tools/perf/tests/tests-scripts.c              | 258 +++++++++++++++---
 tools/perf/tests/tests-scripts.h              |  15 +
 tools/perf/tests/tests.h                      |   8 +-
 16 files changed, 489 insertions(+), 191 deletions(-)
 create mode 100755 tools/perf/tests/shell/base_probe/setup.sh
 delete mode 100755 tools/perf/tests/shell/perftool-testsuite_probe.sh
 delete mode 100755 tools/perf/tests/shell/perftool-testsuite_report.sh

-- 
2.43.0


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v2 01/10] perf test perftool_testsuite: Add missing description
  2025-01-13 15:24 ` [PATCH 00/10] Introduce structure for shell tests Arnaldo Carvalho de Melo
  2025-01-13 18:25   ` [PATCH v2 " vmolnaro
@ 2025-01-13 18:25   ` vmolnaro
  2025-01-13 18:25   ` [PATCH v2 02/10] perf test perftool_testsuite: Return correct value for skipping vmolnaro
                     ` (8 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2025-01-13 18:25 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, mpetlan, namhyung; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

Properly name the test cases of perftool_testsuite so that the SPDX
license line is no longer taken as the test name by 'perf test'.
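
The name shown by 'perf test' comes from the description comment at the
top of the script, now placed right after the shebang, so the result can
be checked quickly with, e.g.:

  head -2 tools/perf/tests/shell/base_probe/test_basic.sh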

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh | 2 +-
 tools/perf/tests/shell/base_probe/test_adding_kernel.sh      | 2 +-
 tools/perf/tests/shell/base_probe/test_basic.sh              | 2 +-
 tools/perf/tests/shell/base_probe/test_invalid_options.sh    | 2 +-
 tools/perf/tests/shell/base_probe/test_line_semantics.sh     | 2 +-
 tools/perf/tests/shell/base_report/setup.sh                  | 2 +-
 tools/perf/tests/shell/base_report/test_basic.sh             | 2 +-
 7 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
index ac5a15c57fb38f14..4204e941fad99269 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-
+# perf_probe :: Reject blacklisted probes (exclusive)
 # SPDX-License-Identifier: GPL-2.0
 
 #
diff --git a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
index d541ffd44a9332b6..c276c2a3fc26ecde 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-# Add 'perf probe's, list and remove them
+# perf_probe :: Add probes, list and remove them (exclusive)
 # SPDX-License-Identifier: GPL-2.0
 
 #
diff --git a/tools/perf/tests/shell/base_probe/test_basic.sh b/tools/perf/tests/shell/base_probe/test_basic.sh
index 09669ec479f23d2f..a69dc1c9f92c1b96 100755
--- a/tools/perf/tests/shell/base_probe/test_basic.sh
+++ b/tools/perf/tests/shell/base_probe/test_basic.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-
+# perf_probe :: Basic perf probe functionality (exclusive)
 # SPDX-License-Identifier: GPL-2.0
 
 #
diff --git a/tools/perf/tests/shell/base_probe/test_invalid_options.sh b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
index 0f835558a14b2069..8d1570c44a54ac75 100755
--- a/tools/perf/tests/shell/base_probe/test_invalid_options.sh
+++ b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-
+# perf_probe :: Reject invalid options (exclusive)
 # SPDX-License-Identifier: GPL-2.0
 
 #
diff --git a/tools/perf/tests/shell/base_probe/test_line_semantics.sh b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
index b114f3e50b7fe131..2ab70a543087c543 100755
--- a/tools/perf/tests/shell/base_probe/test_line_semantics.sh
+++ b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-
+# perf_probe :: Check patterns for line semantics (exclusive)
 # SPDX-License-Identifier: GPL-2.0
 
 #
diff --git a/tools/perf/tests/shell/base_report/setup.sh b/tools/perf/tests/shell/base_report/setup.sh
index 4caa496660c64f5e..b03501b2e8fc5330 100755
--- a/tools/perf/tests/shell/base_report/setup.sh
+++ b/tools/perf/tests/shell/base_report/setup.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-
+# perftool-testsuite :: perf_report
 # SPDX-License-Identifier: GPL-2.0
 
 #
diff --git a/tools/perf/tests/shell/base_report/test_basic.sh b/tools/perf/tests/shell/base_report/test_basic.sh
index 47677cbd4df31f0a..2398eba4d3fdd3db 100755
--- a/tools/perf/tests/shell/base_report/test_basic.sh
+++ b/tools/perf/tests/shell/base_report/test_basic.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-
+# perf_report :: Basic perf report options (exclusive)
 # SPDX-License-Identifier: GPL-2.0
 
 #
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 02/10] perf test perftool_testsuite: Return correct value for skipping
  2025-01-13 15:24 ` [PATCH 00/10] Introduce structure for shell tests Arnaldo Carvalho de Melo
  2025-01-13 18:25   ` [PATCH v2 " vmolnaro
  2025-01-13 18:25   ` [PATCH v2 01/10] perf test perftool_testsuite: Add missing description vmolnaro
@ 2025-01-13 18:25   ` vmolnaro
  2025-01-13 18:25   ` [PATCH v2 03/10] perf test perftool_testsuite: Use absolute paths vmolnaro
                     ` (7 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2025-01-13 18:25 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, mpetlan, namhyung; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

In 'perf test', a return value of 2 indicates that the test case was
skipped. Fix this value for the perftool_testsuite test cases to
differentiate skipped tests from passed ones.
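
A minimal sketch of the convention as used in the test cases below:

  if ! check_kprobes_available; then
      print_overall_skipped
      exit 2    # 2 tells 'perf test' that the test case was skipped
  fi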

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh | 2 +-
 tools/perf/tests/shell/base_probe/test_adding_kernel.sh      | 2 +-
 tools/perf/tests/shell/base_probe/test_basic.sh              | 2 +-
 tools/perf/tests/shell/base_probe/test_invalid_options.sh    | 2 +-
 tools/perf/tests/shell/base_probe/test_line_semantics.sh     | 2 +-
 tools/perf/tests/shell/common/init.sh                        | 2 +-
 6 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
index 4204e941fad99269..45c21673643641b3 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
@@ -22,7 +22,7 @@ TEST_RESULT=0
 BLACKFUNC_LIST=`head -n 5 /sys/kernel/debug/kprobes/blacklist 2> /dev/null | cut -f2`
 if [ -z "$BLACKFUNC_LIST" ]; then
 	print_overall_skipped
-	exit 0
+	exit 2
 fi
 
 # try to find vmlinux with DWARF debug info
diff --git a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
index c276c2a3fc26ecde..24fe91550c672cc2 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
@@ -33,7 +33,7 @@ fi
 check_kprobes_available
 if [ $? -ne 0 ]; then
 	print_overall_skipped
-	exit 0
+	exit 2
 fi
 
 
diff --git a/tools/perf/tests/shell/base_probe/test_basic.sh b/tools/perf/tests/shell/base_probe/test_basic.sh
index a69dc1c9f92c1b96..9d8b5afbeddda268 100755
--- a/tools/perf/tests/shell/base_probe/test_basic.sh
+++ b/tools/perf/tests/shell/base_probe/test_basic.sh
@@ -19,7 +19,7 @@ TEST_RESULT=0
 
 if ! check_kprobes_available; then
 	print_overall_skipped
-	exit 0
+	exit 2
 fi
 
 
diff --git a/tools/perf/tests/shell/base_probe/test_invalid_options.sh b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
index 8d1570c44a54ac75..92f7254eb32a31d0 100755
--- a/tools/perf/tests/shell/base_probe/test_invalid_options.sh
+++ b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
@@ -19,7 +19,7 @@ TEST_RESULT=0
 
 if ! check_kprobes_available; then
 	print_overall_skipped
-	exit 0
+	exit 2
 fi
 
 # Check for presence of DWARF
diff --git a/tools/perf/tests/shell/base_probe/test_line_semantics.sh b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
index 2ab70a543087c543..20435b6bf6bc654d 100755
--- a/tools/perf/tests/shell/base_probe/test_line_semantics.sh
+++ b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
@@ -20,7 +20,7 @@ TEST_RESULT=0
 
 if ! check_kprobes_available; then
 	print_overall_skipped
-	exit 0
+	exit 2
 fi
 
 # Check for presence of DWARF
diff --git a/tools/perf/tests/shell/common/init.sh b/tools/perf/tests/shell/common/init.sh
index 259706ef58994bf9..26c7525651e084fc 100644
--- a/tools/perf/tests/shell/common/init.sh
+++ b/tools/perf/tests/shell/common/init.sh
@@ -88,7 +88,7 @@ consider_skipping()
 	# the runmode of a testcase needs to be at least the current suite's runmode
 	if [ $PERFTOOL_TESTSUITE_RUNMODE -lt $TESTCASE_RUNMODE ]; then
 		print_overall_skipped
-		exit 0
+		exit 2
 	fi
 }
 
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 03/10] perf test perftool_testsuite: Use absolute paths
  2025-01-13 15:24 ` [PATCH 00/10] Introduce structure for shell tests Arnaldo Carvalho de Melo
                     ` (2 preceding siblings ...)
  2025-01-13 18:25   ` [PATCH v2 02/10] perf test perftool_testsuite: Return correct value for skipping vmolnaro
@ 2025-01-13 18:25   ` vmolnaro
  2025-01-13 18:25   ` [PATCH v2 04/10] perf tests: Create a structure for shell tests vmolnaro
                     ` (6 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2025-01-13 18:25 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, mpetlan, namhyung; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

Test cases from perftool_testsuite are affected by the current
directory where the tests are run. For this reason, the test
driver has to change the directory to the base_dir for references to
work correctly.

Utilize absolute paths when sourcing and referencing other scripts so
that the current working directory doesn't impact the test cases.
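
The pattern used throughout the test cases (as in the hunks below; the
pattern string and log file name here are illustrative) is:

  DIR_PATH="$(dirname $0)"
  . "$DIR_PATH/../common/init.sh"
  "$DIR_PATH/../common/check_all_patterns_found.pl" "some pattern" < some.log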

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 .../base_probe/test_adding_blacklisted.sh     | 13 ++---
 .../shell/base_probe/test_adding_kernel.sh    | 53 ++++++++++---------
 .../perf/tests/shell/base_probe/test_basic.sh | 19 +++----
 .../shell/base_probe/test_invalid_options.sh  | 11 ++--
 .../shell/base_probe/test_line_semantics.sh   |  7 +--
 tools/perf/tests/shell/base_report/setup.sh   |  6 ++-
 .../tests/shell/base_report/test_basic.sh     | 47 ++++++++--------
 tools/perf/tests/shell/common/init.sh         |  4 +-
 8 files changed, 84 insertions(+), 76 deletions(-)

diff --git a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
index 45c21673643641b3..b60b0a58361d9ebe 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
@@ -13,11 +13,12 @@
 #	they must be skipped.
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 # skip if not supported
 BLACKFUNC_LIST=`head -n 5 /sys/kernel/debug/kprobes/blacklist 2> /dev/null | cut -f2`
 if [ -z "$BLACKFUNC_LIST" ]; then
@@ -53,7 +54,7 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
 	PERF_EXIT_CODE=$?
 
 	# check for bad DWARF polluting the result
-	../common/check_all_patterns_found.pl "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
 
 	if [ $? -eq 0 ]; then
 		SKIP_DWARF=1
@@ -73,7 +74,7 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
 			fi
 		fi
 	else
-		../common/check_all_lines_matched.pl "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
+		"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
 		CHECK_EXIT_CODE=$?
 
 		SKIP_DWARF=0
@@ -94,7 +95,7 @@ fi
 $CMD_PERF list probe:\* > $LOGS_DIR/adding_blacklisted_list.log
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_blacklisted_list.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_blacklisted_list.log
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing blacklisted probe (should NOT be listed)"
diff --git a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
index 24fe91550c672cc2..5e4a3bf3a1cdaee3 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
@@ -13,13 +13,14 @@
 #		and removing.
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 # shellcheck source=lib/probe_vfs_getname.sh
-. "$(dirname "$0")/../lib/probe_vfs_getname.sh"
+. "$DIR_PATH/../lib/probe_vfs_getname.sh"
 
 TEST_PROBE=${TEST_PROBE:-"inode_permission"}
 
@@ -44,7 +45,7 @@ for opt in "" "-a" "--add"; do
 	$CMD_PERF probe $opt $TEST_PROBE 2> $LOGS_DIR/adding_kernel_add$opt.err
 	PERF_EXIT_CODE=$?
 
-	../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_add$opt.err
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_add$opt.err
 	CHECK_EXIT_CODE=$?
 
 	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "adding probe $TEST_PROBE :: $opt"
@@ -58,7 +59,7 @@ done
 $CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list.log
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "List of pre-defined events" "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list"
@@ -71,7 +72,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list
 $CMD_PERF probe -l > $LOGS_DIR/adding_kernel_list-l.log
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" < $LOGS_DIR/adding_kernel_list-l.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" < $LOGS_DIR/adding_kernel_list-l.log
 CHECK_EXIT_CODE=$?
 
 if [ $NO_DEBUGINFO ] ; then
@@ -93,9 +94,9 @@ REGEX_STAT_VALUES="\s*\d+\s+probe:$TEST_PROBE"
 # the value should be greater than 1
 REGEX_STAT_VALUE_NONZERO="\s*[1-9][0-9]*\s+probe:$TEST_PROBE"
 REGEX_STAT_TIME="\s*$RE_NUMBER\s+seconds (?:time elapsed|user|sys)"
-../common/check_all_lines_matched.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
 CHECK_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" < $LOGS_DIR/adding_kernel_using_probe.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" < $LOGS_DIR/adding_kernel_using_probe.log
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
@@ -108,7 +109,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
 $CMD_PERF probe -d $TEST_PROBE\* 2> $LOGS_DIR/adding_kernel_removing.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
+"$DIR_PATH/../common/check_all_lines_matched.pl" "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
@@ -121,7 +122,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
 $CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list_removed.log
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_kernel_list_removed.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_kernel_list_removed.log
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing removed probe (should NOT be listed)"
@@ -135,7 +136,7 @@ $CMD_PERF probe -n --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_dryrun.err
 PERF_EXIT_CODE=$?
 
 # check for the output (should be the same as usual)
-../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_dryrun.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_dryrun.err
 CHECK_EXIT_CODE=$?
 
 # check that no probe was added in real
@@ -152,7 +153,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "dry run :: adding probe"
 $CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_01.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_01.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_01.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first probe adding"
@@ -162,7 +163,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first pro
 ! $CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_02.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "Error: event \"$TEST_PROBE\" already exists." "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Error: event \"$TEST_PROBE\" already exists." "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (without force)"
@@ -173,7 +174,7 @@ NO_OF_PROBES=`$CMD_PERF probe -l | wc -l`
 $CMD_PERF probe --force --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_03.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (with force)"
@@ -187,7 +188,7 @@ $CMD_PERF stat -e probe:$TEST_PROBE -e probe:${TEST_PROBE}_${NO_OF_PROBES} -x';'
 PERF_EXIT_CODE=$?
 
 REGEX_LINE="$RE_NUMBER;+probe:${TEST_PROBE}_?(?:$NO_OF_PROBES)?;$RE_NUMBER;$RE_NUMBER"
-../common/check_all_lines_matched.pl "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/adding_kernel_using_two.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/adding_kernel_using_two.log
 CHECK_EXIT_CODE=$?
 
 VALUE_1=`grep "$TEST_PROBE;" $LOGS_DIR/adding_kernel_using_two.log | awk -F';' '{print $1}'`
@@ -205,7 +206,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using doubled probe"
 $CMD_PERF probe --del \* 2> $LOGS_DIR/adding_kernel_removing_wildcard.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "Removed event: probe:$TEST_PROBE" "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
+"$DIR_PATH/../common/check_all_lines_matched.pl" "Removed event: probe:$TEST_PROBE" "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
@@ -217,7 +218,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
 $CMD_PERF probe -nf --max-probes=512 -a 'vfs_* $params' 2> $LOGS_DIR/adding_kernel_adding_wildcard.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
 CHECK_EXIT_CODE=$?
 
 if [ $NO_DEBUGINFO ] ; then
@@ -240,13 +241,13 @@ test $PERF_EXIT_CODE -ne 139 -a $PERF_EXIT_CODE -ne 0
 PERF_EXIT_CODE=$?
 
 # check that the error message is reasonable
-../common/check_all_patterns_found.pl "Failed to find" "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Failed to find" "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" < $LOGS_DIR/adding_kernel_nonexisting.err
 CHECK_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "in this function|at this address" "Error" "Failed to add events" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "in this function|at this address" "Error" "Failed to add events" < $LOGS_DIR/adding_kernel_nonexisting.err
 (( CHECK_EXIT_CODE += $? ))
-../common/check_all_lines_matched.pl "Failed to find" "Error" "Probe point .+ not found" "optimized out" "Use.+\-\-range option to show.+location range" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_all_lines_matched.pl" "Failed to find" "Error" "Probe point .+ not found" "optimized out" "Use.+\-\-range option to show.+location range" < $LOGS_DIR/adding_kernel_nonexisting.err
 (( CHECK_EXIT_CODE += $? ))
-../common/check_no_patterns_found.pl "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_no_patterns_found.pl" "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
 (( CHECK_EXIT_CODE += $? ))
 
 if [ $NO_DEBUGINFO ]; then
@@ -264,7 +265,7 @@ fi
 $CMD_PERF probe --add "$TEST_PROBE%return \$retval" 2> $LOGS_DIR/adding_kernel_func_retval_add.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE%return with \\\$retval" < $LOGS_DIR/adding_kernel_func_retval_add.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE%return with \\\$retval" < $LOGS_DIR/adding_kernel_func_retval_add.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
@@ -274,7 +275,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
 $CMD_PERF record -e probe:$TEST_PROBE\* -o $CURRENT_TEST_DIR/perf.data -- cat /proc/cpuinfo > /dev/null 2> $LOGS_DIR/adding_kernel_func_retval_record.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/adding_kernel_func_retval_record.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/adding_kernel_func_retval_record.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: record"
@@ -285,9 +286,9 @@ $CMD_PERF script -i $CURRENT_TEST_DIR/perf.data > $LOGS_DIR/adding_kernel_func_r
 PERF_EXIT_CODE=$?
 
 REGEX_SCRIPT_LINE="\s*cat\s+$RE_NUMBER\s+\[$RE_NUMBER\]\s+$RE_NUMBER:\s+probe:$TEST_PROBE\w*:\s+\($RE_NUMBER_HEX\s+<\-\s+$RE_NUMBER_HEX\)\s+arg1=$RE_NUMBER_HEX"
-../common/check_all_lines_matched.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
 CHECK_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function argument probing :: script"
diff --git a/tools/perf/tests/shell/base_probe/test_basic.sh b/tools/perf/tests/shell/base_probe/test_basic.sh
index 9d8b5afbeddda268..e8fed67be9c1a8ee 100755
--- a/tools/perf/tests/shell/base_probe/test_basic.sh
+++ b/tools/perf/tests/shell/base_probe/test_basic.sh
@@ -12,11 +12,12 @@
 #		This test tests basic functionality of perf probe command.
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 if ! check_kprobes_available; then
 	print_overall_skipped
 	exit 2
@@ -30,15 +31,15 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
 	$CMD_PERF probe --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
 	PERF_EXIT_CODE=$?
 
-	../common/check_all_patterns_found.pl "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" < $LOGS_DIR/basic_helpmsg.log
 	CHECK_EXIT_CODE=$?
-	../common/check_all_patterns_found.pl "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
+	"$DIR_PATH/../common/check_no_patterns_found.pl" "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
 	(( CHECK_EXIT_CODE += $? ))
 
 	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
@@ -53,7 +54,7 @@ fi
 # without any args perf-probe should print usage
 $CMD_PERF probe 2> $LOGS_DIR/basic_usage.log > /dev/null
 
-../common/check_all_patterns_found.pl "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
 CHECK_EXIT_CODE=$?
 
 print_results 0 $CHECK_EXIT_CODE "usage message"
diff --git a/tools/perf/tests/shell/base_probe/test_invalid_options.sh b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
index 92f7254eb32a31d0..9caeab2fe77cd207 100755
--- a/tools/perf/tests/shell/base_probe/test_invalid_options.sh
+++ b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
@@ -12,11 +12,12 @@
 #		This test checks whether the invalid and incompatible options are reported
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 if ! check_kprobes_available; then
 	print_overall_skipped
 	exit 2
@@ -33,7 +34,7 @@ for opt in '-a' '-d' '-L' '-V'; do
 	! $CMD_PERF probe $opt 2> $LOGS_DIR/invalid_options_missing_argument$opt.err
 	PERF_EXIT_CODE=$?
 
-	../common/check_all_patterns_found.pl "Error: switch .* requires a value" < $LOGS_DIR/invalid_options_missing_argument$opt.err
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "Error: switch .* requires a value" < $LOGS_DIR/invalid_options_missing_argument$opt.err
 	CHECK_EXIT_CODE=$?
 
 	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "missing argument for $opt"
@@ -66,7 +67,7 @@ for opt in '-a xxx -d xxx' '-a xxx -L foo' '-a xxx -V foo' '-a xxx -l' '-a xxx -
 	! $CMD_PERF probe $opt > /dev/null 2> $LOGS_DIR/aux.log
 	PERF_EXIT_CODE=$?
 
-	../common/check_all_patterns_found.pl "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
 	CHECK_EXIT_CODE=$?
 
 	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "mutually exclusive options :: $opt"
diff --git a/tools/perf/tests/shell/base_probe/test_line_semantics.sh b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
index 20435b6bf6bc654d..576442d87a44400a 100755
--- a/tools/perf/tests/shell/base_probe/test_line_semantics.sh
+++ b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
@@ -13,11 +13,12 @@
 #		arguments are properly reported.
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 if ! check_kprobes_available; then
 	print_overall_skipped
 	exit 2
diff --git a/tools/perf/tests/shell/base_report/setup.sh b/tools/perf/tests/shell/base_report/setup.sh
index b03501b2e8fc5330..386e947d1c8bcda2 100755
--- a/tools/perf/tests/shell/base_report/setup.sh
+++ b/tools/perf/tests/shell/base_report/setup.sh
@@ -12,8 +12,10 @@
 #
 #
 
+DIR_PATH="$(dirname $0)"
+
 # include working environment
-. ../common/init.sh
+. "$DIR_PATH/../common/init.sh"
 
 test -d "$HEADER_TAR_DIR" || mkdir -p "$HEADER_TAR_DIR"
 
@@ -22,7 +24,7 @@ SW_EVENT="cpu-clock"
 $CMD_PERF record -asdg -e $SW_EVENT -o $CURRENT_TEST_DIR/perf.data -- $CMD_LONGER_SLEEP 2> $LOGS_DIR/setup.log
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "prepare the perf.data file"
diff --git a/tools/perf/tests/shell/base_report/test_basic.sh b/tools/perf/tests/shell/base_report/test_basic.sh
index 2398eba4d3fdd3db..4e931587f6ed9dfa 100755
--- a/tools/perf/tests/shell/base_report/test_basic.sh
+++ b/tools/perf/tests/shell/base_report/test_basic.sh
@@ -12,11 +12,12 @@
 #
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 
 ### help message
 
@@ -25,19 +26,19 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
 	$CMD_PERF report --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
 	PERF_EXIT_CODE=$?
 
-	../common/check_all_patterns_found.pl "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
 	CHECK_EXIT_CODE=$?
-	../common/check_all_patterns_found.pl "input" "verbose" "show-nr-samples" "show-cpu-utilization" "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "input" "verbose" "show-nr-samples" "show-cpu-utilization" "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "hide-unresolved" "sort" "fields" "parent" "exclude-other" "column-widths" "field-separator" "dump-raw-trace" "children" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "hide-unresolved" "sort" "fields" "parent" "exclude-other" "column-widths" "field-separator" "dump-raw-trace" "children" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" "show-total-period" "show-info" "branch-stack" "group" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" "show-total-period" "show-info" "branch-stack" "group" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "branch-history" "objdump" "demangle" "percent-limit" "percentage" "header" "itrace" "full-source-path" "show-ref-call-graph" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "branch-history" "objdump" "demangle" "percent-limit" "percentage" "header" "itrace" "full-source-path" "show-ref-call-graph" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
+	"$DIR_PATH/../common/check_no_patterns_found.pl" "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
 	(( CHECK_EXIT_CODE += $? ))
 
 	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
@@ -57,9 +58,9 @@ REGEX_LOST_SAMPLES_INFO="#\s*Total Lost Samples:\s+$RE_NUMBER"
 REGEX_SAMPLES_INFO="#\s*Samples:\s+(?:$RE_NUMBER)\w?\s+of\s+event\s+'$RE_EVENT_ANY'"
 REGEX_LINES_HEADER="#\s*Children\s+Self\s+Command\s+Shared Object\s+Symbol"
 REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
-../common/check_all_patterns_found.pl "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "basic execution"
@@ -74,9 +75,9 @@ PERF_EXIT_CODE=$?
 
 REGEX_LINES_HEADER="#\s*Children\s+Self\s+Samples\s+Command\s+Shared Object\s+Symbol"
 REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
-../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "number of samples"
@@ -98,7 +99,7 @@ REGEX_LINE_CPUS_ONLINE="#\s+nrcpus online\s*:\s*$MY_CPUS_ONLINE"
 REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$MY_CPUS_AVAILABLE"
 # disable precise check for "nrcpus avail" in BASIC runmode
 test $PERFTOOL_TESTSUITE_RUNMODE -lt $RUNMODE_STANDARD && REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$RE_NUMBER"
-../common/check_all_patterns_found.pl "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "header"
@@ -129,9 +130,9 @@ PERF_EXIT_CODE=$?
 
 REGEX_LINES_HEADER="#\s*Children\s+Self\s+sys\s+usr\s+Command\s+Shared Object\s+Symbol"
 REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
-../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
@@ -144,9 +145,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
 $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --pid=1 > $LOGS_DIR/basic_pid.log 2> $LOGS_DIR/basic_pid.err
 PERF_EXIT_CODE=$?
 
-grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "systemd|init"
+grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | "$DIR_PATH/../common/check_all_lines_matched.pl" "systemd|init"
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
@@ -159,9 +160,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
 $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbols=dummynonexistingsymbol > $LOGS_DIR/basic_symbols.log 2> $LOGS_DIR/basic_symbols.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
@@ -174,9 +175,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
 $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbol-filter=map > $LOGS_DIR/basic_symbolfilter.log 2> $LOGS_DIR/basic_symbolfilter.err
 PERF_EXIT_CODE=$?
 
-grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "\[[k\.]\]\s+.*map"
+grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | "$DIR_PATH/../common/check_all_lines_matched.pl" "\[[k\.]\]\s+.*map"
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "symbol filter"
diff --git a/tools/perf/tests/shell/common/init.sh b/tools/perf/tests/shell/common/init.sh
index 26c7525651e084fc..cbfc78bec974261e 100644
--- a/tools/perf/tests/shell/common/init.sh
+++ b/tools/perf/tests/shell/common/init.sh
@@ -11,8 +11,8 @@
 #
 
 
-. ../common/settings.sh
-. ../common/patterns.sh
+. "$(dirname $0)/../common/settings.sh"
+. "$(dirname $0)/../common/patterns.sh"
 
 THIS_TEST_NAME=`basename $0 .sh`
 
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 04/10] perf tests: Create a structure for shell tests
  2025-01-13 15:24 ` [PATCH 00/10] Introduce structure for shell tests Arnaldo Carvalho de Melo
                     ` (3 preceding siblings ...)
  2025-01-13 18:25   ` [PATCH v2 03/10] perf test perftool_testsuite: Use absolute paths vmolnaro
@ 2025-01-13 18:25   ` vmolnaro
  2025-01-13 18:26   ` [PATCH v2 05/10] perf testsuite: Fix perf-report tests installation vmolnaro
                     ` (5 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2025-01-13 18:25 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, mpetlan, namhyung; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

The general structure of test suites with test cases has been in place
for the C tests for some time, while shell tests were simply put into a
flat list with no way to structure them.

Provide the same test-suite structure for shell tests. A suite is
created for each subdirectory of the 'perf/tests/shell' directory that
contains at least one test script. Test scripts found in deeper levels
of subdirectories are merged into the first level of test cases. The
name of the test suite is the name of the subdirectory where the test
cases are located. Test scripts that are not in any subdirectory get a
test suite with a single test case, as has been done until now.

The new structure of the shell tests as reported by 'perf test list':
    77: build id cache operations
    78: coresight
    78:1: CoreSight / ASM Pure Loop
    78:2: CoreSight / Memcpy 16k 10 Threads
    78:3: CoreSight / Thread Loop 10 Threads - Check TID
    78:4: CoreSight / Thread Loop 2 Threads - Check TID
    78:5: CoreSight / Unroll Loop Thread 10
    79: daemon operations
    80: perf diff tests
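
For illustration, the hierarchy above maps onto the on-disk layout roughly
as follows (script paths are illustrative):

    # a standalone script becomes a suite with a single test case:
    #     tests/shell/buildid.sh      ->  "77: build id cache operations"
    # a subdirectory containing test scripts becomes one suite with subtests:
    #     tests/shell/coresight/*.sh  ->  "78: coresight", subtests 78:1 .. 78:5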

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/tests/tests-scripts.c | 223 +++++++++++++++++++++++++------
 tools/perf/tests/tests-scripts.h |   4 +
 2 files changed, 189 insertions(+), 38 deletions(-)

diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index 1d5759d08141749d..c742a3d0a6a26bb4 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -151,14 +151,45 @@ static char *strdup_check(const char *str)
 	return newstr;
 }
 
-static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
+/* Free the whole structure of test_suite with its test_cases */
+static void free_suite(struct test_suite *suite) {
+	if (suite->test_cases){
+		int num = 0;
+		while (suite->test_cases[num].name){ /* Last case has name set to NULL */
+			free((void*) suite->test_cases[num].name);
+			free((void*) suite->test_cases[num].desc);
+			num++;
+		}
+		free(suite->test_cases);
+	}
+	if (suite->desc)
+		free((void*) suite->desc);
+	if (suite->priv){
+		struct shell_info *test_info = suite->priv;
+		free((void*) test_info->base_path);
+		free(test_info);
+	}
+
+	free(suite);
+}
+
+static int shell_test__run(struct test_suite *test, int subtest)
 {
-	const char *file = test->priv;
+	const char *file;
 	int err;
 	char *cmd = NULL;
 
+	/* Get absolute file path */
+	if (subtest >= 0) {
+		file = test->test_cases[subtest].name;
+	}
+	else {		/* Single test case */
+		file = test->test_cases[0].name;
+	}
+
 	if (asprintf(&cmd, "%s%s", file, verbose ? " -v" : "") < 0)
 		return TEST_FAIL;
+
 	err = system(cmd);
 	free(cmd);
 	if (!err)
@@ -167,63 +198,154 @@ static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
 	return WEXITSTATUS(err) == 2 ? TEST_SKIP : TEST_FAIL;
 }
 
-static void append_script(int dir_fd, const char *name, char *desc,
-			  struct test_suite ***result,
-			  size_t *result_sz)
+static struct test_suite* prepare_test_suite(int dir_fd)
 {
-	char filename[PATH_MAX], link[128];
-	struct test_suite *test_suite, **result_tmp;
-	struct test_case *tests;
+	char dirpath[PATH_MAX], link[128];
 	ssize_t len;
-	char *exclusive;
+	struct test_suite *test_suite = NULL;
+	struct shell_info *test_info;
 
+	/* Get dir absolute path */
 	snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
-	len = readlink(link, filename, sizeof(filename));
+	len = readlink(link, dirpath, sizeof(dirpath));
 	if (len < 0) {
 		pr_err("Failed to readlink %s", link);
-		return;
+		return NULL;
 	}
-	filename[len++] = '/';
-	strcpy(&filename[len], name);
+	dirpath[len++] = '/';
+	dirpath[len] = '\0';
 
-	tests = calloc(2, sizeof(*tests));
-	if (!tests) {
-		pr_err("Out of memory while building script test suite list\n");
-		return;
-	}
-	tests[0].name = strdup_check(name);
-	exclusive = strstr(desc, " (exclusive)");
-	if (exclusive != NULL) {
-		tests[0].exclusive = true;
-		exclusive[0] = '\0';
-	}
-	tests[0].desc = strdup_check(desc);
-	tests[0].run_case = shell_test__run;
 	test_suite = zalloc(sizeof(*test_suite));
 	if (!test_suite) {
 		pr_err("Out of memory while building script test suite list\n");
-		free(tests);
-		return;
+		return NULL;
 	}
-	test_suite->desc = desc;
-	test_suite->test_cases = tests;
-	test_suite->priv = strdup_check(filename);
+
+	test_info = zalloc(sizeof(*test_info));
+	if (!test_info) {
+		pr_err("Out of memory while building script test suite list\n");
+		return NULL;
+	}
+
+	test_info->base_path = strdup_check(dirpath);		/* Absolute path to dir */
+
+	test_suite->priv = test_info;
+	test_suite->desc = NULL;
+	test_suite->test_cases = NULL;
+
+	return test_suite;
+}
+
+static void append_suite(struct test_suite ***result,
+			  size_t *result_sz, struct test_suite *test_suite)
+{
+	struct test_suite **result_tmp;
+
 	/* Realloc is good enough, though we could realloc by chunks, not that
 	 * anyone will ever measure performance here */
 	result_tmp = realloc(*result, (*result_sz + 1) * sizeof(*result_tmp));
 	if (result_tmp == NULL) {
 		pr_err("Out of memory while building script test suite list\n");
-		free(tests);
-		free(test_suite);
+		free_suite(test_suite);
 		return;
 	}
+
 	/* Add file to end and NULL terminate the struct array */
 	*result = result_tmp;
 	(*result)[*result_sz] = test_suite;
 	(*result_sz)++;
 }
 
-static void append_scripts_in_dir(int dir_fd,
+static void append_script_to_suite(int dir_fd, const char *name, char *desc,
+					struct test_suite *test_suite, size_t *tc_count)
+{
+	char file_name[PATH_MAX], link[128];
+	struct test_case *tests;
+	size_t len;
+	char *exclusive;
+
+	if (!test_suite)
+		return;
+
+	/* Requires an empty test case at the end */
+	tests = realloc(test_suite->test_cases, (*tc_count + 2) * sizeof(*tests));
+	if (!tests) {
+		pr_err("Out of memory while building script test suite list\n");
+		return;
+	}
+
+	/* Get path to the test script */
+	snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
+	len = readlink(link, file_name, sizeof(file_name));
+	if (len < 0) {
+		pr_err("Failed to readlink %s", link);
+		return;
+	}
+	file_name[len++] = '/';
+	strcpy(&file_name[len], name);
+
+	tests[(*tc_count)].name = strdup_check(file_name);	/* Get path to the script from base dir */
+	tests[(*tc_count)].exclusive = false;
+	exclusive = strstr(desc, " (exclusive)");
+	if (exclusive != NULL) {
+		tests[(*tc_count)].exclusive = true;
+		exclusive[0] = '\0';
+	}
+	tests[(*tc_count)].desc = desc;
+	tests[(*tc_count)].skip_reason = NULL;	/* Unused */
+	tests[(*tc_count)++].run_case = shell_test__run;
+
+	tests[(*tc_count)].name = NULL;		/* End the test cases */
+
+	test_suite->test_cases = tests;
+}
+
+static void append_scripts_in_subdir(int dir_fd,
+				  struct test_suite *suite,
+				  size_t *tc_count)
+{
+	struct dirent **entlist;
+	struct dirent *ent;
+	int n_dirs, i;
+
+	/* List files, sorted by alpha */
+	n_dirs = scandirat(dir_fd, ".", &entlist, NULL, alphasort);
+	if (n_dirs == -1)
+		return;
+	for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
+		int fd;
+
+		if (ent->d_name[0] == '.')
+			continue; /* Skip hidden files */
+		if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
+			char *desc = shell_test__description(dir_fd, ent->d_name);
+
+			if (desc) /* It has a desc line - valid script */
+				append_script_to_suite(dir_fd, ent->d_name, desc, suite, tc_count);
+			continue;
+		}
+
+		if (ent->d_type != DT_DIR) {
+			struct stat st;
+
+			if (ent->d_type != DT_UNKNOWN)
+				continue;
+			fstatat(dir_fd, ent->d_name, &st, 0);
+			if (!S_ISDIR(st.st_mode))
+				continue;
+		}
+
+		fd = openat(dir_fd, ent->d_name, O_PATH);
+
+		/* Recurse into the dir */
+		append_scripts_in_subdir(fd, suite, tc_count);
+	}
+	for (i = 0; i < n_dirs; i++) /* Clean up */
+		zfree(&entlist[i]);
+	free(entlist);
+}
+
+static void append_suits_in_dir(int dir_fd,
 				  struct test_suite ***result,
 				  size_t *result_sz)
 {
@@ -237,16 +359,27 @@ static void append_scripts_in_dir(int dir_fd,
 		return;
 	for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
 		int fd;
+		struct test_suite *test_suite;
+		size_t cases_count = 0;
 
 		if (ent->d_name[0] == '.')
 			continue; /* Skip hidden files */
 		if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
 			char *desc = shell_test__description(dir_fd, ent->d_name);
 
-			if (desc) /* It has a desc line - valid script */
-				append_script(dir_fd, ent->d_name, desc, result, result_sz);
+			if (desc) { /* It has a desc line - valid script */
+				test_suite = prepare_test_suite(dir_fd); /* Create a test suite with a single test case */
+				append_script_to_suite(dir_fd, ent->d_name, desc, test_suite, &cases_count);
+				test_suite->desc = strdup_check(desc);
+
+				if (cases_count)
+					append_suite(result, result_sz, test_suite);
+				else /* Wasn't able to create the test case */
+					free_suite(test_suite);
+			}
 			continue;
 		}
+
 		if (ent->d_type != DT_DIR) {
 			struct stat st;
 
@@ -258,8 +391,22 @@ static void append_scripts_in_dir(int dir_fd,
 		}
 		if (strncmp(ent->d_name, "base_", 5) == 0)
 			continue; /* Skip scripts that have a separate driver. */
+
+		/* Scan subdir for test cases*/
 		fd = openat(dir_fd, ent->d_name, O_PATH);
-		append_scripts_in_dir(fd, result, result_sz);
+		test_suite = prepare_test_suite(fd);	/* Prepare a testsuite with its path */
+		if (!test_suite)
+			continue;
+
+		append_scripts_in_subdir(fd, test_suite, &cases_count);
+		if (cases_count == 0){
+			free_suite(test_suite);
+			continue;
+		}
+
+		test_suite->desc = strdup_check(ent->d_name);	/* If no setup, set name to the directory */
+
+		append_suite(result, result_sz, test_suite);
 	}
 	for (i = 0; i < n_dirs; i++) /* Clean up */
 		zfree(&entlist[i]);
@@ -277,7 +424,7 @@ struct test_suite **create_script_test_suites(void)
 	 * length array.
 	 */
 	if (dir_fd >= 0)
-		append_scripts_in_dir(dir_fd, &result, &result_sz);
+		append_suits_in_dir(dir_fd, &result, &result_sz);
 
 	result_tmp = realloc(result, (result_sz + 1) * sizeof(*result_tmp));
 	if (result_tmp == NULL) {
diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
index b553ad26ea17642a..60a1a19a45c999f4 100644
--- a/tools/perf/tests/tests-scripts.h
+++ b/tools/perf/tests/tests-scripts.h
@@ -4,6 +4,10 @@
 
 #include "tests.h"
 
+struct shell_info {
+	const char *base_path;
+};
+
 struct test_suite **create_script_test_suites(void);
 
 #endif /* TESTS_SCRIPTS_H */
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 05/10] perf testsuite: Fix perf-report tests installation
  2025-01-13 15:24 ` [PATCH 00/10] Introduce structure for shell tests Arnaldo Carvalho de Melo
                     ` (4 preceding siblings ...)
  2025-01-13 18:25   ` [PATCH v2 04/10] perf tests: Create a structure for shell tests vmolnaro
@ 2025-01-13 18:26   ` vmolnaro
  2025-01-13 18:26   ` [PATCH v2 06/10] perf test: Provide setup for the shell test suite vmolnaro
                     ` (4 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2025-01-13 18:26 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, mpetlan, namhyung; +Cc: irogers

From: Michael Petlan <mpetlan@redhat.com>

There was a copy-paste mistake in the installation commands. Also, we
need to install the stderr-whitelist.txt file, which contains allowed
messages that may be printed on stderr and should not cause a test to
fail.
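
A hypothetical way to exercise the install rule after this fix (the
destination directory is illustrative):

    # install the shell tests, including the stderr whitelist
    make -C tools/perf DESTDIR=/tmp/perf-dest install-tests
    find /tmp/perf-dest -name stderr-whitelist.txt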

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/Makefile.perf | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
index a449d00155364422..641ac4df8865131e 100644
--- a/tools/perf/Makefile.perf
+++ b/tools/perf/Makefile.perf
@@ -1135,7 +1135,8 @@ install-tests: all install-gtk
 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_probe'; \
 		$(INSTALL) tests/shell/base_probe/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_probe'; \
 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
-		$(INSTALL) tests/shell/base_probe/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
+		$(INSTALL) tests/shell/base_report/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
+		$(INSTALL) tests/shell/base_report/*.txt '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/coresight' ; \
 		$(INSTALL) tests/shell/coresight/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/coresight'
 	$(Q)$(MAKE) -C tests/shell/coresight install-tests
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 06/10] perf test: Provide setup for the shell test suite
  2025-01-13 15:24 ` [PATCH 00/10] Introduce structure for shell tests Arnaldo Carvalho de Melo
                     ` (5 preceding siblings ...)
  2025-01-13 18:26   ` [PATCH v2 05/10] perf testsuite: Fix perf-report tests installation vmolnaro
@ 2025-01-13 18:26   ` vmolnaro
  2025-01-13 18:26   ` [PATCH v2 07/10] perftool-testsuite: Add empty setup for base_probe vmolnaro
                     ` (3 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2025-01-13 18:26 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, mpetlan, namhyung; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

Some of the perftool-testsuite test cases require setup to be done
beforehand, such as recording data, setting up a cache or restoring the
sample rate. The setup file also provides the possibility to set the
name of the test suite if the name of the directory is not descriptive
enough.

Check for the existence of a "setup.sh" script in each shell test suite
and run it before any of the test cases. If the setup fails, skip all
of the test cases of the test suite, as the setup may be required for
their results to be valid.
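
A minimal sketch of such a setup script (contents illustrative): the suite
name is taken from the script's description line, and a non-zero exit
status causes every test case in the suite to be skipped.

    #!/bin/bash
    # perftool-testsuite :: perf_probe
    # SPDX-License-Identifier: GPL-2.0
    #
    # Prepare whatever the test cases need (record data, set up probes, ...).
    # A non-zero exit here marks the whole suite as skipped.
    exit 0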

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/tests/builtin-test.c  | 23 ++++++++++++++++++---
 tools/perf/tests/tests-scripts.c | 34 ++++++++++++++++++++++++++++++--
 tools/perf/tests/tests-scripts.h | 10 ++++++++++
 tools/perf/tests/tests.h         |  8 +++++---
 4 files changed, 67 insertions(+), 8 deletions(-)

diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index a5b9ccd0033a8484..4a4dc86ebcf60d0d 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -192,6 +192,22 @@ static test_fnptr test_function(const struct test_suite *t, int subtest)
 	return t->test_cases[subtest].run_case;
 }
 
+/* If setup fails, skip all test cases */
+static void check_shell_setup(const struct test_suite *t, int ret)
+{
+	struct shell_info* test_info;
+
+	if (!t->priv)
+		return;
+
+	test_info = t->priv;
+
+	if (ret == TEST_SETUP_FAIL)
+		test_info->has_setup = FAILED_SETUP;
+	else if (test_info->has_setup == RUN_SETUP)
+		test_info->has_setup = PASSED_SETUP;
+}
+
 static bool test_exclusive(const struct test_suite *t, int subtest)
 {
 	if (subtest <= 0)
@@ -268,8 +284,6 @@ static int run_test_child(struct child_process *process)
 	return -err;
 }
 
-#define TEST_RUNNING -3
-
 static int print_test_result(struct test_suite *t, int i, int subtest, int result, int width,
 			     int running)
 {
@@ -287,7 +301,8 @@ static int print_test_result(struct test_suite *t, int i, int subtest, int resul
 	case TEST_OK:
 		pr_info(" Ok\n");
 		break;
-	case TEST_SKIP: {
+	case TEST_SKIP:
+	case TEST_SETUP_FAIL:{
 		const char *reason = skip_reason(t, subtest);
 
 		if (reason)
@@ -400,6 +415,7 @@ static void finish_test(struct child_test **child_tests, int running_test, int c
 	}
 	/* Clean up child process. */
 	ret = finish_command(&child_test->process);
+	check_shell_setup(t, ret);
 	if (verbose > 1 || (verbose == 1 && ret == TEST_FAIL))
 		fprintf(stderr, "%s", err_output.buf);
 
@@ -422,6 +438,7 @@ static int start_test(struct test_suite *test, int i, int subi, struct child_tes
 			err = test_function(test, subi)(test, subi);
 			pr_debug("---- end ----\n");
 			print_test_result(test, i, subi, err, width, /*running=*/0);
+			check_shell_setup(test, err);
 		}
 		return 0;
 	}
diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index c742a3d0a6a26bb4..fa8f18cbcd2ae2bc 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -138,6 +138,11 @@ static bool is_test_script(int dir_fd, const char *name)
 	return is_shell_script(dir_fd, name);
 }
 
+/* Filter for scandir */
+static int setup_filter(const struct dirent *entry){
+	return strcmp(entry->d_name, SHELL_SETUP);
+}
+
 /* Duplicate a string and fall over and die if we run out of memory */
 static char *strdup_check(const char *str)
 {
@@ -175,6 +180,7 @@ static void free_suite(struct test_suite *suite) {
 
 static int shell_test__run(struct test_suite *test, int subtest)
 {
+	struct shell_info *test_info = test->priv;
 	const char *file;
 	int err;
 	char *cmd = NULL;
@@ -187,6 +193,22 @@ static int shell_test__run(struct test_suite *test, int subtest)
 		file = test->test_cases[0].name;
 	}
 
+	/* Run setup if needed */
+	if (test_info->has_setup == RUN_SETUP){
+		char *setup_script;
+		if (asprintf(&setup_script, "%s%s%s", test_info->base_path, SHELL_SETUP, verbose ? " -v" : "") < 0)
+			return TEST_SETUP_FAIL;
+
+		err = system(setup_script);
+		free(setup_script);
+
+		if (err)
+			return TEST_SETUP_FAIL;
+	}
+	else if (test_info->has_setup == FAILED_SETUP) {
+		return TEST_SKIP; /* Skip test suite if setup failed */
+	}
+
 	if (asprintf(&cmd, "%s%s", file, verbose ? " -v" : "") < 0)
 		return TEST_FAIL;
 
@@ -228,6 +250,7 @@ static struct test_suite* prepare_test_suite(int dir_fd)
 	}
 
 	test_info->base_path = strdup_check(dirpath);		/* Absolute path to dir */
+	test_info->has_setup = NO_SETUP;
 
 	test_suite->priv = test_info;
 	test_suite->desc = NULL;
@@ -309,7 +332,7 @@ static void append_scripts_in_subdir(int dir_fd,
 	int n_dirs, i;
 
 	/* List files, sorted by alpha */
-	n_dirs = scandirat(dir_fd, ".", &entlist, NULL, alphasort);
+	n_dirs = scandirat(dir_fd, ".", &entlist, setup_filter, alphasort);
 	if (n_dirs == -1)
 		return;
 	for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
@@ -404,7 +427,14 @@ static void append_suits_in_dir(int dir_fd,
 			continue;
 		}
 
-		test_suite->desc = strdup_check(ent->d_name);	/* If no setup, set name to the directory */
+		if (is_test_script(fd, SHELL_SETUP)) {	/* Check for setup existence */
+			char *desc = shell_test__description(fd, SHELL_SETUP);
+			test_suite->desc = desc;	/* Set the suite name by the setup description */
+			((struct shell_info*)(test_suite->priv))->has_setup = RUN_SETUP;
+		}
+		else {
+			test_suite->desc = strdup_check(ent->d_name);	/* If no setup, set name to the directory */
+		}
 
 		append_suite(result, result_sz, test_suite);
 	}
diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
index 60a1a19a45c999f4..da4dcd26140cdfd2 100644
--- a/tools/perf/tests/tests-scripts.h
+++ b/tools/perf/tests/tests-scripts.h
@@ -4,8 +4,18 @@
 
 #include "tests.h"
 
+#define SHELL_SETUP "setup.sh"
+
+enum shell_setup {
+	NO_SETUP     = 0,
+	RUN_SETUP    = 1,
+	FAILED_SETUP = 2,
+	PASSED_SETUP = 3,
+};
+
 struct shell_info {
 	const char *base_path;
+	enum shell_setup has_setup;
 };
 
 struct test_suite **create_script_test_suites(void);
diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h
index 8aea344536b8ab7d..2c5e665d1805b908 100644
--- a/tools/perf/tests/tests.h
+++ b/tools/perf/tests/tests.h
@@ -5,9 +5,11 @@
 #include <stdbool.h>
 
 enum {
-	TEST_OK   =  0,
-	TEST_FAIL = -1,
-	TEST_SKIP = -2,
+	TEST_OK         =  0,
+	TEST_FAIL      	= -1,
+	TEST_SKIP       = -2,
+	TEST_RUNNING	= -3,
+	TEST_SETUP_FAIL = -4,
 };
 
 #define TEST_ASSERT_VAL(text, cond)					 \
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 07/10] perftool-testsuite: Add empty setup for base_probe
  2025-01-13 15:24 ` [PATCH 00/10] Introduce structure for shell tests Arnaldo Carvalho de Melo
                     ` (6 preceding siblings ...)
  2025-01-13 18:26   ` [PATCH v2 06/10] perf test: Provide setup for the shell test suite vmolnaro
@ 2025-01-13 18:26   ` vmolnaro
  2025-01-13 18:26   ` [PATCH v2 08/10] perf test: Introduce storing logs for shell tests vmolnaro
                     ` (2 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2025-01-13 18:26 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, mpetlan, namhyung; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

Add an empty setup script to set a proper name for the base_probe test
suite; it can also serve as a place for basic test setup in the future.

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/tests/shell/base_probe/setup.sh | 13 +++++++++++++
 1 file changed, 13 insertions(+)
 create mode 100755 tools/perf/tests/shell/base_probe/setup.sh

diff --git a/tools/perf/tests/shell/base_probe/setup.sh b/tools/perf/tests/shell/base_probe/setup.sh
new file mode 100755
index 0000000000000000..fbb99325b555a723
--- /dev/null
+++ b/tools/perf/tests/shell/base_probe/setup.sh
@@ -0,0 +1,13 @@
+#!/bin/bash
+# perftool-testsuite :: perf_probe
+# SPDX-License-Identifier: GPL-2.0
+
+#
+#	setup.sh of perf probe test
+#	Author: Michael Petlan <mpetlan@redhat.com>
+#
+#	Description:
+#
+#		Setting testsuite name, for future use
+#
+#
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 08/10] perf test: Introduce storing logs for shell tests
  2025-01-13 15:24 ` [PATCH 00/10] Introduce structure for shell tests Arnaldo Carvalho de Melo
                     ` (7 preceding siblings ...)
  2025-01-13 18:26   ` [PATCH v2 07/10] perftool-testsuite: Add empty setup for base_probe vmolnaro
@ 2025-01-13 18:26   ` vmolnaro
  2025-01-13 18:26   ` [PATCH v2 09/10] perf test: Format log directories " vmolnaro
  2025-01-13 18:26   ` [PATCH v2 10/10] perf test: Remove perftool drivers vmolnaro
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2025-01-13 18:26 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, mpetlan, namhyung; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

Create temporary directories for storing shell test log files that can
help while debugging. The log files are also necessary for the
perftool-testsuite test cases. If the environment variable
PERFTEST_KEEP_LOGS is set to "y", keep the logs, else delete them.
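
For example, a hypothetical run that keeps the logs for later inspection
(suite name and paths are illustrative):

    # keep the per-suite temporary directory instead of removing it
    PERFTEST_KEEP_LOGS=y ./perf test 'perf_probe'
    ls -d /tmp/perf_test_*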

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/tests/builtin-test.c  | 110 ++++++++++++++++++++++++++++---
 tools/perf/tests/tests-scripts.c |   3 +
 tools/perf/tests/tests-scripts.h |   1 +
 3 files changed, 105 insertions(+), 9 deletions(-)

diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 4a4dc86ebcf60d0d..d9c1453051e5c99c 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -5,6 +5,7 @@
  * Builtin regression testing command: ever growing number of sanity tests
  */
 #include <fcntl.h>
+#include <ftw.h>
 #include <errno.h>
 #include <poll.h>
 #include <unistd.h>
@@ -216,6 +217,86 @@ static bool test_exclusive(const struct test_suite *t, int subtest)
 	return t->test_cases[subtest].exclusive;
 }
 
+static int delete_file(const char *fpath, const struct stat *sb __maybe_unused,
+						 int typeflag, struct FTW *ftwbuf)
+{
+	int rv = -1;
+
+	/* Stop traversal if going too deep */
+	if (ftwbuf->level > 5) {
+		pr_err("Tree traversal reached level %d, stopping.", ftwbuf->level);
+		return rv;
+	}
+
+	/* Remove only expected directories */
+	if (typeflag == FTW_D || typeflag == FTW_DP){
+		const char *dirname = fpath + ftwbuf->base;
+
+		if (strcmp(dirname, "logs") && strcmp(dirname, "examples") &&
+			strcmp(dirname, "header_tar") && strncmp(dirname, "perf_", 5)) {
+				pr_err("Unknown directory %s", dirname);
+				return rv;
+			 }
+	}
+
+	/* Attempt to remove the file */
+	rv = remove(fpath);
+	if (rv)
+		pr_err("Failed to remove file: %s", fpath);
+
+	return rv;
+}
+
+static bool create_logs(struct test_suite *t, int pass){
+	bool store_logs = t->priv && ((struct shell_info*)(t->priv))->store_logs;
+	if (pass == 1 && (!test_exclusive(t, 0) || sequential || dont_fork)) {
+		/* Sequential and non-exclusive tests run on the first pass. */
+		return store_logs;
+	}
+	else if (pass != 1 && test_exclusive(t, 0) && !sequential && !dont_fork) {
+		/* Exclusive tests without sequential run on the second pass. */
+		return store_logs;
+	}
+	return false;
+}
+
+static char *setup_shell_logs(const char *name)
+{
+	char template[PATH_MAX];
+	char *temp_dir;
+
+	if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", name) < 0) {
+		pr_err("Failed to create log dir template");
+		return NULL; /* Skip the testsuite */
+	}
+
+	temp_dir = mkdtemp(template);
+	if (temp_dir) {
+		setenv("PERFSUITE_RUN_DIR", temp_dir, 1);
+		return strdup(temp_dir);
+	}
+	else {
+		pr_err("Failed to create the temporary directory");
+	}
+
+	return NULL; /* Skip the testsuite */
+}
+
+static void cleanup_shell_logs(char *dirname)
+{
+	char *keep_logs = getenv("PERFTEST_KEEP_LOGS");
+
+	/* Check if logs should be kept or do cleanup */
+	if (dirname) {
+		if (!keep_logs || strcmp(keep_logs, "y") != 0) {
+			nftw(dirname, delete_file, 8, FTW_DEPTH | FTW_PHYS);
+		}
+		free(dirname);
+	}
+
+	unsetenv("PERFSUITE_RUN_DIR");
+}
+
 static bool perf_test__matches(const char *desc, int curr, int argc, const char *argv[])
 {
 	int i;
@@ -547,6 +628,7 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
 
 		for (struct test_suite **t = suites; *t; t++) {
 			int curr = i++;
+			char *tmpdir = NULL;
 
 			if (!perf_test__matches(test_description(*t, -1), curr, argc, argv)) {
 				/*
@@ -571,23 +653,33 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
 				continue;
 			}
 
+			/* Setup temporary log directories for shell test suites */
+			if (create_logs(*t, pass)) {
+				tmpdir = setup_shell_logs((*t)->desc);
+
+				if (tmpdir == NULL)  /* Couldn't create log dir, skip test suite */
+					((struct shell_info*)((*t)->priv))->has_setup = FAILED_SETUP;
+			}
+
 			if (!has_subtests(*t)) {
 				err = start_test(*t, curr, -1, &child_tests[child_test_num++],
 						 width, pass);
 				if (err)
 					goto err_out;
-				continue;
 			}
-			for (int subi = 0, subn = num_subtests(*t); subi < subn; subi++) {
-				if (!perf_test__matches(test_description(*t, subi),
-							curr, argc, argv))
-					continue;
+			else {
+				for (int subi = 0, subn = num_subtests(*t); subi < subn; subi++) {
+					if (!perf_test__matches(test_description(*t, subi),
+								curr, argc, argv))
+						continue;
 
-				err = start_test(*t, curr, subi, &child_tests[child_test_num++],
-						 width, pass);
-				if (err)
-					goto err_out;
+					err = start_test(*t, curr, subi, &child_tests[child_test_num++],
+							width, pass);
+					if (err)
+						goto err_out;
+				}
 			}
+			cleanup_shell_logs(tmpdir);
 		}
 		if (!sequential) {
 			/* Parallel mode starts tests but doesn't finish them. Do that now. */
diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index fa8f18cbcd2ae2bc..91ef0b47d2a8425c 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -251,6 +251,7 @@ static struct test_suite* prepare_test_suite(int dir_fd)
 
 	test_info->base_path = strdup_check(dirpath);		/* Absolute path to dir */
 	test_info->has_setup = NO_SETUP;
+	test_info->store_logs = false;
 
 	test_suite->priv = test_info;
 	test_suite->desc = NULL;
@@ -427,6 +428,8 @@ static void append_suits_in_dir(int dir_fd,
 			continue;
 		}
 
+		/* Store logs for test suites in sub-directories */
+		((struct shell_info*)(test_suite->priv))->store_logs = true;
 		if (is_test_script(fd, SHELL_SETUP)) {	/* Check for setup existance */
 			char *desc = shell_test__description(fd, SHELL_SETUP);
 			test_suite->desc = desc;	/* Set the suite name by the setup description */
diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
index da4dcd26140cdfd2..41da0a175e4e7033 100644
--- a/tools/perf/tests/tests-scripts.h
+++ b/tools/perf/tests/tests-scripts.h
@@ -16,6 +16,7 @@ enum shell_setup {
 struct shell_info {
 	const char *base_path;
 	enum shell_setup has_setup;
+	bool store_logs;
 };
 
 struct test_suite **create_script_test_suites(void);
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 09/10] perf test: Format log directories for shell tests
  2025-01-13 15:24 ` [PATCH 00/10] Introduce structure for shell tests Arnaldo Carvalho de Melo
                     ` (8 preceding siblings ...)
  2025-01-13 18:26   ` [PATCH v2 08/10] perf test: Introduce storing logs for shell tests vmolnaro
@ 2025-01-13 18:26   ` vmolnaro
  2025-01-13 18:26   ` [PATCH v2 10/10] perf test: Remove perftool drivers vmolnaro
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2025-01-13 18:26 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, mpetlan, namhyung; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

The name of the log directory can be taken from the test suite
description, which may contain whitespace and other special characters.
This can cause further issues if the name is not quoted correctly.

Replace any non-alphanumeric characters with an underscore to prevent
possible issues caused by the name being split.
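
For illustration, a suite description containing spaces would map to a log
directory template such as (example hypothetical):

    # "perf diff tests"  ->  /tmp/perf_test_perf_diff_tests.XXXXXX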

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 tools/perf/tests/builtin-test.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index d9c1453051e5c99c..fd4e315a6f90f477 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -4,6 +4,7 @@
  *
  * Builtin regression testing command: ever growing number of sanity tests
  */
+#include <ctype.h>
 #include <fcntl.h>
 #include <ftw.h>
 #include <errno.h>
@@ -217,6 +218,21 @@ static bool test_exclusive(const struct test_suite *t, int subtest)
 	return t->test_cases[subtest].exclusive;
 }
 
+/* Replace non-alphanumeric characters with _ */
+static void check_dir_name(const char *src, char *dst)
+{
+	size_t i;
+	size_t len = strlen(src);
+
+	for (i = 0; i < len; i++) {
+		if (!isalnum(src[i]))
+			dst[i] = '_';
+		else
+			dst[i] = src[i];
+	}
+	dst[i] = '\0';
+}
+
 static int delete_file(const char *fpath, const struct stat *sb __maybe_unused,
 						 int typeflag, struct FTW *ftwbuf)
 {
@@ -262,10 +278,12 @@ static bool create_logs(struct test_suite *t, int pass){
 
 static char *setup_shell_logs(const char *name)
 {
-	char template[PATH_MAX];
+	char template[PATH_MAX], valid_name[strlen(name)+1];
 	char *temp_dir;
 
-	if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", name) < 0) {
+	check_dir_name(name, valid_name);
+
+	if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", valid_name) < 0) {
 		pr_err("Failed to create log dir template");
 		return NULL; /* Skip the testsuite */
 	}
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 10/10] perf test: Remove perftool drivers
  2025-01-13 15:24 ` [PATCH 00/10] Introduce structure for shell tests Arnaldo Carvalho de Melo
                     ` (9 preceding siblings ...)
  2025-01-13 18:26   ` [PATCH v2 09/10] perf test: Format log directories " vmolnaro
@ 2025-01-13 18:26   ` vmolnaro
  10 siblings, 0 replies; 43+ messages in thread
From: vmolnaro @ 2025-01-13 18:26 UTC (permalink / raw)
  To: linux-perf-users, acme, acme, mpetlan, namhyung; +Cc: irogers

From: Veronika Molnarova <vmolnaro@redhat.com>

Perf now provides all of the features required for running the perftool
test cases, such as creating log directories and running setup scripts,
and the tests are structured by the base_ directories.

Remove the drivers, as they are no longer necessary, together with the
condition that skipped the base_ directories, and run the test cases
through the default perf test structure.
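
After this change the suites show up directly in the regular test listing;
a hypothetical check:

    # the former driver entries are gone, the suites are listed instead
    ./perf test list 2>&1 | grep -i probe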

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
---
 .../tests/shell/perftool-testsuite_probe.sh   | 23 -------------------
 .../tests/shell/perftool-testsuite_report.sh  | 23 -------------------
 tools/perf/tests/tests-scripts.c              |  2 --
 3 files changed, 48 deletions(-)
 delete mode 100755 tools/perf/tests/shell/perftool-testsuite_probe.sh
 delete mode 100755 tools/perf/tests/shell/perftool-testsuite_report.sh

diff --git a/tools/perf/tests/shell/perftool-testsuite_probe.sh b/tools/perf/tests/shell/perftool-testsuite_probe.sh
deleted file mode 100755
index 7b1bfd0f888fc30c..0000000000000000
--- a/tools/perf/tests/shell/perftool-testsuite_probe.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/bash
-# perftool-testsuite_probe (exclusive)
-# SPDX-License-Identifier: GPL-2.0
-
-test -d "$(dirname "$0")/base_probe" || exit 2
-cd "$(dirname "$0")/base_probe" || exit 2
-status=0
-
-PERFSUITE_RUN_DIR=$(mktemp -d /tmp/"$(basename "$0" .sh)".XXX)
-export PERFSUITE_RUN_DIR
-
-for testcase in setup.sh test_*; do                  # skip setup.sh if not present or not executable
-     test -x "$testcase" || continue
-     ./"$testcase"
-     (( status += $? ))
-done
-
-if ! [ "$PERFTEST_KEEP_LOGS" = "y" ]; then
-	rm -rf "$PERFSUITE_RUN_DIR"
-fi
-
-test $status -ne 0 && exit 1
-exit 0
diff --git a/tools/perf/tests/shell/perftool-testsuite_report.sh b/tools/perf/tests/shell/perftool-testsuite_report.sh
deleted file mode 100755
index a8cf75b4e77ec1a3..0000000000000000
--- a/tools/perf/tests/shell/perftool-testsuite_report.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/bash
-# perftool-testsuite_report (exclusive)
-# SPDX-License-Identifier: GPL-2.0
-
-test -d "$(dirname "$0")/base_report" || exit 2
-cd "$(dirname "$0")/base_report" || exit 2
-status=0
-
-PERFSUITE_RUN_DIR=$(mktemp -d /tmp/"$(basename "$0" .sh)".XXX)
-export PERFSUITE_RUN_DIR
-
-for testcase in setup.sh test_*; do                  # skip setup.sh if not present or not executable
-     test -x "$testcase" || continue
-     ./"$testcase"
-     (( status += $? ))
-done
-
-if ! [ "$PERFTEST_KEEP_LOGS" = "y" ]; then
-	rm -rf "$PERFSUITE_RUN_DIR"
-fi
-
-test $status -ne 0 && exit 1
-exit 0
diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index 91ef0b47d2a8425c..e09a52c0f692dcfe 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -413,8 +413,6 @@ static void append_suits_in_dir(int dir_fd,
 			if (!S_ISDIR(st.st_mode))
 				continue;
 		}
-		if (strncmp(ent->d_name, "base_", 5) == 0)
-			continue; /* Skip scripts that have a separate driver. */
 
 		/* Scan subdir for test cases*/
 		fd = openat(dir_fd, ent->d_name, O_PATH);
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v3 0/7] Introduce structure for shell tests
  2025-01-13 18:25   ` [PATCH v2 " vmolnaro
@ 2025-07-21 13:26     ` Jakub Brnak
  2025-07-21 13:26       ` [PATCH v3 1/7] perf test perftool_testsuite: Use absolute paths Jakub Brnak
                         ` (7 more replies)
  0 siblings, 8 replies; 43+ messages in thread
From: Jakub Brnak @ 2025-07-21 13:26 UTC (permalink / raw)
  To: vmolnaro; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, namhyung

Hi Arnaldo,

This series of Veronika's patches, part of the perftool-testsuite upstreaming effort, has been rebased on the latest perf-tools-next branch and should now apply cleanly.
Patches 01/10, 02/10, and 05/10 from v2 have been dropped as they were already accepted upstream.

Thanks,
Jakub Brnak

Veronika Molnarova (7):
  perf test perftool_testsuite: Use absolute paths
  perf tests: Create a structure for shell tests
  perf test: Provide setup for the shell test suite
  perftool-testsuite: Add empty setup for base_probe
  perf test: Introduce storing logs for shell tests
  perf test: Format log directories for shell tests
  perf test: Remove perftool drivers

 tools/perf/tests/builtin-test.c               | 137 +++++++++-
 tools/perf/tests/shell/base_probe/setup.sh    |  13 +
 .../base_probe/test_adding_blacklisted.sh     |  13 +-
 .../shell/base_probe/test_adding_kernel.sh    |  53 ++--
 .../perf/tests/shell/base_probe/test_basic.sh |  19 +-
 .../shell/base_probe/test_invalid_options.sh  |  11 +-
 .../shell/base_probe/test_line_semantics.sh   |   7 +-
 tools/perf/tests/shell/base_report/setup.sh   |   6 +-
 .../tests/shell/base_report/test_basic.sh     |  47 ++--
 tools/perf/tests/shell/common/init.sh         |   4 +-
 .../tests/shell/perftool-testsuite_probe.sh   |  24 --
 .../tests/shell/perftool-testsuite_report.sh  |  23 --
 tools/perf/tests/tests-scripts.c              | 258 +++++++++++++++---
 tools/perf/tests/tests-scripts.h              |  15 +
 tools/perf/tests/tests.h                      |   8 +-
 15 files changed, 465 insertions(+), 173 deletions(-)
 create mode 100755 tools/perf/tests/shell/base_probe/setup.sh
 delete mode 100755 tools/perf/tests/shell/perftool-testsuite_probe.sh
 delete mode 100755 tools/perf/tests/shell/perftool-testsuite_report.sh

-- 
2.50.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v3 1/7] perf test perftool_testsuite: Use absolute paths
  2025-07-21 13:26     ` [PATCH v3 0/7] " Jakub Brnak
@ 2025-07-21 13:26       ` Jakub Brnak
  2025-07-26  6:00         ` Namhyung Kim
  2025-07-21 13:26       ` [PATCH v3 2/7] perf tests: Create a structure for shell tests Jakub Brnak
                         ` (6 subsequent siblings)
  7 siblings, 1 reply; 43+ messages in thread
From: Jakub Brnak @ 2025-07-21 13:26 UTC (permalink / raw)
  To: vmolnaro; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, namhyung

From: Veronika Molnarova <vmolnaro@redhat.com>

Test cases from perftool_testsuite are affected by the current
directory where the tests are run. For this reason, the test
driver has to change the directory to the base_dir for references to
work correctly.

Utilize absolute paths when sourcing and referencing other scripts so
that the current working directory doesn't impact the test cases.

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
---
 .../base_probe/test_adding_blacklisted.sh     | 13 ++---
 .../shell/base_probe/test_adding_kernel.sh    | 53 ++++++++++---------
 .../perf/tests/shell/base_probe/test_basic.sh | 19 +++----
 .../shell/base_probe/test_invalid_options.sh  | 11 ++--
 .../shell/base_probe/test_line_semantics.sh   |  7 +--
 tools/perf/tests/shell/base_report/setup.sh   |  6 ++-
 .../tests/shell/base_report/test_basic.sh     | 47 ++++++++--------
 tools/perf/tests/shell/common/init.sh         |  4 +-
 8 files changed, 84 insertions(+), 76 deletions(-)

diff --git a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
index 8226449ac5c3..c409ca8520f8 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
@@ -13,11 +13,12 @@
 #	they must be skipped.
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 # skip if not supported
 BLACKFUNC_LIST=`head -n 5 /sys/kernel/debug/kprobes/blacklist 2> /dev/null | cut -f2`
 if [ -z "$BLACKFUNC_LIST" ]; then
@@ -53,7 +54,7 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
 	PERF_EXIT_CODE=$?
 
 	# check for bad DWARF polluting the result
-	../common/check_all_patterns_found.pl "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
 
 	if [ $? -eq 0 ]; then
 		SKIP_DWARF=1
@@ -73,7 +74,7 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
 			fi
 		fi
 	else
-		../common/check_all_lines_matched.pl "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
+		"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
 		CHECK_EXIT_CODE=$?
 
 		SKIP_DWARF=0
@@ -94,7 +95,7 @@ fi
 $CMD_PERF list probe:\* > $LOGS_DIR/adding_blacklisted_list.log
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_blacklisted_list.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_blacklisted_list.log
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing blacklisted probe (should NOT be listed)"
diff --git a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
index df288cf90cd6..3548faf60c8e 100755
--- a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
+++ b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
@@ -13,13 +13,14 @@
 #		and removing.
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 # shellcheck source=lib/probe_vfs_getname.sh
-. "$(dirname "$0")/../lib/probe_vfs_getname.sh"
+. "$DIR_PATH/../lib/probe_vfs_getname.sh"
 
 TEST_PROBE=${TEST_PROBE:-"inode_permission"}
 
@@ -44,7 +45,7 @@ for opt in "" "-a" "--add"; do
 	$CMD_PERF probe $opt $TEST_PROBE 2> $LOGS_DIR/adding_kernel_add$opt.err
 	PERF_EXIT_CODE=$?
 
-	../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_add$opt.err
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_add$opt.err
 	CHECK_EXIT_CODE=$?
 
 	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "adding probe $TEST_PROBE :: $opt"
@@ -58,7 +59,7 @@ done
 $CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list.log
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "List of pre-defined events" "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list"
@@ -71,7 +72,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list
 $CMD_PERF probe -l > $LOGS_DIR/adding_kernel_list-l.log
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" < $LOGS_DIR/adding_kernel_list-l.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" < $LOGS_DIR/adding_kernel_list-l.log
 CHECK_EXIT_CODE=$?
 
 if [ $NO_DEBUGINFO ] ; then
@@ -93,9 +94,9 @@ REGEX_STAT_VALUES="\s*\d+\s+probe:$TEST_PROBE"
 # the value should be greater than 1
 REGEX_STAT_VALUE_NONZERO="\s*[1-9][0-9]*\s+probe:$TEST_PROBE"
 REGEX_STAT_TIME="\s*$RE_NUMBER\s+seconds (?:time elapsed|user|sys)"
-../common/check_all_lines_matched.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
 CHECK_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" < $LOGS_DIR/adding_kernel_using_probe.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" < $LOGS_DIR/adding_kernel_using_probe.log
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
@@ -108,7 +109,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
 $CMD_PERF probe -d $TEST_PROBE\* 2> $LOGS_DIR/adding_kernel_removing.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
+"$DIR_PATH/../common/check_all_lines_matched.pl" "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
@@ -121,7 +122,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
 $CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list_removed.log
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_kernel_list_removed.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_kernel_list_removed.log
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing removed probe (should NOT be listed)"
@@ -135,7 +136,7 @@ $CMD_PERF probe -n --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_dryrun.err
 PERF_EXIT_CODE=$?
 
 # check for the output (should be the same as usual)
-../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_dryrun.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_dryrun.err
 CHECK_EXIT_CODE=$?
 
 # check that no probe was added in real
@@ -152,7 +153,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "dry run :: adding probe"
 $CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_01.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_01.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_01.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first probe adding"
@@ -162,7 +163,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first pro
 ! $CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_02.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "Error: event \"$TEST_PROBE\" already exists." "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Error: event \"$TEST_PROBE\" already exists." "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (without force)"
@@ -173,7 +174,7 @@ NO_OF_PROBES=`$CMD_PERF probe -l $TEST_PROBE| wc -l`
 $CMD_PERF probe --force --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_03.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (with force)"
@@ -187,7 +188,7 @@ $CMD_PERF stat -e probe:$TEST_PROBE -e probe:${TEST_PROBE}_${NO_OF_PROBES} -x';'
 PERF_EXIT_CODE=$?
 
 REGEX_LINE="$RE_NUMBER;+probe:${TEST_PROBE}_?(?:$NO_OF_PROBES)?;$RE_NUMBER;$RE_NUMBER"
-../common/check_all_lines_matched.pl "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/adding_kernel_using_two.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/adding_kernel_using_two.log
 CHECK_EXIT_CODE=$?
 
 VALUE_1=`grep "$TEST_PROBE;" $LOGS_DIR/adding_kernel_using_two.log | awk -F';' '{print $1}'`
@@ -205,7 +206,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using doubled probe"
 $CMD_PERF probe --del \* 2> $LOGS_DIR/adding_kernel_removing_wildcard.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "Removed event: probe:$TEST_PROBE" "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
+"$DIR_PATH/../common/check_all_lines_matched.pl" "Removed event: probe:$TEST_PROBE" "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
@@ -217,7 +218,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
 $CMD_PERF probe -nf --max-probes=512 -a 'vfs_* $params' 2> $LOGS_DIR/adding_kernel_adding_wildcard.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
 CHECK_EXIT_CODE=$?
 
 if [ $NO_DEBUGINFO ] ; then
@@ -240,13 +241,13 @@ test $PERF_EXIT_CODE -ne 139 -a $PERF_EXIT_CODE -ne 0
 PERF_EXIT_CODE=$?
 
 # check that the error message is reasonable
-../common/check_all_patterns_found.pl "Failed to find" "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Failed to find" "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" < $LOGS_DIR/adding_kernel_nonexisting.err
 CHECK_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "in this function|at this address" "Error" "Failed to add events" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "in this function|at this address" "Error" "Failed to add events" < $LOGS_DIR/adding_kernel_nonexisting.err
 (( CHECK_EXIT_CODE += $? ))
-../common/check_all_lines_matched.pl "Failed to find" "Error" "Probe point .+ not found" "optimized out" "Use.+\-\-range option to show.+location range" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_all_lines_matched.pl" "Failed to find" "Error" "Probe point .+ not found" "optimized out" "Use.+\-\-range option to show.+location range" < $LOGS_DIR/adding_kernel_nonexisting.err
 (( CHECK_EXIT_CODE += $? ))
-../common/check_no_patterns_found.pl "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
+"$DIR_PATH/../common/check_no_patterns_found.pl" "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
 (( CHECK_EXIT_CODE += $? ))
 
 if [ $NO_DEBUGINFO ]; then
@@ -264,7 +265,7 @@ fi
 $CMD_PERF probe --add "$TEST_PROBE%return \$retval" 2> $LOGS_DIR/adding_kernel_func_retval_add.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE%return with \\\$retval" < $LOGS_DIR/adding_kernel_func_retval_add.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE%return with \\\$retval" < $LOGS_DIR/adding_kernel_func_retval_add.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
@@ -274,7 +275,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
 $CMD_PERF record -e probe:$TEST_PROBE\* -o $CURRENT_TEST_DIR/perf.data -- cat /proc/cpuinfo > /dev/null 2> $LOGS_DIR/adding_kernel_func_retval_record.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/adding_kernel_func_retval_record.err
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/adding_kernel_func_retval_record.err
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: record"
@@ -285,9 +286,9 @@ $CMD_PERF script -i $CURRENT_TEST_DIR/perf.data > $LOGS_DIR/adding_kernel_func_r
 PERF_EXIT_CODE=$?
 
 REGEX_SCRIPT_LINE="\s*cat\s+$RE_NUMBER\s+\[$RE_NUMBER\]\s+$RE_NUMBER:\s+probe:$TEST_PROBE\w*:\s+\($RE_NUMBER_HEX\s+<\-\s+$RE_NUMBER_HEX\)\s+arg1=$RE_NUMBER_HEX"
-../common/check_all_lines_matched.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
 CHECK_EXIT_CODE=$?
-../common/check_all_patterns_found.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function argument probing :: script"
diff --git a/tools/perf/tests/shell/base_probe/test_basic.sh b/tools/perf/tests/shell/base_probe/test_basic.sh
index 9d8b5afbeddd..e8fed67be9c1 100755
--- a/tools/perf/tests/shell/base_probe/test_basic.sh
+++ b/tools/perf/tests/shell/base_probe/test_basic.sh
@@ -12,11 +12,12 @@
 #		This test tests basic functionality of perf probe command.
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 if ! check_kprobes_available; then
 	print_overall_skipped
 	exit 2
@@ -30,15 +31,15 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
 	$CMD_PERF probe --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
 	PERF_EXIT_CODE=$?
 
-	../common/check_all_patterns_found.pl "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" < $LOGS_DIR/basic_helpmsg.log
 	CHECK_EXIT_CODE=$?
-	../common/check_all_patterns_found.pl "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
+	"$DIR_PATH/../common/check_no_patterns_found.pl" "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
 	(( CHECK_EXIT_CODE += $? ))
 
 	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
@@ -53,7 +54,7 @@ fi
 # without any args perf-probe should print usage
 $CMD_PERF probe 2> $LOGS_DIR/basic_usage.log > /dev/null
 
-../common/check_all_patterns_found.pl "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
 CHECK_EXIT_CODE=$?
 
 print_results 0 $CHECK_EXIT_CODE "usage message"
diff --git a/tools/perf/tests/shell/base_probe/test_invalid_options.sh b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
index 92f7254eb32a..9caeab2fe77c 100755
--- a/tools/perf/tests/shell/base_probe/test_invalid_options.sh
+++ b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
@@ -12,11 +12,12 @@
 #		This test checks whether the invalid and incompatible options are reported
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 if ! check_kprobes_available; then
 	print_overall_skipped
 	exit 2
@@ -33,7 +34,7 @@ for opt in '-a' '-d' '-L' '-V'; do
 	! $CMD_PERF probe $opt 2> $LOGS_DIR/invalid_options_missing_argument$opt.err
 	PERF_EXIT_CODE=$?
 
-	../common/check_all_patterns_found.pl "Error: switch .* requires a value" < $LOGS_DIR/invalid_options_missing_argument$opt.err
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "Error: switch .* requires a value" < $LOGS_DIR/invalid_options_missing_argument$opt.err
 	CHECK_EXIT_CODE=$?
 
 	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "missing argument for $opt"
@@ -66,7 +67,7 @@ for opt in '-a xxx -d xxx' '-a xxx -L foo' '-a xxx -V foo' '-a xxx -l' '-a xxx -
 	! $CMD_PERF probe $opt > /dev/null 2> $LOGS_DIR/aux.log
 	PERF_EXIT_CODE=$?
 
-	../common/check_all_patterns_found.pl "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
 	CHECK_EXIT_CODE=$?
 
 	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "mutually exclusive options :: $opt"
diff --git a/tools/perf/tests/shell/base_probe/test_line_semantics.sh b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
index 20435b6bf6bc..576442d87a44 100755
--- a/tools/perf/tests/shell/base_probe/test_line_semantics.sh
+++ b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
@@ -13,11 +13,12 @@
 #		arguments are properly reported.
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 if ! check_kprobes_available; then
 	print_overall_skipped
 	exit 2
diff --git a/tools/perf/tests/shell/base_report/setup.sh b/tools/perf/tests/shell/base_report/setup.sh
index 8634e7e0dda6..2fd5c97f9822 100755
--- a/tools/perf/tests/shell/base_report/setup.sh
+++ b/tools/perf/tests/shell/base_report/setup.sh
@@ -12,8 +12,10 @@
 #
 #
 
+DIR_PATH="$(dirname $0)"
+
 # include working environment
-. ../common/init.sh
+. "$DIR_PATH/../common/init.sh"
 
 TEST_RESULT=0
 
@@ -24,7 +26,7 @@ SW_EVENT="cpu-clock"
 $CMD_PERF record -asdg -e $SW_EVENT -o $CURRENT_TEST_DIR/perf.data -- $CMD_LONGER_SLEEP 2> $LOGS_DIR/setup.log
 PERF_EXIT_CODE=$?
 
-../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "prepare the perf.data file"
diff --git a/tools/perf/tests/shell/base_report/test_basic.sh b/tools/perf/tests/shell/base_report/test_basic.sh
index adfd8713b8f8..a15d3007f449 100755
--- a/tools/perf/tests/shell/base_report/test_basic.sh
+++ b/tools/perf/tests/shell/base_report/test_basic.sh
@@ -12,11 +12,12 @@
 #
 #
 
-# include working environment
-. ../common/init.sh
-
+DIR_PATH="$(dirname $0)"
 TEST_RESULT=0
 
+# include working environment
+. "$DIR_PATH/../common/init.sh"
+
 
 ### help message
 
@@ -25,19 +26,19 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
 	$CMD_PERF report --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
 	PERF_EXIT_CODE=$?
 
-	../common/check_all_patterns_found.pl "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
 	CHECK_EXIT_CODE=$?
-	../common/check_all_patterns_found.pl "input" "verbose" "show-nr-samples" "show-cpu-utilization" "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "input" "verbose" "show-nr-samples" "show-cpu-utilization" "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "hide-unresolved" "sort" "fields" "parent" "exclude-other" "column-widths" "field-separator" "dump-raw-trace" "children" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "hide-unresolved" "sort" "fields" "parent" "exclude-other" "column-widths" "field-separator" "dump-raw-trace" "children" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" "show-total-period" "show-info" "branch-stack" "group" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" "show-total-period" "show-info" "branch-stack" "group" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_all_patterns_found.pl "branch-history" "objdump" "demangle" "percent-limit" "percentage" "header" "itrace" "full-source-path" "show-ref-call-graph" < $LOGS_DIR/basic_helpmsg.log
+	"$DIR_PATH/../common/check_all_patterns_found.pl" "branch-history" "objdump" "demangle" "percent-limit" "percentage" "header" "itrace" "full-source-path" "show-ref-call-graph" < $LOGS_DIR/basic_helpmsg.log
 	(( CHECK_EXIT_CODE += $? ))
-	../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
+	"$DIR_PATH/../common/check_no_patterns_found.pl" "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
 	(( CHECK_EXIT_CODE += $? ))
 
 	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
@@ -57,9 +58,9 @@ REGEX_LOST_SAMPLES_INFO="#\s*Total Lost Samples:\s+$RE_NUMBER"
 REGEX_SAMPLES_INFO="#\s*Samples:\s+(?:$RE_NUMBER)\w?\s+of\s+event\s+'$RE_EVENT_ANY'"
 REGEX_LINES_HEADER="#\s*Children\s+Self\s+Command\s+Shared Object\s+Symbol"
 REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
-../common/check_all_patterns_found.pl "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "basic execution"
@@ -74,9 +75,9 @@ PERF_EXIT_CODE=$?
 
 REGEX_LINES_HEADER="#\s*Children\s+Self\s+Samples\s+Command\s+Shared Object\s+Symbol"
 REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
-../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "number of samples"
@@ -98,7 +99,7 @@ REGEX_LINE_CPUS_ONLINE="#\s+nrcpus online\s*:\s*$MY_CPUS_ONLINE"
 REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$MY_CPUS_AVAILABLE"
 # disable precise check for "nrcpus avail" in BASIC runmode
 test $PERFTOOL_TESTSUITE_RUNMODE -lt $RUNMODE_STANDARD && REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$RE_NUMBER"
-../common/check_all_patterns_found.pl "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
 CHECK_EXIT_CODE=$?
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "header"
@@ -129,9 +130,9 @@ PERF_EXIT_CODE=$?
 
 REGEX_LINES_HEADER="#\s*Children\s+Self\s+sys\s+usr\s+Command\s+Shared Object\s+Symbol"
 REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
-../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
+"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
@@ -144,9 +145,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
 $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --pid=1 > $LOGS_DIR/basic_pid.log 2> $LOGS_DIR/basic_pid.err
 PERF_EXIT_CODE=$?
 
-grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "systemd|init"
+grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | "$DIR_PATH/../common/check_all_lines_matched.pl" "systemd|init"
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
@@ -159,9 +160,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
 $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbols=dummynonexistingsymbol > $LOGS_DIR/basic_symbols.log 2> $LOGS_DIR/basic_symbols.err
 PERF_EXIT_CODE=$?
 
-../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
+"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
@@ -174,9 +175,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
 $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbol-filter=map > $LOGS_DIR/basic_symbolfilter.log 2> $LOGS_DIR/basic_symbolfilter.err
 PERF_EXIT_CODE=$?
 
-grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "\[[k\.]\]\s+.*map"
+grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | "$DIR_PATH/../common/check_all_lines_matched.pl" "\[[k\.]\]\s+.*map"
 CHECK_EXIT_CODE=$?
-../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
+"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
 (( CHECK_EXIT_CODE += $? ))
 
 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "symbol filter"
diff --git a/tools/perf/tests/shell/common/init.sh b/tools/perf/tests/shell/common/init.sh
index 26c7525651e0..cbfc78bec974 100644
--- a/tools/perf/tests/shell/common/init.sh
+++ b/tools/perf/tests/shell/common/init.sh
@@ -11,8 +11,8 @@
 #
 
 
-. ../common/settings.sh
-. ../common/patterns.sh
+. "$(dirname $0)/../common/settings.sh"
+. "$(dirname $0)/../common/patterns.sh"
 
 THIS_TEST_NAME=`basename $0 .sh`
 
-- 
2.50.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v3 2/7] perf tests: Create a structure for shell tests
  2025-07-21 13:26     ` [PATCH v3 0/7] " Jakub Brnak
  2025-07-21 13:26       ` [PATCH v3 1/7] perf test perftool_testsuite: Use absolute paths Jakub Brnak
@ 2025-07-21 13:26       ` Jakub Brnak
  2025-07-21 19:39         ` Ian Rogers
  2025-07-26  6:03         ` Namhyung Kim
  2025-07-21 13:26       ` [PATCH v3 3/7] perf test: Provide setup for the shell test suite Jakub Brnak
                         ` (5 subsequent siblings)
  7 siblings, 2 replies; 43+ messages in thread
From: Jakub Brnak @ 2025-07-21 13:26 UTC (permalink / raw)
  To: vmolnaro; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, namhyung

From: Veronika Molnarova <vmolnaro@redhat.com>

The general structure of test suites with test cases has been
implemented for the C tests for some time, while shell tests were
simply put into a flat list without any possibility of structuring.

Provide the same test suite structure for shell tests. A suite is
created for each subdirectory of the 'perf/tests/shell' directory that
contains at least one test script. All deeper levels of subdirectories
are merged into the first level of test cases. The name of the test
suite is the name of the subdirectory where the test cases are
located. For each test script that is not in any subdirectory, a test
suite with a single test case is created, as has been done until now.

The new structure of the shell tests for 'perf test list':
    77: build id cache operations
    78: coresight
    78:1: CoreSight / ASM Pure Loop
    78:2: CoreSight / Memcpy 16k 10 Threads
    78:3: CoreSight / Thread Loop 10 Threads - Check TID
    78:4: CoreSight / Thread Loop 2 Threads - Check TID
    78:5: CoreSight / Unroll Loop Thread 10
    79: daemon operations
    80: perf diff tests
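
A suite or an individual test can then be selected the same way as
before; a short usage sketch (the numbering above is only an example
and depends on the build):

    # list all suites and their subtests
    perf test list

    # run a whole shell suite by matching its description
    perf test coresight

    # run a suite by its number
    perf test 78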

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
---
 tools/perf/tests/tests-scripts.c | 223 +++++++++++++++++++++++++------
 tools/perf/tests/tests-scripts.h |   4 +
 2 files changed, 189 insertions(+), 38 deletions(-)

diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index f18c4cd337c8..21a6ede330e9 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -151,14 +151,45 @@ static char *strdup_check(const char *str)
 	return newstr;
 }
 
-static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
+/* Free the whole structure of test_suite with its test_cases */
+static void free_suite(struct test_suite *suite) {
+	if (suite->test_cases){
+		int num = 0;
+		while (suite->test_cases[num].name){ /* Last case has name set to NULL */
+			free((void*) suite->test_cases[num].name);
+			free((void*) suite->test_cases[num].desc);
+			num++;
+		}
+		free(suite->test_cases);
+	}
+	if (suite->desc)
+		free((void*) suite->desc);
+	if (suite->priv){
+		struct shell_info *test_info = suite->priv;
+		free((void*) test_info->base_path);
+		free(test_info);
+	}
+
+	free(suite);
+}
+
+static int shell_test__run(struct test_suite *test, int subtest)
 {
-	const char *file = test->priv;
+	const char *file;
 	int err;
 	char *cmd = NULL;
 
+	/* Get absolute file path */
+	if (subtest >= 0) {
+		file = test->test_cases[subtest].name;
+	}
+	else {		/* Single test case */
+		file = test->test_cases[0].name;
+	}
+
 	if (asprintf(&cmd, "%s%s", file, verbose ? " -v" : "") < 0)
 		return TEST_FAIL;
+
 	err = system(cmd);
 	free(cmd);
 	if (!err)
@@ -167,63 +198,154 @@ static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
 	return WEXITSTATUS(err) == 2 ? TEST_SKIP : TEST_FAIL;
 }
 
-static void append_script(int dir_fd, const char *name, char *desc,
-			  struct test_suite ***result,
-			  size_t *result_sz)
+static struct test_suite* prepare_test_suite(int dir_fd)
 {
-	char filename[PATH_MAX], link[128];
-	struct test_suite *test_suite, **result_tmp;
-	struct test_case *tests;
+	char dirpath[PATH_MAX], link[128];
 	ssize_t len;
-	char *exclusive;
+	struct test_suite *test_suite = NULL;
+	struct shell_info *test_info;
 
+	/* Get dir absolute path */
 	snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
-	len = readlink(link, filename, sizeof(filename));
+	len = readlink(link, dirpath, sizeof(dirpath));
 	if (len < 0) {
 		pr_err("Failed to readlink %s", link);
-		return;
+		return NULL;
 	}
-	filename[len++] = '/';
-	strcpy(&filename[len], name);
+	dirpath[len++] = '/';
+	dirpath[len] = '\0';
 
-	tests = calloc(2, sizeof(*tests));
-	if (!tests) {
-		pr_err("Out of memory while building script test suite list\n");
-		return;
-	}
-	tests[0].name = strdup_check(name);
-	exclusive = strstr(desc, " (exclusive)");
-	if (exclusive != NULL) {
-		tests[0].exclusive = true;
-		exclusive[0] = '\0';
-	}
-	tests[0].desc = strdup_check(desc);
-	tests[0].run_case = shell_test__run;
 	test_suite = zalloc(sizeof(*test_suite));
 	if (!test_suite) {
 		pr_err("Out of memory while building script test suite list\n");
-		free(tests);
-		return;
+		return NULL;
 	}
-	test_suite->desc = desc;
-	test_suite->test_cases = tests;
-	test_suite->priv = strdup_check(filename);
+
+	test_info = zalloc(sizeof(*test_info));
+	if (!test_info) {
+		pr_err("Out of memory while building script test suite list\n");
+		return NULL;
+	}
+
+	test_info->base_path = strdup_check(dirpath);		/* Absolute path to dir */
+
+	test_suite->priv = test_info;
+	test_suite->desc = NULL;
+	test_suite->test_cases = NULL;
+
+	return test_suite;
+}
+
+static void append_suite(struct test_suite ***result,
+			  size_t *result_sz, struct test_suite *test_suite)
+{
+	struct test_suite **result_tmp;
+
 	/* Realloc is good enough, though we could realloc by chunks, not that
 	 * anyone will ever measure performance here */
 	result_tmp = realloc(*result, (*result_sz + 1) * sizeof(*result_tmp));
 	if (result_tmp == NULL) {
 		pr_err("Out of memory while building script test suite list\n");
-		free(tests);
-		free(test_suite);
+		free_suite(test_suite);
 		return;
 	}
+
 	/* Add file to end and NULL terminate the struct array */
 	*result = result_tmp;
 	(*result)[*result_sz] = test_suite;
 	(*result_sz)++;
 }
 
-static void append_scripts_in_dir(int dir_fd,
+static void append_script_to_suite(int dir_fd, const char *name, char *desc,
+					struct test_suite *test_suite, size_t *tc_count)
+{
+	char file_name[PATH_MAX], link[128];
+	struct test_case *tests;
+	size_t len;
+	char *exclusive;
+
+	if (!test_suite)
+		return;
+
+	/* Requires an empty test case at the end */
+	tests = realloc(test_suite->test_cases, (*tc_count + 2) * sizeof(*tests));
+	if (!tests) {
+		pr_err("Out of memory while building script test suite list\n");
+		return;
+	}
+
+	/* Get path to the test script */
+	snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
+	len = readlink(link, file_name, sizeof(file_name));
+	if (len < 0) {
+		pr_err("Failed to readlink %s", link);
+		return;
+	}
+	file_name[len++] = '/';
+	strcpy(&file_name[len], name);
+
+	tests[(*tc_count)].name = strdup_check(file_name);	/* Get path to the script from base dir */
+	tests[(*tc_count)].exclusive = false;
+	exclusive = strstr(desc, " (exclusive)");
+	if (exclusive != NULL) {
+		tests[(*tc_count)].exclusive = true;
+		exclusive[0] = '\0';
+	}
+	tests[(*tc_count)].desc = desc;
+	tests[(*tc_count)].skip_reason = NULL;	/* Unused */
+	tests[(*tc_count)++].run_case = shell_test__run;
+
+	tests[(*tc_count)].name = NULL;		/* End the test cases */
+
+	test_suite->test_cases = tests;
+}
+
+static void append_scripts_in_subdir(int dir_fd,
+				  struct test_suite *suite,
+				  size_t *tc_count)
+{
+	struct dirent **entlist;
+	struct dirent *ent;
+	int n_dirs, i;
+
+	/* List files, sorted by alpha */
+	n_dirs = scandirat(dir_fd, ".", &entlist, NULL, alphasort);
+	if (n_dirs == -1)
+		return;
+	for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
+		int fd;
+
+		if (ent->d_name[0] == '.')
+			continue; /* Skip hidden files */
+		if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
+			char *desc = shell_test__description(dir_fd, ent->d_name);
+
+			if (desc) /* It has a desc line - valid script */
+				append_script_to_suite(dir_fd, ent->d_name, desc, suite, tc_count);
+			continue;
+		}
+
+		if (ent->d_type != DT_DIR) {
+			struct stat st;
+
+			if (ent->d_type != DT_UNKNOWN)
+				continue;
+			fstatat(dir_fd, ent->d_name, &st, 0);
+			if (!S_ISDIR(st.st_mode))
+				continue;
+		}
+
+		fd = openat(dir_fd, ent->d_name, O_PATH);
+
+		/* Recurse into the dir */
+		append_scripts_in_subdir(fd, suite, tc_count);
+	}
+	for (i = 0; i < n_dirs; i++) /* Clean up */
+		zfree(&entlist[i]);
+	free(entlist);
+}
+
+static void append_suits_in_dir(int dir_fd,
 				  struct test_suite ***result,
 				  size_t *result_sz)
 {
@@ -237,16 +359,27 @@ static void append_scripts_in_dir(int dir_fd,
 		return;
 	for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
 		int fd;
+		struct test_suite *test_suite;
+		size_t cases_count = 0;
 
 		if (ent->d_name[0] == '.')
 			continue; /* Skip hidden files */
 		if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
 			char *desc = shell_test__description(dir_fd, ent->d_name);
 
-			if (desc) /* It has a desc line - valid script */
-				append_script(dir_fd, ent->d_name, desc, result, result_sz);
+			if (desc) { /* It has a desc line - valid script */
+				test_suite = prepare_test_suite(dir_fd); /* Create a test suite with a single test case */
+				append_script_to_suite(dir_fd, ent->d_name, desc, test_suite, &cases_count);
+				test_suite->desc = strdup_check(desc);
+
+				if (cases_count)
+					append_suite(result, result_sz, test_suite);
+				else /* Wasn't able to create the test case */
+					free_suite(test_suite);
+			}
 			continue;
 		}
+
 		if (ent->d_type != DT_DIR) {
 			struct stat st;
 
@@ -258,8 +391,22 @@ static void append_scripts_in_dir(int dir_fd,
 		}
 		if (strncmp(ent->d_name, "base_", 5) == 0)
 			continue; /* Skip scripts that have a separate driver. */
+
+		/* Scan subdir for test cases*/
 		fd = openat(dir_fd, ent->d_name, O_PATH);
-		append_scripts_in_dir(fd, result, result_sz);
+		test_suite = prepare_test_suite(fd);	/* Prepare a testsuite with its path */
+		if (!test_suite)
+			continue;
+
+		append_scripts_in_subdir(fd, test_suite, &cases_count);
+		if (cases_count == 0){
+			free_suite(test_suite);
+			continue;
+		}
+
+		test_suite->desc = strdup_check(ent->d_name);	/* If no setup, set name to the directory */
+
+		append_suite(result, result_sz, test_suite);
 		close(fd);
 	}
 	for (i = 0; i < n_dirs; i++) /* Clean up */
@@ -278,7 +425,7 @@ struct test_suite **create_script_test_suites(void)
 	 * length array.
 	 */
 	if (dir_fd >= 0)
-		append_scripts_in_dir(dir_fd, &result, &result_sz);
+		append_suits_in_dir(dir_fd, &result, &result_sz);
 
 	result_tmp = realloc(result, (result_sz + 1) * sizeof(*result_tmp));
 	if (result_tmp == NULL) {
diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
index b553ad26ea17..60a1a19a45c9 100644
--- a/tools/perf/tests/tests-scripts.h
+++ b/tools/perf/tests/tests-scripts.h
@@ -4,6 +4,10 @@
 
 #include "tests.h"
 
+struct shell_info {
+	const char *base_path;
+};
+
 struct test_suite **create_script_test_suites(void);
 
 #endif /* TESTS_SCRIPTS_H */
-- 
2.50.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v3 3/7] perf test: Provide setup for the shell test suite
  2025-07-21 13:26     ` [PATCH v3 0/7] " Jakub Brnak
  2025-07-21 13:26       ` [PATCH v3 1/7] perf test perftool_testsuite: Use absolute paths Jakub Brnak
  2025-07-21 13:26       ` [PATCH v3 2/7] perf tests: Create a structure for shell tests Jakub Brnak
@ 2025-07-21 13:26       ` Jakub Brnak
  2025-07-26  6:07         ` Namhyung Kim
  2025-07-21 13:26       ` [PATCH v3 4/7] perftool-testsuite: Add empty setup for base_probe Jakub Brnak
                         ` (4 subsequent siblings)
  7 siblings, 1 reply; 43+ messages in thread
From: Jakub Brnak @ 2025-07-21 13:26 UTC (permalink / raw)
  To: vmolnaro; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, namhyung

From: Veronika Molnarova <vmolnaro@redhat.com>

Some of the perftool-testsuite test cases require a setup to be done
beforehand, such as recording data, setting up a cache, or restoring
the sample rate. The setup file also provides the possibility to set
the name of the test suite if the name of the directory is not
descriptive enough.

Check for the existence of a "setup.sh" script in each shell test
suite and run it before any of the test cases. If the setup fails,
skip all of the test cases of the suite, as the setup may be required
for their results to be valid.
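
A minimal sketch of what such a setup script could look like, modeled
on the existing base_report setup; the suite name and recorded
workload here are just placeholders, not part of this patch:

    #!/bin/bash
    # perftool-testsuite :: example_suite
    # SPDX-License-Identifier: GPL-2.0

    DIR_PATH="$(dirname "$0")"

    # include working environment
    . "$DIR_PATH/../common/init.sh"

    # prepare data that the test cases depend on; a non-zero exit status
    # makes the framework skip every test case of this suite
    $CMD_PERF record -a -o "$CURRENT_TEST_DIR/perf.data" -- $CMD_LONGER_SLEEP 2> "$LOGS_DIR/setup.log"
    exit $?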

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
---
 tools/perf/tests/builtin-test.c  | 30 +++++++++++++++++++++-------
 tools/perf/tests/tests-scripts.c | 34 ++++++++++++++++++++++++++++++--
 tools/perf/tests/tests-scripts.h | 10 ++++++++++
 tools/perf/tests/tests.h         |  8 +++++---
 4 files changed, 70 insertions(+), 12 deletions(-)

diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 85142dfb3e01..4e3d2f779b01 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -258,6 +258,22 @@ static test_fnptr test_function(const struct test_suite *t, int test_case)
 	return t->test_cases[test_case].run_case;
 }
 
+/* If setup fails, skip all test cases */
+static void check_shell_setup(const struct test_suite *t, int ret)
+{
+	struct shell_info* test_info;
+
+	if (!t->priv)
+		return;
+
+	test_info = t->priv;
+
+	if (ret == TEST_SETUP_FAIL)
+		test_info->has_setup = FAILED_SETUP;
+	else if (test_info->has_setup == RUN_SETUP)
+		test_info->has_setup = PASSED_SETUP;
+}
+
 static bool test_exclusive(const struct test_suite *t, int test_case)
 {
 	if (test_case <= 0)
@@ -347,10 +363,8 @@ static int run_test_child(struct child_process *process)
 	return -err;
 }
 
-#define TEST_RUNNING -3
-
-static int print_test_result(struct test_suite *t, int curr_suite, int curr_test_case,
-			     int result, int width, int running)
+static int print_test_result(struct test_suite *t, int curr_suite, int curr_test_case, int result, int width,
+			     int running)
 {
 	if (test_suite__num_test_cases(t) > 1) {
 		int subw = width > 2 ? width - 2 : width;
@@ -367,7 +381,8 @@ static int print_test_result(struct test_suite *t, int curr_suite, int curr_test
 	case TEST_OK:
 		pr_info(" Ok\n");
 		break;
-	case TEST_SKIP: {
+	case TEST_SKIP:
+	case TEST_SETUP_FAIL:{
 		const char *reason = skip_reason(t, curr_test_case);
 
 		if (reason)
@@ -482,6 +497,7 @@ static void finish_test(struct child_test **child_tests, int running_test, int c
 	}
 	/* Clean up child process. */
 	ret = finish_command(&child_test->process);
+	check_shell_setup(t, ret);
 	if (verbose > 1 || (verbose == 1 && ret == TEST_FAIL))
 		fprintf(stderr, "%s", err_output.buf);
 
@@ -503,8 +519,8 @@ static int start_test(struct test_suite *test, int curr_suite, int curr_test_cas
 			pr_debug("--- start ---\n");
 			err = test_function(test, curr_test_case)(test, curr_test_case);
 			pr_debug("---- end ----\n");
-			print_test_result(test, curr_suite, curr_test_case, err, width,
-					  /*running=*/0);
+			print_test_result(test, curr_suite, curr_test_case, err, width, /*running=*/0);
+			check_shell_setup(test, err);
 		}
 		return 0;
 	}
diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index 21a6ede330e9..d680a878800f 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -138,6 +138,11 @@ static bool is_test_script(int dir_fd, const char *name)
 	return is_shell_script(dir_fd, name);
 }
 
+/* Filter for scandir */
+static int setup_filter(const struct dirent *entry){
+	return strcmp(entry->d_name, SHELL_SETUP);
+}
+
 /* Duplicate a string and fall over and die if we run out of memory */
 static char *strdup_check(const char *str)
 {
@@ -175,6 +180,7 @@ static void free_suite(struct test_suite *suite) {
 
 static int shell_test__run(struct test_suite *test, int subtest)
 {
+	struct shell_info *test_info = test->priv;
 	const char *file;
 	int err;
 	char *cmd = NULL;
@@ -187,6 +193,22 @@ static int shell_test__run(struct test_suite *test, int subtest)
 		file = test->test_cases[0].name;
 	}
 
+	/* Run setup if needed */
+	if (test_info->has_setup == RUN_SETUP){
+		char *setup_script;
+		if (asprintf(&setup_script, "%s%s%s", test_info->base_path, SHELL_SETUP, verbose ? " -v" : "") < 0)
+			return TEST_SETUP_FAIL;
+
+		err = system(setup_script);
+		free(setup_script);
+
+		if (err)
+			return TEST_SETUP_FAIL;
+	}
+	else if (test_info->has_setup == FAILED_SETUP) {
+		return TEST_SKIP; /* Skip test suite if setup failed */
+	}
+
 	if (asprintf(&cmd, "%s%s", file, verbose ? " -v" : "") < 0)
 		return TEST_FAIL;
 
@@ -228,6 +250,7 @@ static struct test_suite* prepare_test_suite(int dir_fd)
 	}
 
 	test_info->base_path = strdup_check(dirpath);		/* Absolute path to dir */
+	test_info->has_setup = NO_SETUP;
 
 	test_suite->priv = test_info;
 	test_suite->desc = NULL;
@@ -309,7 +332,7 @@ static void append_scripts_in_subdir(int dir_fd,
 	int n_dirs, i;
 
 	/* List files, sorted by alpha */
-	n_dirs = scandirat(dir_fd, ".", &entlist, NULL, alphasort);
+	n_dirs = scandirat(dir_fd, ".", &entlist, setup_filter, alphasort);
 	if (n_dirs == -1)
 		return;
 	for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
@@ -404,7 +427,14 @@ static void append_suits_in_dir(int dir_fd,
 			continue;
 		}
 
-		test_suite->desc = strdup_check(ent->d_name);	/* If no setup, set name to the directory */
+		if (is_test_script(fd, SHELL_SETUP)) {	/* Check for setup existance */
+			char *desc = shell_test__description(fd, SHELL_SETUP);
+			test_suite->desc = desc;	/* Set the suite name by the setup description */
+			((struct shell_info*)(test_suite->priv))->has_setup = RUN_SETUP;
+		}
+		else {
+			test_suite->desc = strdup_check(ent->d_name);	/* If no setup, set name to the directory */
+		}
 
 		append_suite(result, result_sz, test_suite);
 		close(fd);
diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
index 60a1a19a45c9..da4dcd26140c 100644
--- a/tools/perf/tests/tests-scripts.h
+++ b/tools/perf/tests/tests-scripts.h
@@ -4,8 +4,18 @@
 
 #include "tests.h"
 
+#define SHELL_SETUP "setup.sh"
+
+enum shell_setup {
+	NO_SETUP     = 0,
+	RUN_SETUP    = 1,
+	FAILED_SETUP = 2,
+	PASSED_SETUP = 3,
+};
+
 struct shell_info {
 	const char *base_path;
+	enum shell_setup has_setup;
 };
 
 struct test_suite **create_script_test_suites(void);
diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h
index 97e62db8764a..0545c9429000 100644
--- a/tools/perf/tests/tests.h
+++ b/tools/perf/tests/tests.h
@@ -6,9 +6,11 @@
 #include "util/debug.h"
 
 enum {
-	TEST_OK   =  0,
-	TEST_FAIL = -1,
-	TEST_SKIP = -2,
+	TEST_OK         =  0,
+	TEST_FAIL      	= -1,
+	TEST_SKIP       = -2,
+	TEST_RUNNING	= -3,
+	TEST_SETUP_FAIL = -4,
 };
 
 #define TEST_ASSERT_VAL(text, cond)					 \
-- 
2.50.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v3 4/7] perftool-testsuite: Add empty setup for base_probe
  2025-07-21 13:26     ` [PATCH v3 0/7] " Jakub Brnak
                         ` (2 preceding siblings ...)
  2025-07-21 13:26       ` [PATCH v3 3/7] perf test: Provide setup for the shell test suite Jakub Brnak
@ 2025-07-21 13:26       ` Jakub Brnak
  2025-07-21 13:26       ` [PATCH v3 5/7] perf test: Introduce storing logs for shell tests Jakub Brnak
                         ` (3 subsequent siblings)
  7 siblings, 0 replies; 43+ messages in thread
From: Jakub Brnak @ 2025-07-21 13:26 UTC (permalink / raw)
  To: vmolnaro; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, namhyung

From: Veronika Molnarova <vmolnaro@redhat.com>

Add an empty setup script to set a proper name for the base_probe
test suite; it can also be used for basic test setup in the future.

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
---
 tools/perf/tests/shell/base_probe/setup.sh | 13 +++++++++++++
 1 file changed, 13 insertions(+)
 create mode 100755 tools/perf/tests/shell/base_probe/setup.sh

diff --git a/tools/perf/tests/shell/base_probe/setup.sh b/tools/perf/tests/shell/base_probe/setup.sh
new file mode 100755
index 000000000000..fbb99325b555
--- /dev/null
+++ b/tools/perf/tests/shell/base_probe/setup.sh
@@ -0,0 +1,13 @@
+#!/bin/bash
+# perftool-testsuite :: perf_probe
+# SPDX-License-Identifier: GPL-2.0
+
+#
+#	setup.sh of perf probe test
+#	Author: Michael Petlan <mpetlan@redhat.com>
+#
+#	Description:
+#
+#		Setting testsuite name, for future use
+#
+#
-- 
2.50.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v3 5/7] perf test: Introduce storing logs for shell tests
  2025-07-21 13:26     ` [PATCH v3 0/7] " Jakub Brnak
                         ` (3 preceding siblings ...)
  2025-07-21 13:26       ` [PATCH v3 4/7] perftool-testsuite: Add empty setup for base_probe Jakub Brnak
@ 2025-07-21 13:26       ` Jakub Brnak
  2025-07-21 19:43         ` Ian Rogers
  2025-07-26  6:17         ` Namhyung Kim
  2025-07-21 13:26       ` [PATCH v3 6/7] perf test: Format log directories " Jakub Brnak
                         ` (2 subsequent siblings)
  7 siblings, 2 replies; 43+ messages in thread
From: Jakub Brnak @ 2025-07-21 13:26 UTC (permalink / raw)
  To: vmolnaro; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, namhyung

From: Veronika Molnarova <vmolnaro@redhat.com>

Create temporary directories for storing the log files of shell
tests, which can help while debugging. The log files are also
necessary for the perftool-testsuite test cases. If the environment
variable PERFTEST_KEEP_LOGS is set to "y", keep the logs; otherwise
delete them.
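
A usage sketch for keeping the logs of one run for inspection (the
suite name given to 'perf test' is just an example substring):

    # run a suite and keep its log directory under /tmp
    PERFTEST_KEEP_LOGS=y perf test perf_probe

    # inspect what the test cases logged
    ls -d /tmp/perf_test_*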

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
---
 tools/perf/tests/builtin-test.c  | 90 ++++++++++++++++++++++++++++++++
 tools/perf/tests/tests-scripts.c |  3 ++
 tools/perf/tests/tests-scripts.h |  1 +
 3 files changed, 94 insertions(+)

diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 4e3d2f779b01..89b180798224 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -6,6 +6,7 @@
  */
 #include <ctype.h>
 #include <fcntl.h>
+#include <ftw.h>
 #include <errno.h>
 #ifdef HAVE_BACKTRACE_SUPPORT
 #include <execinfo.h>
@@ -282,6 +283,86 @@ static bool test_exclusive(const struct test_suite *t, int test_case)
 	return t->test_cases[test_case].exclusive;
 }
 
+static int delete_file(const char *fpath, const struct stat *sb __maybe_unused,
+						 int typeflag, struct FTW *ftwbuf)
+{
+	int rv = -1;
+
+	/* Stop traversal if going too deep */
+	if (ftwbuf->level > 5) {
+		pr_err("Tree traversal reached level %d, stopping.", ftwbuf->level);
+		return rv;
+	}
+
+	/* Remove only expected directories */
+	if (typeflag == FTW_D || typeflag == FTW_DP){
+		const char *dirname = fpath + ftwbuf->base;
+
+		if (strcmp(dirname, "logs") && strcmp(dirname, "examples") &&
+			strcmp(dirname, "header_tar") && strncmp(dirname, "perf_", 5)) {
+				pr_err("Unknown directory %s", dirname);
+				return rv;
+			 }
+	}
+
+	/* Attempt to remove the file */
+	rv = remove(fpath);
+	if (rv)
+		pr_err("Failed to remove file: %s", fpath);
+
+	return rv;
+}
+
+static bool create_logs(struct test_suite *t, int pass){
+	bool store_logs = t->priv && ((struct shell_info*)(t->priv))->store_logs;
+	if (pass == 1 && (!test_exclusive(t, 0) || sequential || dont_fork)) {
+		/* Sequential and non-exclusive tests run on the first pass. */
+		return store_logs;
+	}
+	else if (pass != 1 && test_exclusive(t, 0) && !sequential && !dont_fork) {
+		/* Exclusive tests without sequential run on the second pass. */
+		return store_logs;
+	}
+	return false;
+}
+
+static char *setup_shell_logs(const char *name)
+{
+	char template[PATH_MAX];
+	char *temp_dir;
+
+	if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", name) < 0) {
+		pr_err("Failed to create log dir template");
+		return NULL; /* Skip the testsuite */
+	}
+
+	temp_dir = mkdtemp(template);
+	if (temp_dir) {
+		setenv("PERFSUITE_RUN_DIR", temp_dir, 1);
+		return strdup(temp_dir);
+	}
+	else {
+		pr_err("Failed to create the temporary directory");
+	}
+
+	return NULL; /* Skip the testsuite */
+}
+
+static void cleanup_shell_logs(char *dirname)
+{
+	char *keep_logs = getenv("PERFTEST_KEEP_LOGS");
+
+	/* Check if logs should be kept or do cleanup */
+	if (dirname) {
+		if (!keep_logs || strcmp(keep_logs, "y") != 0) {
+			nftw(dirname, delete_file, 8, FTW_DEPTH | FTW_PHYS);
+		}
+		free(dirname);
+	}
+
+	unsetenv("PERFSUITE_RUN_DIR");
+}
+
 static bool perf_test__matches(const char *desc, int suite_num, int argc, const char *argv[])
 {
 	int i;
@@ -626,6 +707,7 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
 		for (struct test_suite **t = suites; *t; t++, curr_suite++) {
 			int curr_test_case;
 			bool suite_matched = false;
+			char *tmpdir = NULL;
 
 			if (!perf_test__matches(test_description(*t, -1), curr_suite, argc, argv)) {
 				/*
@@ -655,6 +737,13 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
 			}
 
 			for (unsigned int run = 0; run < runs_per_test; run++) {
+				/* Setup temporary log directories for shell test suites */
+				if (create_logs(*t, pass)) {
+					tmpdir = setup_shell_logs((*t)->desc);
+
+					if (tmpdir == NULL)  /* Couldn't create log dir, skip test suite */
+						((struct shell_info*)((*t)->priv))->has_setup = FAILED_SETUP;
+				}
 				test_suite__for_each_test_case(*t, curr_test_case) {
 					if (!suite_matched &&
 					    !perf_test__matches(test_description(*t, curr_test_case),
@@ -667,6 +756,7 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
 						goto err_out;
 				}
 			}
+			cleanup_shell_logs(tmpdir);
 		}
 		if (!sequential) {
 			/* Parallel mode starts tests but doesn't finish them. Do that now. */
diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index d680a878800f..d4e382898a30 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -251,6 +251,7 @@ static struct test_suite* prepare_test_suite(int dir_fd)
 
 	test_info->base_path = strdup_check(dirpath);		/* Absolute path to dir */
 	test_info->has_setup = NO_SETUP;
+	test_info->store_logs = false;
 
 	test_suite->priv = test_info;
 	test_suite->desc = NULL;
@@ -427,6 +428,8 @@ static void append_suits_in_dir(int dir_fd,
 			continue;
 		}
 
+		/* Store logs for test suites in sub-directories */
+		((struct shell_info*)(test_suite->priv))->store_logs = true;
 		if (is_test_script(fd, SHELL_SETUP)) {	/* Check for setup existance */
 			char *desc = shell_test__description(fd, SHELL_SETUP);
 			test_suite->desc = desc;	/* Set the suite name by the setup description */
diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
index da4dcd26140c..41da0a175e4e 100644
--- a/tools/perf/tests/tests-scripts.h
+++ b/tools/perf/tests/tests-scripts.h
@@ -16,6 +16,7 @@ enum shell_setup {
 struct shell_info {
 	const char *base_path;
 	enum shell_setup has_setup;
+	bool store_logs;
 };
 
 struct test_suite **create_script_test_suites(void);
-- 
2.50.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v3 6/7] perf test: Format log directories for shell tests
  2025-07-21 13:26     ` [PATCH v3 0/7] " Jakub Brnak
                         ` (4 preceding siblings ...)
  2025-07-21 13:26       ` [PATCH v3 5/7] perf test: Introduce storing logs for shell tests Jakub Brnak
@ 2025-07-21 13:26       ` Jakub Brnak
  2025-07-26  6:21         ` Namhyung Kim
  2025-07-21 13:26       ` [PATCH v3 7/7] perf test: Remove perftool drivers Jakub Brnak
  2025-07-31 12:54       ` [PATCH v3 0/7] Introduce structure for shell tests tejas05
  7 siblings, 1 reply; 43+ messages in thread
From: Jakub Brnak @ 2025-07-21 13:26 UTC (permalink / raw)
  To: vmolnaro; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, namhyung

From: Veronika Molnarova <vmolnaro@redhat.com>

The name of the log directory is taken from the test suite
description, which may contain whitespace or other special characters.
This can cause issues if the name is not quoted correctly.

Replace all non-alphanumeric characters with underscores to prevent
issues caused by word splitting.
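
For illustration, the intended mapping can be sketched in shell (the
description string below is a made-up example; the real conversion is
done in C by check_dir_name()):

    # keep alphanumerics, map everything else to '_'
    desc='probe / adding kernel'                       # hypothetical suite description
    name=$(printf '%s' "$desc" | tr -c '[:alnum:]' '_')
    echo "/tmp/perf_test_${name}.XXXXXX"               # template later passed to mkdtemp()
    # -> /tmp/perf_test_probe___adding_kernel.XXXXXX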

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
---
 tools/perf/tests/builtin-test.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 89b180798224..9cb0788d3307 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -283,6 +283,21 @@ static bool test_exclusive(const struct test_suite *t, int test_case)
 	return t->test_cases[test_case].exclusive;
 }
 
+/* Replace non-alphanumeric characters with _ */
+static void check_dir_name(const char *src, char *dst)
+{
+	size_t i;
+	size_t len = strlen(src);
+
+	for (i = 0; i < len; i++) {
+		if (!isalnum(src[i]))
+			dst[i] = '_';
+		else
+			dst[i] = src[i];
+	}
+	dst[i] = '\0';
+}
+
 static int delete_file(const char *fpath, const struct stat *sb __maybe_unused,
 						 int typeflag, struct FTW *ftwbuf)
 {
@@ -328,10 +343,12 @@ static bool create_logs(struct test_suite *t, int pass){
 
 static char *setup_shell_logs(const char *name)
 {
-	char template[PATH_MAX];
+	char template[PATH_MAX], valid_name[strlen(name)+1];
 	char *temp_dir;
 
-	if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", name) < 0) {
+	check_dir_name(name, valid_name);
+
+	if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", valid_name) < 0) {
 		pr_err("Failed to create log dir template");
 		return NULL; /* Skip the testsuite */
 	}
-- 
2.50.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v3 7/7] perf test: Remove perftool drivers
  2025-07-21 13:26     ` [PATCH v3 0/7] " Jakub Brnak
                         ` (5 preceding siblings ...)
  2025-07-21 13:26       ` [PATCH v3 6/7] perf test: Format log directories " Jakub Brnak
@ 2025-07-21 13:26       ` Jakub Brnak
  2025-07-21 19:46         ` Ian Rogers
  2025-07-31 12:54       ` [PATCH v3 0/7] Introduce structure for shell tests tejas05
  7 siblings, 1 reply; 43+ messages in thread
From: Jakub Brnak @ 2025-07-21 13:26 UTC (permalink / raw)
  To: vmolnaro; +Cc: acme, acme, irogers, linux-perf-users, mpetlan, namhyung

From: Veronika Molnarova <vmolnaro@redhat.com>

Perf now provides all of the features required for running the
perftool test cases, such as creating log directories, running setup
scripts and structuring the tests by the base_ directories.

Remove the drivers, as they are no longer necessary, together with
the condition that skips the base_ directories, and run the test
cases through the default perf test structure.
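
As a rough usage sketch (the suite matching below is illustrative, not
part of this patch), the former driver invocation becomes a plain perf
test run:

    # list the shell suites and run the probe one through perf test itself;
    # PERFTEST_KEEP_LOGS=y keeps the /tmp/perf_test_* log directory around
    cd tools/perf
    ./perf test list 2>&1 | grep -i probe
    PERFTEST_KEEP_LOGS=y ./perf test -v probe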

Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
---
 .../tests/shell/perftool-testsuite_probe.sh   | 24 -------------------
 .../tests/shell/perftool-testsuite_report.sh  | 23 ------------------
 tools/perf/tests/tests-scripts.c              |  2 --
 3 files changed, 49 deletions(-)
 delete mode 100755 tools/perf/tests/shell/perftool-testsuite_probe.sh
 delete mode 100755 tools/perf/tests/shell/perftool-testsuite_report.sh

diff --git a/tools/perf/tests/shell/perftool-testsuite_probe.sh b/tools/perf/tests/shell/perftool-testsuite_probe.sh
deleted file mode 100755
index 3863df16c19b..000000000000
--- a/tools/perf/tests/shell/perftool-testsuite_probe.sh
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/bin/bash
-# perftool-testsuite_probe (exclusive)
-# SPDX-License-Identifier: GPL-2.0
-
-[ "$(id -u)" = 0 ] || exit 2
-test -d "$(dirname "$0")/base_probe" || exit 2
-cd "$(dirname "$0")/base_probe" || exit 2
-status=0
-
-PERFSUITE_RUN_DIR=$(mktemp -d /tmp/"$(basename "$0" .sh)".XXX)
-export PERFSUITE_RUN_DIR
-
-for testcase in setup.sh test_*; do                  # skip setup.sh if not present or not executable
-     test -x "$testcase" || continue
-     ./"$testcase"
-     (( status += $? ))
-done
-
-if ! [ "$PERFTEST_KEEP_LOGS" = "y" ]; then
-	rm -rf "$PERFSUITE_RUN_DIR"
-fi
-
-test $status -ne 0 && exit 1
-exit 0
diff --git a/tools/perf/tests/shell/perftool-testsuite_report.sh b/tools/perf/tests/shell/perftool-testsuite_report.sh
deleted file mode 100755
index a8cf75b4e77e..000000000000
--- a/tools/perf/tests/shell/perftool-testsuite_report.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/bash
-# perftool-testsuite_report (exclusive)
-# SPDX-License-Identifier: GPL-2.0
-
-test -d "$(dirname "$0")/base_report" || exit 2
-cd "$(dirname "$0")/base_report" || exit 2
-status=0
-
-PERFSUITE_RUN_DIR=$(mktemp -d /tmp/"$(basename "$0" .sh)".XXX)
-export PERFSUITE_RUN_DIR
-
-for testcase in setup.sh test_*; do                  # skip setup.sh if not present or not executable
-     test -x "$testcase" || continue
-     ./"$testcase"
-     (( status += $? ))
-done
-
-if ! [ "$PERFTEST_KEEP_LOGS" = "y" ]; then
-	rm -rf "$PERFSUITE_RUN_DIR"
-fi
-
-test $status -ne 0 && exit 1
-exit 0
diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index d4e382898a30..79b75b83a4bf 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -413,8 +413,6 @@ static void append_suits_in_dir(int dir_fd,
 			if (!S_ISDIR(st.st_mode))
 				continue;
 		}
-		if (strncmp(ent->d_name, "base_", 5) == 0)
-			continue; /* Skip scripts that have a separate driver. */
 
 		/* Scan subdir for test cases*/
 		fd = openat(dir_fd, ent->d_name, O_PATH);
-- 
2.50.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [PATCH v3 2/7] perf tests: Create a structure for shell tests
  2025-07-21 13:26       ` [PATCH v3 2/7] perf tests: Create a structure for shell tests Jakub Brnak
@ 2025-07-21 19:39         ` Ian Rogers
  2025-07-26  6:03         ` Namhyung Kim
  1 sibling, 0 replies; 43+ messages in thread
From: Ian Rogers @ 2025-07-21 19:39 UTC (permalink / raw)
  To: Jakub Brnak; +Cc: vmolnaro, acme, acme, linux-perf-users, mpetlan, namhyung

On Mon, Jul 21, 2025 at 6:26 AM Jakub Brnak <jbrnak@redhat.com> wrote:
>
> From: Veronika Molnarova <vmolnaro@redhat.com>
>
> The general structure of test suites with test cases has been implemented
> for C tests for some time, while shell tests were simply put into a flat
> list without any structuring.
>
> Provide the same test suite structure for shell tests. A suite is
> created for each subdirectory of the 'perf/tests/shell' directory that
> contains at least one test script. All deeper levels of subdirectories
> are merged into the first level of test cases. The name of the test
> suite is the name of the subdirectory where the test cases are located.
> For test scripts that are not in any subdirectory, a test suite with a
> single test case is created, as has been done until now.
>
> The new structure of the shell tests for 'perf test list':
>     77: build id cache operations
>     78: coresight
>     78:1: CoreSight / ASM Pure Loop
>     78:2: CoreSight / Memcpy 16k 10 Threads
>     78:3: CoreSight / Thread Loop 10 Threads - Check TID
>     78:4: CoreSight / Thread Loop 2 Threads - Check TID
>     78:5: CoreSight / Unroll Loop Thread 10
>     79: daemon operations
>     80: perf diff tests
>
> Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
> ---
>  tools/perf/tests/tests-scripts.c | 223 +++++++++++++++++++++++++------
>  tools/perf/tests/tests-scripts.h |   4 +
>  2 files changed, 189 insertions(+), 38 deletions(-)
>
> diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
> index f18c4cd337c8..21a6ede330e9 100644
> --- a/tools/perf/tests/tests-scripts.c
> +++ b/tools/perf/tests/tests-scripts.c
> @@ -151,14 +151,45 @@ static char *strdup_check(const char *str)
>         return newstr;
>  }
>
> -static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
> +/* Free the whole structure of test_suite with its test_cases */
> +static void free_suite(struct test_suite *suite) {
> +       if (suite->test_cases){
> +               int num = 0;
> +               while (suite->test_cases[num].name){ /* Last case has name set to NULL */
> +                       free((void*) suite->test_cases[num].name);
> +                       free((void*) suite->test_cases[num].desc);
> +                       num++;
> +               }
> +               free(suite->test_cases);
> +       }
> +       if (suite->desc)
> +               free((void*) suite->desc);
> +       if (suite->priv){
> +               struct shell_info *test_info = suite->priv;
> +               free((void*) test_info->base_path);
> +               free(test_info);
> +       }
> +
> +       free(suite);
> +}
> +
> +static int shell_test__run(struct test_suite *test, int subtest)
>  {
> -       const char *file = test->priv;
> +       const char *file;
>         int err;
>         char *cmd = NULL;
>
> +       /* Get absolute file path */
> +       if (subtest >= 0) {
> +               file = test->test_cases[subtest].name;
> +       }
> +       else {          /* Single test case */
> +               file = test->test_cases[0].name;
> +       }
> +
>         if (asprintf(&cmd, "%s%s", file, verbose ? " -v" : "") < 0)
>                 return TEST_FAIL;
> +
>         err = system(cmd);
>         free(cmd);
>         if (!err)
> @@ -167,63 +198,154 @@ static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
>         return WEXITSTATUS(err) == 2 ? TEST_SKIP : TEST_FAIL;
>  }
>
> -static void append_script(int dir_fd, const char *name, char *desc,
> -                         struct test_suite ***result,
> -                         size_t *result_sz)
> +static struct test_suite* prepare_test_suite(int dir_fd)
>  {
> -       char filename[PATH_MAX], link[128];
> -       struct test_suite *test_suite, **result_tmp;
> -       struct test_case *tests;
> +       char dirpath[PATH_MAX], link[128];
>         ssize_t len;
> -       char *exclusive;
> +       struct test_suite *test_suite = NULL;
> +       struct shell_info *test_info;
>
> +       /* Get dir absolute path */
>         snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
> -       len = readlink(link, filename, sizeof(filename));
> +       len = readlink(link, dirpath, sizeof(dirpath));
>         if (len < 0) {
>                 pr_err("Failed to readlink %s", link);
> -               return;
> +               return NULL;
>         }
> -       filename[len++] = '/';
> -       strcpy(&filename[len], name);
> +       dirpath[len++] = '/';
> +       dirpath[len] = '\0';
>
> -       tests = calloc(2, sizeof(*tests));
> -       if (!tests) {
> -               pr_err("Out of memory while building script test suite list\n");
> -               return;
> -       }
> -       tests[0].name = strdup_check(name);
> -       exclusive = strstr(desc, " (exclusive)");
> -       if (exclusive != NULL) {
> -               tests[0].exclusive = true;
> -               exclusive[0] = '\0';
> -       }
> -       tests[0].desc = strdup_check(desc);
> -       tests[0].run_case = shell_test__run;
>         test_suite = zalloc(sizeof(*test_suite));
>         if (!test_suite) {
>                 pr_err("Out of memory while building script test suite list\n");
> -               free(tests);
> -               return;
> +               return NULL;
>         }
> -       test_suite->desc = desc;
> -       test_suite->test_cases = tests;
> -       test_suite->priv = strdup_check(filename);
> +
> +       test_info = zalloc(sizeof(*test_info));
> +       if (!test_info) {
> +               pr_err("Out of memory while building script test suite list\n");
> +               return NULL;
> +       }
> +
> +       test_info->base_path = strdup_check(dirpath);           /* Absolute path to dir */
> +
> +       test_suite->priv = test_info;
> +       test_suite->desc = NULL;
> +       test_suite->test_cases = NULL;
> +
> +       return test_suite;
> +}
> +
> +static void append_suite(struct test_suite ***result,
> +                         size_t *result_sz, struct test_suite *test_suite)
> +{
> +       struct test_suite **result_tmp;
> +
>         /* Realloc is good enough, though we could realloc by chunks, not that
>          * anyone will ever measure performance here */
>         result_tmp = realloc(*result, (*result_sz + 1) * sizeof(*result_tmp));
>         if (result_tmp == NULL) {
>                 pr_err("Out of memory while building script test suite list\n");
> -               free(tests);
> -               free(test_suite);
> +               free_suite(test_suite);
>                 return;
>         }
> +
>         /* Add file to end and NULL terminate the struct array */
>         *result = result_tmp;
>         (*result)[*result_sz] = test_suite;
>         (*result_sz)++;
>  }
>
> -static void append_scripts_in_dir(int dir_fd,
> +static void append_script_to_suite(int dir_fd, const char *name, char *desc,
> +                                       struct test_suite *test_suite, size_t *tc_count)
> +{
> +       char file_name[PATH_MAX], link[128];
> +       struct test_case *tests;
> +       ssize_t len;
> +       char *exclusive;
> +
> +       if (!test_suite)
> +               return;
> +
> +       /* Requires an empty test case at the end */
> +       tests = realloc(test_suite->test_cases, (*tc_count + 2) * sizeof(*tests));
> +       if (!tests) {
> +               pr_err("Out of memory while building script test suite list\n");
> +               return;
> +       }
> +
> +       /* Get path to the test script */
> +       snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
> +       len = readlink(link, file_name, sizeof(file_name));
> +       if (len < 0) {
> +               pr_err("Failed to readlink %s", link);
> +               return;
> +       }
> +       file_name[len++] = '/';
> +       strcpy(&file_name[len], name);
> +
> +       tests[(*tc_count)].name = strdup_check(file_name);      /* Get path to the script from base dir */
> +       tests[(*tc_count)].exclusive = false;
> +       exclusive = strstr(desc, " (exclusive)");
> +       if (exclusive != NULL) {
> +               tests[(*tc_count)].exclusive = true;
> +               exclusive[0] = '\0';
> +       }
> +       tests[(*tc_count)].desc = desc;
> +       tests[(*tc_count)].skip_reason = NULL;  /* Unused */
> +       tests[(*tc_count)++].run_case = shell_test__run;
> +
> +       tests[(*tc_count)].name = NULL;         /* End the test cases */
> +
> +       test_suite->test_cases = tests;
> +}
> +
> +static void append_scripts_in_subdir(int dir_fd,
> +                                 struct test_suite *suite,
> +                                 size_t *tc_count)
> +{
> +       struct dirent **entlist;
> +       struct dirent *ent;
> +       int n_dirs, i;
> +
> +       /* List files, sorted by alpha */
> +       n_dirs = scandirat(dir_fd, ".", &entlist, NULL, alphasort);
> +       if (n_dirs == -1)
> +               return;
> +       for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
> +               int fd;
> +
> +               if (ent->d_name[0] == '.')
> +                       continue; /* Skip hidden files */
> +               if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
> +                       char *desc = shell_test__description(dir_fd, ent->d_name);
> +
> +                       if (desc) /* It has a desc line - valid script */
> +                               append_script_to_suite(dir_fd, ent->d_name, desc, suite, tc_count);
> +                       continue;
> +               }
> +
> +               if (ent->d_type != DT_DIR) {
> +                       struct stat st;
> +
> +                       if (ent->d_type != DT_UNKNOWN)
> +                               continue;
> +                       fstatat(dir_fd, ent->d_name, &st, 0);
> +                       if (!S_ISDIR(st.st_mode))

Note: we have io_dir, which has an io_dir__is_dir() that does something
similar to this:
https://web.git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/lib/api/io_dir.h?h=perf-tools-next#n89
but scandirat is used here for the added benefit of alphabetical sorting.

> +                               continue;
> +               }
> +
> +               fd = openat(dir_fd, ent->d_name, O_PATH);
> +
> +               /* Recurse into the dir */
> +               append_scripts_in_subdir(fd, suite, tc_count);
> +       }
> +       for (i = 0; i < n_dirs; i++) /* Clean up */
> +               zfree(&entlist[i]);
> +       free(entlist);
> +}
> +
> +static void append_suits_in_dir(int dir_fd,

nit: typo "suits" should be "suites"

Thanks,
Ian

>                                   struct test_suite ***result,
>                                   size_t *result_sz)
>  {
> @@ -237,16 +359,27 @@ static void append_scripts_in_dir(int dir_fd,
>                 return;
>         for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
>                 int fd;
> +               struct test_suite *test_suite;
> +               size_t cases_count = 0;
>
>                 if (ent->d_name[0] == '.')
>                         continue; /* Skip hidden files */
>                 if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
>                         char *desc = shell_test__description(dir_fd, ent->d_name);
>
> -                       if (desc) /* It has a desc line - valid script */
> -                               append_script(dir_fd, ent->d_name, desc, result, result_sz);
> +                       if (desc) { /* It has a desc line - valid script */
> +                               test_suite = prepare_test_suite(dir_fd); /* Create a test suite with a single test case */
> +                               append_script_to_suite(dir_fd, ent->d_name, desc, test_suite, &cases_count);
> +                               test_suite->desc = strdup_check(desc);
> +
> +                               if (cases_count)
> +                                       append_suite(result, result_sz, test_suite);
> +                               else /* Wasn't able to create the test case */
> +                                       free_suite(test_suite);
> +                       }
>                         continue;
>                 }
> +
>                 if (ent->d_type != DT_DIR) {
>                         struct stat st;
>
> @@ -258,8 +391,22 @@ static void append_scripts_in_dir(int dir_fd,
>                 }
>                 if (strncmp(ent->d_name, "base_", 5) == 0)
>                         continue; /* Skip scripts that have a separate driver. */
> +
> +               /* Scan subdir for test cases*/
>                 fd = openat(dir_fd, ent->d_name, O_PATH);
> -               append_scripts_in_dir(fd, result, result_sz);
> +               test_suite = prepare_test_suite(fd);    /* Prepare a testsuite with its path */
> +               if (!test_suite)
> +                       continue;
> +
> +               append_scripts_in_subdir(fd, test_suite, &cases_count);
> +               if (cases_count == 0){
> +                       free_suite(test_suite);
> +                       continue;
> +               }
> +
> +               test_suite->desc = strdup_check(ent->d_name);   /* If no setup, set name to the directory */
> +
> +               append_suite(result, result_sz, test_suite);
>                 close(fd);
>         }
>         for (i = 0; i < n_dirs; i++) /* Clean up */
> @@ -278,7 +425,7 @@ struct test_suite **create_script_test_suites(void)
>          * length array.
>          */
>         if (dir_fd >= 0)
> -               append_scripts_in_dir(dir_fd, &result, &result_sz);
> +               append_suits_in_dir(dir_fd, &result, &result_sz);
>
>         result_tmp = realloc(result, (result_sz + 1) * sizeof(*result_tmp));
>         if (result_tmp == NULL) {
> diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
> index b553ad26ea17..60a1a19a45c9 100644
> --- a/tools/perf/tests/tests-scripts.h
> +++ b/tools/perf/tests/tests-scripts.h
> @@ -4,6 +4,10 @@
>
>  #include "tests.h"
>
> +struct shell_info {
> +       const char *base_path;
> +};
> +
>  struct test_suite **create_script_test_suites(void);
>
>  #endif /* TESTS_SCRIPTS_H */
> --
> 2.50.1
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v3 5/7] perf test: Introduce storing logs for shell tests
  2025-07-21 13:26       ` [PATCH v3 5/7] perf test: Introduce storing logs for shell tests Jakub Brnak
@ 2025-07-21 19:43         ` Ian Rogers
  2025-07-26  6:17         ` Namhyung Kim
  1 sibling, 0 replies; 43+ messages in thread
From: Ian Rogers @ 2025-07-21 19:43 UTC (permalink / raw)
  To: Jakub Brnak, open list:KERNEL SELFTEST FRAMEWORK
  Cc: vmolnaro, acme, acme, linux-perf-users, mpetlan, namhyung

On Mon, Jul 21, 2025 at 6:27 AM Jakub Brnak <jbrnak@redhat.com> wrote:
>
> From: Veronika Molnarova <vmolnaro@redhat.com>
>
> Create temporary directories for storing log files for shell tests,
> which can help while debugging. The log files are also needed by the
> perftool testsuite test cases. If the environment variable
> PERFTEST_KEEP_LOGS is set to "y", keep the logs; otherwise delete them.

Is there perhaps a kunit equivalent of log files so we could keep the
implementations as similar as possible?

Thanks,
Ian

> Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
> ---
>  tools/perf/tests/builtin-test.c  | 90 ++++++++++++++++++++++++++++++++
>  tools/perf/tests/tests-scripts.c |  3 ++
>  tools/perf/tests/tests-scripts.h |  1 +
>  3 files changed, 94 insertions(+)
>
> diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
> index 4e3d2f779b01..89b180798224 100644
> --- a/tools/perf/tests/builtin-test.c
> +++ b/tools/perf/tests/builtin-test.c
> @@ -6,6 +6,7 @@
>   */
>  #include <ctype.h>
>  #include <fcntl.h>
> +#include <ftw.h>
>  #include <errno.h>
>  #ifdef HAVE_BACKTRACE_SUPPORT
>  #include <execinfo.h>
> @@ -282,6 +283,86 @@ static bool test_exclusive(const struct test_suite *t, int test_case)
>         return t->test_cases[test_case].exclusive;
>  }
>
> +static int delete_file(const char *fpath, const struct stat *sb __maybe_unused,
> +                                                int typeflag, struct FTW *ftwbuf)
> +{
> +       int rv = -1;
> +
> +       /* Stop traversal if going too deep */
> +       if (ftwbuf->level > 5) {
> +               pr_err("Tree traversal reached level %d, stopping.", ftwbuf->level);
> +               return rv;
> +       }
> +
> +       /* Remove only expected directories */
> +       if (typeflag == FTW_D || typeflag == FTW_DP) {
> +               const char *dirname = fpath + ftwbuf->base;
> +
> +               if (strcmp(dirname, "logs") && strcmp(dirname, "examples") &&
> +                   strcmp(dirname, "header_tar") && strncmp(dirname, "perf_", 5)) {
> +                       pr_err("Unknown directory %s", dirname);
> +                       return rv;
> +               }
> +       }
> +
> +       /* Attempt to remove the file */
> +       rv = remove(fpath);
> +       if (rv)
> +               pr_err("Failed to remove file: %s", fpath);
> +
> +       return rv;
> +}
> +
> +static bool create_logs(struct test_suite *t, int pass){
> +       bool store_logs = t->priv && ((struct shell_info*)(t->priv))->store_logs;
> +       if (pass == 1 && (!test_exclusive(t, 0) || sequential || dont_fork)) {
> +               /* Sequential and non-exclusive tests run on the first pass. */
> +               return store_logs;
> +       }
> +       else if (pass != 1 && test_exclusive(t, 0) && !sequential && !dont_fork) {
> +               /* Exclusive tests without sequential run on the second pass. */
> +               return store_logs;
> +       }
> +       return false;
> +}
> +
> +static char *setup_shell_logs(const char *name)
> +{
> +       char template[PATH_MAX];
> +       char *temp_dir;
> +
> +       if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", name) < 0) {
> +               pr_err("Failed to create log dir template");
> +               return NULL; /* Skip the testsuite */
> +       }
> +
> +       temp_dir = mkdtemp(template);
> +       if (temp_dir) {
> +               setenv("PERFSUITE_RUN_DIR", temp_dir, 1);
> +               return strdup(temp_dir);
> +       }
> +       else {
> +               pr_err("Failed to create the temporary directory");
> +       }
> +
> +       return NULL; /* Skip the testsuite */
> +}
> +
> +static void cleanup_shell_logs(char *dirname)
> +{
> +       char *keep_logs = getenv("PERFTEST_KEEP_LOGS");
> +
> +       /* Check if logs should be kept or do cleanup */
> +       if (dirname) {
> +               if (!keep_logs || strcmp(keep_logs, "y") != 0) {
> +                       nftw(dirname, delete_file, 8, FTW_DEPTH | FTW_PHYS);
> +               }
> +               free(dirname);
> +       }
> +
> +       unsetenv("PERFSUITE_RUN_DIR");
> +}
> +
>  static bool perf_test__matches(const char *desc, int suite_num, int argc, const char *argv[])
>  {
>         int i;
> @@ -626,6 +707,7 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
>                 for (struct test_suite **t = suites; *t; t++, curr_suite++) {
>                         int curr_test_case;
>                         bool suite_matched = false;
> +                       char *tmpdir = NULL;
>
>                         if (!perf_test__matches(test_description(*t, -1), curr_suite, argc, argv)) {
>                                 /*
> @@ -655,6 +737,13 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
>                         }
>
>                         for (unsigned int run = 0; run < runs_per_test; run++) {
> +                               /* Setup temporary log directories for shell test suites */
> +                               if (create_logs(*t, pass)) {
> +                                       tmpdir = setup_shell_logs((*t)->desc);
> +
> +                                       if (tmpdir == NULL)  /* Couldn't create log dir, skip test suite */
> +                                               ((struct shell_info*)((*t)->priv))->has_setup = FAILED_SETUP;
> +                               }
>                                 test_suite__for_each_test_case(*t, curr_test_case) {
>                                         if (!suite_matched &&
>                                             !perf_test__matches(test_description(*t, curr_test_case),
> @@ -667,6 +756,7 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
>                                                 goto err_out;
>                                 }
>                         }
> +                       cleanup_shell_logs(tmpdir);
>                 }
>                 if (!sequential) {
>                         /* Parallel mode starts tests but doesn't finish them. Do that now. */
> diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
> index d680a878800f..d4e382898a30 100644
> --- a/tools/perf/tests/tests-scripts.c
> +++ b/tools/perf/tests/tests-scripts.c
> @@ -251,6 +251,7 @@ static struct test_suite* prepare_test_suite(int dir_fd)
>
>         test_info->base_path = strdup_check(dirpath);           /* Absolute path to dir */
>         test_info->has_setup = NO_SETUP;
> +       test_info->store_logs = false;
>
>         test_suite->priv = test_info;
>         test_suite->desc = NULL;
> @@ -427,6 +428,8 @@ static void append_suits_in_dir(int dir_fd,
>                         continue;
>                 }
>
> +               /* Store logs for test suites in sub-directories */
> +               ((struct shell_info*)(test_suite->priv))->store_logs = true;
>                 if (is_test_script(fd, SHELL_SETUP)) {  /* Check for setup existance */
>                         char *desc = shell_test__description(fd, SHELL_SETUP);
>                         test_suite->desc = desc;        /* Set the suite name by the setup description */
> diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
> index da4dcd26140c..41da0a175e4e 100644
> --- a/tools/perf/tests/tests-scripts.h
> +++ b/tools/perf/tests/tests-scripts.h
> @@ -16,6 +16,7 @@ enum shell_setup {
>  struct shell_info {
>         const char *base_path;
>         enum shell_setup has_setup;
> +       bool store_logs;
>  };
>
>  struct test_suite **create_script_test_suites(void);
> --
> 2.50.1
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v3 7/7] perf test: Remove perftool drivers
  2025-07-21 13:26       ` [PATCH v3 7/7] perf test: Remove perftool drivers Jakub Brnak
@ 2025-07-21 19:46         ` Ian Rogers
  0 siblings, 0 replies; 43+ messages in thread
From: Ian Rogers @ 2025-07-21 19:46 UTC (permalink / raw)
  To: Jakub Brnak; +Cc: vmolnaro, acme, acme, linux-perf-users, mpetlan, namhyung

On Mon, Jul 21, 2025 at 6:27 AM Jakub Brnak <jbrnak@redhat.com> wrote:
>
> From: Veronika Molnarova <vmolnaro@redhat.com>
>
> Perf now provides all of the features required for running the
> perftool test cases, such as creating log directories, running setup
> scripts and structuring the tests by the base_ directories.
>
> Remove the drivers, as they are no longer necessary, together with
> the condition that skips the base_ directories, and run the test
> cases through the default perf test structure.
>
> Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> Signed-off-by: Jakub Brnak <jbrnak@redhat.com>

Awesome work! I'm very happy with the improvement this will make to
testing. I'd like to test it and I noted some nits, but thanks for
pushing on this!

Ian

> ---
>  .../tests/shell/perftool-testsuite_probe.sh   | 24 -------------------
>  .../tests/shell/perftool-testsuite_report.sh  | 23 ------------------
>  tools/perf/tests/tests-scripts.c              |  2 --
>  3 files changed, 49 deletions(-)
>  delete mode 100755 tools/perf/tests/shell/perftool-testsuite_probe.sh
>  delete mode 100755 tools/perf/tests/shell/perftool-testsuite_report.sh
>
> diff --git a/tools/perf/tests/shell/perftool-testsuite_probe.sh b/tools/perf/tests/shell/perftool-testsuite_probe.sh
> deleted file mode 100755
> index 3863df16c19b..000000000000
> --- a/tools/perf/tests/shell/perftool-testsuite_probe.sh
> +++ /dev/null
> @@ -1,24 +0,0 @@
> -#!/bin/bash
> -# perftool-testsuite_probe (exclusive)
> -# SPDX-License-Identifier: GPL-2.0
> -
> -[ "$(id -u)" = 0 ] || exit 2
> -test -d "$(dirname "$0")/base_probe" || exit 2
> -cd "$(dirname "$0")/base_probe" || exit 2
> -status=0
> -
> -PERFSUITE_RUN_DIR=$(mktemp -d /tmp/"$(basename "$0" .sh)".XXX)
> -export PERFSUITE_RUN_DIR
> -
> -for testcase in setup.sh test_*; do                  # skip setup.sh if not present or not executable
> -     test -x "$testcase" || continue
> -     ./"$testcase"
> -     (( status += $? ))
> -done
> -
> -if ! [ "$PERFTEST_KEEP_LOGS" = "y" ]; then
> -       rm -rf "$PERFSUITE_RUN_DIR"
> -fi
> -
> -test $status -ne 0 && exit 1
> -exit 0
> diff --git a/tools/perf/tests/shell/perftool-testsuite_report.sh b/tools/perf/tests/shell/perftool-testsuite_report.sh
> deleted file mode 100755
> index a8cf75b4e77e..000000000000
> --- a/tools/perf/tests/shell/perftool-testsuite_report.sh
> +++ /dev/null
> @@ -1,23 +0,0 @@
> -#!/bin/bash
> -# perftool-testsuite_report (exclusive)
> -# SPDX-License-Identifier: GPL-2.0
> -
> -test -d "$(dirname "$0")/base_report" || exit 2
> -cd "$(dirname "$0")/base_report" || exit 2
> -status=0
> -
> -PERFSUITE_RUN_DIR=$(mktemp -d /tmp/"$(basename "$0" .sh)".XXX)
> -export PERFSUITE_RUN_DIR
> -
> -for testcase in setup.sh test_*; do                  # skip setup.sh if not present or not executable
> -     test -x "$testcase" || continue
> -     ./"$testcase"
> -     (( status += $? ))
> -done
> -
> -if ! [ "$PERFTEST_KEEP_LOGS" = "y" ]; then
> -       rm -rf "$PERFSUITE_RUN_DIR"
> -fi
> -
> -test $status -ne 0 && exit 1
> -exit 0
> diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
> index d4e382898a30..79b75b83a4bf 100644
> --- a/tools/perf/tests/tests-scripts.c
> +++ b/tools/perf/tests/tests-scripts.c
> @@ -413,8 +413,6 @@ static void append_suits_in_dir(int dir_fd,
>                         if (!S_ISDIR(st.st_mode))
>                                 continue;
>                 }
> -               if (strncmp(ent->d_name, "base_", 5) == 0)
> -                       continue; /* Skip scripts that have a separate driver. */
>
>                 /* Scan subdir for test cases*/
>                 fd = openat(dir_fd, ent->d_name, O_PATH);
> --
> 2.50.1
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v3 1/7] perf test perftool_testsuite: Use absolute paths
  2025-07-21 13:26       ` [PATCH v3 1/7] perf test perftool_testsuite: Use absolute paths Jakub Brnak
@ 2025-07-26  6:00         ` Namhyung Kim
  2025-08-21 11:01           ` Jakub Brnak
  0 siblings, 1 reply; 43+ messages in thread
From: Namhyung Kim @ 2025-07-26  6:00 UTC (permalink / raw)
  To: Jakub Brnak; +Cc: vmolnaro, acme, acme, irogers, linux-perf-users, mpetlan

Hello,

On Mon, Jul 21, 2025 at 03:26:36PM +0200, Jakub Brnak wrote:
> From: Veronika Molnarova <vmolnaro@redhat.com>
> 
> Test cases from perftool_testsuite are affected by the current
> directory where the tests are run. For this reason, the test
> driver has to change the directory to the base_dir for references to
> work correctly.
> 
> Utilize absolute paths when sourcing and referencing other scripts so
> that the current working directory doesn't impact the test cases.
> 
> Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> Signed-off-by: Jakub Brnak <jbrnak@redhat.com>

I'm ok with this change but can you please remove long lines?  I'm not
sure if we should follow the same coding style in shell scripts but long
lines would harm readability IMHO.
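
For example, one of the pattern checks could be wrapped with line
continuations, something like this (untested sketch, patterns taken
from the help-message check below):

    "$DIR_PATH/../common/check_all_patterns_found.pl" \
            "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" \
            "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" \
            < "$LOGS_DIR/basic_helpmsg.log"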

Of course it can be on top of this series.

Thanks,
Namhyung

> ---
>  .../base_probe/test_adding_blacklisted.sh     | 13 ++---
>  .../shell/base_probe/test_adding_kernel.sh    | 53 ++++++++++---------
>  .../perf/tests/shell/base_probe/test_basic.sh | 19 +++----
>  .../shell/base_probe/test_invalid_options.sh  | 11 ++--
>  .../shell/base_probe/test_line_semantics.sh   |  7 +--
>  tools/perf/tests/shell/base_report/setup.sh   |  6 ++-
>  .../tests/shell/base_report/test_basic.sh     | 47 ++++++++--------
>  tools/perf/tests/shell/common/init.sh         |  4 +-
>  8 files changed, 84 insertions(+), 76 deletions(-)
> 
> diff --git a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
> index 8226449ac5c3..c409ca8520f8 100755
> --- a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
> +++ b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
> @@ -13,11 +13,12 @@
>  #	they must be skipped.
>  #
>  
> -# include working environment
> -. ../common/init.sh
> -
> +DIR_PATH="$(dirname $0)"
>  TEST_RESULT=0
>  
> +# include working environment
> +. "$DIR_PATH/../common/init.sh"
> +
>  # skip if not supported
>  BLACKFUNC_LIST=`head -n 5 /sys/kernel/debug/kprobes/blacklist 2> /dev/null | cut -f2`
>  if [ -z "$BLACKFUNC_LIST" ]; then
> @@ -53,7 +54,7 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
>  	PERF_EXIT_CODE=$?
>  
>  	# check for bad DWARF polluting the result
> -	../common/check_all_patterns_found.pl "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
> +	"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
>  
>  	if [ $? -eq 0 ]; then
>  		SKIP_DWARF=1
> @@ -73,7 +74,7 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
>  			fi
>  		fi
>  	else
> -		../common/check_all_lines_matched.pl "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
> +		"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
>  		CHECK_EXIT_CODE=$?
>  
>  		SKIP_DWARF=0
> @@ -94,7 +95,7 @@ fi
>  $CMD_PERF list probe:\* > $LOGS_DIR/adding_blacklisted_list.log
>  PERF_EXIT_CODE=$?
>  
> -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_blacklisted_list.log
> +"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_blacklisted_list.log
>  CHECK_EXIT_CODE=$?
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing blacklisted probe (should NOT be listed)"
> diff --git a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
> index df288cf90cd6..3548faf60c8e 100755
> --- a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
> +++ b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
> @@ -13,13 +13,14 @@
>  #		and removing.
>  #
>  
> -# include working environment
> -. ../common/init.sh
> -
> +DIR_PATH="$(dirname $0)"
>  TEST_RESULT=0
>  
> +# include working environment
> +. "$DIR_PATH/../common/init.sh"
> +
>  # shellcheck source=lib/probe_vfs_getname.sh
> -. "$(dirname "$0")/../lib/probe_vfs_getname.sh"
> +. "$DIR_PATH/../lib/probe_vfs_getname.sh"
>  
>  TEST_PROBE=${TEST_PROBE:-"inode_permission"}
>  
> @@ -44,7 +45,7 @@ for opt in "" "-a" "--add"; do
>  	$CMD_PERF probe $opt $TEST_PROBE 2> $LOGS_DIR/adding_kernel_add$opt.err
>  	PERF_EXIT_CODE=$?
>  
> -	../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_add$opt.err
> +	"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_add$opt.err
>  	CHECK_EXIT_CODE=$?
>  
>  	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "adding probe $TEST_PROBE :: $opt"
> @@ -58,7 +59,7 @@ done
>  $CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list.log
>  PERF_EXIT_CODE=$?
>  
> -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
> +"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "List of pre-defined events" "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
>  CHECK_EXIT_CODE=$?
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list"
> @@ -71,7 +72,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list
>  $CMD_PERF probe -l > $LOGS_DIR/adding_kernel_list-l.log
>  PERF_EXIT_CODE=$?
>  
> -../common/check_all_patterns_found.pl "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" < $LOGS_DIR/adding_kernel_list-l.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" < $LOGS_DIR/adding_kernel_list-l.log
>  CHECK_EXIT_CODE=$?
>  
>  if [ $NO_DEBUGINFO ] ; then
> @@ -93,9 +94,9 @@ REGEX_STAT_VALUES="\s*\d+\s+probe:$TEST_PROBE"
>  # the value should be greater than 1
>  REGEX_STAT_VALUE_NONZERO="\s*[1-9][0-9]*\s+probe:$TEST_PROBE"
>  REGEX_STAT_TIME="\s*$RE_NUMBER\s+seconds (?:time elapsed|user|sys)"
> -../common/check_all_lines_matched.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
> +"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
>  CHECK_EXIT_CODE=$?
> -../common/check_all_patterns_found.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" < $LOGS_DIR/adding_kernel_using_probe.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" < $LOGS_DIR/adding_kernel_using_probe.log
>  (( CHECK_EXIT_CODE += $? ))
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
> @@ -108,7 +109,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
>  $CMD_PERF probe -d $TEST_PROBE\* 2> $LOGS_DIR/adding_kernel_removing.err
>  PERF_EXIT_CODE=$?
>  
> -../common/check_all_lines_matched.pl "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
> +"$DIR_PATH/../common/check_all_lines_matched.pl" "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
>  CHECK_EXIT_CODE=$?
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
> @@ -121,7 +122,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
>  $CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list_removed.log
>  PERF_EXIT_CODE=$?
>  
> -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_kernel_list_removed.log
> +"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_kernel_list_removed.log
>  CHECK_EXIT_CODE=$?
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing removed probe (should NOT be listed)"
> @@ -135,7 +136,7 @@ $CMD_PERF probe -n --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_dryrun.err
>  PERF_EXIT_CODE=$?
>  
>  # check for the output (should be the same as usual)
> -../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_dryrun.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_dryrun.err
>  CHECK_EXIT_CODE=$?
>  
>  # check that no probe was added in real
> @@ -152,7 +153,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "dry run :: adding probe"
>  $CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_01.err
>  PERF_EXIT_CODE=$?
>  
> -../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_01.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_01.err
>  CHECK_EXIT_CODE=$?
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first probe adding"
> @@ -162,7 +163,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first pro
>  ! $CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_02.err
>  PERF_EXIT_CODE=$?
>  
> -../common/check_all_patterns_found.pl "Error: event \"$TEST_PROBE\" already exists." "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "Error: event \"$TEST_PROBE\" already exists." "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
>  CHECK_EXIT_CODE=$?
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (without force)"
> @@ -173,7 +174,7 @@ NO_OF_PROBES=`$CMD_PERF probe -l $TEST_PROBE| wc -l`
>  $CMD_PERF probe --force --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_03.err
>  PERF_EXIT_CODE=$?
>  
> -../common/check_all_patterns_found.pl "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
>  CHECK_EXIT_CODE=$?
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (with force)"
> @@ -187,7 +188,7 @@ $CMD_PERF stat -e probe:$TEST_PROBE -e probe:${TEST_PROBE}_${NO_OF_PROBES} -x';'
>  PERF_EXIT_CODE=$?
>  
>  REGEX_LINE="$RE_NUMBER;+probe:${TEST_PROBE}_?(?:$NO_OF_PROBES)?;$RE_NUMBER;$RE_NUMBER"
> -../common/check_all_lines_matched.pl "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/adding_kernel_using_two.log
> +"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/adding_kernel_using_two.log
>  CHECK_EXIT_CODE=$?
>  
>  VALUE_1=`grep "$TEST_PROBE;" $LOGS_DIR/adding_kernel_using_two.log | awk -F';' '{print $1}'`
> @@ -205,7 +206,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using doubled probe"
>  $CMD_PERF probe --del \* 2> $LOGS_DIR/adding_kernel_removing_wildcard.err
>  PERF_EXIT_CODE=$?
>  
> -../common/check_all_patterns_found.pl "Removed event: probe:$TEST_PROBE" "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
> +"$DIR_PATH/../common/check_all_lines_matched.pl" "Removed event: probe:$TEST_PROBE" "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
>  CHECK_EXIT_CODE=$?
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
> @@ -217,7 +218,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
>  $CMD_PERF probe -nf --max-probes=512 -a 'vfs_* $params' 2> $LOGS_DIR/adding_kernel_adding_wildcard.err
>  PERF_EXIT_CODE=$?
>  
> -../common/check_all_patterns_found.pl "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
>  CHECK_EXIT_CODE=$?
>  
>  if [ $NO_DEBUGINFO ] ; then
> @@ -240,13 +241,13 @@ test $PERF_EXIT_CODE -ne 139 -a $PERF_EXIT_CODE -ne 0
>  PERF_EXIT_CODE=$?
>  
>  # check that the error message is reasonable
> -../common/check_all_patterns_found.pl "Failed to find" "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" < $LOGS_DIR/adding_kernel_nonexisting.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "Failed to find" "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" < $LOGS_DIR/adding_kernel_nonexisting.err
>  CHECK_EXIT_CODE=$?
> -../common/check_all_patterns_found.pl "in this function|at this address" "Error" "Failed to add events" < $LOGS_DIR/adding_kernel_nonexisting.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "in this function|at this address" "Error" "Failed to add events" < $LOGS_DIR/adding_kernel_nonexisting.err
>  (( CHECK_EXIT_CODE += $? ))
> -../common/check_all_lines_matched.pl "Failed to find" "Error" "Probe point .+ not found" "optimized out" "Use.+\-\-range option to show.+location range" < $LOGS_DIR/adding_kernel_nonexisting.err
> +"$DIR_PATH/../common/check_all_lines_matched.pl" "Failed to find" "Error" "Probe point .+ not found" "optimized out" "Use.+\-\-range option to show.+location range" < $LOGS_DIR/adding_kernel_nonexisting.err
>  (( CHECK_EXIT_CODE += $? ))
> -../common/check_no_patterns_found.pl "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
> +"$DIR_PATH/../common/check_no_patterns_found.pl" "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
>  (( CHECK_EXIT_CODE += $? ))
>  
>  if [ $NO_DEBUGINFO ]; then
> @@ -264,7 +265,7 @@ fi
>  $CMD_PERF probe --add "$TEST_PROBE%return \$retval" 2> $LOGS_DIR/adding_kernel_func_retval_add.err
>  PERF_EXIT_CODE=$?
>  
> -../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE%return with \\\$retval" < $LOGS_DIR/adding_kernel_func_retval_add.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE%return with \\\$retval" < $LOGS_DIR/adding_kernel_func_retval_add.err
>  CHECK_EXIT_CODE=$?
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
> @@ -274,7 +275,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
>  $CMD_PERF record -e probe:$TEST_PROBE\* -o $CURRENT_TEST_DIR/perf.data -- cat /proc/cpuinfo > /dev/null 2> $LOGS_DIR/adding_kernel_func_retval_record.err
>  PERF_EXIT_CODE=$?
>  
> -../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/adding_kernel_func_retval_record.err
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/adding_kernel_func_retval_record.err
>  CHECK_EXIT_CODE=$?
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: record"
> @@ -285,9 +286,9 @@ $CMD_PERF script -i $CURRENT_TEST_DIR/perf.data > $LOGS_DIR/adding_kernel_func_r
>  PERF_EXIT_CODE=$?
>  
>  REGEX_SCRIPT_LINE="\s*cat\s+$RE_NUMBER\s+\[$RE_NUMBER\]\s+$RE_NUMBER:\s+probe:$TEST_PROBE\w*:\s+\($RE_NUMBER_HEX\s+<\-\s+$RE_NUMBER_HEX\)\s+arg1=$RE_NUMBER_HEX"
> -../common/check_all_lines_matched.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
> +"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
>  CHECK_EXIT_CODE=$?
> -../common/check_all_patterns_found.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
>  (( CHECK_EXIT_CODE += $? ))
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function argument probing :: script"
> diff --git a/tools/perf/tests/shell/base_probe/test_basic.sh b/tools/perf/tests/shell/base_probe/test_basic.sh
> index 9d8b5afbeddd..e8fed67be9c1 100755
> --- a/tools/perf/tests/shell/base_probe/test_basic.sh
> +++ b/tools/perf/tests/shell/base_probe/test_basic.sh
> @@ -12,11 +12,12 @@
>  #		This test tests basic functionality of perf probe command.
>  #
>  
> -# include working environment
> -. ../common/init.sh
> -
> +DIR_PATH="$(dirname $0)"
>  TEST_RESULT=0
>  
> +# include working environment
> +. "$DIR_PATH/../common/init.sh"
> +
>  if ! check_kprobes_available; then
>  	print_overall_skipped
>  	exit 2
> @@ -30,15 +31,15 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
>  	$CMD_PERF probe --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
>  	PERF_EXIT_CODE=$?
>  
> -	../common/check_all_patterns_found.pl "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" < $LOGS_DIR/basic_helpmsg.log
> +	"$DIR_PATH/../common/check_all_patterns_found.pl" "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" < $LOGS_DIR/basic_helpmsg.log
>  	CHECK_EXIT_CODE=$?
> -	../common/check_all_patterns_found.pl "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" < $LOGS_DIR/basic_helpmsg.log
> +	"$DIR_PATH/../common/check_all_patterns_found.pl" "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" < $LOGS_DIR/basic_helpmsg.log
>  	(( CHECK_EXIT_CODE += $? ))
> -	../common/check_all_patterns_found.pl "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
> +	"$DIR_PATH/../common/check_all_patterns_found.pl" "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
>  	(( CHECK_EXIT_CODE += $? ))
> -	../common/check_all_patterns_found.pl "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
> +	"$DIR_PATH/../common/check_all_patterns_found.pl" "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
>  	(( CHECK_EXIT_CODE += $? ))
> -	../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
> +	"$DIR_PATH/../common/check_no_patterns_found.pl" "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
>  	(( CHECK_EXIT_CODE += $? ))
>  
>  	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
> @@ -53,7 +54,7 @@ fi
>  # without any args perf-probe should print usage
>  $CMD_PERF probe 2> $LOGS_DIR/basic_usage.log > /dev/null
>  
> -../common/check_all_patterns_found.pl "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
>  CHECK_EXIT_CODE=$?
>  
>  print_results 0 $CHECK_EXIT_CODE "usage message"
> diff --git a/tools/perf/tests/shell/base_probe/test_invalid_options.sh b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
> index 92f7254eb32a..9caeab2fe77c 100755
> --- a/tools/perf/tests/shell/base_probe/test_invalid_options.sh
> +++ b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
> @@ -12,11 +12,12 @@
>  #		This test checks whether the invalid and incompatible options are reported
>  #
>  
> -# include working environment
> -. ../common/init.sh
> -
> +DIR_PATH="$(dirname $0)"
>  TEST_RESULT=0
>  
> +# include working environment
> +. "$DIR_PATH/../common/init.sh"
> +
>  if ! check_kprobes_available; then
>  	print_overall_skipped
>  	exit 2
> @@ -33,7 +34,7 @@ for opt in '-a' '-d' '-L' '-V'; do
>  	! $CMD_PERF probe $opt 2> $LOGS_DIR/invalid_options_missing_argument$opt.err
>  	PERF_EXIT_CODE=$?
>  
> -	../common/check_all_patterns_found.pl "Error: switch .* requires a value" < $LOGS_DIR/invalid_options_missing_argument$opt.err
> +	"$DIR_PATH/../common/check_all_patterns_found.pl" "Error: switch .* requires a value" < $LOGS_DIR/invalid_options_missing_argument$opt.err
>  	CHECK_EXIT_CODE=$?
>  
>  	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "missing argument for $opt"
> @@ -66,7 +67,7 @@ for opt in '-a xxx -d xxx' '-a xxx -L foo' '-a xxx -V foo' '-a xxx -l' '-a xxx -
>  	! $CMD_PERF probe $opt > /dev/null 2> $LOGS_DIR/aux.log
>  	PERF_EXIT_CODE=$?
>  
> -	../common/check_all_patterns_found.pl "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
> +	"$DIR_PATH/../common/check_all_patterns_found.pl" "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
>  	CHECK_EXIT_CODE=$?
>  
>  	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "mutually exclusive options :: $opt"
> diff --git a/tools/perf/tests/shell/base_probe/test_line_semantics.sh b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
> index 20435b6bf6bc..576442d87a44 100755
> --- a/tools/perf/tests/shell/base_probe/test_line_semantics.sh
> +++ b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
> @@ -13,11 +13,12 @@
>  #		arguments are properly reported.
>  #
>  
> -# include working environment
> -. ../common/init.sh
> -
> +DIR_PATH="$(dirname $0)"
>  TEST_RESULT=0
>  
> +# include working environment
> +. "$DIR_PATH/../common/init.sh"
> +
>  if ! check_kprobes_available; then
>  	print_overall_skipped
>  	exit 2
> diff --git a/tools/perf/tests/shell/base_report/setup.sh b/tools/perf/tests/shell/base_report/setup.sh
> index 8634e7e0dda6..2fd5c97f9822 100755
> --- a/tools/perf/tests/shell/base_report/setup.sh
> +++ b/tools/perf/tests/shell/base_report/setup.sh
> @@ -12,8 +12,10 @@
>  #
>  #
>  
> +DIR_PATH="$(dirname $0)"
> +
>  # include working environment
> -. ../common/init.sh
> +. "$DIR_PATH/../common/init.sh"
>  
>  TEST_RESULT=0
>  
> @@ -24,7 +26,7 @@ SW_EVENT="cpu-clock"
>  $CMD_PERF record -asdg -e $SW_EVENT -o $CURRENT_TEST_DIR/perf.data -- $CMD_LONGER_SLEEP 2> $LOGS_DIR/setup.log
>  PERF_EXIT_CODE=$?
>  
> -../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
>  CHECK_EXIT_CODE=$?
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "prepare the perf.data file"
> diff --git a/tools/perf/tests/shell/base_report/test_basic.sh b/tools/perf/tests/shell/base_report/test_basic.sh
> index adfd8713b8f8..a15d3007f449 100755
> --- a/tools/perf/tests/shell/base_report/test_basic.sh
> +++ b/tools/perf/tests/shell/base_report/test_basic.sh
> @@ -12,11 +12,12 @@
>  #
>  #
>  
> -# include working environment
> -. ../common/init.sh
> -
> +DIR_PATH="$(dirname $0)"
>  TEST_RESULT=0
>  
> +# include working environment
> +. "$DIR_PATH/../common/init.sh"
> +
>  
>  ### help message
>  
> @@ -25,19 +26,19 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
>  	$CMD_PERF report --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
>  	PERF_EXIT_CODE=$?
>  
> -	../common/check_all_patterns_found.pl "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
> +	"$DIR_PATH/../common/check_all_patterns_found.pl" "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
>  	CHECK_EXIT_CODE=$?
> -	../common/check_all_patterns_found.pl "input" "verbose" "show-nr-samples" "show-cpu-utilization" "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" < $LOGS_DIR/basic_helpmsg.log
> +	"$DIR_PATH/../common/check_all_patterns_found.pl" "input" "verbose" "show-nr-samples" "show-cpu-utilization" "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" < $LOGS_DIR/basic_helpmsg.log
>  	(( CHECK_EXIT_CODE += $? ))
> -	../common/check_all_patterns_found.pl "hide-unresolved" "sort" "fields" "parent" "exclude-other" "column-widths" "field-separator" "dump-raw-trace" "children" < $LOGS_DIR/basic_helpmsg.log
> +	"$DIR_PATH/../common/check_all_patterns_found.pl" "hide-unresolved" "sort" "fields" "parent" "exclude-other" "column-widths" "field-separator" "dump-raw-trace" "children" < $LOGS_DIR/basic_helpmsg.log
>  	(( CHECK_EXIT_CODE += $? ))
> -	../common/check_all_patterns_found.pl "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" < $LOGS_DIR/basic_helpmsg.log
> +	"$DIR_PATH/../common/check_all_patterns_found.pl" "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" < $LOGS_DIR/basic_helpmsg.log
>  	(( CHECK_EXIT_CODE += $? ))
> -	../common/check_all_patterns_found.pl "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" "show-total-period" "show-info" "branch-stack" "group" < $LOGS_DIR/basic_helpmsg.log
> +	"$DIR_PATH/../common/check_all_patterns_found.pl" "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" "show-total-period" "show-info" "branch-stack" "group" < $LOGS_DIR/basic_helpmsg.log
>  	(( CHECK_EXIT_CODE += $? ))
> -	../common/check_all_patterns_found.pl "branch-history" "objdump" "demangle" "percent-limit" "percentage" "header" "itrace" "full-source-path" "show-ref-call-graph" < $LOGS_DIR/basic_helpmsg.log
> +	"$DIR_PATH/../common/check_all_patterns_found.pl" "branch-history" "objdump" "demangle" "percent-limit" "percentage" "header" "itrace" "full-source-path" "show-ref-call-graph" < $LOGS_DIR/basic_helpmsg.log
>  	(( CHECK_EXIT_CODE += $? ))
> -	../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
> +	"$DIR_PATH/../common/check_no_patterns_found.pl" "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
>  	(( CHECK_EXIT_CODE += $? ))
>  
>  	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
> @@ -57,9 +58,9 @@ REGEX_LOST_SAMPLES_INFO="#\s*Total Lost Samples:\s+$RE_NUMBER"
>  REGEX_SAMPLES_INFO="#\s*Samples:\s+(?:$RE_NUMBER)\w?\s+of\s+event\s+'$RE_EVENT_ANY'"
>  REGEX_LINES_HEADER="#\s*Children\s+Self\s+Command\s+Shared Object\s+Symbol"
>  REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
> -../common/check_all_patterns_found.pl "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
>  CHECK_EXIT_CODE=$?
> -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
> +"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
>  (( CHECK_EXIT_CODE += $? ))
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "basic execution"
> @@ -74,9 +75,9 @@ PERF_EXIT_CODE=$?
>  
>  REGEX_LINES_HEADER="#\s*Children\s+Self\s+Samples\s+Command\s+Shared Object\s+Symbol"
>  REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
> -../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
>  CHECK_EXIT_CODE=$?
> -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
> +"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
>  (( CHECK_EXIT_CODE += $? ))
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "number of samples"
> @@ -98,7 +99,7 @@ REGEX_LINE_CPUS_ONLINE="#\s+nrcpus online\s*:\s*$MY_CPUS_ONLINE"
>  REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$MY_CPUS_AVAILABLE"
>  # disable precise check for "nrcpus avail" in BASIC runmode
>  test $PERFTOOL_TESTSUITE_RUNMODE -lt $RUNMODE_STANDARD && REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$RE_NUMBER"
> -../common/check_all_patterns_found.pl "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
>  CHECK_EXIT_CODE=$?
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "header"
> @@ -129,9 +130,9 @@ PERF_EXIT_CODE=$?
>  
>  REGEX_LINES_HEADER="#\s*Children\s+Self\s+sys\s+usr\s+Command\s+Shared Object\s+Symbol"
>  REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
> -../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
> +"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
>  CHECK_EXIT_CODE=$?
> -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
> +"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
>  (( CHECK_EXIT_CODE += $? ))
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
> @@ -144,9 +145,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
>  $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --pid=1 > $LOGS_DIR/basic_pid.log 2> $LOGS_DIR/basic_pid.err
>  PERF_EXIT_CODE=$?
>  
> -grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "systemd|init"
> +grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | "$DIR_PATH/../common/check_all_lines_matched.pl" "systemd|init"
>  CHECK_EXIT_CODE=$?
> -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
> +"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
>  (( CHECK_EXIT_CODE += $? ))
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
> @@ -159,9 +160,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
>  $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbols=dummynonexistingsymbol > $LOGS_DIR/basic_symbols.log 2> $LOGS_DIR/basic_symbols.err
>  PERF_EXIT_CODE=$?
>  
> -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
> +"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
>  CHECK_EXIT_CODE=$?
> -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
> +"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
>  (( CHECK_EXIT_CODE += $? ))
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
> @@ -174,9 +175,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
>  $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbol-filter=map > $LOGS_DIR/basic_symbolfilter.log 2> $LOGS_DIR/basic_symbolfilter.err
>  PERF_EXIT_CODE=$?
>  
> -grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "\[[k\.]\]\s+.*map"
> +grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | "$DIR_PATH/../common/check_all_lines_matched.pl" "\[[k\.]\]\s+.*map"
>  CHECK_EXIT_CODE=$?
> -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
> +"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
>  (( CHECK_EXIT_CODE += $? ))
>  
>  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "symbol filter"
> diff --git a/tools/perf/tests/shell/common/init.sh b/tools/perf/tests/shell/common/init.sh
> index 26c7525651e0..cbfc78bec974 100644
> --- a/tools/perf/tests/shell/common/init.sh
> +++ b/tools/perf/tests/shell/common/init.sh
> @@ -11,8 +11,8 @@
>  #
>  
>  
> -. ../common/settings.sh
> -. ../common/patterns.sh
> +. "$(dirname $0)/../common/settings.sh"
> +. "$(dirname $0)/../common/patterns.sh"
>  
>  THIS_TEST_NAME=`basename $0 .sh`
>  
> -- 
> 2.50.1
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v3 2/7] perf tests: Create a structure for shell tests
  2025-07-21 13:26       ` [PATCH v3 2/7] perf tests: Create a structure for shell tests Jakub Brnak
  2025-07-21 19:39         ` Ian Rogers
@ 2025-07-26  6:03         ` Namhyung Kim
  2025-08-21 11:15           ` Jakub Brnak
  1 sibling, 1 reply; 43+ messages in thread
From: Namhyung Kim @ 2025-07-26  6:03 UTC (permalink / raw)
  To: Jakub Brnak; +Cc: vmolnaro, acme, acme, irogers, linux-perf-users, mpetlan

On Mon, Jul 21, 2025 at 03:26:37PM +0200, Jakub Brnak wrote:
> From: Veronika Molnarova <vmolnaro@redhat.com>
> 
> The general structure of test suites with test cases has been implemented
> for C tests for some time, while shell tests were just all put into a list
> without any possible structuring.
> 
> Provide the same possibility of test suite structure for shell tests. The
> suite is created for each subdirectory located in the 'perf/tests/shell'
> directory that contains at least one test script. All of the deeper levels
> of subdirectories will be merged with the first level of test cases.
> The name of the test suite is the name of the subdirectory, where the test
> cases are located. For all of the test scripts that are not in any
> subdirectory, a test suite with a single test case is created as it has
> been till now.
> 
> The new structure of the shell tests for 'perf test list':
>     77: build id cache operations
>     78: coresight
>     78:1: CoreSight / ASM Pure Loop
>     78:2: CoreSight / Memcpy 16k 10 Threads
>     78:3: CoreSight / Thread Loop 10 Threads - Check TID
>     78:4: CoreSight / Thread Loop 2 Threads - Check TID
>     78:5: CoreSight / Unroll Loop Thread 10
>     79: daemon operations
>     80: perf diff tests

I like the idea!  But there are too many coding style issues.  Can you
please follow the style for the kernel?

Thanks,
Namhyung

> 
> Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
> ---
>  tools/perf/tests/tests-scripts.c | 223 +++++++++++++++++++++++++------
>  tools/perf/tests/tests-scripts.h |   4 +
>  2 files changed, 189 insertions(+), 38 deletions(-)
> 
> diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
> index f18c4cd337c8..21a6ede330e9 100644
> --- a/tools/perf/tests/tests-scripts.c
> +++ b/tools/perf/tests/tests-scripts.c
> @@ -151,14 +151,45 @@ static char *strdup_check(const char *str)
>  	return newstr;
>  }
>  
> -static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
> +/* Free the whole structure of test_suite with its test_cases */
> +static void free_suite(struct test_suite *suite) {
> +	if (suite->test_cases){
> +		int num = 0;
> +		while (suite->test_cases[num].name){ /* Last case has name set to NULL */
> +			free((void*) suite->test_cases[num].name);
> +			free((void*) suite->test_cases[num].desc);
> +			num++;
> +		}
> +		free(suite->test_cases);
> +	}
> +	if (suite->desc)
> +		free((void*) suite->desc);
> +	if (suite->priv){
> +		struct shell_info *test_info = suite->priv;
> +		free((void*) test_info->base_path);
> +		free(test_info);
> +	}
> +
> +	free(suite);
> +}
> +
> +static int shell_test__run(struct test_suite *test, int subtest)
>  {
> -	const char *file = test->priv;
> +	const char *file;
>  	int err;
>  	char *cmd = NULL;
>  
> +	/* Get absolute file path */
> +	if (subtest >= 0) {
> +		file = test->test_cases[subtest].name;
> +	}
> +	else {		/* Single test case */
> +		file = test->test_cases[0].name;
> +	}
> +
>  	if (asprintf(&cmd, "%s%s", file, verbose ? " -v" : "") < 0)
>  		return TEST_FAIL;
> +
>  	err = system(cmd);
>  	free(cmd);
>  	if (!err)
> @@ -167,63 +198,154 @@ static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
>  	return WEXITSTATUS(err) == 2 ? TEST_SKIP : TEST_FAIL;
>  }
>  
> -static void append_script(int dir_fd, const char *name, char *desc,
> -			  struct test_suite ***result,
> -			  size_t *result_sz)
> +static struct test_suite* prepare_test_suite(int dir_fd)
>  {
> -	char filename[PATH_MAX], link[128];
> -	struct test_suite *test_suite, **result_tmp;
> -	struct test_case *tests;
> +	char dirpath[PATH_MAX], link[128];
>  	ssize_t len;
> -	char *exclusive;
> +	struct test_suite *test_suite = NULL;
> +	struct shell_info *test_info;
>  
> +	/* Get dir absolute path */
>  	snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
> -	len = readlink(link, filename, sizeof(filename));
> +	len = readlink(link, dirpath, sizeof(dirpath));
>  	if (len < 0) {
>  		pr_err("Failed to readlink %s", link);
> -		return;
> +		return NULL;
>  	}
> -	filename[len++] = '/';
> -	strcpy(&filename[len], name);
> +	dirpath[len++] = '/';
> +	dirpath[len] = '\0';
>  
> -	tests = calloc(2, sizeof(*tests));
> -	if (!tests) {
> -		pr_err("Out of memory while building script test suite list\n");
> -		return;
> -	}
> -	tests[0].name = strdup_check(name);
> -	exclusive = strstr(desc, " (exclusive)");
> -	if (exclusive != NULL) {
> -		tests[0].exclusive = true;
> -		exclusive[0] = '\0';
> -	}
> -	tests[0].desc = strdup_check(desc);
> -	tests[0].run_case = shell_test__run;
>  	test_suite = zalloc(sizeof(*test_suite));
>  	if (!test_suite) {
>  		pr_err("Out of memory while building script test suite list\n");
> -		free(tests);
> -		return;
> +		return NULL;
>  	}
> -	test_suite->desc = desc;
> -	test_suite->test_cases = tests;
> -	test_suite->priv = strdup_check(filename);
> +
> +	test_info = zalloc(sizeof(*test_info));
> +	if (!test_info) {
> +		pr_err("Out of memory while building script test suite list\n");
> +		return NULL;
> +	}
> +
> +	test_info->base_path = strdup_check(dirpath);		/* Absolute path to dir */
> +
> +	test_suite->priv = test_info;
> +	test_suite->desc = NULL;
> +	test_suite->test_cases = NULL;
> +
> +	return test_suite;
> +}
> +
> +static void append_suite(struct test_suite ***result,
> +			  size_t *result_sz, struct test_suite *test_suite)
> +{
> +	struct test_suite **result_tmp;
> +
>  	/* Realloc is good enough, though we could realloc by chunks, not that
>  	 * anyone will ever measure performance here */
>  	result_tmp = realloc(*result, (*result_sz + 1) * sizeof(*result_tmp));
>  	if (result_tmp == NULL) {
>  		pr_err("Out of memory while building script test suite list\n");
> -		free(tests);
> -		free(test_suite);
> +		free_suite(test_suite);
>  		return;
>  	}
> +
>  	/* Add file to end and NULL terminate the struct array */
>  	*result = result_tmp;
>  	(*result)[*result_sz] = test_suite;
>  	(*result_sz)++;
>  }
>  
> -static void append_scripts_in_dir(int dir_fd,
> +static void append_script_to_suite(int dir_fd, const char *name, char *desc,
> +					struct test_suite *test_suite, size_t *tc_count)
> +{
> +	char file_name[PATH_MAX], link[128];
> +	struct test_case *tests;
> +	size_t len;
> +	char *exclusive;
> +
> +	if (!test_suite)
> +		return;
> +
> +	/* Requires an empty test case at the end */
> +	tests = realloc(test_suite->test_cases, (*tc_count + 2) * sizeof(*tests));
> +	if (!tests) {
> +		pr_err("Out of memory while building script test suite list\n");
> +		return;
> +	}
> +
> +	/* Get path to the test script */
> +	snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
> +	len = readlink(link, file_name, sizeof(file_name));
> +	if (len < 0) {
> +		pr_err("Failed to readlink %s", link);
> +		return;
> +	}
> +	file_name[len++] = '/';
> +	strcpy(&file_name[len], name);
> +
> +	tests[(*tc_count)].name = strdup_check(file_name);	/* Get path to the script from base dir */
> +	tests[(*tc_count)].exclusive = false;
> +	exclusive = strstr(desc, " (exclusive)");
> +	if (exclusive != NULL) {
> +		tests[(*tc_count)].exclusive = true;
> +		exclusive[0] = '\0';
> +	}
> +	tests[(*tc_count)].desc = desc;
> +	tests[(*tc_count)].skip_reason = NULL;	/* Unused */
> +	tests[(*tc_count)++].run_case = shell_test__run;
> +
> +	tests[(*tc_count)].name = NULL;		/* End the test cases */
> +
> +	test_suite->test_cases = tests;
> +}
> +
> +static void append_scripts_in_subdir(int dir_fd,
> +				  struct test_suite *suite,
> +				  size_t *tc_count)
> +{
> +	struct dirent **entlist;
> +	struct dirent *ent;
> +	int n_dirs, i;
> +
> +	/* List files, sorted by alpha */
> +	n_dirs = scandirat(dir_fd, ".", &entlist, NULL, alphasort);
> +	if (n_dirs == -1)
> +		return;
> +	for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
> +		int fd;
> +
> +		if (ent->d_name[0] == '.')
> +			continue; /* Skip hidden files */
> +		if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
> +			char *desc = shell_test__description(dir_fd, ent->d_name);
> +
> +			if (desc) /* It has a desc line - valid script */
> +				append_script_to_suite(dir_fd, ent->d_name, desc, suite, tc_count);
> +			continue;
> +		}
> +
> +		if (ent->d_type != DT_DIR) {
> +			struct stat st;
> +
> +			if (ent->d_type != DT_UNKNOWN)
> +				continue;
> +			fstatat(dir_fd, ent->d_name, &st, 0);
> +			if (!S_ISDIR(st.st_mode))
> +				continue;
> +		}
> +
> +		fd = openat(dir_fd, ent->d_name, O_PATH);
> +
> +		/* Recurse into the dir */
> +		append_scripts_in_subdir(fd, suite, tc_count);
> +	}
> +	for (i = 0; i < n_dirs; i++) /* Clean up */
> +		zfree(&entlist[i]);
> +	free(entlist);
> +}
> +
> +static void append_suits_in_dir(int dir_fd,
>  				  struct test_suite ***result,
>  				  size_t *result_sz)
>  {
> @@ -237,16 +359,27 @@ static void append_scripts_in_dir(int dir_fd,
>  		return;
>  	for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
>  		int fd;
> +		struct test_suite *test_suite;
> +		size_t cases_count = 0;
>  
>  		if (ent->d_name[0] == '.')
>  			continue; /* Skip hidden files */
>  		if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
>  			char *desc = shell_test__description(dir_fd, ent->d_name);
>  
> -			if (desc) /* It has a desc line - valid script */
> -				append_script(dir_fd, ent->d_name, desc, result, result_sz);
> +			if (desc) { /* It has a desc line - valid script */
> +				test_suite = prepare_test_suite(dir_fd); /* Create a test suite with a single test case */
> +				append_script_to_suite(dir_fd, ent->d_name, desc, test_suite, &cases_count);
> +				test_suite->desc = strdup_check(desc);
> +
> +				if (cases_count)
> +					append_suite(result, result_sz, test_suite);
> +				else /* Wasn't able to create the test case */
> +					free_suite(test_suite);
> +			}
>  			continue;
>  		}
> +
>  		if (ent->d_type != DT_DIR) {
>  			struct stat st;
>  
> @@ -258,8 +391,22 @@ static void append_scripts_in_dir(int dir_fd,
>  		}
>  		if (strncmp(ent->d_name, "base_", 5) == 0)
>  			continue; /* Skip scripts that have a separate driver. */
> +
> +		/* Scan subdir for test cases*/
>  		fd = openat(dir_fd, ent->d_name, O_PATH);
> -		append_scripts_in_dir(fd, result, result_sz);
> +		test_suite = prepare_test_suite(fd);	/* Prepare a testsuite with its path */
> +		if (!test_suite)
> +			continue;
> +
> +		append_scripts_in_subdir(fd, test_suite, &cases_count);
> +		if (cases_count == 0){
> +			free_suite(test_suite);
> +			continue;
> +		}
> +
> +		test_suite->desc = strdup_check(ent->d_name);	/* If no setup, set name to the directory */
> +
> +		append_suite(result, result_sz, test_suite);
>  		close(fd);
>  	}
>  	for (i = 0; i < n_dirs; i++) /* Clean up */
> @@ -278,7 +425,7 @@ struct test_suite **create_script_test_suites(void)
>  	 * length array.
>  	 */
>  	if (dir_fd >= 0)
> -		append_scripts_in_dir(dir_fd, &result, &result_sz);
> +		append_suits_in_dir(dir_fd, &result, &result_sz);
>  
>  	result_tmp = realloc(result, (result_sz + 1) * sizeof(*result_tmp));
>  	if (result_tmp == NULL) {
> diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
> index b553ad26ea17..60a1a19a45c9 100644
> --- a/tools/perf/tests/tests-scripts.h
> +++ b/tools/perf/tests/tests-scripts.h
> @@ -4,6 +4,10 @@
>  
>  #include "tests.h"
>  
> +struct shell_info {
> +	const char *base_path;
> +};
> +
>  struct test_suite **create_script_test_suites(void);
>  
>  #endif /* TESTS_SCRIPTS_H */
> -- 
> 2.50.1
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v3 3/7] perf test: Provide setup for the shell test suite
  2025-07-21 13:26       ` [PATCH v3 3/7] perf test: Provide setup for the shell test suite Jakub Brnak
@ 2025-07-26  6:07         ` Namhyung Kim
  2025-08-04 14:39           ` Michael Petlan
  0 siblings, 1 reply; 43+ messages in thread
From: Namhyung Kim @ 2025-07-26  6:07 UTC (permalink / raw)
  To: Jakub Brnak; +Cc: vmolnaro, acme, acme, irogers, linux-perf-users, mpetlan

On Mon, Jul 21, 2025 at 03:26:38PM +0200, Jakub Brnak wrote:
> From: Veronika Molnarova <vmolnaro@redhat.com>
> 
> Some of the perftool-testsuite test cases require a setup to be done
> beforehand, such as recording data, setting up cache or restoring sample
> rate. The setup file also provides the possibility to set the name of
> the test suite, if the name of the directory is not good enough.
> 
> Check for the existence of the "setup.sh" script for the shell test
> suites and run it before any of the test cases. If the setup fails,
> skip all of the test cases of the test suite as the setup may be
> required for the result to be valid.
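
For example, a minimal setup.sh could look roughly like this (a hypothetical
sketch, not the setup.sh added by this series; the descriptive comment is what
gets picked up as the suite name):

    #!/bin/bash
    # demo_suite :: example setup for the whole suite
    #
    # Anything every test case of the suite depends on goes here, e.g.
    # recording a shared perf.data file.  A non-zero exit status makes
    # perf test skip all test cases of this suite.

    exit 0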

Looks like this would be better documented somewhere.  Maybe you can add a
section like "Add a new (shell) test" in the perf-test man page or so.

Thanks,
Namhyung

> 
> Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
> ---
>  tools/perf/tests/builtin-test.c  | 30 +++++++++++++++++++++-------
>  tools/perf/tests/tests-scripts.c | 34 ++++++++++++++++++++++++++++++--
>  tools/perf/tests/tests-scripts.h | 10 ++++++++++
>  tools/perf/tests/tests.h         |  8 +++++---
>  4 files changed, 70 insertions(+), 12 deletions(-)
> 
> diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
> index 85142dfb3e01..4e3d2f779b01 100644
> --- a/tools/perf/tests/builtin-test.c
> +++ b/tools/perf/tests/builtin-test.c
> @@ -258,6 +258,22 @@ static test_fnptr test_function(const struct test_suite *t, int test_case)
>  	return t->test_cases[test_case].run_case;
>  }
>  
> +/* If setup fails, skip all test cases */
> +static void check_shell_setup(const struct test_suite *t, int ret)
> +{
> +	struct shell_info* test_info;
> +
> +	if (!t->priv)
> +		return;
> +
> +	test_info = t->priv;
> +
> +	if (ret == TEST_SETUP_FAIL)
> +		test_info->has_setup = FAILED_SETUP;
> +	else if (test_info->has_setup == RUN_SETUP)
> +		test_info->has_setup = PASSED_SETUP;
> +}
> +
>  static bool test_exclusive(const struct test_suite *t, int test_case)
>  {
>  	if (test_case <= 0)
> @@ -347,10 +363,8 @@ static int run_test_child(struct child_process *process)
>  	return -err;
>  }
>  
> -#define TEST_RUNNING -3
> -
> -static int print_test_result(struct test_suite *t, int curr_suite, int curr_test_case,
> -			     int result, int width, int running)
> +static int print_test_result(struct test_suite *t, int curr_suite, int curr_test_case, int result, int width,
> +			     int running)
>  {
>  	if (test_suite__num_test_cases(t) > 1) {
>  		int subw = width > 2 ? width - 2 : width;
> @@ -367,7 +381,8 @@ static int print_test_result(struct test_suite *t, int curr_suite, int curr_test
>  	case TEST_OK:
>  		pr_info(" Ok\n");
>  		break;
> -	case TEST_SKIP: {
> +	case TEST_SKIP:
> +	case TEST_SETUP_FAIL:{
>  		const char *reason = skip_reason(t, curr_test_case);
>  
>  		if (reason)
> @@ -482,6 +497,7 @@ static void finish_test(struct child_test **child_tests, int running_test, int c
>  	}
>  	/* Clean up child process. */
>  	ret = finish_command(&child_test->process);
> +	check_shell_setup(t, ret);
>  	if (verbose > 1 || (verbose == 1 && ret == TEST_FAIL))
>  		fprintf(stderr, "%s", err_output.buf);
>  
> @@ -503,8 +519,8 @@ static int start_test(struct test_suite *test, int curr_suite, int curr_test_cas
>  			pr_debug("--- start ---\n");
>  			err = test_function(test, curr_test_case)(test, curr_test_case);
>  			pr_debug("---- end ----\n");
> -			print_test_result(test, curr_suite, curr_test_case, err, width,
> -					  /*running=*/0);
> +			print_test_result(test, curr_suite, curr_test_case, err, width, /*running=*/0);
> +			check_shell_setup(test, err);
>  		}
>  		return 0;
>  	}
> diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
> index 21a6ede330e9..d680a878800f 100644
> --- a/tools/perf/tests/tests-scripts.c
> +++ b/tools/perf/tests/tests-scripts.c
> @@ -138,6 +138,11 @@ static bool is_test_script(int dir_fd, const char *name)
>  	return is_shell_script(dir_fd, name);
>  }
>  
> +/* Filter for scandir */
> +static int setup_filter(const struct dirent *entry){
> +	return strcmp(entry->d_name, SHELL_SETUP);
> +}
> +
>  /* Duplicate a string and fall over and die if we run out of memory */
>  static char *strdup_check(const char *str)
>  {
> @@ -175,6 +180,7 @@ static void free_suite(struct test_suite *suite) {
>  
>  static int shell_test__run(struct test_suite *test, int subtest)
>  {
> +	struct shell_info *test_info = test->priv;
>  	const char *file;
>  	int err;
>  	char *cmd = NULL;
> @@ -187,6 +193,22 @@ static int shell_test__run(struct test_suite *test, int subtest)
>  		file = test->test_cases[0].name;
>  	}
>  
> +	/* Run setup if needed */
> +	if (test_info->has_setup == RUN_SETUP){
> +		char *setup_script;
> +		if (asprintf(&setup_script, "%s%s%s", test_info->base_path, SHELL_SETUP, verbose ? " -v" : "") < 0)
> +			return TEST_SETUP_FAIL;
> +
> +		err = system(setup_script);
> +		free(setup_script);
> +
> +		if (err)
> +			return TEST_SETUP_FAIL;
> +	}
> +	else if (test_info->has_setup == FAILED_SETUP) {
> +		return TEST_SKIP; /* Skip test suite if setup failed */
> +	}
> +
>  	if (asprintf(&cmd, "%s%s", file, verbose ? " -v" : "") < 0)
>  		return TEST_FAIL;
>  
> @@ -228,6 +250,7 @@ static struct test_suite* prepare_test_suite(int dir_fd)
>  	}
>  
>  	test_info->base_path = strdup_check(dirpath);		/* Absolute path to dir */
> +	test_info->has_setup = NO_SETUP;
>  
>  	test_suite->priv = test_info;
>  	test_suite->desc = NULL;
> @@ -309,7 +332,7 @@ static void append_scripts_in_subdir(int dir_fd,
>  	int n_dirs, i;
>  
>  	/* List files, sorted by alpha */
> -	n_dirs = scandirat(dir_fd, ".", &entlist, NULL, alphasort);
> +	n_dirs = scandirat(dir_fd, ".", &entlist, setup_filter, alphasort);
>  	if (n_dirs == -1)
>  		return;
>  	for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
> @@ -404,7 +427,14 @@ static void append_suits_in_dir(int dir_fd,
>  			continue;
>  		}
>  
> -		test_suite->desc = strdup_check(ent->d_name);	/* If no setup, set name to the directory */
> +		if (is_test_script(fd, SHELL_SETUP)) {	/* Check for setup existance */
> +			char *desc = shell_test__description(fd, SHELL_SETUP);
> +			test_suite->desc = desc;	/* Set the suite name by the setup description */
> +			((struct shell_info*)(test_suite->priv))->has_setup = RUN_SETUP;
> +		}
> +		else {
> +			test_suite->desc = strdup_check(ent->d_name);	/* If no setup, set name to the directory */
> +		}
>  
>  		append_suite(result, result_sz, test_suite);
>  		close(fd);
> diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
> index 60a1a19a45c9..da4dcd26140c 100644
> --- a/tools/perf/tests/tests-scripts.h
> +++ b/tools/perf/tests/tests-scripts.h
> @@ -4,8 +4,18 @@
>  
>  #include "tests.h"
>  
> +#define SHELL_SETUP "setup.sh"
> +
> +enum shell_setup {
> +	NO_SETUP     = 0,
> +	RUN_SETUP    = 1,
> +	FAILED_SETUP = 2,
> +	PASSED_SETUP = 3,
> +};
> +
>  struct shell_info {
>  	const char *base_path;
> +	enum shell_setup has_setup;
>  };
>  
>  struct test_suite **create_script_test_suites(void);
> diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h
> index 97e62db8764a..0545c9429000 100644
> --- a/tools/perf/tests/tests.h
> +++ b/tools/perf/tests/tests.h
> @@ -6,9 +6,11 @@
>  #include "util/debug.h"
>  
>  enum {
> -	TEST_OK   =  0,
> -	TEST_FAIL = -1,
> -	TEST_SKIP = -2,
> +	TEST_OK         =  0,
> +	TEST_FAIL      	= -1,
> +	TEST_SKIP       = -2,
> +	TEST_RUNNING	= -3,
> +	TEST_SETUP_FAIL = -4,
>  };
>  
>  #define TEST_ASSERT_VAL(text, cond)					 \
> -- 
> 2.50.1
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v3 5/7] perf test: Introduce storing logs for shell tests
  2025-07-21 13:26       ` [PATCH v3 5/7] perf test: Introduce storing logs for shell tests Jakub Brnak
  2025-07-21 19:43         ` Ian Rogers
@ 2025-07-26  6:17         ` Namhyung Kim
  1 sibling, 0 replies; 43+ messages in thread
From: Namhyung Kim @ 2025-07-26  6:17 UTC (permalink / raw)
  To: Jakub Brnak; +Cc: vmolnaro, acme, acme, irogers, linux-perf-users, mpetlan

On Mon, Jul 21, 2025 at 03:26:40PM +0200, Jakub Brnak wrote:
> From: Veronika Molnarova <vmolnaro@redhat.com>
> 
> Create temporary directories for storing log files for shell tests
> that could help while debugging. The log files are necessary for
> perftool testsuite test cases also. If the variable KEEP_TEST_LOGS

Looks like you meant PERFTEST_KEEP_LOGS.

Thanks,
Namhyung


> is set, keep the logs, else delete them.
> 
> Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
> ---
>  tools/perf/tests/builtin-test.c  | 90 ++++++++++++++++++++++++++++++++
>  tools/perf/tests/tests-scripts.c |  3 ++
>  tools/perf/tests/tests-scripts.h |  1 +
>  3 files changed, 94 insertions(+)
> 
> diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
> index 4e3d2f779b01..89b180798224 100644
> --- a/tools/perf/tests/builtin-test.c
> +++ b/tools/perf/tests/builtin-test.c
> @@ -6,6 +6,7 @@
>   */
>  #include <ctype.h>
>  #include <fcntl.h>
> +#include <ftw.h>
>  #include <errno.h>
>  #ifdef HAVE_BACKTRACE_SUPPORT
>  #include <execinfo.h>
> @@ -282,6 +283,86 @@ static bool test_exclusive(const struct test_suite *t, int test_case)
>  	return t->test_cases[test_case].exclusive;
>  }
>  
> +static int delete_file(const char *fpath, const struct stat *sb __maybe_unused,
> +						 int typeflag, struct FTW *ftwbuf)
> +{
> +	int rv = -1;
> +
> +	/* Stop traversal if going too deep */
> +	if (ftwbuf->level > 5) {
> +		pr_err("Tree traversal reached level %d, stopping.", ftwbuf->level);
> +		return rv;
> +	}
> +
> +	/* Remove only expected directories */
> +	if (typeflag == FTW_D || typeflag == FTW_DP){
> +		const char *dirname = fpath + ftwbuf->base;
> +
> +		if (strcmp(dirname, "logs") && strcmp(dirname, "examples") &&
> +			strcmp(dirname, "header_tar") && strncmp(dirname, "perf_", 5)) {
> +				pr_err("Unknown directory %s", dirname);
> +				return rv;
> +			 }
> +	}
> +
> +	/* Attempt to remove the file */
> +	rv = remove(fpath);
> +	if (rv)
> +		pr_err("Failed to remove file: %s", fpath);
> +
> +	return rv;
> +}
> +
> +static bool create_logs(struct test_suite *t, int pass){
> +	bool store_logs = t->priv && ((struct shell_info*)(t->priv))->store_logs;
> +	if (pass == 1 && (!test_exclusive(t, 0) || sequential || dont_fork)) {
> +		/* Sequential and non-exclusive tests run on the first pass. */
> +		return store_logs;
> +	}
> +	else if (pass != 1 && test_exclusive(t, 0) && !sequential && !dont_fork) {
> +		/* Exclusive tests without sequential run on the second pass. */
> +		return store_logs;
> +	}
> +	return false;
> +}
> +
> +static char *setup_shell_logs(const char *name)
> +{
> +	char template[PATH_MAX];
> +	char *temp_dir;
> +
> +	if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", name) < 0) {
> +		pr_err("Failed to create log dir template");
> +		return NULL; /* Skip the testsuite */
> +	}
> +
> +	temp_dir = mkdtemp(template);
> +	if (temp_dir) {
> +		setenv("PERFSUITE_RUN_DIR", temp_dir, 1);
> +		return strdup(temp_dir);
> +	}
> +	else {
> +		pr_err("Failed to create the temporary directory");
> +	}
> +
> +	return NULL; /* Skip the testsuite */
> +}
> +
> +static void cleanup_shell_logs(char *dirname)
> +{
> +	char *keep_logs = getenv("PERFTEST_KEEP_LOGS");
> +
> +	/* Check if logs should be kept or do cleanup */
> +	if (dirname) {
> +		if (!keep_logs || strcmp(keep_logs, "y") != 0) {
> +			nftw(dirname, delete_file, 8, FTW_DEPTH | FTW_PHYS);
> +		}
> +		free(dirname);
> +	}
> +
> +	unsetenv("PERFSUITE_RUN_DIR");
> +}
> +
>  static bool perf_test__matches(const char *desc, int suite_num, int argc, const char *argv[])
>  {
>  	int i;
> @@ -626,6 +707,7 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
>  		for (struct test_suite **t = suites; *t; t++, curr_suite++) {
>  			int curr_test_case;
>  			bool suite_matched = false;
> +			char *tmpdir = NULL;
>  
>  			if (!perf_test__matches(test_description(*t, -1), curr_suite, argc, argv)) {
>  				/*
> @@ -655,6 +737,13 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
>  			}
>  
>  			for (unsigned int run = 0; run < runs_per_test; run++) {
> +				/* Setup temporary log directories for shell test suites */
> +				if (create_logs(*t, pass)) {
> +					tmpdir = setup_shell_logs((*t)->desc);
> +
> +					if (tmpdir == NULL)  /* Couldn't create log dir, skip test suite */
> +						((struct shell_info*)((*t)->priv))->has_setup = FAILED_SETUP;
> +				}
>  				test_suite__for_each_test_case(*t, curr_test_case) {
>  					if (!suite_matched &&
>  					    !perf_test__matches(test_description(*t, curr_test_case),
> @@ -667,6 +756,7 @@ static int __cmd_test(struct test_suite **suites, int argc, const char *argv[],
>  						goto err_out;
>  				}
>  			}
> +			cleanup_shell_logs(tmpdir);
>  		}
>  		if (!sequential) {
>  			/* Parallel mode starts tests but doesn't finish them. Do that now. */
> diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
> index d680a878800f..d4e382898a30 100644
> --- a/tools/perf/tests/tests-scripts.c
> +++ b/tools/perf/tests/tests-scripts.c
> @@ -251,6 +251,7 @@ static struct test_suite* prepare_test_suite(int dir_fd)
>  
>  	test_info->base_path = strdup_check(dirpath);		/* Absolute path to dir */
>  	test_info->has_setup = NO_SETUP;
> +	test_info->store_logs = false;
>  
>  	test_suite->priv = test_info;
>  	test_suite->desc = NULL;
> @@ -427,6 +428,8 @@ static void append_suits_in_dir(int dir_fd,
>  			continue;
>  		}
>  
> +		/* Store logs for testsuite is sub-directories */
> +		((struct shell_info*)(test_suite->priv))->store_logs = true;
>  		if (is_test_script(fd, SHELL_SETUP)) {	/* Check for setup existance */
>  			char *desc = shell_test__description(fd, SHELL_SETUP);
>  			test_suite->desc = desc;	/* Set the suite name by the setup description */
> diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
> index da4dcd26140c..41da0a175e4e 100644
> --- a/tools/perf/tests/tests-scripts.h
> +++ b/tools/perf/tests/tests-scripts.h
> @@ -16,6 +16,7 @@ enum shell_setup {
>  struct shell_info {
>  	const char *base_path;
>  	enum shell_setup has_setup;
> +	bool store_logs;
>  };
>  
>  struct test_suite **create_script_test_suites(void);
> -- 
> 2.50.1
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v3 6/7] perf test: Format log directories for shell tests
  2025-07-21 13:26       ` [PATCH v3 6/7] perf test: Format log directories " Jakub Brnak
@ 2025-07-26  6:21         ` Namhyung Kim
  0 siblings, 0 replies; 43+ messages in thread
From: Namhyung Kim @ 2025-07-26  6:21 UTC (permalink / raw)
  To: Jakub Brnak; +Cc: vmolnaro, acme, acme, irogers, linux-perf-users, mpetlan

On Mon, Jul 21, 2025 at 03:26:41PM +0200, Jakub Brnak wrote:
> From: Veronika Molnarova <vmolnaro@redhat.com>
> 
> The name of the log directory can be taken from the test suite
> description, which possibly could contain whitespace characters. This
> can cause further issues if the name is not quoted correctly.
> 
> Replace the whitespace characters with an underscore to prevent the
> possible issues caused by the name splitting.
> 
> Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
> ---
>  tools/perf/tests/builtin-test.c | 21 +++++++++++++++++++--
>  1 file changed, 19 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
> index 89b180798224..9cb0788d3307 100644
> --- a/tools/perf/tests/builtin-test.c
> +++ b/tools/perf/tests/builtin-test.c
> @@ -283,6 +283,21 @@ static bool test_exclusive(const struct test_suite *t, int test_case)
>  	return t->test_cases[test_case].exclusive;
>  }
>  
> +/* Replace non-alphanumeric characters with _ */
> +static void check_dir_name(const char *src, char *dst)

It's not about just checking the name.  Maybe replace_dir_name()?

Thanks,
Namhyung

> +{
> +	size_t i;
> +	size_t len = strlen(src);
> +
> +	for (i = 0; i < len; i++) {
> +		if (!isalnum(src[i]))
> +			dst[i] = '_';
> +		else
> +			dst[i] = src[i];
> +	}
> +	dst[i] = '\0';
> +}
> +
>  static int delete_file(const char *fpath, const struct stat *sb __maybe_unused,
>  						 int typeflag, struct FTW *ftwbuf)
>  {
> @@ -328,10 +343,12 @@ static bool create_logs(struct test_suite *t, int pass){
>  
>  static char *setup_shell_logs(const char *name)
>  {
> -	char template[PATH_MAX];
> +	char template[PATH_MAX], valid_name[strlen(name)+1];
>  	char *temp_dir;
>  
> -	if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", name) < 0) {
> +	check_dir_name(name, valid_name);
> +
> +	if (snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", valid_name) < 0) {

It'd be better to do the snprintf() first and then replace the name in place,
so that it doesn't need the extra valid_name buffer.
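
Something along these lines, perhaps (untested sketch, reusing the helpers
already used in the patch):

	static char *setup_shell_logs(const char *name)
	{
		char template[PATH_MAX];
		char *temp_dir, *p, *suffix;
		int n;

		n = snprintf(template, PATH_MAX, "/tmp/perf_test_%s.XXXXXX", name);
		if (n < 0 || n >= PATH_MAX) {
			pr_err("Failed to create log dir template");
			return NULL; /* Skip the testsuite */
		}

		/* Sanitize only the part of the buffer that came from 'name' */
		suffix = template + n - strlen(".XXXXXX");
		for (p = template + strlen("/tmp/perf_test_"); p < suffix; p++) {
			if (!isalnum(*p))
				*p = '_';
		}

		temp_dir = mkdtemp(template);
		if (!temp_dir) {
			pr_err("Failed to create the temporary directory");
			return NULL; /* Skip the testsuite */
		}

		setenv("PERFSUITE_RUN_DIR", temp_dir, 1);
		return strdup(temp_dir);
	}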

Thanks,
Namhyung


>  		pr_err("Failed to create log dir template");
>  		return NULL; /* Skip the testsuite */
>  	}
> -- 
> 2.50.1
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v3 0/7] Introduce structure for shell tests
  2025-07-21 13:26     ` [PATCH v3 0/7] " Jakub Brnak
                         ` (6 preceding siblings ...)
  2025-07-21 13:26       ` [PATCH v3 7/7] perf test: Remove perftool drivers Jakub Brnak
@ 2025-07-31 12:54       ` tejas05
  7 siblings, 0 replies; 43+ messages in thread
From: tejas05 @ 2025-07-31 12:54 UTC (permalink / raw)
  To: Jakub Brnak, vmolnaro
  Cc: acme, acme, irogers, linux-perf-users, mpetlan, namhyung


On 7/21/25 18:56, Jakub Brnak wrote:

> Hi Arnaldo,
>
> This series of Veronika's patches, part of the perftool-testsuite upstreaming effort, has been rebased on the latest perf-tools-next branch and should now apply cleanly.
> Patches 01/10, 02/10, and 05/10 from the v2 have been dropped as they were already accepted upstream.
>
> Thanks,
> Jakub Brnak
>
> Veronika Molnarova (7):
>    perf test perftool_testsuite: Use absolute paths
>    perf tests: Create a structure for shell tests
>    perf test: Provide setup for the shell test suite
>    perftool-testsuite: Add empty setup for base_probe
>    perf test: Introduce storing logs for shell tests
>    perf test: Format log directories for shell tests
>    perf test: Remove perftool drivers
>
>   tools/perf/tests/builtin-test.c               | 137 +++++++++-
>   tools/perf/tests/shell/base_probe/setup.sh    |  13 +
>   .../base_probe/test_adding_blacklisted.sh     |  13 +-
>   .../shell/base_probe/test_adding_kernel.sh    |  53 ++--
>   .../perf/tests/shell/base_probe/test_basic.sh |  19 +-
>   .../shell/base_probe/test_invalid_options.sh  |  11 +-
>   .../shell/base_probe/test_line_semantics.sh   |   7 +-
>   tools/perf/tests/shell/base_report/setup.sh   |   6 +-
>   .../tests/shell/base_report/test_basic.sh     |  47 ++--
>   tools/perf/tests/shell/common/init.sh         |   4 +-
>   .../tests/shell/perftool-testsuite_probe.sh   |  24 --
>   .../tests/shell/perftool-testsuite_report.sh  |  23 --
>   tools/perf/tests/tests-scripts.c              | 258 +++++++++++++++---
>   tools/perf/tests/tests-scripts.h              |  15 +
>   tools/perf/tests/tests.h                      |   8 +-
>   15 files changed, 465 insertions(+), 173 deletions(-)
>   create mode 100755 tools/perf/tests/shell/base_probe/setup.sh
>   delete mode 100755 tools/perf/tests/shell/perftool-testsuite_probe.sh
>   delete mode 100755 tools/perf/tests/shell/perftool-testsuite_report.sh

Hello,

I am seeing this test case skip and fail on powerpc. The test was skipping, and when run in verbose mode it showed failures in
perf/tests/shell/base_report/setup.sh.

# ./perf test "perftool"
115: perftool-testsuite :: perf_probe                                :
115.1: perf_probe :: Reject blacklisted probes                       : Running (1 act
115.1: perf_probe :: Reject blacklisted probes                       : Ok
115.2: perf_probe :: Add probes, list and remove them                : Running (1 act
115.2: perf_probe :: Add probes, list and remove them                : Ok
115.3: perf_probe :: Basic perf probe functionality                  : Running (1 act
115.3: perf_probe :: Basic perf probe functionality                  : Ok
115.4: perf_probe :: Reject invalid options                          : Running (1 act
115.4: perf_probe :: Reject invalid options                          : Ok
115.5: perf_probe :: Check patterns for line semantics               : Running (1 act
115.5: perf_probe :: Check patterns for line semantics               : Ok
116: perf_report :: Basic perf report options                        : Running (1 act
116: perf_report :: Basic perf report options                        : Skip

# ./perf test 116 -vv
116: perftool-testsuite :: perf_report:
116: perf_report :: Basic perf report options                        : Running (1 act
--- start ---
test child forked, pid 2671
-- [ PASS ] -- perf_report :: setup :: prepare the perf.data file
==================
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.099 MB /tmp/perf_test_perftool_testsuite____perf_report.eAUy69/perf_report/perf.data.1 (767 samples) ]
==================
/root/perf-tools-next/tools/perf/tests/shell/base_report/setup.sh: line 43: ../common/check_all_patterns_found.pl: No such file or directory
-- [ FAIL ] -- perf_report :: setup :: prepare the perf.data.1 file (output regexp parsing)
## [ FAIL ] ## perf_report :: setup SUMMARY :: 1 failures found
---- end(-4) ----

116: perf_report :: Basic perf report options                        : Skip

In other instances where ../common/check_all_patterns_found.pl is used, it has $DIR_PATH prepended to it; something similar should also be used here.
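
I.e. something roughly like this on that line of base_report/setup.sh (a sketch
only; the "..." stands for the pattern arguments already on that line, which
are not quoted in this thread):

	-../common/check_all_patterns_found.pl ... < $LOGS_DIR/setup.log
	+"$DIR_PATH/../common/check_all_patterns_found.pl" ... < $LOGS_DIR/setup.log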

Thanks & Regards,
Tejas Manhas


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v3 3/7] perf test: Provide setup for the shell test suite
  2025-07-26  6:07         ` Namhyung Kim
@ 2025-08-04 14:39           ` Michael Petlan
  0 siblings, 0 replies; 43+ messages in thread
From: Michael Petlan @ 2025-08-04 14:39 UTC (permalink / raw)
  To: Namhyung Kim; +Cc: Jakub Brnak, vmolnaro, acme, acme, irogers, linux-perf-users

On Fri, 25 Jul 2025, Namhyung Kim wrote:
> On Mon, Jul 21, 2025 at 03:26:38PM +0200, Jakub Brnak wrote:
> > From: Veronika Molnarova <vmolnaro@redhat.com>
> > 
> > Some of the perftool-testsuite test cases require a setup to be done
> > beforehand, such as recording data, setting up cache or restoring sample
> > rate. The setup file also provides the possibility to set the name of
> > the test suite, if the name of the directory is not good enough.
> > 
> > Check for the existence of the "setup.sh" script for the shell test
> > suites and run it before any of the test cases. If the setup fails,
> > skip all of the test cases of the test suite as the setup may be
> > required for the result to be valid.
> 
> Looks like this would be better documented somewhere.  Maybe you can add a
> section like "Add a new (shell) test" in the perf-test man page or so.
> 
> Thanks,
> Namhyung
> 
This is indeed a great idea! It could help future test cases align
with what we have. Currently the tests handle temporary files, logs
and debugging output on their own. Unifying this, all the tests would
then keep/delete logs based on the same env. variable. Let's follow up
this patchset by providing such docs.
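
For instance, with the behaviour from the quoted patches, keeping the
per-suite logs around for inspection would look like this (illustrative
invocation):

	# keep the /tmp/perf_test_* log directories after the run
	PERFTEST_KEEP_LOGS=y ./perf test "perftool"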

Regards,
Michael


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v3 1/7] perf test perftool_testsuite: Use absolute paths
  2025-07-26  6:00         ` Namhyung Kim
@ 2025-08-21 11:01           ` Jakub Brnak
  0 siblings, 0 replies; 43+ messages in thread
From: Jakub Brnak @ 2025-08-21 11:01 UTC (permalink / raw)
  To: Namhyung Kim; +Cc: vmolnaro, acme, acme, irogers, linux-perf-users, mpetlan

On Fri, Jul 25, 2025 at 11:00:20PM -0700, Namhyung Kim wrote:
> Hello,
> 
> On Mon, Jul 21, 2025 at 03:26:36PM +0200, Jakub Brnak wrote:
> > From: Veronika Molnarova <vmolnaro@redhat.com>
> > 
> > Test cases from perftool_testsuite are affected by the current
> > directory where the tests are run. For this reason, the test
> > driver has to change the directory to the base_dir for references to
> > work correctly.
> > 
> > Utilize absolute paths when sourcing and referencing other scripts so
> > that the current working directory doesn't impact the test cases.
> > 
> > Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> > Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> > Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
> 
> I'm ok with this change but can you please remove long lines?  I'm not
> sure if we should follow the same coding style in shell scripts but long
> lines would harm readability IMHO.
> 
> Of course it can be on top of this series.
> 
> Thanks,
> Namhyung

Hi Namhyung, 
We definitely can remove the long lines, but we would prefer to do it as a separate patch after this one if possible. Of course, when continuing with the upstreaming effort
of perftool-testsuite, we will avoid introducing new long lines in any new test cases.

Thanks, 
Jakub
> 
> > ---
> >  .../base_probe/test_adding_blacklisted.sh     | 13 ++---
> >  .../shell/base_probe/test_adding_kernel.sh    | 53 ++++++++++---------
> >  .../perf/tests/shell/base_probe/test_basic.sh | 19 +++----
> >  .../shell/base_probe/test_invalid_options.sh  | 11 ++--
> >  .../shell/base_probe/test_line_semantics.sh   |  7 +--
> >  tools/perf/tests/shell/base_report/setup.sh   |  6 ++-
> >  .../tests/shell/base_report/test_basic.sh     | 47 ++++++++--------
> >  tools/perf/tests/shell/common/init.sh         |  4 +-
> >  8 files changed, 84 insertions(+), 76 deletions(-)
> > 
> > diff --git a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
> > index 8226449ac5c3..c409ca8520f8 100755
> > --- a/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
> > +++ b/tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
> > @@ -13,11 +13,12 @@
> >  #	they must be skipped.
> >  #
> >  
> > -# include working environment
> > -. ../common/init.sh
> > -
> > +DIR_PATH="$(dirname $0)"
> >  TEST_RESULT=0
> >  
> > +# include working environment
> > +. "$DIR_PATH/../common/init.sh"
> > +
> >  # skip if not supported
> >  BLACKFUNC_LIST=`head -n 5 /sys/kernel/debug/kprobes/blacklist 2> /dev/null | cut -f2`
> >  if [ -z "$BLACKFUNC_LIST" ]; then
> > @@ -53,7 +54,7 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
> >  	PERF_EXIT_CODE=$?
> >  
> >  	# check for bad DWARF polluting the result
> > -	../common/check_all_patterns_found.pl "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
> > +	"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err
> >  
> >  	if [ $? -eq 0 ]; then
> >  		SKIP_DWARF=1
> > @@ -73,7 +74,7 @@ for BLACKFUNC in $BLACKFUNC_LIST; do
> >  			fi
> >  		fi
> >  	else
> > -		../common/check_all_lines_matched.pl "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
> > +		"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err
> >  		CHECK_EXIT_CODE=$?
> >  
> >  		SKIP_DWARF=0
> > @@ -94,7 +95,7 @@ fi
> >  $CMD_PERF list probe:\* > $LOGS_DIR/adding_blacklisted_list.log
> >  PERF_EXIT_CODE=$?
> >  
> > -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_blacklisted_list.log
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_blacklisted_list.log
> >  CHECK_EXIT_CODE=$?
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing blacklisted probe (should NOT be listed)"
> > diff --git a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
> > index df288cf90cd6..3548faf60c8e 100755
> > --- a/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
> > +++ b/tools/perf/tests/shell/base_probe/test_adding_kernel.sh
> > @@ -13,13 +13,14 @@
> >  #		and removing.
> >  #
> >  
> > -# include working environment
> > -. ../common/init.sh
> > -
> > +DIR_PATH="$(dirname $0)"
> >  TEST_RESULT=0
> >  
> > +# include working environment
> > +. "$DIR_PATH/../common/init.sh"
> > +
> >  # shellcheck source=lib/probe_vfs_getname.sh
> > -. "$(dirname "$0")/../lib/probe_vfs_getname.sh"
> > +. "$DIR_PATH/../lib/probe_vfs_getname.sh"
> >  
> >  TEST_PROBE=${TEST_PROBE:-"inode_permission"}
> >  
> > @@ -44,7 +45,7 @@ for opt in "" "-a" "--add"; do
> >  	$CMD_PERF probe $opt $TEST_PROBE 2> $LOGS_DIR/adding_kernel_add$opt.err
> >  	PERF_EXIT_CODE=$?
> >  
> > -	../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_add$opt.err
> > +	"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_add$opt.err
> >  	CHECK_EXIT_CODE=$?
> >  
> >  	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "adding probe $TEST_PROBE :: $opt"
> > @@ -58,7 +59,7 @@ done
> >  $CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list.log
> >  PERF_EXIT_CODE=$?
> >  
> > -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "List of pre-defined events" "probe:${TEST_PROBE}(?:_\d+)?\s+\[Tracepoint event\]" "Metric Groups:" < $LOGS_DIR/adding_kernel_list.log
> >  CHECK_EXIT_CODE=$?
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list"
> > @@ -71,7 +72,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing added probe :: perf list
> >  $CMD_PERF probe -l > $LOGS_DIR/adding_kernel_list-l.log
> >  PERF_EXIT_CODE=$?
> >  
> > -../common/check_all_patterns_found.pl "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" < $LOGS_DIR/adding_kernel_list-l.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "\s*probe:${TEST_PROBE}(?:_\d+)?\s+\(on ${TEST_PROBE}(?:[:\+]$RE_NUMBER_HEX)?@.+\)" < $LOGS_DIR/adding_kernel_list-l.log
> >  CHECK_EXIT_CODE=$?
> >  
> >  if [ $NO_DEBUGINFO ] ; then
> > @@ -93,9 +94,9 @@ REGEX_STAT_VALUES="\s*\d+\s+probe:$TEST_PROBE"
> >  # the value should be greater than 1
> >  REGEX_STAT_VALUE_NONZERO="\s*[1-9][0-9]*\s+probe:$TEST_PROBE"
> >  REGEX_STAT_TIME="\s*$RE_NUMBER\s+seconds (?:time elapsed|user|sys)"
> > -../common/check_all_lines_matched.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUES" "$REGEX_STAT_TIME" "$RE_LINE_COMMENT" "$RE_LINE_EMPTY" < $LOGS_DIR/adding_kernel_using_probe.log
> >  CHECK_EXIT_CODE=$?
> > -../common/check_all_patterns_found.pl "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" < $LOGS_DIR/adding_kernel_using_probe.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_STAT_HEADER" "$REGEX_STAT_VALUE_NONZERO" "$REGEX_STAT_TIME" < $LOGS_DIR/adding_kernel_using_probe.log
> >  (( CHECK_EXIT_CODE += $? ))
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
> > @@ -108,7 +109,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using added probe"
> >  $CMD_PERF probe -d $TEST_PROBE\* 2> $LOGS_DIR/adding_kernel_removing.err
> >  PERF_EXIT_CODE=$?
> >  
> > -../common/check_all_lines_matched.pl "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" "Removed event: probe:$TEST_PROBE" < $LOGS_DIR/adding_kernel_removing.err
> >  CHECK_EXIT_CODE=$?
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
> > @@ -121,7 +122,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "deleting added probe"
> >  $CMD_PERF list probe:\* > $LOGS_DIR/adding_kernel_list_removed.log
> >  PERF_EXIT_CODE=$?
> >  
> > -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_kernel_list_removed.log
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "List of pre-defined events" "Metric Groups:" < $LOGS_DIR/adding_kernel_list_removed.log
> >  CHECK_EXIT_CODE=$?
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "listing removed probe (should NOT be listed)"
> > @@ -135,7 +136,7 @@ $CMD_PERF probe -n --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_dryrun.err
> >  PERF_EXIT_CODE=$?
> >  
> >  # check for the output (should be the same as usual)
> > -../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_dryrun.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_dryrun.err
> >  CHECK_EXIT_CODE=$?
> >  
> >  # check that no probe was added in real
> > @@ -152,7 +153,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "dry run :: adding probe"
> >  $CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_01.err
> >  PERF_EXIT_CODE=$?
> >  
> > -../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_01.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_01.err
> >  CHECK_EXIT_CODE=$?
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first probe adding"
> > @@ -162,7 +163,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: first pro
> >  ! $CMD_PERF probe --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_02.err
> >  PERF_EXIT_CODE=$?
> >  
> > -../common/check_all_patterns_found.pl "Error: event \"$TEST_PROBE\" already exists." "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "Error: event \"$TEST_PROBE\" already exists." "Error: Failed to add events." < $LOGS_DIR/adding_kernel_forceadd_02.err
> >  CHECK_EXIT_CODE=$?
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (without force)"
> > @@ -173,7 +174,7 @@ NO_OF_PROBES=`$CMD_PERF probe -l $TEST_PROBE| wc -l`
> >  $CMD_PERF probe --force --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_03.err
> >  PERF_EXIT_CODE=$?
> >  
> > -../common/check_all_patterns_found.pl "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:${TEST_PROBE}_${NO_OF_PROBES}" "on $TEST_PROBE" < $LOGS_DIR/adding_kernel_forceadd_03.err
> >  CHECK_EXIT_CODE=$?
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "force-adding probes :: second probe adding (with force)"
> > @@ -187,7 +188,7 @@ $CMD_PERF stat -e probe:$TEST_PROBE -e probe:${TEST_PROBE}_${NO_OF_PROBES} -x';'
> >  PERF_EXIT_CODE=$?
> >  
> >  REGEX_LINE="$RE_NUMBER;+probe:${TEST_PROBE}_?(?:$NO_OF_PROBES)?;$RE_NUMBER;$RE_NUMBER"
> > -../common/check_all_lines_matched.pl "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/adding_kernel_using_two.log
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_LINE" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/adding_kernel_using_two.log
> >  CHECK_EXIT_CODE=$?
> >  
> >  VALUE_1=`grep "$TEST_PROBE;" $LOGS_DIR/adding_kernel_using_two.log | awk -F';' '{print $1}'`
> > @@ -205,7 +206,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "using doubled probe"
> >  $CMD_PERF probe --del \* 2> $LOGS_DIR/adding_kernel_removing_wildcard.err
> >  PERF_EXIT_CODE=$?
> >  
> > -../common/check_all_patterns_found.pl "Removed event: probe:$TEST_PROBE" "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" "Removed event: probe:$TEST_PROBE" "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err
> >  CHECK_EXIT_CODE=$?
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
> > @@ -217,7 +218,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
> >  $CMD_PERF probe -nf --max-probes=512 -a 'vfs_* $params' 2> $LOGS_DIR/adding_kernel_adding_wildcard.err
> >  PERF_EXIT_CODE=$?
> >  
> > -../common/check_all_patterns_found.pl "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "probe:vfs_mknod" "probe:vfs_create" "probe:vfs_rmdir" "probe:vfs_link" "probe:vfs_write" < $LOGS_DIR/adding_kernel_adding_wildcard.err
> >  CHECK_EXIT_CODE=$?
> >  
> >  if [ $NO_DEBUGINFO ] ; then
> > @@ -240,13 +241,13 @@ test $PERF_EXIT_CODE -ne 139 -a $PERF_EXIT_CODE -ne 0
> >  PERF_EXIT_CODE=$?
> >  
> >  # check that the error message is reasonable
> > -../common/check_all_patterns_found.pl "Failed to find" "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" < $LOGS_DIR/adding_kernel_nonexisting.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "Failed to find" "somenonexistingrandomstuffwhichisalsoprettylongorevenlongertoexceed64" < $LOGS_DIR/adding_kernel_nonexisting.err
> >  CHECK_EXIT_CODE=$?
> > -../common/check_all_patterns_found.pl "in this function|at this address" "Error" "Failed to add events" < $LOGS_DIR/adding_kernel_nonexisting.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "in this function|at this address" "Error" "Failed to add events" < $LOGS_DIR/adding_kernel_nonexisting.err
> >  (( CHECK_EXIT_CODE += $? ))
> > -../common/check_all_lines_matched.pl "Failed to find" "Error" "Probe point .+ not found" "optimized out" "Use.+\-\-range option to show.+location range" < $LOGS_DIR/adding_kernel_nonexisting.err
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" "Failed to find" "Error" "Probe point .+ not found" "optimized out" "Use.+\-\-range option to show.+location range" < $LOGS_DIR/adding_kernel_nonexisting.err
> >  (( CHECK_EXIT_CODE += $? ))
> > -../common/check_no_patterns_found.pl "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
> > +"$DIR_PATH/../common/check_no_patterns_found.pl" "$RE_SEGFAULT" < $LOGS_DIR/adding_kernel_nonexisting.err
> >  (( CHECK_EXIT_CODE += $? ))
> >  
> >  if [ $NO_DEBUGINFO ]; then
> > @@ -264,7 +265,7 @@ fi
> >  $CMD_PERF probe --add "$TEST_PROBE%return \$retval" 2> $LOGS_DIR/adding_kernel_func_retval_add.err
> >  PERF_EXIT_CODE=$?
> >  
> > -../common/check_all_patterns_found.pl "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE%return with \\\$retval" < $LOGS_DIR/adding_kernel_func_retval_add.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "Added new events?:" "probe:$TEST_PROBE" "on $TEST_PROBE%return with \\\$retval" < $LOGS_DIR/adding_kernel_func_retval_add.err
> >  CHECK_EXIT_CODE=$?
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
> > @@ -274,7 +275,7 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: add"
> >  $CMD_PERF record -e probe:$TEST_PROBE\* -o $CURRENT_TEST_DIR/perf.data -- cat /proc/cpuinfo > /dev/null 2> $LOGS_DIR/adding_kernel_func_retval_record.err
> >  PERF_EXIT_CODE=$?
> >  
> > -../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/adding_kernel_func_retval_record.err
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/adding_kernel_func_retval_record.err
> >  CHECK_EXIT_CODE=$?
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function with retval :: record"
> > @@ -285,9 +286,9 @@ $CMD_PERF script -i $CURRENT_TEST_DIR/perf.data > $LOGS_DIR/adding_kernel_func_r
> >  PERF_EXIT_CODE=$?
> >  
> >  REGEX_SCRIPT_LINE="\s*cat\s+$RE_NUMBER\s+\[$RE_NUMBER\]\s+$RE_NUMBER:\s+probe:$TEST_PROBE\w*:\s+\($RE_NUMBER_HEX\s+<\-\s+$RE_NUMBER_HEX\)\s+arg1=$RE_NUMBER_HEX"
> > -../common/check_all_lines_matched.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
> >  CHECK_EXIT_CODE=$?
> > -../common/check_all_patterns_found.pl "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_SCRIPT_LINE" < $LOGS_DIR/adding_kernel_func_retval_script.log
> >  (( CHECK_EXIT_CODE += $? ))
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "function argument probing :: script"
> > diff --git a/tools/perf/tests/shell/base_probe/test_basic.sh b/tools/perf/tests/shell/base_probe/test_basic.sh
> > index 9d8b5afbeddd..e8fed67be9c1 100755
> > --- a/tools/perf/tests/shell/base_probe/test_basic.sh
> > +++ b/tools/perf/tests/shell/base_probe/test_basic.sh
> > @@ -12,11 +12,12 @@
> >  #		This test tests basic functionality of perf probe command.
> >  #
> >  
> > -# include working environment
> > -. ../common/init.sh
> > -
> > +DIR_PATH="$(dirname $0)"
> >  TEST_RESULT=0
> >  
> > +# include working environment
> > +. "$DIR_PATH/../common/init.sh"
> > +
> >  if ! check_kprobes_available; then
> >  	print_overall_skipped
> >  	exit 2
> > @@ -30,15 +31,15 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
> >  	$CMD_PERF probe --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
> >  	PERF_EXIT_CODE=$?
> >  
> > -	../common/check_all_patterns_found.pl "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" < $LOGS_DIR/basic_helpmsg.log
> > +	"$DIR_PATH/../common/check_all_patterns_found.pl" "PERF-PROBE" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "PROBE\s+SYNTAX" "PROBE\s+ARGUMENT" "LINE\s+SYNTAX" < $LOGS_DIR/basic_helpmsg.log
> >  	CHECK_EXIT_CODE=$?
> > -	../common/check_all_patterns_found.pl "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" < $LOGS_DIR/basic_helpmsg.log
> > +	"$DIR_PATH/../common/check_all_patterns_found.pl" "LAZY\s+MATCHING" "FILTER\s+PATTERN" "EXAMPLES" "SEE\s+ALSO" < $LOGS_DIR/basic_helpmsg.log
> >  	(( CHECK_EXIT_CODE += $? ))
> > -	../common/check_all_patterns_found.pl "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
> > +	"$DIR_PATH/../common/check_all_patterns_found.pl" "vmlinux" "module=" "source=" "verbose" "quiet" "add=" "del=" "list.*EVENT" "line=" "vars=" "externs" < $LOGS_DIR/basic_helpmsg.log
> >  	(( CHECK_EXIT_CODE += $? ))
> > -	../common/check_all_patterns_found.pl "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
> > +	"$DIR_PATH/../common/check_all_patterns_found.pl" "no-inlines" "funcs.*FILTER" "filter=FILTER" "force" "dry-run" "max-probes" "exec=" "demangle-kernel" < $LOGS_DIR/basic_helpmsg.log
> >  	(( CHECK_EXIT_CODE += $? ))
> > -	../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
> > +	"$DIR_PATH/../common/check_no_patterns_found.pl" "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
> >  	(( CHECK_EXIT_CODE += $? ))
> >  
> >  	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
> > @@ -53,7 +54,7 @@ fi
> >  # without any args perf-probe should print usage
> >  $CMD_PERF probe 2> $LOGS_DIR/basic_usage.log > /dev/null
> >  
> > -../common/check_all_patterns_found.pl "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "[Uu]sage" "perf probe" "verbose" "quiet" "add" "del" "force" "line" "vars" "externs" "range" < $LOGS_DIR/basic_usage.log
> >  CHECK_EXIT_CODE=$?
> >  
> >  print_results 0 $CHECK_EXIT_CODE "usage message"
> > diff --git a/tools/perf/tests/shell/base_probe/test_invalid_options.sh b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
> > index 92f7254eb32a..9caeab2fe77c 100755
> > --- a/tools/perf/tests/shell/base_probe/test_invalid_options.sh
> > +++ b/tools/perf/tests/shell/base_probe/test_invalid_options.sh
> > @@ -12,11 +12,12 @@
> >  #		This test checks whether the invalid and incompatible options are reported
> >  #
> >  
> > -# include working environment
> > -. ../common/init.sh
> > -
> > +DIR_PATH="$(dirname $0)"
> >  TEST_RESULT=0
> >  
> > +# include working environment
> > +. "$DIR_PATH/../common/init.sh"
> > +
> >  if ! check_kprobes_available; then
> >  	print_overall_skipped
> >  	exit 2
> > @@ -33,7 +34,7 @@ for opt in '-a' '-d' '-L' '-V'; do
> >  	! $CMD_PERF probe $opt 2> $LOGS_DIR/invalid_options_missing_argument$opt.err
> >  	PERF_EXIT_CODE=$?
> >  
> > -	../common/check_all_patterns_found.pl "Error: switch .* requires a value" < $LOGS_DIR/invalid_options_missing_argument$opt.err
> > +	"$DIR_PATH/../common/check_all_patterns_found.pl" "Error: switch .* requires a value" < $LOGS_DIR/invalid_options_missing_argument$opt.err
> >  	CHECK_EXIT_CODE=$?
> >  
> >  	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "missing argument for $opt"
> > @@ -66,7 +67,7 @@ for opt in '-a xxx -d xxx' '-a xxx -L foo' '-a xxx -V foo' '-a xxx -l' '-a xxx -
> >  	! $CMD_PERF probe $opt > /dev/null 2> $LOGS_DIR/aux.log
> >  	PERF_EXIT_CODE=$?
> >  
> > -	../common/check_all_patterns_found.pl "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
> > +	"$DIR_PATH/../common/check_all_patterns_found.pl" "Error: switch .+ cannot be used with switch .+" < $LOGS_DIR/aux.log
> >  	CHECK_EXIT_CODE=$?
> >  
> >  	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "mutually exclusive options :: $opt"
> > diff --git a/tools/perf/tests/shell/base_probe/test_line_semantics.sh b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
> > index 20435b6bf6bc..576442d87a44 100755
> > --- a/tools/perf/tests/shell/base_probe/test_line_semantics.sh
> > +++ b/tools/perf/tests/shell/base_probe/test_line_semantics.sh
> > @@ -13,11 +13,12 @@
> >  #		arguments are properly reported.
> >  #
> >  
> > -# include working environment
> > -. ../common/init.sh
> > -
> > +DIR_PATH="$(dirname $0)"
> >  TEST_RESULT=0
> >  
> > +# include working environment
> > +. "$DIR_PATH/../common/init.sh"
> > +
> >  if ! check_kprobes_available; then
> >  	print_overall_skipped
> >  	exit 2
> > diff --git a/tools/perf/tests/shell/base_report/setup.sh b/tools/perf/tests/shell/base_report/setup.sh
> > index 8634e7e0dda6..2fd5c97f9822 100755
> > --- a/tools/perf/tests/shell/base_report/setup.sh
> > +++ b/tools/perf/tests/shell/base_report/setup.sh
> > @@ -12,8 +12,10 @@
> >  #
> >  #
> >  
> > +DIR_PATH="$(dirname $0)"
> > +
> >  # include working environment
> > -. ../common/init.sh
> > +. "$DIR_PATH/../common/init.sh"
> >  
> >  TEST_RESULT=0
> >  
> > @@ -24,7 +26,7 @@ SW_EVENT="cpu-clock"
> >  $CMD_PERF record -asdg -e $SW_EVENT -o $CURRENT_TEST_DIR/perf.data -- $CMD_LONGER_SLEEP 2> $LOGS_DIR/setup.log
> >  PERF_EXIT_CODE=$?
> >  
> > -../common/check_all_patterns_found.pl "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "$RE_LINE_RECORD1" "$RE_LINE_RECORD2" < $LOGS_DIR/setup.log
> >  CHECK_EXIT_CODE=$?
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "prepare the perf.data file"
> > diff --git a/tools/perf/tests/shell/base_report/test_basic.sh b/tools/perf/tests/shell/base_report/test_basic.sh
> > index adfd8713b8f8..a15d3007f449 100755
> > --- a/tools/perf/tests/shell/base_report/test_basic.sh
> > +++ b/tools/perf/tests/shell/base_report/test_basic.sh
> > @@ -12,11 +12,12 @@
> >  #
> >  #
> >  
> > -# include working environment
> > -. ../common/init.sh
> > -
> > +DIR_PATH="$(dirname $0)"
> >  TEST_RESULT=0
> >  
> > +# include working environment
> > +. "$DIR_PATH/../common/init.sh"
> > +
> >  
> >  ### help message
> >  
> > @@ -25,19 +26,19 @@ if [ "$PARAM_GENERAL_HELP_TEXT_CHECK" = "y" ]; then
> >  	$CMD_PERF report --help > $LOGS_DIR/basic_helpmsg.log 2> $LOGS_DIR/basic_helpmsg.err
> >  	PERF_EXIT_CODE=$?
> >  
> > -	../common/check_all_patterns_found.pl "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
> > +	"$DIR_PATH/../common/check_all_patterns_found.pl" "PERF-REPORT" "NAME" "SYNOPSIS" "DESCRIPTION" "OPTIONS" "OVERHEAD\s+CALCULATION" "SEE ALSO" < $LOGS_DIR/basic_helpmsg.log
> >  	CHECK_EXIT_CODE=$?
> > -	../common/check_all_patterns_found.pl "input" "verbose" "show-nr-samples" "show-cpu-utilization" "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" < $LOGS_DIR/basic_helpmsg.log
> > +	"$DIR_PATH/../common/check_all_patterns_found.pl" "input" "verbose" "show-nr-samples" "show-cpu-utilization" "threads" "comms" "pid" "tid" "dsos" "symbols" "symbol-filter" < $LOGS_DIR/basic_helpmsg.log
> >  	(( CHECK_EXIT_CODE += $? ))
> > -	../common/check_all_patterns_found.pl "hide-unresolved" "sort" "fields" "parent" "exclude-other" "column-widths" "field-separator" "dump-raw-trace" "children" < $LOGS_DIR/basic_helpmsg.log
> > +	"$DIR_PATH/../common/check_all_patterns_found.pl" "hide-unresolved" "sort" "fields" "parent" "exclude-other" "column-widths" "field-separator" "dump-raw-trace" "children" < $LOGS_DIR/basic_helpmsg.log
> >  	(( CHECK_EXIT_CODE += $? ))
> > -	../common/check_all_patterns_found.pl "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" < $LOGS_DIR/basic_helpmsg.log
> > +	"$DIR_PATH/../common/check_all_patterns_found.pl" "call-graph" "max-stack" "inverted" "ignore-callees" "pretty" "stdio" "tui" "gtk" "vmlinux" "kallsyms" "modules" < $LOGS_DIR/basic_helpmsg.log
> >  	(( CHECK_EXIT_CODE += $? ))
> > -	../common/check_all_patterns_found.pl "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" "show-total-period" "show-info" "branch-stack" "group" < $LOGS_DIR/basic_helpmsg.log
> > +	"$DIR_PATH/../common/check_all_patterns_found.pl" "force" "symfs" "cpu" "disassembler-style" "source" "asm-raw" "show-total-period" "show-info" "branch-stack" "group" < $LOGS_DIR/basic_helpmsg.log
> >  	(( CHECK_EXIT_CODE += $? ))
> > -	../common/check_all_patterns_found.pl "branch-history" "objdump" "demangle" "percent-limit" "percentage" "header" "itrace" "full-source-path" "show-ref-call-graph" < $LOGS_DIR/basic_helpmsg.log
> > +	"$DIR_PATH/../common/check_all_patterns_found.pl" "branch-history" "objdump" "demangle" "percent-limit" "percentage" "header" "itrace" "full-source-path" "show-ref-call-graph" < $LOGS_DIR/basic_helpmsg.log
> >  	(( CHECK_EXIT_CODE += $? ))
> > -	../common/check_no_patterns_found.pl "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
> > +	"$DIR_PATH/../common/check_no_patterns_found.pl" "No manual entry for" < $LOGS_DIR/basic_helpmsg.err
> >  	(( CHECK_EXIT_CODE += $? ))
> >  
> >  	print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "help message"
> > @@ -57,9 +58,9 @@ REGEX_LOST_SAMPLES_INFO="#\s*Total Lost Samples:\s+$RE_NUMBER"
> >  REGEX_SAMPLES_INFO="#\s*Samples:\s+(?:$RE_NUMBER)\w?\s+of\s+event\s+'$RE_EVENT_ANY'"
> >  REGEX_LINES_HEADER="#\s*Children\s+Self\s+Command\s+Shared Object\s+Symbol"
> >  REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
> > -../common/check_all_patterns_found.pl "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LOST_SAMPLES_INFO" "$REGEX_SAMPLES_INFO" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_basic.log
> >  CHECK_EXIT_CODE=$?
> > -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
> > +"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_basic.err
> >  (( CHECK_EXIT_CODE += $? ))
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "basic execution"
> > @@ -74,9 +75,9 @@ PERF_EXIT_CODE=$?
> >  
> >  REGEX_LINES_HEADER="#\s*Children\s+Self\s+Samples\s+Command\s+Shared Object\s+Symbol"
> >  REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
> > -../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_nrsamples.log
> >  CHECK_EXIT_CODE=$?
> > -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
> > +"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_nrsamples.err
> >  (( CHECK_EXIT_CODE += $? ))
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "number of samples"
> > @@ -98,7 +99,7 @@ REGEX_LINE_CPUS_ONLINE="#\s+nrcpus online\s*:\s*$MY_CPUS_ONLINE"
> >  REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$MY_CPUS_AVAILABLE"
> >  # disable precise check for "nrcpus avail" in BASIC runmode
> >  test $PERFTOOL_TESTSUITE_RUNMODE -lt $RUNMODE_STANDARD && REGEX_LINE_CPUS_AVAIL="#\s+nrcpus avail\s*:\s*$RE_NUMBER"
> > -../common/check_all_patterns_found.pl "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LINE_TIMESTAMP" "$REGEX_LINE_HOSTNAME" "$REGEX_LINE_KERNEL" "$REGEX_LINE_PERF" "$REGEX_LINE_ARCH" "$REGEX_LINE_CPUS_ONLINE" "$REGEX_LINE_CPUS_AVAIL" < $LOGS_DIR/basic_header.log
> >  CHECK_EXIT_CODE=$?
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "header"
> > @@ -129,9 +130,9 @@ PERF_EXIT_CODE=$?
> >  
> >  REGEX_LINES_HEADER="#\s*Children\s+Self\s+sys\s+usr\s+Command\s+Shared Object\s+Symbol"
> >  REGEX_LINES="\s*$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+$RE_NUMBER%\s+\S+\s+\[kernel\.(?:vmlinux)|(?:kallsyms)\]\s+\[[k\.]\]\s+\w+"
> > -../common/check_all_patterns_found.pl "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
> > +"$DIR_PATH/../common/check_all_patterns_found.pl" "$REGEX_LINES_HEADER" "$REGEX_LINES" < $LOGS_DIR/basic_cpuut.log
> >  CHECK_EXIT_CODE=$?
> > -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
> > +"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_cpuut.err
> >  (( CHECK_EXIT_CODE += $? ))
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
> > @@ -144,9 +145,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "show CPU utilization"
> >  $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --pid=1 > $LOGS_DIR/basic_pid.log 2> $LOGS_DIR/basic_pid.err
> >  PERF_EXIT_CODE=$?
> >  
> > -grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "systemd|init"
> > +grep -P -v '^#' $LOGS_DIR/basic_pid.log | grep -P '\s+[\d\.]+%' | "$DIR_PATH/../common/check_all_lines_matched.pl" "systemd|init"
> >  CHECK_EXIT_CODE=$?
> > -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
> > +"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_pid.err
> >  (( CHECK_EXIT_CODE += $? ))
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
> > @@ -159,9 +160,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "pid"
> >  $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbols=dummynonexistingsymbol > $LOGS_DIR/basic_symbols.log 2> $LOGS_DIR/basic_symbols.err
> >  PERF_EXIT_CODE=$?
> >  
> > -../common/check_all_lines_matched.pl "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
> > +"$DIR_PATH/../common/check_all_lines_matched.pl" "$RE_LINE_EMPTY" "$RE_LINE_COMMENT" < $LOGS_DIR/basic_symbols.log
> >  CHECK_EXIT_CODE=$?
> > -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
> > +"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbols.err
> >  (( CHECK_EXIT_CODE += $? ))
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
> > @@ -174,9 +175,9 @@ print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "non-existing symbol"
> >  $CMD_PERF report --stdio -i $CURRENT_TEST_DIR/perf.data --symbol-filter=map > $LOGS_DIR/basic_symbolfilter.log 2> $LOGS_DIR/basic_symbolfilter.err
> >  PERF_EXIT_CODE=$?
> >  
> > -grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | ../common/check_all_lines_matched.pl "\[[k\.]\]\s+.*map"
> > +grep -P -v '^#' $LOGS_DIR/basic_symbolfilter.log | grep -P '\s+[\d\.]+%' | "$DIR_PATH/../common/check_all_lines_matched.pl" "\[[k\.]\]\s+.*map"
> >  CHECK_EXIT_CODE=$?
> > -../common/check_errors_whitelisted.pl "stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
> > +"$DIR_PATH/../common/check_errors_whitelisted.pl" "$DIR_PATH/stderr-whitelist.txt" < $LOGS_DIR/basic_symbolfilter.err
> >  (( CHECK_EXIT_CODE += $? ))
> >  
> >  print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "symbol filter"
> > diff --git a/tools/perf/tests/shell/common/init.sh b/tools/perf/tests/shell/common/init.sh
> > index 26c7525651e0..cbfc78bec974 100644
> > --- a/tools/perf/tests/shell/common/init.sh
> > +++ b/tools/perf/tests/shell/common/init.sh
> > @@ -11,8 +11,8 @@
> >  #
> >  
> >  
> > -. ../common/settings.sh
> > -. ../common/patterns.sh
> > +. "$(dirname $0)/../common/settings.sh"
> > +. "$(dirname $0)/../common/patterns.sh"
> >  
> >  THIS_TEST_NAME=`basename $0 .sh`
> >  
> > -- 
> > 2.50.1
> > 
> 



* Re: [PATCH v3 2/7] perf tests: Create a structure for shell tests
  2025-07-26  6:03         ` Namhyung Kim
@ 2025-08-21 11:15           ` Jakub Brnak
  0 siblings, 0 replies; 43+ messages in thread
From: Jakub Brnak @ 2025-08-21 11:15 UTC (permalink / raw)
  To: Namhyung Kim; +Cc: vmolnaro, acme, acme, irogers, linux-perf-users, mpetlan

On Fri, Jul 25, 2025 at 11:03:59PM -0700, Namhyung Kim wrote:
> On Mon, Jul 21, 2025 at 03:26:37PM +0200, Jakub Brnak wrote:
> > From: Veronika Molnarova <vmolnaro@redhat.com>
> > 
> > The general structure of test suites containing test cases has been in
> > place for the C tests for some time, while shell tests were simply put
> > into a flat list without any structure.
> > 
> > Provide the same test suite structure for shell tests. A suite is
> > created for each subdirectory of the 'perf/tests/shell' directory that
> > contains at least one test script; scripts found in deeper levels of
> > subdirectories are merged into the first level of test cases. The name
> > of the test suite is the name of the subdirectory where the test cases
> > are located. Test scripts that are not in any subdirectory keep the
> > current behavior: each becomes a test suite with a single test case.
> > 
> > The new structure of the shell tests for 'perf test list':
> >     77: build id cache operations
> >     78: coresight
> >     78:1: CoreSight / ASM Pure Loop
> >     78:2: CoreSight / Memcpy 16k 10 Threads
> >     78:3: CoreSight / Thread Loop 10 Threads - Check TID
> >     78:4: CoreSight / Thread Loop 2 Threads - Check TID
> >     78:5: CoreSight / Unroll Loop Thread 10
> >     79: daemon operations
> >     80: perf diff tests
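> > 
> > To make the discovery rule above concrete, here is a rough POSIX-shell
> > sketch of it (illustration only, not the C implementation below; it
> > skips details such as the description-line check):
> > 
> >     DIR="tools/perf/tests/shell"
> > 
> >     for entry in "$DIR"/*; do
> >         if [ -d "$entry" ]; then
> >             # Executable scripts below this directory, at any depth,
> >             # become test cases of a single suite named after it.
> >             cases=$(find "$entry" -name '*.sh' -type f -perm -u+x | sort)
> >             if [ -n "$cases" ]; then
> >                 echo "suite: $(basename "$entry")"
> >                 echo "$cases" | sed 's/^/    case: /'
> >             fi
> >         elif [ -f "$entry" ] && [ -x "$entry" ]; then
> >             # Top-level scripts keep the old behavior: one suite, one case.
> >             echo "suite: $(basename "$entry" .sh)"
> >         fi
> >     done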
> 
> I like the idea!  But there are too many coding style issues. Can you
> please follow the kernel coding style?
> 
> Thanks,
> Namhyung

Hi Namhyung,
thank you for the feedback. Right now I am working on v4, which will
resolve the issues you described.

Thanks, 
Jakub
> 
> > 
> > Signed-off-by: Michael Petlan <mpetlan@redhat.com>
> > Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
> > Signed-off-by: Jakub Brnak <jbrnak@redhat.com>
> > ---
> >  tools/perf/tests/tests-scripts.c | 223 +++++++++++++++++++++++++------
> >  tools/perf/tests/tests-scripts.h |   4 +
> >  2 files changed, 189 insertions(+), 38 deletions(-)
> > 
> > diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
> > index f18c4cd337c8..21a6ede330e9 100644
> > --- a/tools/perf/tests/tests-scripts.c
> > +++ b/tools/perf/tests/tests-scripts.c
> > @@ -151,14 +151,45 @@ static char *strdup_check(const char *str)
> >  	return newstr;
> >  }
> >  
> > -static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
> > +/* Free the whole structure of test_suite with its test_cases */
> > +static void free_suite(struct test_suite *suite) {
> > +	if (suite->test_cases){
> > +		int num = 0;
> > +		while (suite->test_cases[num].name){ /* Last case has name set to NULL */
> > +			free((void*) suite->test_cases[num].name);
> > +			free((void*) suite->test_cases[num].desc);
> > +			num++;
> > +		}
> > +		free(suite->test_cases);
> > +	}
> > +	if (suite->desc)
> > +		free((void*) suite->desc);
> > +	if (suite->priv){
> > +		struct shell_info *test_info = suite->priv;
> > +		free((void*) test_info->base_path);
> > +		free(test_info);
> > +	}
> > +
> > +	free(suite);
> > +}
> > +
> > +static int shell_test__run(struct test_suite *test, int subtest)
> >  {
> > -	const char *file = test->priv;
> > +	const char *file;
> >  	int err;
> >  	char *cmd = NULL;
> >  
> > +	/* Get absolute file path */
> > +	if (subtest >= 0) {
> > +		file = test->test_cases[subtest].name;
> > +	}
> > +	else {		/* Single test case */
> > +		file = test->test_cases[0].name;
> > +	}
> > +
> >  	if (asprintf(&cmd, "%s%s", file, verbose ? " -v" : "") < 0)
> >  		return TEST_FAIL;
> > +
> >  	err = system(cmd);
> >  	free(cmd);
> >  	if (!err)
> > @@ -167,63 +198,154 @@ static int shell_test__run(struct test_suite *test, int subtest __maybe_unused)
> >  	return WEXITSTATUS(err) == 2 ? TEST_SKIP : TEST_FAIL;
> >  }
> >  
> > -static void append_script(int dir_fd, const char *name, char *desc,
> > -			  struct test_suite ***result,
> > -			  size_t *result_sz)
> > +static struct test_suite* prepare_test_suite(int dir_fd)
> >  {
> > -	char filename[PATH_MAX], link[128];
> > -	struct test_suite *test_suite, **result_tmp;
> > -	struct test_case *tests;
> > +	char dirpath[PATH_MAX], link[128];
> >  	ssize_t len;
> > -	char *exclusive;
> > +	struct test_suite *test_suite = NULL;
> > +	struct shell_info *test_info;
> >  
> > +	/* Get dir absolute path */
> >  	snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
> > -	len = readlink(link, filename, sizeof(filename));
> > +	len = readlink(link, dirpath, sizeof(dirpath));
> >  	if (len < 0) {
> >  		pr_err("Failed to readlink %s", link);
> > -		return;
> > +		return NULL;
> >  	}
> > -	filename[len++] = '/';
> > -	strcpy(&filename[len], name);
> > +	dirpath[len++] = '/';
> > +	dirpath[len] = '\0';
> >  
> > -	tests = calloc(2, sizeof(*tests));
> > -	if (!tests) {
> > -		pr_err("Out of memory while building script test suite list\n");
> > -		return;
> > -	}
> > -	tests[0].name = strdup_check(name);
> > -	exclusive = strstr(desc, " (exclusive)");
> > -	if (exclusive != NULL) {
> > -		tests[0].exclusive = true;
> > -		exclusive[0] = '\0';
> > -	}
> > -	tests[0].desc = strdup_check(desc);
> > -	tests[0].run_case = shell_test__run;
> >  	test_suite = zalloc(sizeof(*test_suite));
> >  	if (!test_suite) {
> >  		pr_err("Out of memory while building script test suite list\n");
> > -		free(tests);
> > -		return;
> > +		return NULL;
> >  	}
> > -	test_suite->desc = desc;
> > -	test_suite->test_cases = tests;
> > -	test_suite->priv = strdup_check(filename);
> > +
> > +	test_info = zalloc(sizeof(*test_info));
> > +	if (!test_info) {
> > +		pr_err("Out of memory while building script test suite list\n");
> > +		return NULL;
> > +	}
> > +
> > +	test_info->base_path = strdup_check(dirpath);		/* Absolute path to dir */
> > +
> > +	test_suite->priv = test_info;
> > +	test_suite->desc = NULL;
> > +	test_suite->test_cases = NULL;
> > +
> > +	return test_suite;
> > +}
> > +
> > +static void append_suite(struct test_suite ***result,
> > +			  size_t *result_sz, struct test_suite *test_suite)
> > +{
> > +	struct test_suite **result_tmp;
> > +
> >  	/* Realloc is good enough, though we could realloc by chunks, not that
> >  	 * anyone will ever measure performance here */
> >  	result_tmp = realloc(*result, (*result_sz + 1) * sizeof(*result_tmp));
> >  	if (result_tmp == NULL) {
> >  		pr_err("Out of memory while building script test suite list\n");
> > -		free(tests);
> > -		free(test_suite);
> > +		free_suite(test_suite);
> >  		return;
> >  	}
> > +
> >  	/* Add file to end and NULL terminate the struct array */
> >  	*result = result_tmp;
> >  	(*result)[*result_sz] = test_suite;
> >  	(*result_sz)++;
> >  }
> >  
> > -static void append_scripts_in_dir(int dir_fd,
> > +static void append_script_to_suite(int dir_fd, const char *name, char *desc,
> > +					struct test_suite *test_suite, size_t *tc_count)
> > +{
> > +	char file_name[PATH_MAX], link[128];
> > +	struct test_case *tests;
> > +	size_t len;
> > +	char *exclusive;
> > +
> > +	if (!test_suite)
> > +		return;
> > +
> > +	/* Requires an empty test case at the end */
> > +	tests = realloc(test_suite->test_cases, (*tc_count + 2) * sizeof(*tests));
> > +	if (!tests) {
> > +		pr_err("Out of memory while building script test suite list\n");
> > +		return;
> > +	}
> > +
> > +	/* Get path to the test script */
> > +	snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
> > +	len = readlink(link, file_name, sizeof(file_name));
> > +	if (len < 0) {
> > +		pr_err("Failed to readlink %s", link);
> > +		return;
> > +	}
> > +	file_name[len++] = '/';
> > +	strcpy(&file_name[len], name);
> > +
> > +	tests[(*tc_count)].name = strdup_check(file_name);	/* Get path to the script from base dir */
> > +	tests[(*tc_count)].exclusive = false;
> > +	exclusive = strstr(desc, " (exclusive)");
> > +	if (exclusive != NULL) {
> > +		tests[(*tc_count)].exclusive = true;
> > +		exclusive[0] = '\0';
> > +	}
> > +	tests[(*tc_count)].desc = desc;
> > +	tests[(*tc_count)].skip_reason = NULL;	/* Unused */
> > +	tests[(*tc_count)++].run_case = shell_test__run;
> > +
> > +	tests[(*tc_count)].name = NULL;		/* End the test cases */
> > +
> > +	test_suite->test_cases = tests;
> > +}
> > +
> > +static void append_scripts_in_subdir(int dir_fd,
> > +				  struct test_suite *suite,
> > +				  size_t *tc_count)
> > +{
> > +	struct dirent **entlist;
> > +	struct dirent *ent;
> > +	int n_dirs, i;
> > +
> > +	/* List files, sorted by alpha */
> > +	n_dirs = scandirat(dir_fd, ".", &entlist, NULL, alphasort);
> > +	if (n_dirs == -1)
> > +		return;
> > +	for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
> > +		int fd;
> > +
> > +		if (ent->d_name[0] == '.')
> > +			continue; /* Skip hidden files */
> > +		if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
> > +			char *desc = shell_test__description(dir_fd, ent->d_name);
> > +
> > +			if (desc) /* It has a desc line - valid script */
> > +				append_script_to_suite(dir_fd, ent->d_name, desc, suite, tc_count);
> > +			continue;
> > +		}
> > +
> > +		if (ent->d_type != DT_DIR) {
> > +			struct stat st;
> > +
> > +			if (ent->d_type != DT_UNKNOWN)
> > +				continue;
> > +			fstatat(dir_fd, ent->d_name, &st, 0);
> > +			if (!S_ISDIR(st.st_mode))
> > +				continue;
> > +		}
> > +
> > +		fd = openat(dir_fd, ent->d_name, O_PATH);
> > +
> > +		/* Recurse into the dir */
> > +		append_scripts_in_subdir(fd, suite, tc_count);
> > +	}
> > +	for (i = 0; i < n_dirs; i++) /* Clean up */
> > +		zfree(&entlist[i]);
> > +	free(entlist);
> > +}
> > +
> > +static void append_suits_in_dir(int dir_fd,
> >  				  struct test_suite ***result,
> >  				  size_t *result_sz)
> >  {
> > @@ -237,16 +359,27 @@ static void append_scripts_in_dir(int dir_fd,
> >  		return;
> >  	for (i = 0; i < n_dirs && (ent = entlist[i]); i++) {
> >  		int fd;
> > +		struct test_suite *test_suite;
> > +		size_t cases_count = 0;
> >  
> >  		if (ent->d_name[0] == '.')
> >  			continue; /* Skip hidden files */
> >  		if (is_test_script(dir_fd, ent->d_name)) { /* It's a test */
> >  			char *desc = shell_test__description(dir_fd, ent->d_name);
> >  
> > -			if (desc) /* It has a desc line - valid script */
> > -				append_script(dir_fd, ent->d_name, desc, result, result_sz);
> > +			if (desc) { /* It has a desc line - valid script */
> > +				test_suite = prepare_test_suite(dir_fd); /* Create a test suite with a single test case */
> > +				append_script_to_suite(dir_fd, ent->d_name, desc, test_suite, &cases_count);
> > +				test_suite->desc = strdup_check(desc);
> > +
> > +				if (cases_count)
> > +					append_suite(result, result_sz, test_suite);
> > +				else /* Wasn't able to create the test case */
> > +					free_suite(test_suite);
> > +			}
> >  			continue;
> >  		}
> > +
> >  		if (ent->d_type != DT_DIR) {
> >  			struct stat st;
> >  
> > @@ -258,8 +391,22 @@ static void append_scripts_in_dir(int dir_fd,
> >  		}
> >  		if (strncmp(ent->d_name, "base_", 5) == 0)
> >  			continue; /* Skip scripts that have a separate driver. */
> > +
> > +		/* Scan subdir for test cases*/
> >  		fd = openat(dir_fd, ent->d_name, O_PATH);
> > -		append_scripts_in_dir(fd, result, result_sz);
> > +		test_suite = prepare_test_suite(fd);	/* Prepare a testsuite with its path */
> > +		if (!test_suite)
> > +			continue;
> > +
> > +		append_scripts_in_subdir(fd, test_suite, &cases_count);
> > +		if (cases_count == 0){
> > +			free_suite(test_suite);
> > +			continue;
> > +		}
> > +
> > +		test_suite->desc = strdup_check(ent->d_name);	/* If no setup, set name to the directory */
> > +
> > +		append_suite(result, result_sz, test_suite);
> >  		close(fd);
> >  	}
> >  	for (i = 0; i < n_dirs; i++) /* Clean up */
> > @@ -278,7 +425,7 @@ struct test_suite **create_script_test_suites(void)
> >  	 * length array.
> >  	 */
> >  	if (dir_fd >= 0)
> > -		append_scripts_in_dir(dir_fd, &result, &result_sz);
> > +		append_suits_in_dir(dir_fd, &result, &result_sz);
> >  
> >  	result_tmp = realloc(result, (result_sz + 1) * sizeof(*result_tmp));
> >  	if (result_tmp == NULL) {
> > diff --git a/tools/perf/tests/tests-scripts.h b/tools/perf/tests/tests-scripts.h
> > index b553ad26ea17..60a1a19a45c9 100644
> > --- a/tools/perf/tests/tests-scripts.h
> > +++ b/tools/perf/tests/tests-scripts.h
> > @@ -4,6 +4,10 @@
> >  
> >  #include "tests.h"
> >  
> > +struct shell_info {
> > +	const char *base_path;
> > +};
> > +
> >  struct test_suite **create_script_test_suites(void);
> >  
> >  #endif /* TESTS_SCRIPTS_H */
> > -- 
> > 2.50.1
> > 
> 



end of thread, other threads:[~2025-08-21 11:16 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-12-20 22:03 [PATCH 00/10] Introduce structure for shell tests vmolnaro
2024-12-20 22:03 ` [PATCH 01/10] perf test perftool_testsuite: Add missing description vmolnaro
2024-12-20 22:03 ` [PATCH 02/10] perf test perftool_testsuite: Return correct value for skipping vmolnaro
2024-12-20 22:03 ` [PATCH 03/10] perf test perftool_testsuite: Use absolute paths vmolnaro
2024-12-20 22:03 ` [PATCH 04/10] perf tests: Create a structure for shell tests vmolnaro
2024-12-20 22:03 ` [PATCH 05/10] perf testsuite: Fix perf-report tests installation vmolnaro
2024-12-20 22:03 ` [PATCH 06/10] perf test: Provide setup for the shell test suite vmolnaro
2024-12-20 22:03 ` [PATCH 07/10] perftool-testsuite: Add empty setup for base_probe vmolnaro
2024-12-20 22:03 ` [PATCH 08/10] perf test: Introduce storing logs for shell tests vmolnaro
2024-12-20 22:03 ` [PATCH 09/10] perf test: Format log directories " vmolnaro
2024-12-20 22:03 ` [PATCH 10/10] perf test: Remove perftool drivers vmolnaro
2025-01-13 15:24 ` [PATCH 00/10] Introduce structure for shell tests Arnaldo Carvalho de Melo
2025-01-13 18:25   ` [PATCH v2 " vmolnaro
2025-07-21 13:26     ` [PATCH v3 0/7] " Jakub Brnak
2025-07-21 13:26       ` [PATCH v3 1/7] perf test perftool_testsuite: Use absolute paths Jakub Brnak
2025-07-26  6:00         ` Namhyung Kim
2025-08-21 11:01           ` Jakub Brnak
2025-07-21 13:26       ` [PATCH v3 2/7] perf tests: Create a structure for shell tests Jakub Brnak
2025-07-21 19:39         ` Ian Rogers
2025-07-26  6:03         ` Namhyung Kim
2025-08-21 11:15           ` Jakub Brnak
2025-07-21 13:26       ` [PATCH v3 3/7] perf test: Provide setup for the shell test suite Jakub Brnak
2025-07-26  6:07         ` Namhyung Kim
2025-08-04 14:39           ` Michael Petlan
2025-07-21 13:26       ` [PATCH v3 4/7] perftool-testsuite: Add empty setup for base_probe Jakub Brnak
2025-07-21 13:26       ` [PATCH v3 5/7] perf test: Introduce storing logs for shell tests Jakub Brnak
2025-07-21 19:43         ` Ian Rogers
2025-07-26  6:17         ` Namhyung Kim
2025-07-21 13:26       ` [PATCH v3 6/7] perf test: Format log directories " Jakub Brnak
2025-07-26  6:21         ` Namhyung Kim
2025-07-21 13:26       ` [PATCH v3 7/7] perf test: Remove perftool drivers Jakub Brnak
2025-07-21 19:46         ` Ian Rogers
2025-07-31 12:54       ` [PATCH v3 0/7] Introduce structure for shell tests tejas05
2025-01-13 18:25   ` [PATCH v2 01/10] perf test perftool_testsuite: Add missing description vmolnaro
2025-01-13 18:25   ` [PATCH v2 02/10] perf test perftool_testsuite: Return correct value for skipping vmolnaro
2025-01-13 18:25   ` [PATCH v2 03/10] perf test perftool_testsuite: Use absolute paths vmolnaro
2025-01-13 18:25   ` [PATCH v2 04/10] perf tests: Create a structure for shell tests vmolnaro
2025-01-13 18:26   ` [PATCH v2 05/10] perf testsuite: Fix perf-report tests installation vmolnaro
2025-01-13 18:26   ` [PATCH v2 06/10] perf test: Provide setup for the shell test suite vmolnaro
2025-01-13 18:26   ` [PATCH v2 07/10] perftool-testsuite: Add empty setup for base_probe vmolnaro
2025-01-13 18:26   ` [PATCH v2 08/10] perf test: Introduce storing logs for shell tests vmolnaro
2025-01-13 18:26   ` [PATCH v2 09/10] perf test: Format log directories " vmolnaro
2025-01-13 18:26   ` [PATCH v2 10/10] perf test: Remove perftool drivers vmolnaro
