* [PATCH v2 1/8] tools subcmd: Add non-waitpid check_if_command_finished()
2024-10-11 22:03 [PATCH v2 0/8] Run tests in parallel showing number of tests running Ian Rogers
@ 2024-10-11 22:03 ` Ian Rogers
2024-10-11 22:03 ` [PATCH v2 2/8] perf test: Display number of active running tests Ian Rogers
` (7 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Ian Rogers @ 2024-10-11 22:03 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
Using waitpid can cause stdout/stderr of the child process to be
lost. Use Linux's /proc/<pid>/status file to determine whether the
process has reached the zombie state. Use the 'status' file rather
than 'stat' to avoid issues around skipping the process name.
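The detection can be sketched as a small standalone helper outside the perf tree (a sketch only; the actual implementation lives in tools/lib/subcmd/run-command.c, and the helper name here is hypothetical):

```c
#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Standalone sketch of the approach: read /proc/<pid>/status and look
 * for the "State:" line. 'Z' means the child is a zombie, i.e. it has
 * exited but has not yet been reaped with waitpid().
 */
static bool proc_is_zombie(int pid)
{
	char filename[64];
	char line[256];
	FILE *f;
	bool zombie = false;

	snprintf(filename, sizeof(filename), "/proc/%d/status", pid);
	f = fopen(filename, "r");
	if (f == NULL)
		return true; /* No such process: assume it already finished. */

	while (fgets(line, sizeof(line), f) != NULL) {
		char *p = line;

		if (strncmp(p, "State:", 6))
			continue;
		p += 6;
		while (isspace((unsigned char)*p))
			p++;
		zombie = (*p == 'Z');
		break;
	}
	fclose(f);
	return zombie;
}
```

Because only the status file is read, the child's pipes stay open and buffered output is not discarded the way a premature waitpid() would discard it.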
Tested-by: James Clark <james.clark@linaro.org>
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/lib/subcmd/run-command.c | 33 +++++++++++++++++++++++++++++++++
1 file changed, 33 insertions(+)
diff --git a/tools/lib/subcmd/run-command.c b/tools/lib/subcmd/run-command.c
index 4e3a557a2f37..0a764c25c384 100644
--- a/tools/lib/subcmd/run-command.c
+++ b/tools/lib/subcmd/run-command.c
@@ -2,6 +2,7 @@
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
+#include <ctype.h>
#include <fcntl.h>
#include <string.h>
#include <linux/string.h>
@@ -217,8 +218,40 @@ static int wait_or_whine(struct child_process *cmd, bool block)
int check_if_command_finished(struct child_process *cmd)
{
+#ifdef __linux__
+ char filename[FILENAME_MAX + 12];
+ char status_line[256];
+ FILE *status_file;
+
+ /*
+ * Check by reading /proc/<pid>/status as calling waitpid causes
+ * stdout/stderr to be closed and data lost.
+ */
+ sprintf(filename, "/proc/%d/status", cmd->pid);
+ status_file = fopen(filename, "r");
+ if (status_file == NULL) {
+ /* Open failed assume finish_command was called. */
+ return true;
+ }
+ while (fgets(status_line, sizeof(status_line), status_file) != NULL) {
+ char *p;
+
+ if (strncmp(status_line, "State:", 6))
+ continue;
+
+ fclose(status_file);
+ p = status_line + 6;
+ while (isspace(*p))
+ p++;
+ return *p == 'Z' ? 1 : 0;
+ }
+ /* Read failed assume finish_command was called. */
+ fclose(status_file);
+ return 1;
+#else
wait_or_whine(cmd, /*block=*/false);
return cmd->finished;
+#endif
}
int finish_command(struct child_process *cmd)
--
2.47.0.rc1.288.g06298d1525-goog
^ permalink raw reply related [flat|nested] 13+ messages in thread
* [PATCH v2 2/8] perf test: Display number of active running tests
2024-10-11 22:03 [PATCH v2 0/8] Run tests in parallel showing number of tests running Ian Rogers
2024-10-11 22:03 ` [PATCH v2 1/8] tools subcmd: Add non-waitpid check_if_command_finished() Ian Rogers
@ 2024-10-11 22:03 ` Ian Rogers
2024-10-11 22:03 ` [PATCH v2 3/8] perf test: Reduce scope of parallel variable Ian Rogers
` (6 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Ian Rogers @ 2024-10-11 22:03 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
Before polling or sleeping to wait for a test to complete, print out
": Running (<num> active)" where the number of active tests is
determined by iterating over the tests and seeing which return false
from check_if_command_finished(). The line is only erased and
reprinted when the number of running tests changes, to avoid the line
flickering excessively. This lets a user see that a test is running
and, in parallel mode, how many tests are still waiting to complete.
If color mode is disabled then the "Running" message isn't displayed,
as deleting the line isn't reliable.
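The erase-and-reprint behavior relies on a VT100-style escape sequence (the one added to color.h below). A minimal standalone sketch, with a hypothetical status_line() helper to show the formatting; it assumes a VT100-compatible terminal:

```c
#include <stdio.h>
#include <string.h>

/* Cursor up one line, erase the entire line, return to column 0. */
#define DELETE_LINE "\033[A\033[2K\r"

/*
 * Hypothetical helper: format the progress suffix printed after the
 * test name, mirroring the TEST_RUNNING case in print_test_result().
 */
static const char *status_line(int running)
{
	static char buf[64];

	snprintf(buf, sizeof(buf), " Running (%d active)", running);
	return buf;
}
```

On a capable terminal, writing DELETE_LINE to stderr removes the previously printed status line, so the final " Ok"/" FAILED!" result appears in its place rather than below it.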
Tested-by: James Clark <james.clark@linaro.org>
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/tests/builtin-test.c | 93 ++++++++++++++++++++++-----------
tools/perf/util/color.h | 1 +
2 files changed, 63 insertions(+), 31 deletions(-)
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 470a9709427d..bf03fb42420b 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -242,7 +242,10 @@ static int run_test_child(struct child_process *process)
return -err;
}
-static int print_test_result(struct test_suite *t, int i, int subtest, int result, int width)
+#define TEST_RUNNING -3
+
+static int print_test_result(struct test_suite *t, int i, int subtest, int result, int width,
+ int running)
{
if (has_subtests(t)) {
int subw = width > 2 ? width - 2 : width;
@@ -252,6 +255,9 @@ static int print_test_result(struct test_suite *t, int i, int subtest, int resul
pr_info("%3d: %-*s:", i + 1, width, test_description(t, subtest));
switch (result) {
+ case TEST_RUNNING:
+ color_fprintf(stderr, PERF_COLOR_YELLOW, " Running (%d active)\n", running);
+ break;
case TEST_OK:
pr_info(" Ok\n");
break;
@@ -273,14 +279,17 @@ static int print_test_result(struct test_suite *t, int i, int subtest, int resul
return 0;
}
-static int finish_test(struct child_test *child_test, int width)
+static int finish_test(struct child_test **child_tests, int running_test, int child_test_num,
+ int width)
{
+ struct child_test *child_test = child_tests[running_test];
struct test_suite *t = child_test->test;
int i = child_test->test_num;
int subi = child_test->subtest;
int err = child_test->process.err;
- bool err_done = err <= 0;
+ bool err_done = false;
struct strbuf err_output = STRBUF_INIT;
+ int last_running = -1;
int ret;
/*
@@ -294,7 +303,7 @@ static int finish_test(struct child_test *child_test, int width)
* Busy loop reading from the child's stdout/stderr that are set to be
* non-blocking until EOF.
*/
- if (!err_done)
+ if (err > 0)
fcntl(err, F_SETFL, O_NONBLOCK);
if (verbose > 1) {
if (has_subtests(t))
@@ -308,38 +317,60 @@ static int finish_test(struct child_test *child_test, int width)
.events = POLLIN | POLLERR | POLLHUP | POLLNVAL,
},
};
- char buf[512];
- ssize_t len;
-
- /* Poll to avoid excessive spinning, timeout set for 100ms. */
- poll(pfds, ARRAY_SIZE(pfds), /*timeout=*/100);
- if (!err_done && pfds[0].revents) {
- errno = 0;
- len = read(err, buf, sizeof(buf) - 1);
-
- if (len <= 0) {
- err_done = errno != EAGAIN;
- } else {
- buf[len] = '\0';
- if (verbose > 1)
- fprintf(stdout, "%s", buf);
- else
+ if (perf_use_color_default) {
+ int running = 0;
+
+ for (int y = running_test; y < child_test_num; y++) {
+ if (check_if_command_finished(&child_tests[y]->process) == 0)
+ running++;
+ }
+ if (running != last_running) {
+ if (last_running != -1) {
+ /*
+ * Erase "Running (.. active)" line
+ * printed before poll/sleep.
+ */
+ fprintf(debug_file(), PERF_COLOR_DELETE_LINE);
+ }
+ print_test_result(t, i, subi, TEST_RUNNING, width, running);
+ last_running = running;
+ }
+ }
+
+ err_done = true;
+ if (err <= 0) {
+ /* No child stderr to poll, sleep for 10ms for child to complete. */
+ usleep(10 * 1000);
+ } else {
+ /* Poll to avoid excessive spinning, timeout set for 100ms. */
+ poll(pfds, ARRAY_SIZE(pfds), /*timeout=*/100);
+ if (pfds[0].revents) {
+ char buf[512];
+ ssize_t len;
+
+ len = read(err, buf, sizeof(buf) - 1);
+
+ if (len > 0) {
+ err_done = false;
+ buf[len] = '\0';
strbuf_addstr(&err_output, buf);
+ }
}
}
+ if (err_done)
+ err_done = check_if_command_finished(&child_test->process);
+ }
+ if (perf_use_color_default && last_running != -1) {
+ /* Erase "Running (.. active)" line printed before poll/sleep. */
+ fprintf(debug_file(), PERF_COLOR_DELETE_LINE);
}
/* Clean up child process. */
ret = finish_command(&child_test->process);
- if (verbose == 1 && ret == TEST_FAIL) {
- /* Add header for test that was skipped above. */
- if (has_subtests(t))
- pr_info("%3d.%1d: %s:\n", i + 1, subi + 1, test_description(t, subi));
- else
- pr_info("%3d: %s:\n", i + 1, test_description(t, -1));
+ if (verbose > 1 || (verbose == 1 && ret == TEST_FAIL))
fprintf(stderr, "%s", err_output.buf);
- }
+
strbuf_release(&err_output);
- print_test_result(t, i, subi, ret, width);
+ print_test_result(t, i, subi, ret, width, /*running=*/0);
if (err > 0)
close(err);
return 0;
@@ -355,7 +386,7 @@ static int start_test(struct test_suite *test, int i, int subi, struct child_tes
pr_debug("--- start ---\n");
err = test_function(test, subi)(test, subi);
pr_debug("---- end ----\n");
- print_test_result(test, i, subi, err, width);
+ print_test_result(test, i, subi, err, width, /*running=*/0);
return 0;
}
@@ -380,7 +411,7 @@ static int start_test(struct test_suite *test, int i, int subi, struct child_tes
err = start_command(&(*child)->process);
if (err || !sequential)
return err;
- return finish_test(*child, width);
+ return finish_test(child, /*running_test=*/0, /*child_test_num=*/1, width);
}
#define for_each_test(j, k, t) \
@@ -465,7 +496,7 @@ static int __cmd_test(int argc, const char *argv[], struct intlist *skiplist)
}
for (i = 0; i < child_test_num; i++) {
if (!sequential) {
- int ret = finish_test(child_tests[i], width);
+ int ret = finish_test(child_tests, i, child_test_num, width);
if (ret)
return ret;
diff --git a/tools/perf/util/color.h b/tools/perf/util/color.h
index 01f7bed21c9b..4b9f8d5d4439 100644
--- a/tools/perf/util/color.h
+++ b/tools/perf/util/color.h
@@ -22,6 +22,7 @@
#define MIN_GREEN 0.5
#define MIN_RED 5.0
+#define PERF_COLOR_DELETE_LINE "\033[A\33[2K\r"
/*
* This variable stores the value of color.ui
*/
--
2.47.0.rc1.288.g06298d1525-goog
* [PATCH v2 3/8] perf test: Reduce scope of parallel variable
2024-10-11 22:03 [PATCH v2 0/8] Run tests in parallel showing number of tests running Ian Rogers
2024-10-11 22:03 ` [PATCH v2 1/8] tools subcmd: Add non-waitpid check_if_command_finished() Ian Rogers
2024-10-11 22:03 ` [PATCH v2 2/8] perf test: Display number of active running tests Ian Rogers
@ 2024-10-11 22:03 ` Ian Rogers
2024-10-11 22:03 ` [PATCH v2 4/8] perf test: Avoid list test blocking on writing to stdout Ian Rogers
` (5 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Ian Rogers @ 2024-10-11 22:03 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
The variable duplicates 'sequential' but is only used for command-line
argument processing. Reduce its scope to make the behavior clearer.
Tested-by: James Clark <james.clark@linaro.org>
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/tests/builtin-test.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index bf03fb42420b..88ffc0feb51e 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -41,9 +41,6 @@
static bool dont_fork;
/* Don't fork the tests in parallel and wait for their completion. */
static bool sequential = true;
-/* Do it in parallel, lacks infrastructure to avoid running tests that clash for resources,
- * So leave it as the developers choice to enable while working on the needed infra */
-static bool parallel;
const char *dso_to_test;
const char *test_objdump_path = "objdump";
@@ -565,6 +562,12 @@ int cmd_test(int argc, const char **argv)
};
const char *skip = NULL;
const char *workload = NULL;
+ /*
+ * Run tests in parallel, lacks infrastructure to avoid running tests
+ * that clash for resources, So leave it as the developers choice to
+ * enable while working on the needed infra.
+ */
+ bool parallel = false;
const struct option test_options[] = {
OPT_STRING('s', "skip", &skip, "tests", "tests to skip"),
OPT_INCR('v', "verbose", &verbose,
--
2.47.0.rc1.288.g06298d1525-goog
* [PATCH v2 4/8] perf test: Avoid list test blocking on writing to stdout
2024-10-11 22:03 [PATCH v2 0/8] Run tests in parallel showing number of tests running Ian Rogers
` (2 preceding siblings ...)
2024-10-11 22:03 ` [PATCH v2 3/8] perf test: Reduce scope of parallel variable Ian Rogers
@ 2024-10-11 22:03 ` Ian Rogers
2024-10-11 22:03 ` [PATCH v2 5/8] perf test: Tag parallel failing shell tests with "(exclusive)" Ian Rogers
` (4 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Ian Rogers @ 2024-10-11 22:03 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
Python's json.tool echoes the input JSON to stdout. Redirect it to
/dev/null to avoid blocking on stdout writes.
Tested-by: James Clark <james.clark@linaro.org>
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/tests/shell/list.sh | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/tools/perf/tests/shell/list.sh b/tools/perf/tests/shell/list.sh
index 8a868ae64560..76a9846cff22 100755
--- a/tools/perf/tests/shell/list.sh
+++ b/tools/perf/tests/shell/list.sh
@@ -24,8 +24,11 @@ trap trap_cleanup EXIT TERM INT
test_list_json() {
echo "Json output test"
+ # Generate perf list json output into list_output file.
perf list -j -o "${list_output}"
- $PYTHON -m json.tool "${list_output}"
+ # Validate the json using python, redirect the json copy to /dev/null as
+ # otherwise the test may block writing to stdout.
+ $PYTHON -m json.tool "${list_output}" /dev/null
echo "Json output test [Success]"
}
--
2.47.0.rc1.288.g06298d1525-goog
* [PATCH v2 5/8] perf test: Tag parallel failing shell tests with "(exclusive)"
2024-10-11 22:03 [PATCH v2 0/8] Run tests in parallel showing number of tests running Ian Rogers
` (3 preceding siblings ...)
2024-10-11 22:03 ` [PATCH v2 4/8] perf test: Avoid list test blocking on writing to stdout Ian Rogers
@ 2024-10-11 22:03 ` Ian Rogers
2024-10-11 22:03 ` [PATCH v2 6/8] perf test: Add a signal handler around running a test Ian Rogers
` (3 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Ian Rogers @ 2024-10-11 22:03 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
Some shell tests compete for resources and so can't run concurrently
with other tests; tag such tests with "(exclusive)". The term stems
from the shared/exclusive locking convention, describing how these
tests run as if holding an exclusive lock.
For ARM/coresight tests:
Suggested-by: James Clark <james.clark@linaro.org>
Tested-by: James Clark <james.clark@linaro.org>
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/tests/shell/coresight/asm_pure_loop.sh | 2 +-
tools/perf/tests/shell/coresight/memcpy_thread_16k_10.sh | 2 +-
tools/perf/tests/shell/coresight/thread_loop_check_tid_10.sh | 2 +-
tools/perf/tests/shell/coresight/thread_loop_check_tid_2.sh | 2 +-
tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh | 2 +-
tools/perf/tests/shell/perftool-testsuite_report.sh | 2 +-
tools/perf/tests/shell/record.sh | 2 +-
tools/perf/tests/shell/record_lbr.sh | 2 +-
tools/perf/tests/shell/record_offcpu.sh | 2 +-
tools/perf/tests/shell/stat_all_pmu.sh | 2 +-
tools/perf/tests/shell/test_arm_coresight.sh | 2 +-
tools/perf/tests/shell/test_arm_coresight_disasm.sh | 2 +-
tools/perf/tests/shell/test_arm_spe.sh | 2 +-
tools/perf/tests/shell/test_intel_pt.sh | 2 +-
tools/perf/tests/shell/test_stat_intel_tpebs.sh | 2 +-
15 files changed, 15 insertions(+), 15 deletions(-)
diff --git a/tools/perf/tests/shell/coresight/asm_pure_loop.sh b/tools/perf/tests/shell/coresight/asm_pure_loop.sh
index 2d65defb7e0f..c63bc8c73e26 100755
--- a/tools/perf/tests/shell/coresight/asm_pure_loop.sh
+++ b/tools/perf/tests/shell/coresight/asm_pure_loop.sh
@@ -1,5 +1,5 @@
#!/bin/sh -e
-# CoreSight / ASM Pure Loop
+# CoreSight / ASM Pure Loop (exclusive)
# SPDX-License-Identifier: GPL-2.0
# Carsten Haitzler <carsten.haitzler@arm.com>, 2021
diff --git a/tools/perf/tests/shell/coresight/memcpy_thread_16k_10.sh b/tools/perf/tests/shell/coresight/memcpy_thread_16k_10.sh
index ddcc9bb850f5..8e29630957c8 100755
--- a/tools/perf/tests/shell/coresight/memcpy_thread_16k_10.sh
+++ b/tools/perf/tests/shell/coresight/memcpy_thread_16k_10.sh
@@ -1,5 +1,5 @@
#!/bin/sh -e
-# CoreSight / Memcpy 16k 10 Threads
+# CoreSight / Memcpy 16k 10 Threads (exclusive)
# SPDX-License-Identifier: GPL-2.0
# Carsten Haitzler <carsten.haitzler@arm.com>, 2021
diff --git a/tools/perf/tests/shell/coresight/thread_loop_check_tid_10.sh b/tools/perf/tests/shell/coresight/thread_loop_check_tid_10.sh
index 2ce5e139b2fd..0c4c82a1c8e1 100755
--- a/tools/perf/tests/shell/coresight/thread_loop_check_tid_10.sh
+++ b/tools/perf/tests/shell/coresight/thread_loop_check_tid_10.sh
@@ -1,5 +1,5 @@
#!/bin/sh -e
-# CoreSight / Thread Loop 10 Threads - Check TID
+# CoreSight / Thread Loop 10 Threads - Check TID (exclusive)
# SPDX-License-Identifier: GPL-2.0
# Carsten Haitzler <carsten.haitzler@arm.com>, 2021
diff --git a/tools/perf/tests/shell/coresight/thread_loop_check_tid_2.sh b/tools/perf/tests/shell/coresight/thread_loop_check_tid_2.sh
index 3ad9498753d7..d3aea9fc6ced 100755
--- a/tools/perf/tests/shell/coresight/thread_loop_check_tid_2.sh
+++ b/tools/perf/tests/shell/coresight/thread_loop_check_tid_2.sh
@@ -1,5 +1,5 @@
#!/bin/sh -e
-# CoreSight / Thread Loop 2 Threads - Check TID
+# CoreSight / Thread Loop 2 Threads - Check TID (exclusive)
# SPDX-License-Identifier: GPL-2.0
# Carsten Haitzler <carsten.haitzler@arm.com>, 2021
diff --git a/tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh b/tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh
index 4fbb4a29aad3..7429d3a2ae43 100755
--- a/tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh
+++ b/tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh
@@ -1,5 +1,5 @@
#!/bin/sh -e
-# CoreSight / Unroll Loop Thread 10
+# CoreSight / Unroll Loop Thread 10 (exclusive)
# SPDX-License-Identifier: GPL-2.0
# Carsten Haitzler <carsten.haitzler@arm.com>, 2021
diff --git a/tools/perf/tests/shell/perftool-testsuite_report.sh b/tools/perf/tests/shell/perftool-testsuite_report.sh
index 973012ce92a7..a8cf75b4e77e 100755
--- a/tools/perf/tests/shell/perftool-testsuite_report.sh
+++ b/tools/perf/tests/shell/perftool-testsuite_report.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# perftool-testsuite_report
+# perftool-testsuite_report (exclusive)
# SPDX-License-Identifier: GPL-2.0
test -d "$(dirname "$0")/base_report" || exit 2
diff --git a/tools/perf/tests/shell/record.sh b/tools/perf/tests/shell/record.sh
index 8d6366d96883..f7d8c5b243a4 100755
--- a/tools/perf/tests/shell/record.sh
+++ b/tools/perf/tests/shell/record.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# perf record tests
+# perf record tests (exclusive)
# SPDX-License-Identifier: GPL-2.0
set -e
diff --git a/tools/perf/tests/shell/record_lbr.sh b/tools/perf/tests/shell/record_lbr.sh
index 32314641217e..8d750ee631f8 100755
--- a/tools/perf/tests/shell/record_lbr.sh
+++ b/tools/perf/tests/shell/record_lbr.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# perf record LBR tests
+# perf record LBR tests (exclusive)
# SPDX-License-Identifier: GPL-2.0
set -e
diff --git a/tools/perf/tests/shell/record_offcpu.sh b/tools/perf/tests/shell/record_offcpu.sh
index 67c925f3a15a..678947fe69ee 100755
--- a/tools/perf/tests/shell/record_offcpu.sh
+++ b/tools/perf/tests/shell/record_offcpu.sh
@@ -1,5 +1,5 @@
#!/bin/sh
-# perf record offcpu profiling tests
+# perf record offcpu profiling tests (exclusive)
# SPDX-License-Identifier: GPL-2.0
set -e
diff --git a/tools/perf/tests/shell/stat_all_pmu.sh b/tools/perf/tests/shell/stat_all_pmu.sh
index 42456d89c5da..8b148b300be1 100755
--- a/tools/perf/tests/shell/stat_all_pmu.sh
+++ b/tools/perf/tests/shell/stat_all_pmu.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# perf all PMU test
+# perf all PMU test (exclusive)
# SPDX-License-Identifier: GPL-2.0
set -e
diff --git a/tools/perf/tests/shell/test_arm_coresight.sh b/tools/perf/tests/shell/test_arm_coresight.sh
index 3302ea0b9672..534145778d7b 100755
--- a/tools/perf/tests/shell/test_arm_coresight.sh
+++ b/tools/perf/tests/shell/test_arm_coresight.sh
@@ -1,5 +1,5 @@
#!/bin/sh
-# Check Arm CoreSight trace data recording and synthesized samples
+# Check Arm CoreSight trace data recording and synthesized samples (exclusive)
# Uses the 'perf record' to record trace data with Arm CoreSight sinks;
# then verify if there have any branch samples and instruction samples
diff --git a/tools/perf/tests/shell/test_arm_coresight_disasm.sh b/tools/perf/tests/shell/test_arm_coresight_disasm.sh
index af63e3757cb0..353d0a24f976 100755
--- a/tools/perf/tests/shell/test_arm_coresight_disasm.sh
+++ b/tools/perf/tests/shell/test_arm_coresight_disasm.sh
@@ -1,5 +1,5 @@
#!/bin/sh
-# Check Arm CoreSight disassembly script completes without errors
+# Check Arm CoreSight disassembly script completes without errors (exclusive)
# SPDX-License-Identifier: GPL-2.0
# The disassembly script reconstructs ranges of instructions and gives these to objdump to
diff --git a/tools/perf/tests/shell/test_arm_spe.sh b/tools/perf/tests/shell/test_arm_spe.sh
index 03d5c7d12ee5..be0fcdadeea8 100755
--- a/tools/perf/tests/shell/test_arm_spe.sh
+++ b/tools/perf/tests/shell/test_arm_spe.sh
@@ -1,5 +1,5 @@
#!/bin/sh
-# Check Arm SPE trace data recording and synthesized samples
+# Check Arm SPE trace data recording and synthesized samples (exclusive)
# Uses the 'perf record' to record trace data of Arm SPE events;
# then verify if any SPE event samples are generated by SPE with
diff --git a/tools/perf/tests/shell/test_intel_pt.sh b/tools/perf/tests/shell/test_intel_pt.sh
index 723ec501f99a..660991d17607 100755
--- a/tools/perf/tests/shell/test_intel_pt.sh
+++ b/tools/perf/tests/shell/test_intel_pt.sh
@@ -1,5 +1,5 @@
#!/bin/sh
-# Miscellaneous Intel PT testing
+# Miscellaneous Intel PT testing (exclusive)
# SPDX-License-Identifier: GPL-2.0
set -e
diff --git a/tools/perf/tests/shell/test_stat_intel_tpebs.sh b/tools/perf/tests/shell/test_stat_intel_tpebs.sh
index 9a11f42d153c..f95fc64bf0a7 100755
--- a/tools/perf/tests/shell/test_stat_intel_tpebs.sh
+++ b/tools/perf/tests/shell/test_stat_intel_tpebs.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# test Intel TPEBS counting mode
+# test Intel TPEBS counting mode (exclusive)
# SPDX-License-Identifier: GPL-2.0
set -e
--
2.47.0.rc1.288.g06298d1525-goog
* [PATCH v2 6/8] perf test: Add a signal handler around running a test
2024-10-11 22:03 [PATCH v2 0/8] Run tests in parallel showing number of tests running Ian Rogers
` (4 preceding siblings ...)
2024-10-11 22:03 ` [PATCH v2 5/8] perf test: Tag parallel failing shell tests with "(exclusive)" Ian Rogers
@ 2024-10-11 22:03 ` Ian Rogers
2024-10-11 22:03 ` [PATCH v2 7/8] perf test: Run parallel tests in two passes Ian Rogers
` (2 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Ian Rogers @ 2024-10-11 22:03 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
Add a signal handler around running a test. If a signal occurs during
the test, a siglongjmp unwinds the stack and output is flushed. The
global run_test_jmp_buf is safe because it is either unique per forked
child or, during sequential execution, never shared.
Tested-by: James Clark <james.clark@linaro.org>
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/tests/builtin-test.c | 28 +++++++++++++++++++++++++++-
1 file changed, 27 insertions(+), 1 deletion(-)
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 88ffc0feb51e..eca2e765beb5 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -8,6 +8,7 @@
#include <errno.h>
#include <poll.h>
#include <unistd.h>
+#include <setjmp.h>
#include <string.h>
#include <stdlib.h>
#include <sys/types.h>
@@ -226,16 +227,41 @@ struct child_test {
int subtest;
};
+static jmp_buf run_test_jmp_buf;
+
+static void child_test_sig_handler(int sig)
+{
+ siglongjmp(run_test_jmp_buf, sig);
+}
+
static int run_test_child(struct child_process *process)
{
- struct child_test *child = container_of(process, struct child_test, process);
+ const int signals[] = {
+ SIGABRT, SIGBUS, SIGFPE, SIGILL, SIGINT, SIGPIPE, SIGQUIT, SIGSEGV, SIGTERM,
+ };
+ static struct child_test *child;
int err;
+ err = sigsetjmp(run_test_jmp_buf, 1);
+ if (err) {
+ fprintf(stderr, "\n---- unexpected signal (%d) ----\n", err);
+ err = err > 0 ? -err : -1;
+ goto err_out;
+ }
+
+ child = container_of(process, struct child_test, process);
+ for (size_t i = 0; i < ARRAY_SIZE(signals); i++)
+ signal(signals[i], child_test_sig_handler);
+
pr_debug("--- start ---\n");
pr_debug("test child forked, pid %d\n", getpid());
err = test_function(child->test, child->subtest)(child->test, child->subtest);
pr_debug("---- end(%d) ----\n", err);
+
+err_out:
fflush(NULL);
+ for (size_t i = 0; i < ARRAY_SIZE(signals); i++)
+ signal(signals[i], SIG_DFL);
return -err;
}
--
2.47.0.rc1.288.g06298d1525-goog
* [PATCH v2 7/8] perf test: Run parallel tests in two passes
2024-10-11 22:03 [PATCH v2 0/8] Run tests in parallel showing number of tests running Ian Rogers
` (5 preceding siblings ...)
2024-10-11 22:03 ` [PATCH v2 6/8] perf test: Add a signal handler around running a test Ian Rogers
@ 2024-10-11 22:03 ` Ian Rogers
2024-10-11 22:03 ` [PATCH v2 8/8] perf test: Make parallel testing the default Ian Rogers
2024-10-16 23:48 ` [PATCH v2 0/8] Run tests in parallel showing number of tests running Namhyung Kim
8 siblings, 0 replies; 13+ messages in thread
From: Ian Rogers @ 2024-10-11 22:03 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
In pass 1 run all tests that succeed when run in parallel. In pass 2
sequentially run all remaining tests that are flagged as
"exclusive". Sequential and dont_fork tests continue to run in pass 1.
Read the exclusive flag from the shell test descriptions, but remove
it from the displayed description to keep lines under 100 characters.
Add error handling to finish tests if starting a later test fails.
Mark the task-exit test as exclusive due to issues reported by James
Clark.
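The pass selection in start_test() below reduces to a small predicate (a sketch with a hypothetical runs_in_pass() name):

```c
#include <stdbool.h>

/*
 * Hypothetical distillation of the start_test() pass logic: in
 * parallel mode, non-exclusive tests run in pass 1 and exclusive
 * tests run sequentially in pass 2; in sequential (or dont_fork)
 * mode everything runs in pass 1.
 */
static bool runs_in_pass(bool exclusive, bool sequential, int pass)
{
	if (sequential)
		return pass == 1;
	return exclusive ? pass == 2 : pass == 1;
}
```

Expressed this way, the two early-return conditions in start_test() are just the complement of this predicate for each pass.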
Tested-by: James Clark <james.clark@linaro.org>
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/tests/builtin-test.c | 157 +++++++++++++++++++------------
tools/perf/tests/task-exit.c | 9 +-
tools/perf/tests/tests-scripts.c | 7 +-
tools/perf/tests/tests.h | 9 ++
4 files changed, 121 insertions(+), 61 deletions(-)
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index eca2e765beb5..ba3797521d79 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -196,6 +196,14 @@ static test_fnptr test_function(const struct test_suite *t, int subtest)
return t->test_cases[subtest].run_case;
}
+static bool test_exclusive(const struct test_suite *t, int subtest)
+{
+ if (subtest <= 0)
+ return t->test_cases[0].exclusive;
+
+ return t->test_cases[subtest].exclusive;
+}
+
static bool perf_test__matches(const char *desc, int curr, int argc, const char *argv[])
{
int i;
@@ -239,7 +247,7 @@ static int run_test_child(struct child_process *process)
const int signals[] = {
SIGABRT, SIGBUS, SIGFPE, SIGILL, SIGINT, SIGPIPE, SIGQUIT, SIGSEGV, SIGTERM,
};
- static struct child_test *child;
+ struct child_test *child = container_of(process, struct child_test, process);
int err;
err = sigsetjmp(run_test_jmp_buf, 1);
@@ -249,7 +257,6 @@ static int run_test_child(struct child_process *process)
goto err_out;
}
- child = container_of(process, struct child_test, process);
for (size_t i = 0; i < ARRAY_SIZE(signals); i++)
signal(signals[i], child_test_sig_handler);
@@ -302,19 +309,25 @@ static int print_test_result(struct test_suite *t, int i, int subtest, int resul
return 0;
}
-static int finish_test(struct child_test **child_tests, int running_test, int child_test_num,
- int width)
+static void finish_test(struct child_test **child_tests, int running_test, int child_test_num,
+ int width)
{
struct child_test *child_test = child_tests[running_test];
- struct test_suite *t = child_test->test;
- int i = child_test->test_num;
- int subi = child_test->subtest;
- int err = child_test->process.err;
+ struct test_suite *t;
+ int i, subi, err;
bool err_done = false;
struct strbuf err_output = STRBUF_INIT;
int last_running = -1;
int ret;
+ if (child_test == NULL) {
+ /* Test wasn't started. */
+ return;
+ }
+ t = child_test->test;
+ i = child_test->test_num;
+ subi = child_test->subtest;
+ err = child_test->process.err;
/*
* For test suites with subtests, display the suite name ahead of the
* sub test names.
@@ -344,6 +357,8 @@ static int finish_test(struct child_test **child_tests, int running_test, int ch
int running = 0;
for (int y = running_test; y < child_test_num; y++) {
+ if (child_tests[y] == NULL)
+ continue;
if (check_if_command_finished(&child_tests[y]->process) == 0)
running++;
}
@@ -396,23 +411,32 @@ static int finish_test(struct child_test **child_tests, int running_test, int ch
print_test_result(t, i, subi, ret, width, /*running=*/0);
if (err > 0)
close(err);
- return 0;
+ zfree(&child_tests[running_test]);
}
static int start_test(struct test_suite *test, int i, int subi, struct child_test **child,
- int width)
+ int width, int pass)
{
int err;
*child = NULL;
if (dont_fork) {
- pr_debug("--- start ---\n");
- err = test_function(test, subi)(test, subi);
- pr_debug("---- end ----\n");
- print_test_result(test, i, subi, err, width, /*running=*/0);
+ if (pass == 1) {
+ pr_debug("--- start ---\n");
+ err = test_function(test, subi)(test, subi);
+ pr_debug("---- end ----\n");
+ print_test_result(test, i, subi, err, width, /*running=*/0);
+ }
+ return 0;
+ }
+ if (pass == 1 && !sequential && test_exclusive(test, subi)) {
+ /* When parallel, skip exclusive tests on the first pass. */
+ return 0;
+ }
+ if (pass != 1 && (sequential || !test_exclusive(test, subi))) {
+ /* Sequential and non-exclusive tests were run on the first pass. */
return 0;
}
-
*child = zalloc(sizeof(**child));
if (!*child)
return -ENOMEM;
@@ -431,10 +455,14 @@ static int start_test(struct test_suite *test, int i, int subi, struct child_tes
(*child)->process.err = -1;
}
(*child)->process.no_exec_cmd = run_test_child;
- err = start_command(&(*child)->process);
- if (err || !sequential)
- return err;
- return finish_test(child, /*running_test=*/0, /*child_test_num=*/1, width);
+ if (sequential || pass == 2) {
+ err = start_command(&(*child)->process);
+ if (err)
+ return err;
+ finish_test(child, /*running_test=*/0, /*child_test_num=*/1, width);
+ return 0;
+ }
+ return start_command(&(*child)->process);
}
#define for_each_test(j, k, t) \
@@ -444,12 +472,11 @@ static int start_test(struct test_suite *test, int i, int subi, struct child_tes
static int __cmd_test(int argc, const char *argv[], struct intlist *skiplist)
{
struct test_suite *t;
- unsigned int j, k;
- int i = 0;
int width = 0;
+ unsigned int j, k;
size_t num_tests = 0;
struct child_test **child_tests;
- int child_test_num = 0;
+ int err = 0;
for_each_test(j, k, t) {
int len = strlen(test_description(t, -1));
@@ -472,62 +499,73 @@ static int __cmd_test(int argc, const char *argv[], struct intlist *skiplist)
if (!child_tests)
return -ENOMEM;
- for_each_test(j, k, t) {
- int curr = i++;
-
- if (!perf_test__matches(test_description(t, -1), curr, argc, argv)) {
- bool skip = true;
+ /*
+ * In parallel mode pass 1 runs non-exclusive tests in parallel, pass 2
+ * runs the exclusive tests sequentially. In other modes all tests are
+ * run in pass 1.
+ */
+ for (int pass = 1; pass <= 2; pass++) {
+ int child_test_num = 0;
+ int i = 0;
+
+ for_each_test(j, k, t) {
+ int curr = i++;
+
+ if (!perf_test__matches(test_description(t, -1), curr, argc, argv)) {
+ /*
+ * Test suite shouldn't be run based on
+ * description. See if subtest should.
+ */
+ bool skip = true;
+
+ for (int subi = 0, subn = num_subtests(t); subi < subn; subi++) {
+ if (perf_test__matches(test_description(t, subi),
+ curr, argc, argv))
+ skip = false;
+ }
- for (int subi = 0, subn = num_subtests(t); subi < subn; subi++) {
- if (perf_test__matches(test_description(t, subi),
- curr, argc, argv))
- skip = false;
+ if (skip)
+ continue;
}
- if (skip)
+ if (intlist__find(skiplist, i)) {
+ pr_info("%3d: %-*s:", curr + 1, width, test_description(t, -1));
+ color_fprintf(stderr, PERF_COLOR_YELLOW, " Skip (user override)\n");
continue;
- }
-
- if (intlist__find(skiplist, i)) {
- pr_info("%3d: %-*s:", curr + 1, width, test_description(t, -1));
- color_fprintf(stderr, PERF_COLOR_YELLOW, " Skip (user override)\n");
- continue;
- }
-
- if (!has_subtests(t)) {
- int err = start_test(t, curr, -1, &child_tests[child_test_num++], width);
+ }
- if (err) {
- /* TODO: if !sequential waitpid the already forked children. */
- free(child_tests);
- return err;
+ if (!has_subtests(t)) {
+ err = start_test(t, curr, -1, &child_tests[child_test_num++],
+ width, pass);
+ if (err)
+ goto err_out;
+ continue;
}
- } else {
for (int subi = 0, subn = num_subtests(t); subi < subn; subi++) {
- int err;
-
if (!perf_test__matches(test_description(t, subi),
curr, argc, argv))
continue;
err = start_test(t, curr, subi, &child_tests[child_test_num++],
- width);
+ width, pass);
if (err)
- return err;
+ goto err_out;
}
}
- }
- for (i = 0; i < child_test_num; i++) {
if (!sequential) {
- int ret = finish_test(child_tests, i, child_test_num, width);
-
- if (ret)
- return ret;
+ /* Parallel mode starts tests but doesn't finish them. Do that now. */
+ for (size_t x = 0; x < num_tests; x++)
+ finish_test(child_tests, x, num_tests, width);
}
- free(child_tests[i]);
+ }
+err_out:
+ if (err) {
+ pr_err("Internal test harness failure. Completing any started tests:\n");
+ for (size_t x = 0; x < num_tests; x++)
+ finish_test(child_tests, x, num_tests, width);
}
free(child_tests);
- return 0;
+ return err;
}
static int perf_test__list(int argc, const char **argv)
@@ -637,6 +675,7 @@ int cmd_test(int argc, const char **argv)
symbol_conf.priv_size = sizeof(int);
symbol_conf.try_vmlinux_path = true;
+
if (symbol__init(NULL) < 0)
return -1;
diff --git a/tools/perf/tests/task-exit.c b/tools/perf/tests/task-exit.c
index d33d0952025c..8e328bbd509d 100644
--- a/tools/perf/tests/task-exit.c
+++ b/tools/perf/tests/task-exit.c
@@ -152,4 +152,11 @@ static int test__task_exit(struct test_suite *test __maybe_unused, int subtest _
return err;
}
-DEFINE_SUITE("Number of exit events of a simple workload", task_exit);
+struct test_case tests__task_exit[] = {
+ TEST_CASE_EXCLUSIVE("Number of exit events of a simple workload", task_exit),
+ { .name = NULL, }
+};
+struct test_suite suite__task_exit = {
+ .desc = "Number of exit events of a simple workload",
+ .test_cases = tests__task_exit,
+};
diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index ed114b044293..cf3ae0c1d871 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -175,6 +175,7 @@ static void append_script(int dir_fd, const char *name, char *desc,
struct test_suite *test_suite, **result_tmp;
struct test_case *tests;
size_t len;
+ char *exclusive;
snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
len = readlink(link, filename, sizeof(filename));
@@ -191,9 +192,13 @@ static void append_script(int dir_fd, const char *name, char *desc,
return;
}
tests[0].name = strdup_check(name);
+ exclusive = strstr(desc, " (exclusive)");
+ if (exclusive != NULL) {
+ tests[0].exclusive = true;
+ exclusive[0] = '\0';
+ }
tests[0].desc = strdup_check(desc);
tests[0].run_case = shell_test__run;
-
test_suite = zalloc(sizeof(*test_suite));
if (!test_suite) {
pr_err("Out of memory while building script test suite list\n");
diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h
index 6ea2be86b7bf..ed66b704e438 100644
--- a/tools/perf/tests/tests.h
+++ b/tools/perf/tests/tests.h
@@ -36,6 +36,7 @@ struct test_case {
const char *desc;
const char *skip_reason;
test_fnptr run_case;
+ bool exclusive;
};
struct test_suite {
@@ -62,6 +63,14 @@ struct test_suite {
.skip_reason = _reason, \
}
+#define TEST_CASE_EXCLUSIVE(description, _name) \
+ { \
+ .name = #_name, \
+ .desc = description, \
+ .run_case = test__##_name, \
+ .exclusive = true, \
+ }
+
#define DEFINE_SUITE(description, _name) \
struct test_case tests__##_name[] = { \
TEST_CASE(description, _name), \
--
2.47.0.rc1.288.g06298d1525-goog
^ permalink raw reply related [flat|nested] 13+ messages in thread
* [PATCH v2 8/8] perf test: Make parallel testing the default
2024-10-11 22:03 [PATCH v2 0/8] Run tests in parallel showing number of tests running Ian Rogers
` (6 preceding siblings ...)
2024-10-11 22:03 ` [PATCH v2 7/8] perf test: Run parallel tests in two passes Ian Rogers
@ 2024-10-11 22:03 ` Ian Rogers
2024-10-16 23:48 ` [PATCH v2 0/8] Run tests in parallel showing number of tests running Namhyung Kim
8 siblings, 0 replies; 13+ messages in thread
From: Ian Rogers @ 2024-10-11 22:03 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
Now that C tests can have the "exclusive" flag to run without other
tests, and shell tests can add "(exclusive)" to their description, run
tests in parallel by default. Tests that flake when run in parallel
can be marked exclusive to resolve the problem.
Non-scientifically, the reduction in `perf test` execution time is
from 8m35.890s to 3m55.115s on a Tigerlake laptop, so the tests
complete in less than half the time.
Tested-by: James Clark <james.clark@linaro.org>
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/tests/builtin-test.c | 13 ++-----------
1 file changed, 2 insertions(+), 11 deletions(-)
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index ba3797521d79..09afe884a987 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -40,8 +40,8 @@
* making them easier to debug.
*/
static bool dont_fork;
-/* Don't fork the tests in parallel and wait for their completion. */
-static bool sequential = true;
+/* Fork the tests in parallel and wait for their completion. */
+static bool sequential;
const char *dso_to_test;
const char *test_objdump_path = "objdump";
@@ -626,19 +626,12 @@ int cmd_test(int argc, const char **argv)
};
const char *skip = NULL;
const char *workload = NULL;
- /*
- * Run tests in parallel, lacks infrastructure to avoid running tests
- * that clash for resources, So leave it as the developers choice to
- * enable while working on the needed infra.
- */
- bool parallel = false;
const struct option test_options[] = {
OPT_STRING('s', "skip", &skip, "tests", "tests to skip"),
OPT_INCR('v', "verbose", &verbose,
"be more verbose (show symbol address, etc)"),
OPT_BOOLEAN('F', "dont-fork", &dont_fork,
"Do not fork for testcase"),
- OPT_BOOLEAN('p', "parallel", ¶llel, "Run the tests in parallel"),
OPT_BOOLEAN('S', "sequential", &sequential,
"Run the tests one after another rather than in parallel"),
OPT_STRING('w', "workload", &workload, "work", "workload to run for testing"),
@@ -669,8 +662,6 @@ int cmd_test(int argc, const char **argv)
if (dont_fork)
sequential = true;
- else if (parallel)
- sequential = false;
symbol_conf.priv_size = sizeof(int);
symbol_conf.try_vmlinux_path = true;
--
2.47.0.rc1.288.g06298d1525-goog
* Re: [PATCH v2 0/8] Run tests in parallel showing number of tests running
2024-10-11 22:03 [PATCH v2 0/8] Run tests in parallel showing number of tests running Ian Rogers
` (7 preceding siblings ...)
2024-10-11 22:03 ` [PATCH v2 8/8] perf test: Make parallel testing the default Ian Rogers
@ 2024-10-16 23:48 ` Namhyung Kim
2024-10-17 0:28 ` Ian Rogers
8 siblings, 1 reply; 13+ messages in thread
From: Namhyung Kim @ 2024-10-16 23:48 UTC (permalink / raw)
To: Ian Rogers
Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Mark Rutland, Alexander Shishkin, Jiri Olsa, Adrian Hunter,
Kan Liang, James Clark, Howard Chu, Athira Jajeev, Michael Petlan,
Veronika Molnarova, Dapeng Mi, Thomas Richter, Ilya Leoshkevich,
Colin Ian King, Weilin Wang, Andi Kleen, linux-kernel,
linux-perf-users
On Fri, Oct 11, 2024 at 03:03:46PM -0700, Ian Rogers wrote:
> Avoid waitpid so that stdout/stderr aren't destroyed prior to wanting
> to read them for display. When running on a color terminal, display
> the number of running tests (1 if sequential). To avoid previous
> flicker, only delete and refresh the display line when it changes. An
> earlier version of this code is here:
> https://lore.kernel.org/lkml/20240701044236.475098-1-irogers@google.com/
>
> Add a signal handler for perf tests so that unexpected signals are
> displayed and test clean up is possible.
>
> In perf test add an "exclusive" flag that causes a test to be run with
> no other test. Set this flag manually for C tests and via a
> "(exclusive)" in the test description for shell tests. Add the flag to
> shell tests that may fail when run with other tests.
>
> Change the perf test loop to run in two passes. For parallel
> execution, the first pass runs all tests that can be run in parallel
> then the 2nd runs remaining tests sequentially. This causes the
> "exclusive" tests to be run last and with test numbers moderately out
> of alignment.
>
> Change the default to be to run tests in parallel. Running tests in
> parallel brings the execution time down to less than half.
>
> v2: Fix inaccurate remaining counts when running specific
> tests. Rename "remaining" to "active" to better reflect the
> testing behavior. Move the exclusive flag to test cases and not
> entire suites. Add more "(exclusive)" flags to test as
> suggested-by James Clark. Remove "(exclusive)" flag from test
> descriptions to keep the command line output more concise. Add
> James Clark's tested-by.
>
> Ian Rogers (8):
> tools subcmd: Add non-waitpid check_if_command_finished()
> perf test: Display number of active running tests
> perf test: Reduce scope of parallel variable
> perf test: Avoid list test blocking on writing to stdout
> perf test: Tag parallel failing shell tests with "(exclusive)"
> perf test: Add a signal handler around running a test
> perf test: Run parallel tests in two passes
> perf test: Make parallel testing the default
Nice work! It looks much better now.
But I'm seeing more failures in parallel mode. Maybe we want to
keep the default serial mode for a little more.
Thanks,
Namhyung
>
> tools/lib/subcmd/run-command.c | 33 +++
> tools/perf/tests/builtin-test.c | 274 ++++++++++++------
> .../tests/shell/coresight/asm_pure_loop.sh | 2 +-
> .../shell/coresight/memcpy_thread_16k_10.sh | 2 +-
> .../coresight/thread_loop_check_tid_10.sh | 2 +-
> .../coresight/thread_loop_check_tid_2.sh | 2 +-
> .../shell/coresight/unroll_loop_thread_10.sh | 2 +-
> tools/perf/tests/shell/list.sh | 5 +-
> .../tests/shell/perftool-testsuite_report.sh | 2 +-
> tools/perf/tests/shell/record.sh | 2 +-
> tools/perf/tests/shell/record_lbr.sh | 2 +-
> tools/perf/tests/shell/record_offcpu.sh | 2 +-
> tools/perf/tests/shell/stat_all_pmu.sh | 2 +-
> tools/perf/tests/shell/test_arm_coresight.sh | 2 +-
> .../tests/shell/test_arm_coresight_disasm.sh | 2 +-
> tools/perf/tests/shell/test_arm_spe.sh | 2 +-
> tools/perf/tests/shell/test_intel_pt.sh | 2 +-
> .../perf/tests/shell/test_stat_intel_tpebs.sh | 2 +-
> tools/perf/tests/task-exit.c | 9 +-
> tools/perf/tests/tests-scripts.c | 7 +-
> tools/perf/tests/tests.h | 9 +
> tools/perf/util/color.h | 1 +
> 22 files changed, 258 insertions(+), 110 deletions(-)
>
> --
> 2.47.0.rc1.288.g06298d1525-goog
>
* Re: [PATCH v2 0/8] Run tests in parallel showing number of tests running
2024-10-16 23:48 ` [PATCH v2 0/8] Run tests in parallel showing number of tests running Namhyung Kim
@ 2024-10-17 0:28 ` Ian Rogers
2024-10-17 12:49 ` Ian Rogers
0 siblings, 1 reply; 13+ messages in thread
From: Ian Rogers @ 2024-10-17 0:28 UTC (permalink / raw)
To: Namhyung Kim
Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Mark Rutland, Alexander Shishkin, Jiri Olsa, Adrian Hunter,
Kan Liang, James Clark, Howard Chu, Athira Jajeev, Michael Petlan,
Veronika Molnarova, Dapeng Mi, Thomas Richter, Ilya Leoshkevich,
Colin Ian King, Weilin Wang, Andi Kleen, linux-kernel,
linux-perf-users
On Wed, Oct 16, 2024 at 4:49 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> On Fri, Oct 11, 2024 at 03:03:46PM -0700, Ian Rogers wrote:
> > Avoid waitpid so that stdout/stderr aren't destroyed prior to wanting
> > to read them for display. When running on a color terminal, display
> > the number of running tests (1 if sequential). To avoid previous
> > flicker, only delete and refresh the display line when it changes. An
> > earlier version of this code is here:
> > https://lore.kernel.org/lkml/20240701044236.475098-1-irogers@google.com/
> >
> > Add a signal handler for perf tests so that unexpected signals are
> > displayed and test clean up is possible.
> >
> > In perf test add an "exclusive" flag that causes a test to be run with
> > no other test. Set this flag manually for C tests and via a
> > "(exclusive)" in the test description for shell tests. Add the flag to
> > shell tests that may fail when run with other tests.
> >
> > Change the perf test loop to run in two passes. For parallel
> > execution, the first pass runs all tests that can be run in parallel
> > then the 2nd runs remaining tests sequentially. This causes the
> > "exclusive" tests to be run last and with test numbers moderately out
> > of alignment.
> >
> > Change the default to be to run tests in parallel. Running tests in
> > parallel brings the execution time down to less than half.
> >
> > v2: Fix inaccurate remaining counts when running specific
> > tests. Rename "remaining" to "active" to better reflect the
> > testing behavior. Move the exclusive flag to test cases and not
> > entire suites. Add more "(exclusive)" flags to test as
> > suggested-by James Clark. Remove "(exclusive)" flag from test
> > descriptions to keep the command line output more concise. Add
> > James Clark's tested-by.
> >
> > Ian Rogers (8):
> > tools subcmd: Add non-waitpid check_if_command_finished()
> > perf test: Display number of active running tests
> > perf test: Reduce scope of parallel variable
> > perf test: Avoid list test blocking on writing to stdout
> > perf test: Tag parallel failing shell tests with "(exclusive)"
> > perf test: Add a signal handler around running a test
> > perf test: Run parallel tests in two passes
> > perf test: Make parallel testing the default
>
> Nice work! It looks much better now.
>
> But I'm seeing more failures in parallel mode. Maybe we want to
> keep the default serial mode for a little more.
As you say, I think we should be conservative and mark all tests that
need to be serial/sequential/exclusive with the exclusive tag. If you
tell me the failing tests I can add them to 'perf test: Tag parallel
failing shell tests with "(exclusive)"' as I did for James Clark with
the ARM tests. I'd prefer we did the tagging rather than not enabling
parallel testing as otherwise I may never learn which tests fail for
people when run in parallel.
Thanks,
Ian
> >
> > tools/lib/subcmd/run-command.c | 33 +++
> > tools/perf/tests/builtin-test.c | 274 ++++++++++++------
> > .../tests/shell/coresight/asm_pure_loop.sh | 2 +-
> > .../shell/coresight/memcpy_thread_16k_10.sh | 2 +-
> > .../coresight/thread_loop_check_tid_10.sh | 2 +-
> > .../coresight/thread_loop_check_tid_2.sh | 2 +-
> > .../shell/coresight/unroll_loop_thread_10.sh | 2 +-
> > tools/perf/tests/shell/list.sh | 5 +-
> > .../tests/shell/perftool-testsuite_report.sh | 2 +-
> > tools/perf/tests/shell/record.sh | 2 +-
> > tools/perf/tests/shell/record_lbr.sh | 2 +-
> > tools/perf/tests/shell/record_offcpu.sh | 2 +-
> > tools/perf/tests/shell/stat_all_pmu.sh | 2 +-
> > tools/perf/tests/shell/test_arm_coresight.sh | 2 +-
> > .../tests/shell/test_arm_coresight_disasm.sh | 2 +-
> > tools/perf/tests/shell/test_arm_spe.sh | 2 +-
> > tools/perf/tests/shell/test_intel_pt.sh | 2 +-
> > .../perf/tests/shell/test_stat_intel_tpebs.sh | 2 +-
> > tools/perf/tests/task-exit.c | 9 +-
> > tools/perf/tests/tests-scripts.c | 7 +-
> > tools/perf/tests/tests.h | 9 +
> > tools/perf/util/color.h | 1 +
> > 22 files changed, 258 insertions(+), 110 deletions(-)
> >
> > --
> > 2.47.0.rc1.288.g06298d1525-goog
> >
* Re: [PATCH v2 0/8] Run tests in parallel showing number of tests running
2024-10-17 0:28 ` Ian Rogers
@ 2024-10-17 12:49 ` Ian Rogers
2024-10-17 23:28 ` Namhyung Kim
0 siblings, 1 reply; 13+ messages in thread
From: Ian Rogers @ 2024-10-17 12:49 UTC (permalink / raw)
To: Namhyung Kim
Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Mark Rutland, Alexander Shishkin, Jiri Olsa, Adrian Hunter,
Kan Liang, James Clark, Howard Chu, Athira Jajeev, Michael Petlan,
Veronika Molnarova, Dapeng Mi, Thomas Richter, Ilya Leoshkevich,
Colin Ian King, Weilin Wang, Andi Kleen, linux-kernel,
linux-perf-users
On Wed, Oct 16, 2024 at 5:28 PM Ian Rogers <irogers@google.com> wrote:
>
> On Wed, Oct 16, 2024 at 4:49 PM Namhyung Kim <namhyung@kernel.org> wrote:
> >
> > On Fri, Oct 11, 2024 at 03:03:46PM -0700, Ian Rogers wrote:
> > > Avoid waitpid so that stdout/stderr aren't destroyed prior to wanting
> > > to read them for display. When running on a color terminal, display
> > > the number of running tests (1 if sequential). To avoid previous
> > > flicker, only delete and refresh the display line when it changes. An
> > > earlier version of this code is here:
> > > https://lore.kernel.org/lkml/20240701044236.475098-1-irogers@google.com/
> > >
> > > Add a signal handler for perf tests so that unexpected signals are
> > > displayed and test clean up is possible.
> > >
> > > In perf test add an "exclusive" flag that causes a test to be run with
> > > no other test. Set this flag manually for C tests and via a
> > > "(exclusive)" in the test description for shell tests. Add the flag to
> > > shell tests that may fail when run with other tests.
> > >
> > > Change the perf test loop to run in two passes. For parallel
> > > execution, the first pass runs all tests that can be run in parallel
> > > then the 2nd runs remaining tests sequentially. This causes the
> > > "exclusive" tests to be run last and with test numbers moderately out
> > > of alignment.
> > >
> > > Change the default to be to run tests in parallel. Running tests in
> > > parallel brings the execution time down to less than half.
> > >
> > > v2: Fix inaccurate remaining counts when running specific
> > > tests. Rename "remaining" to "active" to better reflect the
> > > testing behavior. Move the exclusive flag to test cases and not
> > > entire suites. Add more "(exclusive)" flags to test as
> > > suggested-by James Clark. Remove "(exclusive)" flag from test
> > > descriptions to keep the command line output more concise. Add
> > > James Clark's tested-by.
> > >
> > > Ian Rogers (8):
> > > tools subcmd: Add non-waitpid check_if_command_finished()
> > > perf test: Display number of active running tests
> > > perf test: Reduce scope of parallel variable
> > > perf test: Avoid list test blocking on writing to stdout
> > > perf test: Tag parallel failing shell tests with "(exclusive)"
> > > perf test: Add a signal handler around running a test
> > > perf test: Run parallel tests in two passes
> > > perf test: Make parallel testing the default
> >
> > Nice work! It looks much better now.
> >
> > But I'm seeing more failures in parallel mode. Maybe we want to
> > keep the default serial mode for a little more.
>
> As you say, I think we should be conservative and mark all tests that
> need to serial/sequential/exclusive with the exclusive tag. If you
> tell me the failing tests I can add them to 'perf test: Tag parallel
> failing shell tests with "(exclusive)"' as I did for James Clark with
> the ARM tests. I'd prefer we did the tagging rather than not enabling
> parallel testing as otherwise I may never learn which tests fail for
> people when run in parallel.
With repeated testing it was most often fine for me, but I was able to
get a flake on the probe plus vfs_getname tests like:
```
$ sudo /tmp/perf/perf test vfs -v
91: Add vfs_getname probe to get syscall args filenames : Ok
--- start ---
test child forked, pid 466904
Failed to write event: File exists
Error: Failed to add events.
---- end(-1) ----
93: Use vfs_getname probe to get syscall args filenames : FAILED!
--- start ---
test child forked, pid 466906
Error: event "vfs_getname" already exists.
Hint: Remove existing event by 'perf probe -d'
or force duplicates by 'perf probe -f'
or set 'force=yes' in BPF source.
Error: Failed to add events.
---- end(-1) ----
127: Check open filename arg using perf trace + vfs_getname : FAILED!
```
So I'll make those exclusive in v2 too. If you could let me know of others.
Thanks,
Ian
> > >
> > > tools/lib/subcmd/run-command.c | 33 +++
> > > tools/perf/tests/builtin-test.c | 274 ++++++++++++------
> > > .../tests/shell/coresight/asm_pure_loop.sh | 2 +-
> > > .../shell/coresight/memcpy_thread_16k_10.sh | 2 +-
> > > .../coresight/thread_loop_check_tid_10.sh | 2 +-
> > > .../coresight/thread_loop_check_tid_2.sh | 2 +-
> > > .../shell/coresight/unroll_loop_thread_10.sh | 2 +-
> > > tools/perf/tests/shell/list.sh | 5 +-
> > > .../tests/shell/perftool-testsuite_report.sh | 2 +-
> > > tools/perf/tests/shell/record.sh | 2 +-
> > > tools/perf/tests/shell/record_lbr.sh | 2 +-
> > > tools/perf/tests/shell/record_offcpu.sh | 2 +-
> > > tools/perf/tests/shell/stat_all_pmu.sh | 2 +-
> > > tools/perf/tests/shell/test_arm_coresight.sh | 2 +-
> > > .../tests/shell/test_arm_coresight_disasm.sh | 2 +-
> > > tools/perf/tests/shell/test_arm_spe.sh | 2 +-
> > > tools/perf/tests/shell/test_intel_pt.sh | 2 +-
> > > .../perf/tests/shell/test_stat_intel_tpebs.sh | 2 +-
> > > tools/perf/tests/task-exit.c | 9 +-
> > > tools/perf/tests/tests-scripts.c | 7 +-
> > > tools/perf/tests/tests.h | 9 +
> > > tools/perf/util/color.h | 1 +
> > > 22 files changed, 258 insertions(+), 110 deletions(-)
> > >
> > > --
> > > 2.47.0.rc1.288.g06298d1525-goog
> > >
* Re: [PATCH v2 0/8] Run tests in parallel showing number of tests running
2024-10-17 12:49 ` Ian Rogers
@ 2024-10-17 23:28 ` Namhyung Kim
0 siblings, 0 replies; 13+ messages in thread
From: Namhyung Kim @ 2024-10-17 23:28 UTC (permalink / raw)
To: Ian Rogers
Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Mark Rutland, Alexander Shishkin, Jiri Olsa, Adrian Hunter,
Kan Liang, James Clark, Howard Chu, Athira Jajeev, Michael Petlan,
Veronika Molnarova, Dapeng Mi, Thomas Richter, Ilya Leoshkevich,
Colin Ian King, Weilin Wang, Andi Kleen, linux-kernel,
linux-perf-users
On Thu, Oct 17, 2024 at 05:49:12AM -0700, Ian Rogers wrote:
> On Wed, Oct 16, 2024 at 5:28 PM Ian Rogers <irogers@google.com> wrote:
> >
> > On Wed, Oct 16, 2024 at 4:49 PM Namhyung Kim <namhyung@kernel.org> wrote:
> > >
> > > On Fri, Oct 11, 2024 at 03:03:46PM -0700, Ian Rogers wrote:
> > > > Avoid waitpid so that stdout/stderr aren't destroyed prior to wanting
> > > > to read them for display. When running on a color terminal, display
> > > > the number of running tests (1 if sequential). To avoid previous
> > > > flicker, only delete and refresh the display line when it changes. An
> > > > earlier version of this code is here:
> > > > https://lore.kernel.org/lkml/20240701044236.475098-1-irogers@google.com/
> > > >
> > > > Add a signal handler for perf tests so that unexpected signals are
> > > > displayed and test clean up is possible.
> > > >
> > > > In perf test add an "exclusive" flag that causes a test to be run with
> > > > no other test. Set this flag manually for C tests and via a
> > > > "(exclusive)" in the test description for shell tests. Add the flag to
> > > > shell tests that may fail when run with other tests.
> > > >
> > > > Change the perf test loop to run in two passes. For parallel
> > > > execution, the first pass runs all tests that can be run in parallel
> > > > then the 2nd runs remaining tests sequentially. This causes the
> > > > "exclusive" tests to be run last and with test numbers moderately out
> > > > of alignment.
> > > >
> > > > Change the default to be to run tests in parallel. Running tests in
> > > > parallel brings the execution time down to less than half.
> > > >
> > > > v2: Fix inaccurate remaining counts when running specific
> > > > tests. Rename "remaining" to "active" to better reflect the
> > > > testing behavior. Move the exclusive flag to test cases and not
> > > > entire suites. Add more "(exclusive)" flags to test as
> > > > suggested-by James Clark. Remove "(exclusive)" flag from test
> > > > descriptions to keep the command line output more concise. Add
> > > > James Clark's tested-by.
> > > >
> > > > Ian Rogers (8):
> > > > tools subcmd: Add non-waitpid check_if_command_finished()
> > > > perf test: Display number of active running tests
> > > > perf test: Reduce scope of parallel variable
> > > > perf test: Avoid list test blocking on writing to stdout
> > > > perf test: Tag parallel failing shell tests with "(exclusive)"
> > > > perf test: Add a signal handler around running a test
> > > > perf test: Run parallel tests in two passes
> > > > perf test: Make parallel testing the default
> > >
> > > Nice work! It looks much better now.
> > >
> > > But I'm seeing more failures in parallel mode. Maybe we want to
> > > keep the default serial mode for a little more.
> >
> > As you say, I think we should be conservative and mark all tests that
> > need to serial/sequential/exclusive with the exclusive tag. If you
> > tell me the failing tests I can add them to 'perf test: Tag parallel
> > failing shell tests with "(exclusive)"' as I did for James Clark with
> > the ARM tests. I'd prefer we did the tagging rather than not enabling
> > parallel testing as otherwise I may never learn which tests fail for
> > people when run in parallel.
>
> With repeat testing, most often for me it was fine, I was able to get
> a flake on the probe plus vfs_getname tests like:
> ```
> $ sudo /tmp/perf/perf test vfs -v
> 91: Add vfs_getname probe to get syscall args filenames : Ok
> --- start ---
> test child forked, pid 466904
> Failed to write event: File exists
> Error: Failed to add events.
> ---- end(-1) ----
> 93: Use vfs_getname probe to get syscall args filenames : FAILED!
> --- start ---
> test child forked, pid 466906
> Error: event "vfs_getname" already exists.
> Hint: Remove existing event by 'perf probe -d'
> or force duplicates by 'perf probe -f'
> or set 'force=yes' in BPF source.
> Error: Failed to add events.
> ---- end(-1) ----
> 127: Check open filename arg using perf trace + vfs_getname : FAILED!
> ```
> So I'll make those exclusive in v2 too. If you could let me know of others.
Mine are the below (other than the existing probe test failure). I ran
them 3 times and picked the ones that failed at least once.
92: Add vfs_getname probe to get syscall args filenames : FAILED!
94: Use vfs_getname probe to get syscall args filenames : FAILED!
112: perf stat --bpf-counters test : FAILED!
121: Test data symbol : FAILED!
128: Check open filename arg using perf trace + vfs_getname : FAILED!
Thanks,
Namhyung
>
> > > >
> > > > tools/lib/subcmd/run-command.c | 33 +++
> > > > tools/perf/tests/builtin-test.c | 274 ++++++++++++------
> > > > .../tests/shell/coresight/asm_pure_loop.sh | 2 +-
> > > > .../shell/coresight/memcpy_thread_16k_10.sh | 2 +-
> > > > .../coresight/thread_loop_check_tid_10.sh | 2 +-
> > > > .../coresight/thread_loop_check_tid_2.sh | 2 +-
> > > > .../shell/coresight/unroll_loop_thread_10.sh | 2 +-
> > > > tools/perf/tests/shell/list.sh | 5 +-
> > > > .../tests/shell/perftool-testsuite_report.sh | 2 +-
> > > > tools/perf/tests/shell/record.sh | 2 +-
> > > > tools/perf/tests/shell/record_lbr.sh | 2 +-
> > > > tools/perf/tests/shell/record_offcpu.sh | 2 +-
> > > > tools/perf/tests/shell/stat_all_pmu.sh | 2 +-
> > > > tools/perf/tests/shell/test_arm_coresight.sh | 2 +-
> > > > .../tests/shell/test_arm_coresight_disasm.sh | 2 +-
> > > > tools/perf/tests/shell/test_arm_spe.sh | 2 +-
> > > > tools/perf/tests/shell/test_intel_pt.sh | 2 +-
> > > > .../perf/tests/shell/test_stat_intel_tpebs.sh | 2 +-
> > > > tools/perf/tests/task-exit.c | 9 +-
> > > > tools/perf/tests/tests-scripts.c | 7 +-
> > > > tools/perf/tests/tests.h | 9 +
> > > > tools/perf/util/color.h | 1 +
> > > > 22 files changed, 258 insertions(+), 110 deletions(-)
> > > >
> > > > --
> > > > 2.47.0.rc1.288.g06298d1525-goog
> > > >