* [PATCH v1 1/8] tools subcmd: Add non-waitpid check_if_command_finished()
2024-10-11 7:35 [PATCH v1 0/8] Run tests in parallel showing number of tests running Ian Rogers
@ 2024-10-11 7:35 ` Ian Rogers
2024-10-11 7:35 ` [PATCH v1 2/8] perf test: Display number of remaining tests Ian Rogers
` (7 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Ian Rogers @ 2024-10-11 7:35 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
Using waitpid can cause the stdout/stderr of the child process to be
lost. Use Linux's /proc/<pid>/status file to determine whether the
process has reached the zombie state. Use the 'status' file rather than
'stat' to avoid issues around skipping the process name.
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/lib/subcmd/run-command.c | 33 +++++++++++++++++++++++++++++++++
1 file changed, 33 insertions(+)
diff --git a/tools/lib/subcmd/run-command.c b/tools/lib/subcmd/run-command.c
index 4e3a557a2f37..ec06683e77a0 100644
--- a/tools/lib/subcmd/run-command.c
+++ b/tools/lib/subcmd/run-command.c
@@ -2,6 +2,7 @@
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
+#include <ctype.h>
#include <fcntl.h>
#include <string.h>
#include <linux/string.h>
@@ -217,8 +218,40 @@ static int wait_or_whine(struct child_process *cmd, bool block)
int check_if_command_finished(struct child_process *cmd)
{
+#ifdef __linux__
+ char filename[FILENAME_MAX + 12];
+ char status_line[256];
+ FILE *status_file;
+
+ /*
+ * Check by reading /proc/<pid>/status as calling waitpid causes
+ * stdout/stderr to be closed and data lost.
+ */
+ sprintf(filename, "/proc/%d/status", cmd->pid);
+ status_file = fopen(filename, "r");
+ if (status_file == NULL) {
+ /* Open failed assume finish_command was called. */
+ return true;
+ }
+ while (fgets(status_line, sizeof(status_line), status_file) != NULL) {
+ char *p;
+
+ if (strncmp(status_line, "State:", 6))
+ continue;
+
+ fclose(status_file);
+ p = status_line + 6;
+ while (isspace(*p))
+ p++;
+ return *p == 'Z';
+ }
+ /* Read failed assume finish_command was called. */
+ fclose(status_file);
+ return true;
+#else
wait_or_whine(cmd, /*block=*/false);
return cmd->finished;
+#endif
}
int finish_command(struct child_process *cmd)
--
2.47.0.rc1.288.g06298d1525-goog
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v1 2/8] perf test: Display number of remaining tests
2024-10-11 7:35 [PATCH v1 0/8] Run tests in parallel showing number of tests running Ian Rogers
2024-10-11 7:35 ` [PATCH v1 1/8] tools subcmd: Add non-waitpid check_if_command_finished() Ian Rogers
@ 2024-10-11 7:35 ` Ian Rogers
2024-10-11 7:35 ` [PATCH v1 3/8] perf test: Reduce scope of parallel variable Ian Rogers
` (6 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Ian Rogers @ 2024-10-11 7:35 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
Before polling or sleeping to wait for a test to complete, print out
": Running (<num> remaining)", where the number of remaining tests is
determined by iterating over the remaining tests and checking which
return true for check_if_command_finished(). The line is only erased
and reprinted when the number of running tests changes, to avoid
excessive flickering. This lets a user see that a test is running and,
in parallel mode, how many tests are still waiting to complete. If
color mode is disabled then avoid displaying the "Running" message, as
deleting the line isn't reliable.
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/tests/builtin-test.c | 94 ++++++++++++++++++++++-----------
tools/perf/util/color.h | 1 +
2 files changed, 64 insertions(+), 31 deletions(-)
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 470a9709427d..df0466d3def6 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -242,7 +242,10 @@ static int run_test_child(struct child_process *process)
return -err;
}
-static int print_test_result(struct test_suite *t, int i, int subtest, int result, int width)
+#define TEST_RUNNING -3
+
+static int print_test_result(struct test_suite *t, int i, int subtest, int result, int width,
+ int remaining)
{
if (has_subtests(t)) {
int subw = width > 2 ? width - 2 : width;
@@ -252,6 +255,9 @@ static int print_test_result(struct test_suite *t, int i, int subtest, int resul
pr_info("%3d: %-*s:", i + 1, width, test_description(t, subtest));
switch (result) {
+ case TEST_RUNNING:
+ color_fprintf(stderr, PERF_COLOR_YELLOW, " Running (%d remaining)\n", remaining);
+ break;
case TEST_OK:
pr_info(" Ok\n");
break;
@@ -273,14 +279,17 @@ static int print_test_result(struct test_suite *t, int i, int subtest, int resul
return 0;
}
-static int finish_test(struct child_test *child_test, int width)
+static int finish_test(struct child_test **child_tests, int running_test, int child_test_num,
+ int width)
{
+ struct child_test *child_test = child_tests[running_test];
struct test_suite *t = child_test->test;
int i = child_test->test_num;
int subi = child_test->subtest;
int err = child_test->process.err;
- bool err_done = err <= 0;
+ bool err_done = false;
struct strbuf err_output = STRBUF_INIT;
+ int last_tests_in_progress = -1;
int ret;
/*
@@ -294,7 +303,7 @@ static int finish_test(struct child_test *child_test, int width)
* Busy loop reading from the child's stdout/stderr that are set to be
* non-blocking until EOF.
*/
- if (!err_done)
+ if (err > 0)
fcntl(err, F_SETFL, O_NONBLOCK);
if (verbose > 1) {
if (has_subtests(t))
@@ -308,38 +317,61 @@ static int finish_test(struct child_test *child_test, int width)
.events = POLLIN | POLLERR | POLLHUP | POLLNVAL,
},
};
- char buf[512];
- ssize_t len;
-
- /* Poll to avoid excessive spinning, timeout set for 100ms. */
- poll(pfds, ARRAY_SIZE(pfds), /*timeout=*/100);
- if (!err_done && pfds[0].revents) {
- errno = 0;
- len = read(err, buf, sizeof(buf) - 1);
-
- if (len <= 0) {
- err_done = errno != EAGAIN;
- } else {
- buf[len] = '\0';
- if (verbose > 1)
- fprintf(stdout, "%s", buf);
- else
+ if (perf_use_color_default) {
+ int tests_in_progress = running_test;
+
+ for (int y = running_test; y < child_test_num; y++) {
+ if (check_if_command_finished(&child_tests[y]->process))
+ tests_in_progress++;
+ }
+ if (tests_in_progress != last_tests_in_progress) {
+ if (last_tests_in_progress != -1) {
+ /*
+ * Erase "Running (.. remaining)" line
+ * printed before poll/sleep.
+ */
+ fprintf(debug_file(), PERF_COLOR_DELETE_LINE);
+ }
+ print_test_result(t, i, subi, TEST_RUNNING, width,
+ child_test_num - tests_in_progress);
+ last_tests_in_progress = tests_in_progress;
+ }
+ }
+
+ err_done = true;
+ if (err <= 0) {
+ /* No child stderr to poll, sleep for 10ms for child to complete. */
+ usleep(10 * 1000);
+ } else {
+ /* Poll to avoid excessive spinning, timeout set for 100ms. */
+ poll(pfds, ARRAY_SIZE(pfds), /*timeout=*/100);
+ if (pfds[0].revents) {
+ char buf[512];
+ ssize_t len;
+
+ len = read(err, buf, sizeof(buf) - 1);
+
+ if (len > 0) {
+ err_done = false;
+ buf[len] = '\0';
strbuf_addstr(&err_output, buf);
+ }
}
}
+ if (err_done)
+ err_done = check_if_command_finished(&child_test->process);
+ }
+ if (perf_use_color_default && last_tests_in_progress != -1) {
+ /* Erase "Running (.. remaining)" line printed before poll/sleep. */
+ fprintf(debug_file(), PERF_COLOR_DELETE_LINE);
}
/* Clean up child process. */
ret = finish_command(&child_test->process);
- if (verbose == 1 && ret == TEST_FAIL) {
- /* Add header for test that was skipped above. */
- if (has_subtests(t))
- pr_info("%3d.%1d: %s:\n", i + 1, subi + 1, test_description(t, subi));
- else
- pr_info("%3d: %s:\n", i + 1, test_description(t, -1));
+ if (verbose > 1 || (verbose == 1 && ret == TEST_FAIL))
fprintf(stderr, "%s", err_output.buf);
- }
+
strbuf_release(&err_output);
- print_test_result(t, i, subi, ret, width);
+ print_test_result(t, i, subi, ret, width, /*remaining=*/0);
if (err > 0)
close(err);
return 0;
@@ -355,7 +387,7 @@ static int start_test(struct test_suite *test, int i, int subi, struct child_tes
pr_debug("--- start ---\n");
err = test_function(test, subi)(test, subi);
pr_debug("---- end ----\n");
- print_test_result(test, i, subi, err, width);
+ print_test_result(test, i, subi, err, width, /*remaining=*/0);
return 0;
}
@@ -380,7 +412,7 @@ static int start_test(struct test_suite *test, int i, int subi, struct child_tes
err = start_command(&(*child)->process);
if (err || !sequential)
return err;
- return finish_test(*child, width);
+ return finish_test(child, /*running_test=*/0, /*child_test_num=*/1, width);
}
#define for_each_test(j, k, t) \
@@ -465,7 +497,7 @@ static int __cmd_test(int argc, const char *argv[], struct intlist *skiplist)
}
for (i = 0; i < child_test_num; i++) {
if (!sequential) {
- int ret = finish_test(child_tests[i], width);
+ int ret = finish_test(child_tests, i, child_test_num, width);
if (ret)
return ret;
diff --git a/tools/perf/util/color.h b/tools/perf/util/color.h
index 01f7bed21c9b..4b9f8d5d4439 100644
--- a/tools/perf/util/color.h
+++ b/tools/perf/util/color.h
@@ -22,6 +22,7 @@
#define MIN_GREEN 0.5
#define MIN_RED 5.0
+#define PERF_COLOR_DELETE_LINE "\033[A\33[2K\r"
/*
* This variable stores the value of color.ui
*/
--
2.47.0.rc1.288.g06298d1525-goog
* [PATCH v1 3/8] perf test: Reduce scope of parallel variable
2024-10-11 7:35 [PATCH v1 0/8] Run tests in parallel showing number of tests running Ian Rogers
2024-10-11 7:35 ` [PATCH v1 1/8] tools subcmd: Add non-waitpid check_if_command_finished() Ian Rogers
2024-10-11 7:35 ` [PATCH v1 2/8] perf test: Display number of remaining tests Ian Rogers
@ 2024-10-11 7:35 ` Ian Rogers
2024-10-11 7:35 ` [PATCH v1 4/8] perf test: Avoid list test blocking on writing to stdout Ian Rogers
` (5 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Ian Rogers @ 2024-10-11 7:35 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
The 'parallel' variable duplicates 'sequential' but is only used during
command-line argument processing. Reduce its scope to make the behavior
clearer.
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/tests/builtin-test.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index df0466d3def6..6b8ef198d975 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -41,9 +41,6 @@
static bool dont_fork;
/* Don't fork the tests in parallel and wait for their completion. */
static bool sequential = true;
-/* Do it in parallel, lacks infrastructure to avoid running tests that clash for resources,
- * So leave it as the developers choice to enable while working on the needed infra */
-static bool parallel;
const char *dso_to_test;
const char *test_objdump_path = "objdump";
@@ -566,6 +563,12 @@ int cmd_test(int argc, const char **argv)
};
const char *skip = NULL;
const char *workload = NULL;
+ /*
+ * Run tests in parallel, lacks infrastructure to avoid running tests
+ * that clash for resources, So leave it as the developers choice to
+ * enable while working on the needed infra.
+ */
+ bool parallel = false;
const struct option test_options[] = {
OPT_STRING('s', "skip", &skip, "tests", "tests to skip"),
OPT_INCR('v', "verbose", &verbose,
--
2.47.0.rc1.288.g06298d1525-goog
* [PATCH v1 4/8] perf test: Avoid list test blocking on writing to stdout
2024-10-11 7:35 [PATCH v1 0/8] Run tests in parallel showing number of tests running Ian Rogers
` (2 preceding siblings ...)
2024-10-11 7:35 ` [PATCH v1 3/8] perf test: Reduce scope of parallel variable Ian Rogers
@ 2024-10-11 7:35 ` Ian Rogers
2024-10-11 7:35 ` [PATCH v1 5/8] perf test: Tag parallel failing shell tests with "(exclusive)" Ian Rogers
` (4 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Ian Rogers @ 2024-10-11 7:35 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
Python's json.tool outputs the input JSON to stdout. Redirect it to
/dev/null to avoid blocking on stdout writes.
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/tests/shell/list.sh | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/tools/perf/tests/shell/list.sh b/tools/perf/tests/shell/list.sh
index 8a868ae64560..76a9846cff22 100755
--- a/tools/perf/tests/shell/list.sh
+++ b/tools/perf/tests/shell/list.sh
@@ -24,8 +24,11 @@ trap trap_cleanup EXIT TERM INT
test_list_json() {
echo "Json output test"
+ # Generate perf list json output into list_output file.
perf list -j -o "${list_output}"
- $PYTHON -m json.tool "${list_output}"
+ # Validate the json using python, redirect the json copy to /dev/null as
+ # otherwise the test may block writing to stdout.
+ $PYTHON -m json.tool "${list_output}" /dev/null
echo "Json output test [Success]"
}
--
2.47.0.rc1.288.g06298d1525-goog
* [PATCH v1 5/8] perf test: Tag parallel failing shell tests with "(exclusive)"
2024-10-11 7:35 [PATCH v1 0/8] Run tests in parallel showing number of tests running Ian Rogers
` (3 preceding siblings ...)
2024-10-11 7:35 ` [PATCH v1 4/8] perf test: Avoid list test blocking on writing to stdout Ian Rogers
@ 2024-10-11 7:35 ` Ian Rogers
2024-10-11 10:01 ` James Clark
2024-10-11 7:35 ` [PATCH v1 6/8] perf test: Add a signal handler around running a test Ian Rogers
` (3 subsequent siblings)
8 siblings, 1 reply; 14+ messages in thread
From: Ian Rogers @ 2024-10-11 7:35 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
Some shell tests compete for resources and so can't run alongside other
tests; tag such tests. The "(exclusive)" wording stems from
shared/exclusive locking, describing how these tests run as if holding
a lock.
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/tests/shell/perftool-testsuite_report.sh | 2 +-
tools/perf/tests/shell/record.sh | 2 +-
tools/perf/tests/shell/record_lbr.sh | 2 +-
tools/perf/tests/shell/record_offcpu.sh | 2 +-
tools/perf/tests/shell/stat_all_pmu.sh | 2 +-
tools/perf/tests/shell/test_intel_pt.sh | 2 +-
tools/perf/tests/shell/test_stat_intel_tpebs.sh | 2 +-
7 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/tools/perf/tests/shell/perftool-testsuite_report.sh b/tools/perf/tests/shell/perftool-testsuite_report.sh
index 973012ce92a7..a8cf75b4e77e 100755
--- a/tools/perf/tests/shell/perftool-testsuite_report.sh
+++ b/tools/perf/tests/shell/perftool-testsuite_report.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# perftool-testsuite_report
+# perftool-testsuite_report (exclusive)
# SPDX-License-Identifier: GPL-2.0
test -d "$(dirname "$0")/base_report" || exit 2
diff --git a/tools/perf/tests/shell/record.sh b/tools/perf/tests/shell/record.sh
index 8d6366d96883..f7d8c5b243a4 100755
--- a/tools/perf/tests/shell/record.sh
+++ b/tools/perf/tests/shell/record.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# perf record tests
+# perf record tests (exclusive)
# SPDX-License-Identifier: GPL-2.0
set -e
diff --git a/tools/perf/tests/shell/record_lbr.sh b/tools/perf/tests/shell/record_lbr.sh
index 32314641217e..8d750ee631f8 100755
--- a/tools/perf/tests/shell/record_lbr.sh
+++ b/tools/perf/tests/shell/record_lbr.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# perf record LBR tests
+# perf record LBR tests (exclusive)
# SPDX-License-Identifier: GPL-2.0
set -e
diff --git a/tools/perf/tests/shell/record_offcpu.sh b/tools/perf/tests/shell/record_offcpu.sh
index 67c925f3a15a..678947fe69ee 100755
--- a/tools/perf/tests/shell/record_offcpu.sh
+++ b/tools/perf/tests/shell/record_offcpu.sh
@@ -1,5 +1,5 @@
#!/bin/sh
-# perf record offcpu profiling tests
+# perf record offcpu profiling tests (exclusive)
# SPDX-License-Identifier: GPL-2.0
set -e
diff --git a/tools/perf/tests/shell/stat_all_pmu.sh b/tools/perf/tests/shell/stat_all_pmu.sh
index 42456d89c5da..8b148b300be1 100755
--- a/tools/perf/tests/shell/stat_all_pmu.sh
+++ b/tools/perf/tests/shell/stat_all_pmu.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# perf all PMU test
+# perf all PMU test (exclusive)
# SPDX-License-Identifier: GPL-2.0
set -e
diff --git a/tools/perf/tests/shell/test_intel_pt.sh b/tools/perf/tests/shell/test_intel_pt.sh
index 723ec501f99a..660991d17607 100755
--- a/tools/perf/tests/shell/test_intel_pt.sh
+++ b/tools/perf/tests/shell/test_intel_pt.sh
@@ -1,5 +1,5 @@
#!/bin/sh
-# Miscellaneous Intel PT testing
+# Miscellaneous Intel PT testing (exclusive)
# SPDX-License-Identifier: GPL-2.0
set -e
diff --git a/tools/perf/tests/shell/test_stat_intel_tpebs.sh b/tools/perf/tests/shell/test_stat_intel_tpebs.sh
index 9a11f42d153c..f95fc64bf0a7 100755
--- a/tools/perf/tests/shell/test_stat_intel_tpebs.sh
+++ b/tools/perf/tests/shell/test_stat_intel_tpebs.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# test Intel TPEBS counting mode
+# test Intel TPEBS counting mode (exclusive)
# SPDX-License-Identifier: GPL-2.0
set -e
--
2.47.0.rc1.288.g06298d1525-goog
* Re: [PATCH v1 5/8] perf test: Tag parallel failing shell tests with "(exclusive)"
2024-10-11 7:35 ` [PATCH v1 5/8] perf test: Tag parallel failing shell tests with "(exclusive)" Ian Rogers
@ 2024-10-11 10:01 ` James Clark
2024-10-11 10:29 ` James Clark
0 siblings, 1 reply; 14+ messages in thread
From: James Clark @ 2024-10-11 10:01 UTC (permalink / raw)
To: Ian Rogers
Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Adrian Hunter, Kan Liang, Howard Chu, Athira Jajeev,
Michael Petlan, Veronika Molnarova, Dapeng Mi, Thomas Richter,
Ilya Leoshkevich, Colin Ian King, Weilin Wang, Andi Kleen,
linux-kernel, linux-perf-users
On 11/10/2024 8:35 am, Ian Rogers wrote:
> Some shell tests compete for resources and so can't run with other
> tests, tag such tests. The "(exclusive)" stems from shared/exclusive
> to describe how the tests run as if holding a lock.
>
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
> tools/perf/tests/shell/perftool-testsuite_report.sh | 2 +-
> tools/perf/tests/shell/record.sh | 2 +-
> tools/perf/tests/shell/record_lbr.sh | 2 +-
> tools/perf/tests/shell/record_offcpu.sh | 2 +-
> tools/perf/tests/shell/stat_all_pmu.sh | 2 +-
> tools/perf/tests/shell/test_intel_pt.sh | 2 +-
> tools/perf/tests/shell/test_stat_intel_tpebs.sh | 2 +-
> 7 files changed, 7 insertions(+), 7 deletions(-)
>
The following ones would also need to be marked as exclusive, not sure
if you can include those here or you want me to send a patch:
tools/perf/tests/shell/coresight/asm_pure_loop.sh
tools/perf/tests/shell/coresight/memcpy_thread_16k_10.sh
tools/perf/tests/shell/coresight/thread_loop_check_tid_10.sh
tools/perf/tests/shell/coresight/thread_loop_check_tid_2.sh
tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh
tools/perf/tests/shell/test_arm_coresight.sh
tools/perf/tests/shell/test_arm_coresight_disasm.sh
tools/perf/tests/shell/test_arm_spe.sh
In theory all tests using probes would also need to be exclusive because
they install and delete probes globally. In practice I don't think I saw
any failures, whether that's just luck or because of some skips I'm not
sure.
And this one fails consistently in parallel mode on Arm:
22: Number of exit events of a simple workload
: FAILED!
But it's a C test so I assume there isn't an exclusive mechanism to skip
it? It doesn't look like it should be affected though, so maybe we could
leave it failing as a real bug.
Thanks
James
* Re: [PATCH v1 5/8] perf test: Tag parallel failing shell tests with "(exclusive)"
2024-10-11 10:01 ` James Clark
@ 2024-10-11 10:29 ` James Clark
2024-10-11 16:52 ` Ian Rogers
0 siblings, 1 reply; 14+ messages in thread
From: James Clark @ 2024-10-11 10:29 UTC (permalink / raw)
To: Ian Rogers
Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Adrian Hunter, Kan Liang, Howard Chu, Athira Jajeev,
Michael Petlan, Veronika Molnarova, Dapeng Mi, Thomas Richter,
Ilya Leoshkevich, Colin Ian King, Weilin Wang, Andi Kleen,
linux-kernel, linux-perf-users
On 11/10/2024 11:01 am, James Clark wrote:
>
>
> On 11/10/2024 8:35 am, Ian Rogers wrote:
>> Some shell tests compete for resources and so can't run with other
>> tests, tag such tests. The "(exclusive)" stems from shared/exclusive
>> to describe how the tests run as if holding a lock.
>>
>> Signed-off-by: Ian Rogers <irogers@google.com>
>> ---
>> tools/perf/tests/shell/perftool-testsuite_report.sh | 2 +-
>> tools/perf/tests/shell/record.sh | 2 +-
>> tools/perf/tests/shell/record_lbr.sh | 2 +-
>> tools/perf/tests/shell/record_offcpu.sh | 2 +-
>> tools/perf/tests/shell/stat_all_pmu.sh | 2 +-
>> tools/perf/tests/shell/test_intel_pt.sh | 2 +-
>> tools/perf/tests/shell/test_stat_intel_tpebs.sh | 2 +-
>> 7 files changed, 7 insertions(+), 7 deletions(-)
>>
>
> The following ones would also need to be marked as exclusive, not sure
> if you can include those here or you want me to send a patch:
>
> tools/perf/tests/shell/coresight/asm_pure_loop.sh
> tools/perf/tests/shell/coresight/memcpy_thread_16k_10.sh
> tools/perf/tests/shell/coresight/thread_loop_check_tid_10.sh
> tools/perf/tests/shell/coresight/thread_loop_check_tid_2.sh
> tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh
> tools/perf/tests/shell/test_arm_coresight.sh
> tools/perf/tests/shell/test_arm_coresight_disasm.sh
> tools/perf/tests/shell/test_arm_spe.sh
>
> In theory all tests using probes would also need to be exclusive because
> they install and delete probes globally. In practice I don't think I saw
> any failures, whether that's just luck or because of some skips I'm not
> sure.
>
> And this one fails consistently in parallel mode on Arm:
>
> 22: Number of exit events of a simple workload
> : FAILED!
>
> But it's a C test so I assume there isn't an exclusive mechanism to skip
> it? It doesn't look like it should be affected though, so maybe we could
> leave it failing as a real bug.
>
Oh I see it says in the cover letter it can be set for C tests. But can
that be done through all the existing TEST_CASE() etc macros?
* Re: [PATCH v1 5/8] perf test: Tag parallel failing shell tests with "(exclusive)"
2024-10-11 10:29 ` James Clark
@ 2024-10-11 16:52 ` Ian Rogers
0 siblings, 0 replies; 14+ messages in thread
From: Ian Rogers @ 2024-10-11 16:52 UTC (permalink / raw)
To: James Clark
Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Adrian Hunter, Kan Liang, Howard Chu, Athira Jajeev,
Michael Petlan, Veronika Molnarova, Dapeng Mi, Thomas Richter,
Ilya Leoshkevich, Colin Ian King, Weilin Wang, Andi Kleen,
linux-kernel, linux-perf-users
On Fri, Oct 11, 2024 at 3:29 AM James Clark <james.clark@linaro.org> wrote:
>
>
>
> On 11/10/2024 11:01 am, James Clark wrote:
> >
> >
> > On 11/10/2024 8:35 am, Ian Rogers wrote:
> >> Some shell tests compete for resources and so can't run with other
> >> tests, tag such tests. The "(exclusive)" stems from shared/exclusive
> >> to describe how the tests run as if holding a lock.
> >>
> >> Signed-off-by: Ian Rogers <irogers@google.com>
> >> ---
> >> tools/perf/tests/shell/perftool-testsuite_report.sh | 2 +-
> >> tools/perf/tests/shell/record.sh | 2 +-
> >> tools/perf/tests/shell/record_lbr.sh | 2 +-
> >> tools/perf/tests/shell/record_offcpu.sh | 2 +-
> >> tools/perf/tests/shell/stat_all_pmu.sh | 2 +-
> >> tools/perf/tests/shell/test_intel_pt.sh | 2 +-
> >> tools/perf/tests/shell/test_stat_intel_tpebs.sh | 2 +-
> >> 7 files changed, 7 insertions(+), 7 deletions(-)
> >>
> >
> > The following ones would also need to be marked as exclusive, not sure
> > if you can include those here or you want me to send a patch:
> >
> > tools/perf/tests/shell/coresight/asm_pure_loop.sh
> > tools/perf/tests/shell/coresight/memcpy_thread_16k_10.sh
> > tools/perf/tests/shell/coresight/thread_loop_check_tid_10.sh
> > tools/perf/tests/shell/coresight/thread_loop_check_tid_2.sh
> > tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh
> > tools/perf/tests/shell/test_arm_coresight.sh
> > tools/perf/tests/shell/test_arm_coresight_disasm.sh
> > tools/perf/tests/shell/test_arm_spe.sh
I'll add it to v2 and add your suggested-by. Thanks.
> > In theory all tests using probes would also need to be exclusive because
> > they install and delete probes globally. In practice I don't think I saw
> > any failures, whether that's just luck or because of some skips I'm not
> > sure.
> >
> > And this one fails consistently in parallel mode on Arm:
> >
> > 22: Number of exit events of a simple workload
> > : FAILED!
This looks like it could be a real issue. I believe the test is doing
uid filtering:
https://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/tests/task-exit.c?h=perf-tools-next#n49
uid filtering scans /proc looking for processes of the given uid. This
is inherently racy with processes exiting, and we'd be better off using
a BPF filter to drop samples with the wrong uid - same effect but no
racy /proc scan. I've seen the racy /proc scan cause termination
issues, so possibly this is the issue you are seeing.
It could also be that tweaking the retry count will fix things:
https://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/tests/task-exit.c?h=perf-tools-next#n134
Anyway, for now I think it is expedient to mark the test as exclusive.
> > But it's a C test so I assume there isn't an exclusive mechanism to skip
> > it? It doesn't look like it should be affected though, so maybe we could
> > leave it failing as a real bug.
> >
>
> Oh I see it says in the cover letter it can be set for C tests. But can
> that be done through all the existing TEST_CASE() etc macros?
Currently only whole suites can be exclusive. We could add macros for
exclusive C test cases, but my preference would be to make the test
work non-exclusively. I'll add support for exclusive test cases and
mark this one.
Thanks,
Ian
* [PATCH v1 6/8] perf test: Add a signal handler around running a test
2024-10-11 7:35 [PATCH v1 0/8] Run tests in parallel showing number of tests running Ian Rogers
` (4 preceding siblings ...)
2024-10-11 7:35 ` [PATCH v1 5/8] perf test: Tag parallel failing shell tests with "(exclusive)" Ian Rogers
@ 2024-10-11 7:35 ` Ian Rogers
2024-10-11 7:35 ` [PATCH v1 7/8] perf test: Run parallel tests in two passes Ian Rogers
` (2 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Ian Rogers @ 2024-10-11 7:35 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
Add a signal handler around running a test. If a signal occurs during
the test, a siglongjmp unwinds the stack and output is flushed. The
global run_test_jmp_buf is safe to use as it is either unique per
forked child, or not shared concurrently during sequential execution.
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/tests/builtin-test.c | 28 +++++++++++++++++++++++++++-
1 file changed, 27 insertions(+), 1 deletion(-)
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 6b8ef198d975..45a31376fdb0 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -8,6 +8,7 @@
#include <errno.h>
#include <poll.h>
#include <unistd.h>
+#include <setjmp.h>
#include <string.h>
#include <stdlib.h>
#include <sys/types.h>
@@ -226,16 +227,41 @@ struct child_test {
int subtest;
};
+static jmp_buf run_test_jmp_buf;
+
+static void child_test_sig_handler(int sig)
+{
+ siglongjmp(run_test_jmp_buf, sig);
+}
+
static int run_test_child(struct child_process *process)
{
- struct child_test *child = container_of(process, struct child_test, process);
+ const int signals[] = {
+ SIGABRT, SIGBUS, SIGFPE, SIGILL, SIGINT, SIGPIPE, SIGQUIT, SIGSEGV, SIGTERM,
+ };
+ static struct child_test *child;
int err;
+ err = sigsetjmp(run_test_jmp_buf, 1);
+ if (err) {
+ fprintf(stderr, "\n---- unexpected signal (%d) ----\n", err);
+ err = err > 0 ? -err : -1;
+ goto err_out;
+ }
+
+ child = container_of(process, struct child_test, process);
+ for (size_t i = 0; i < ARRAY_SIZE(signals); i++)
+ signal(signals[i], child_test_sig_handler);
+
pr_debug("--- start ---\n");
pr_debug("test child forked, pid %d\n", getpid());
err = test_function(child->test, child->subtest)(child->test, child->subtest);
pr_debug("---- end(%d) ----\n", err);
+
+err_out:
fflush(NULL);
+ for (size_t i = 0; i < ARRAY_SIZE(signals); i++)
+ signal(signals[i], SIG_DFL);
return -err;
}
--
2.47.0.rc1.288.g06298d1525-goog
* [PATCH v1 7/8] perf test: Run parallel tests in two passes
2024-10-11 7:35 [PATCH v1 0/8] Run tests in parallel showing number of tests running Ian Rogers
` (5 preceding siblings ...)
2024-10-11 7:35 ` [PATCH v1 6/8] perf test: Add a signal handler around running a test Ian Rogers
@ 2024-10-11 7:35 ` Ian Rogers
2024-10-11 7:35 ` [PATCH v1 8/8] perf test: Make parallel testing the default Ian Rogers
2024-10-11 10:03 ` [PATCH v1 0/8] Run tests in parallel showing number of tests running James Clark
8 siblings, 0 replies; 14+ messages in thread
From: Ian Rogers @ 2024-10-11 7:35 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
In pass 1, run all tests that succeed when run in parallel. In pass 2,
sequentially run all remaining tests, i.e. those flagged as
"exclusive". Sequential and dont_fork tests still run in pass 1. Add
error handling to finish tests if starting a test fails.
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/tests/builtin-test.c | 149 +++++++++++++++++++------------
tools/perf/tests/tests-scripts.c | 5 ++
tools/perf/tests/tests.h | 1 +
3 files changed, 96 insertions(+), 59 deletions(-)
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 45a31376fdb0..fd6b50f4d145 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -239,7 +239,7 @@ static int run_test_child(struct child_process *process)
const int signals[] = {
SIGABRT, SIGBUS, SIGFPE, SIGILL, SIGINT, SIGPIPE, SIGQUIT, SIGSEGV, SIGTERM,
};
- static struct child_test *child;
+ struct child_test *child = container_of(process, struct child_test, process);
int err;
err = sigsetjmp(run_test_jmp_buf, 1);
@@ -249,7 +249,6 @@ static int run_test_child(struct child_process *process)
goto err_out;
}
- child = container_of(process, struct child_test, process);
for (size_t i = 0; i < ARRAY_SIZE(signals); i++)
signal(signals[i], child_test_sig_handler);
@@ -302,19 +301,25 @@ static int print_test_result(struct test_suite *t, int i, int subtest, int resul
return 0;
}
-static int finish_test(struct child_test **child_tests, int running_test, int child_test_num,
- int width)
+static void finish_test(struct child_test **child_tests, int running_test, int child_test_num,
+ int width)
{
struct child_test *child_test = child_tests[running_test];
- struct test_suite *t = child_test->test;
- int i = child_test->test_num;
- int subi = child_test->subtest;
- int err = child_test->process.err;
+ struct test_suite *t;
+ int i, subi, err;
bool err_done = false;
struct strbuf err_output = STRBUF_INIT;
int last_tests_in_progress = -1;
int ret;
+ if (child_test == NULL) {
+ /* Test wasn't started. */
+ return;
+ }
+ t = child_test->test;
+ i = child_test->test_num;
+ subi = child_test->subtest;
+ err = child_test->process.err;
/*
* For test suites with subtests, display the suite name ahead of the
* sub test names.
@@ -344,6 +349,8 @@ static int finish_test(struct child_test **child_tests, int running_test, int ch
int tests_in_progress = running_test;
for (int y = running_test; y < child_test_num; y++) {
+ if (child_tests[y] == NULL)
+ continue;
if (check_if_command_finished(&child_tests[y]->process))
tests_in_progress++;
}
@@ -397,23 +404,32 @@ static int finish_test(struct child_test **child_tests, int running_test, int ch
print_test_result(t, i, subi, ret, width, /*remaining=*/0);
if (err > 0)
close(err);
- return 0;
+ zfree(&child_tests[running_test]);
}
static int start_test(struct test_suite *test, int i, int subi, struct child_test **child,
- int width)
+ int width, int pass)
{
int err;
*child = NULL;
if (dont_fork) {
- pr_debug("--- start ---\n");
- err = test_function(test, subi)(test, subi);
- pr_debug("---- end ----\n");
- print_test_result(test, i, subi, err, width, /*remaining=*/0);
+ if (pass == 1) {
+ pr_debug("--- start ---\n");
+ err = test_function(test, subi)(test, subi);
+ pr_debug("---- end ----\n");
+ print_test_result(test, i, subi, err, width, /*remaining=*/0);
+ }
+ return 0;
+ }
+ if (pass == 1 && !sequential && test->exclusive) {
+ /* When parallel, skip exclusive tests on the first pass. */
+ return 0;
+ }
+ if (pass != 1 && (sequential || !test->exclusive)) {
+ /* Sequential and non-exclusive tests were run on the first pass. */
return 0;
}
-
*child = zalloc(sizeof(**child));
if (!*child)
return -ENOMEM;
@@ -432,10 +448,14 @@ static int start_test(struct test_suite *test, int i, int subi, struct child_tes
(*child)->process.err = -1;
}
(*child)->process.no_exec_cmd = run_test_child;
- err = start_command(&(*child)->process);
- if (err || !sequential)
- return err;
- return finish_test(child, /*running_test=*/0, /*child_test_num=*/1, width);
+ if (sequential || pass == 2) {
+ err = start_command(&(*child)->process);
+ if (err)
+ return err;
+ finish_test(child, /*running_test=*/0, /*child_test_num=*/1, width);
+ return 0;
+ }
+ return start_command(&(*child)->process);
}
#define for_each_test(j, k, t) \
@@ -445,12 +465,11 @@ static int start_test(struct test_suite *test, int i, int subi, struct child_tes
static int __cmd_test(int argc, const char *argv[], struct intlist *skiplist)
{
struct test_suite *t;
- unsigned int j, k;
- int i = 0;
int width = 0;
+ unsigned int j, k;
size_t num_tests = 0;
struct child_test **child_tests;
- int child_test_num = 0;
+ int err = 0;
for_each_test(j, k, t) {
int len = strlen(test_description(t, -1));
@@ -473,62 +492,73 @@ static int __cmd_test(int argc, const char *argv[], struct intlist *skiplist)
if (!child_tests)
return -ENOMEM;
- for_each_test(j, k, t) {
- int curr = i++;
-
- if (!perf_test__matches(test_description(t, -1), curr, argc, argv)) {
- bool skip = true;
+ /*
+ * In parallel mode pass 1 runs non-exclusive tests in parallel, pass 2
+ * runs the exclusive tests sequentially. In other modes all tests are
+ * run in pass 1.
+ */
+ for (int pass = 1; pass <= 2; pass++) {
+ int child_test_num = 0;
+ int i = 0;
+
+ for_each_test(j, k, t) {
+ int curr = i++;
+
+ if (!perf_test__matches(test_description(t, -1), curr, argc, argv)) {
+ /*
+ * Test suite shouldn't be run based on
+ * description. See if subtest should.
+ */
+ bool skip = true;
+
+ for (int subi = 0, subn = num_subtests(t); subi < subn; subi++) {
+ if (perf_test__matches(test_description(t, subi),
+ curr, argc, argv))
+ skip = false;
+ }
- for (int subi = 0, subn = num_subtests(t); subi < subn; subi++) {
- if (perf_test__matches(test_description(t, subi),
- curr, argc, argv))
- skip = false;
+ if (skip)
+ continue;
}
- if (skip)
+ if (intlist__find(skiplist, i)) {
+ pr_info("%3d: %-*s:", curr + 1, width, test_description(t, -1));
+ color_fprintf(stderr, PERF_COLOR_YELLOW, " Skip (user override)\n");
continue;
- }
-
- if (intlist__find(skiplist, i)) {
- pr_info("%3d: %-*s:", curr + 1, width, test_description(t, -1));
- color_fprintf(stderr, PERF_COLOR_YELLOW, " Skip (user override)\n");
- continue;
- }
-
- if (!has_subtests(t)) {
- int err = start_test(t, curr, -1, &child_tests[child_test_num++], width);
+ }
- if (err) {
- /* TODO: if !sequential waitpid the already forked children. */
- free(child_tests);
- return err;
+ if (!has_subtests(t)) {
+ err = start_test(t, curr, -1, &child_tests[child_test_num++],
+ width, pass);
+ if (err)
+ goto err_out;
+ continue;
}
- } else {
for (int subi = 0, subn = num_subtests(t); subi < subn; subi++) {
- int err;
-
if (!perf_test__matches(test_description(t, subi),
curr, argc, argv))
continue;
err = start_test(t, curr, subi, &child_tests[child_test_num++],
- width);
+ width, pass);
if (err)
- return err;
+ goto err_out;
}
}
- }
- for (i = 0; i < child_test_num; i++) {
if (!sequential) {
- int ret = finish_test(child_tests, i, child_test_num, width);
-
- if (ret)
- return ret;
+ /* Parallel mode starts tests but doesn't finish them. Do that now. */
+ for (size_t x = 0; x < num_tests; x++)
+ finish_test(child_tests, x, num_tests, width);
}
- free(child_tests[i]);
+ }
+err_out:
+ if (err) {
+ pr_err("Internal test harness failure. Completing any started tests:\n");
+ for (size_t x = 0; x < num_tests; x++)
+ finish_test(child_tests, x, num_tests, width);
}
free(child_tests);
- return 0;
+ return err;
}
static int perf_test__list(int argc, const char **argv)
@@ -638,6 +668,7 @@ int cmd_test(int argc, const char **argv)
symbol_conf.priv_size = sizeof(int);
symbol_conf.try_vmlinux_path = true;
+
if (symbol__init(NULL) < 0)
return -1;
diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
index ed114b044293..443b55b0e05c 100644
--- a/tools/perf/tests/tests-scripts.c
+++ b/tools/perf/tests/tests-scripts.c
@@ -203,6 +203,11 @@ static void append_script(int dir_fd, const char *name, char *desc,
test_suite->desc = desc;
test_suite->test_cases = tests;
test_suite->priv = strdup_check(filename);
+ /*
+ * Note, tests with lots of stdout can't be exclusive as if they block
+ * then the file lock is held and no one makes progress.
+ */
+ test_suite->exclusive = strstr(desc, "(exclusive)") != NULL;
/* Realloc is good enough, though we could realloc by chunks, not that
* anyone will ever measure performance here */
result_tmp = realloc(*result, (*result_sz + 1) * sizeof(*result_tmp));
diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h
index 6ea2be86b7bf..1d3e1f91871c 100644
--- a/tools/perf/tests/tests.h
+++ b/tools/perf/tests/tests.h
@@ -42,6 +42,7 @@ struct test_suite {
const char *desc;
struct test_case *test_cases;
void *priv;
+ bool exclusive;
};
#define DECLARE_SUITE(name) \
--
2.47.0.rc1.288.g06298d1525-goog
* [PATCH v1 8/8] perf test: Make parallel testing the default
2024-10-11 7:35 [PATCH v1 0/8] Run tests in parallel showing number of tests running Ian Rogers
` (6 preceding siblings ...)
2024-10-11 7:35 ` [PATCH v1 7/8] perf test: Run parallel tests in two passes Ian Rogers
@ 2024-10-11 7:35 ` Ian Rogers
2024-10-11 10:03 ` [PATCH v1 0/8] Run tests in parallel showing number of tests running James Clark
8 siblings, 0 replies; 14+ messages in thread
From: Ian Rogers @ 2024-10-11 7:35 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, Kan Liang, James Clark, Howard Chu,
Athira Jajeev, Michael Petlan, Veronika Molnarova, Dapeng Mi,
Thomas Richter, Ilya Leoshkevich, Colin Ian King, Weilin Wang,
Andi Kleen, linux-kernel, linux-perf-users
Now that C tests can have the "exclusive" flag to run without other
tests, and shell tests can add "(exclusive)" to their description, run
tests in parallel by default. Tests which flake when run in parallel
can be marked exclusive to resolve the problem.
Non-scientifically, the reduction in `perf test` execution time is
from 8m35.890s to 3m55.115s on a Tigerlake laptop. So the tests
complete in less than half the time.
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/tests/builtin-test.c | 13 ++-----------
1 file changed, 2 insertions(+), 11 deletions(-)
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index fd6b50f4d145..2d62b4961f85 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -40,8 +40,8 @@
* making them easier to debug.
*/
static bool dont_fork;
-/* Don't fork the tests in parallel and wait for their completion. */
-static bool sequential = true;
+/* Fork the tests in parallel and wait for their completion. */
+static bool sequential;
const char *dso_to_test;
const char *test_objdump_path = "objdump";
@@ -619,19 +619,12 @@ int cmd_test(int argc, const char **argv)
};
const char *skip = NULL;
const char *workload = NULL;
- /*
- * Run tests in parallel, lacks infrastructure to avoid running tests
- * that clash for resources, So leave it as the developers choice to
- * enable while working on the needed infra.
- */
- bool parallel = false;
const struct option test_options[] = {
OPT_STRING('s', "skip", &skip, "tests", "tests to skip"),
OPT_INCR('v', "verbose", &verbose,
"be more verbose (show symbol address, etc)"),
OPT_BOOLEAN('F', "dont-fork", &dont_fork,
"Do not fork for testcase"),
- OPT_BOOLEAN('p', "parallel", ¶llel, "Run the tests in parallel"),
OPT_BOOLEAN('S', "sequential", &sequential,
"Run the tests one after another rather than in parallel"),
OPT_STRING('w', "workload", &workload, "work", "workload to run for testing"),
@@ -662,8 +655,6 @@ int cmd_test(int argc, const char **argv)
if (dont_fork)
sequential = true;
- else if (parallel)
- sequential = false;
symbol_conf.priv_size = sizeof(int);
symbol_conf.try_vmlinux_path = true;
--
2.47.0.rc1.288.g06298d1525-goog
* Re: [PATCH v1 0/8] Run tests in parallel showing number of tests running
2024-10-11 7:35 [PATCH v1 0/8] Run tests in parallel showing number of tests running Ian Rogers
` (7 preceding siblings ...)
2024-10-11 7:35 ` [PATCH v1 8/8] perf test: Make parallel testing the default Ian Rogers
@ 2024-10-11 10:03 ` James Clark
2024-10-11 16:26 ` Ian Rogers
8 siblings, 1 reply; 14+ messages in thread
From: James Clark @ 2024-10-11 10:03 UTC (permalink / raw)
To: Ian Rogers
Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Adrian Hunter, Kan Liang, Howard Chu, Athira Jajeev,
Michael Petlan, Veronika Molnarova, Dapeng Mi, Thomas Richter,
Ilya Leoshkevich, Colin Ian King, Weilin Wang, Andi Kleen,
linux-kernel, linux-perf-users
On 11/10/2024 8:35 am, Ian Rogers wrote:
> Avoid waitpid so that stdout/stderr aren't destroyed prior to wanting
> to read them for display. When running on a color terminal, display
> the number of running tests (1 if sequential). To avoid previous
> flicker, only delete and refresh the display line when it changes. An
> earlier version of this code is here:
> https://lore.kernel.org/lkml/20240701044236.475098-1-irogers@google.com/
>
> Add a signal handler for perf tests so that unexpected signals are
> displayed and test clean up is possible.
>
> In perf test add an "exclusive" flag that causes a test to be run with
> no other test. Set this flag manually for C tests and via a
> "(exclusive)" in the test description for shell tests. Add the flag to
> shell tests that may fail when run with other tests.
>
> Change the perf test loop to run in two passes. For parallel
> execution, the first pass runs all tests that can be run in parallel
> then the 2nd runs remaining tests sequentially. This causes the
> "exclusive" tests to be run last and with test numbers moderately out
> of alignment.
>
> Change the default to be to run tests in parallel. Running tests in
> parallel brings the execution time down to less than half.
>
> Ian Rogers (8):
> tools subcmd: Add non-waitpid check_if_command_finished()
> perf test: Display number of remaining tests
> perf test: Reduce scope of parallel variable
> perf test: Avoid list test blocking on writing to stdout
> perf test: Tag parallel failing shell tests with "(exclusive)"
> perf test: Add a signal handler around running a test
> perf test: Run parallel tests in two passes
> perf test: Make parallel testing the default
>
> tools/lib/subcmd/run-command.c | 33 +++
> tools/perf/tests/builtin-test.c | 267 ++++++++++++------
> tools/perf/tests/shell/list.sh | 5 +-
> .../tests/shell/perftool-testsuite_report.sh | 2 +-
> tools/perf/tests/shell/record.sh | 2 +-
> tools/perf/tests/shell/record_lbr.sh | 2 +-
> tools/perf/tests/shell/record_offcpu.sh | 2 +-
> tools/perf/tests/shell/stat_all_pmu.sh | 2 +-
> tools/perf/tests/shell/test_intel_pt.sh | 2 +-
> .../perf/tests/shell/test_stat_intel_tpebs.sh | 2 +-
> tools/perf/tests/tests-scripts.c | 5 +
> tools/perf/tests/tests.h | 1 +
> tools/perf/util/color.h | 1 +
> 13 files changed, 226 insertions(+), 100 deletions(-)
>
Not really a big deal but remaining doesn't work when a subset of tests
are run:
$ perf test 111 110
110: Check Arm64 callgraphs are complete in fp mode : Ok
111: Check Arm CoreSight trace data recording and synthesized samples:
Running (150 remaining)
Other than that:
Tested-by: James Clark <james.clark@linaro.org>
* Re: [PATCH v1 0/8] Run tests in parallel showing number of tests running
2024-10-11 10:03 ` [PATCH v1 0/8] Run tests in parallel showing number of tests running James Clark
@ 2024-10-11 16:26 ` Ian Rogers
0 siblings, 0 replies; 14+ messages in thread
From: Ian Rogers @ 2024-10-11 16:26 UTC (permalink / raw)
To: James Clark
Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Adrian Hunter, Kan Liang, Howard Chu, Athira Jajeev,
Michael Petlan, Veronika Molnarova, Dapeng Mi, Thomas Richter,
Ilya Leoshkevich, Colin Ian King, Weilin Wang, Andi Kleen,
linux-kernel, linux-perf-users
On Fri, Oct 11, 2024 at 3:03 AM James Clark <james.clark@linaro.org> wrote:
>
>
>
> On 11/10/2024 8:35 am, Ian Rogers wrote:
> > Avoid waitpid so that stdout/stderr aren't destroyed prior to wanting
> > to read them for display. When running on a color terminal, display
> > the number of running tests (1 if sequential). To avoid previous
> > flicker, only delete and refresh the display line when it changes. An
> > earlier version of this code is here:
> > https://lore.kernel.org/lkml/20240701044236.475098-1-irogers@google.com/
> >
> > Add a signal handler for perf tests so that unexpected signals are
> > displayed and test clean up is possible.
> >
> > In perf test add an "exclusive" flag that causes a test to be run with
> > no other test. Set this flag manually for C tests and via a
> > "(exclusive)" in the test description for shell tests. Add the flag to
> > shell tests that may fail when run with other tests.
> >
> > Change the perf test loop to run in two passes. For parallel
> > execution, the first pass runs all tests that can be run in parallel
> > then the 2nd runs remaining tests sequentially. This causes the
> > "exclusive" tests to be run last and with test numbers moderately out
> > of alignment.
> >
> > Change the default to be to run tests in parallel. Running tests in
> > parallel brings the execution time down to less than half.
> >
> > Ian Rogers (8):
> > tools subcmd: Add non-waitpid check_if_command_finished()
> > perf test: Display number of remaining tests
> > perf test: Reduce scope of parallel variable
> > perf test: Avoid list test blocking on writing to stdout
> > perf test: Tag parallel failing shell tests with "(exclusive)"
> > perf test: Add a signal handler around running a test
> > perf test: Run parallel tests in two passes
> > perf test: Make parallel testing the default
> >
> > tools/lib/subcmd/run-command.c | 33 +++
> > tools/perf/tests/builtin-test.c | 267 ++++++++++++------
> > tools/perf/tests/shell/list.sh | 5 +-
> > .../tests/shell/perftool-testsuite_report.sh | 2 +-
> > tools/perf/tests/shell/record.sh | 2 +-
> > tools/perf/tests/shell/record_lbr.sh | 2 +-
> > tools/perf/tests/shell/record_offcpu.sh | 2 +-
> > tools/perf/tests/shell/stat_all_pmu.sh | 2 +-
> > tools/perf/tests/shell/test_intel_pt.sh | 2 +-
> > .../perf/tests/shell/test_stat_intel_tpebs.sh | 2 +-
> > tools/perf/tests/tests-scripts.c | 5 +
> > tools/perf/tests/tests.h | 1 +
> > tools/perf/util/color.h | 1 +
> > 13 files changed, 226 insertions(+), 100 deletions(-)
> >
>
> Not really a big deal but remaining doesn't work when a subset of tests
> are run:
>
> $ perf test 111 110
> 110: Check Arm64 callgraphs are complete in fp mode : Ok
> 111: Check Arm CoreSight trace data recording and synthesized samples:
> Running (150 remaining)
Thanks, I'd been so focussed on other issues that I'd missed testing
like this. v2 will fix the "remaining" count, but I've renamed it
"active": the count doesn't reflect the tests actually remaining, as
exclusive tests are missing from it and it would show 1 test
remaining, which isn't accurate.
> Other than that:
>
> Tested-by: James Clark <james.clark@linaro.org>
Thanks,
Ian