* [PATCH v3 0/9] KVM Selftest Runner
@ 2025-09-30 16:36 Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 1/9] KVM: selftest: Create KVM selftest runner Vipin Sharma
` (8 more replies)
0 siblings, 9 replies; 20+ messages in thread
From: Vipin Sharma @ 2025-09-30 16:36 UTC (permalink / raw)
To: kvm, kvmarm, kvm-riscv
Cc: seanjc, pbonzini, borntraeger, frankja, imbrenda, anup,
atish.patra, zhaotianrui, maobibo, chenhuacai, maz, oliver.upton,
ajones, Vipin Sharma
Hello,
This is v3 of the KVM selftest runner. After incorporating the feedback
given on v2, this series has shrunk from 15 patches to 9. I have tried
to address all of the comments from v2; to the best of my understanding,
none are left open.
To recap (copied from v2), the KVM selftest runner allows running KVM
selftests with added features not present in the default runner provided
by the selftests framework.
This Runner has two broad goals:
1. Make it easier for contributors and maintainers to run various test
configurations, with features like preserving output, controlling
output verbosity, parallelism, and different combinations of command
line arguments.
2. Provide a common place to write interesting and useful combinations
of test command line arguments to improve KVM test coverage. The
default selftests runner provides little to no control over this.
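For illustration, a testcase in this series is just a one-line file
holding a test command and its arguments, like the sample shipped with
patch 1:

```
dirty_log_perf_test -x 2 -v 5 -i 10
```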
Future patches will add features like:
- Print the process id of the test in execution.
- CTRL+C currently spits out lots of warnings (depending on the --jobs
value). This will be fixed in the next version.
- Add more test configurations.
- Provide a way to set up the environment in which the runner will start
tests. For example, setting up huge pages, or stress testing based on
the resources available on the host.
This series is also available on github at:
https://github.com/shvipin/linux kvm/sefltests/runner-v3
v3:
- Created a "tests_install" rule in Makefile.kvm to auto-generate
default testcases, which are ignored via .gitignore.
- Changed the command line option names for passing testcase files,
directories, and executable paths, and for choosing which test
statuses are printed and what is printed for them. Removed certain
other options based on feedback on v2.
- Merged command.py into selftest.py
- Fixed an issue where a timed out test's stdout and stderr were not
printed.
- Reduced the minimum Python version from 3.7 to 3.6.
- Fixed an issue where the test status's numerical value was printed
instead of text like PASSED, FAILED, SKIPPED, etc.
- Added README.rst.
v2: https://lore.kernel.org/kvm/20250606235619.1841595-1-vipinsh@google.com/
- Automatic default test generation.
- Command line flag to provide executables location
- Dump output to filesystem with timestamp
- Accept absolute path of *.test files/directory location
- Sticky status at bottom for the current state of runner.
- Knobs to control output verbosity
- Colored output for terminals.
v1: https://lore.kernel.org/kvm/20250222005943.3348627-1-vipinsh@google.com/
- Parallel test execution.
- Dumping separate output for each test.
- Timeout for test execution
- Specify single test or a test directory.
RFC: https://lore.kernel.org/kvm/20240821223012.3757828-1-vipinsh@google.com/
Vipin Sharma (9):
KVM: selftest: Create KVM selftest runner
KVM: selftests: Provide executables path option to the KVM selftest
runner
KVM: selftests: Add timeout option in selftests runner
KVM: selftests: Add option to save selftest runner output to a
directory
KVM: selftests: Run tests concurrently in KVM selftests runner
KVM: selftests: Add various print flags to KVM selftest runner
KVM: selftests: Print sticky KVM selftests runner status at bottom
KVM: selftests: Add rule to generate default tests for KVM selftests
runner
KVM: selftests: Provide README.rst for KVM selftests runner
tools/testing/selftests/kvm/.gitignore | 6 +-
tools/testing/selftests/kvm/Makefile.kvm | 20 ++
tools/testing/selftests/kvm/runner/README.rst | 54 +++++
.../testing/selftests/kvm/runner/__main__.py | 184 ++++++++++++++++++
.../testing/selftests/kvm/runner/selftest.py | 105 ++++++++++
.../selftests/kvm/runner/test_runner.py | 79 ++++++++
.../2slot_5vcpu_10iter.test | 1 +
.../no_dirty_log_protect.test | 1 +
8 files changed, 449 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/kvm/runner/README.rst
create mode 100644 tools/testing/selftests/kvm/runner/__main__.py
create mode 100644 tools/testing/selftests/kvm/runner/selftest.py
create mode 100644 tools/testing/selftests/kvm/runner/test_runner.py
create mode 100644 tools/testing/selftests/kvm/tests/dirty_log_perf_test/2slot_5vcpu_10iter.test
create mode 100644 tools/testing/selftests/kvm/tests/dirty_log_perf_test/no_dirty_log_protect.test
--
2.51.0.618.g983fd99d29-goog
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH v3 1/9] KVM: selftest: Create KVM selftest runner
2025-09-30 16:36 [PATCH v3 0/9] KVM Selftest Runner Vipin Sharma
@ 2025-09-30 16:36 ` Vipin Sharma
2025-09-30 22:23 ` Vipin Sharma
2025-10-10 9:47 ` Brendan Jackman
2025-09-30 16:36 ` [PATCH v3 2/9] KVM: selftests: Provide executables path option to the " Vipin Sharma
` (7 subsequent siblings)
8 siblings, 2 replies; 20+ messages in thread
From: Vipin Sharma @ 2025-09-30 16:36 UTC (permalink / raw)
To: kvm, kvmarm, kvm-riscv
Cc: seanjc, pbonzini, borntraeger, frankja, imbrenda, anup,
atish.patra, zhaotianrui, maobibo, chenhuacai, maz, oliver.upton,
ajones, Vipin Sharma
Implement a basic KVM selftest runner in Python to run selftests. Add
command line options to select an individual testcase file or a
directory containing multiple testcase files.
After selecting the tests to run, start their execution and print their
final execution status (passed, failed, skipped, no run), stdout, and
stderr on the terminal.
Print the execution status in color on terminals which support it, to
easily distinguish the different statuses of test execution.
If a test fails or times out, then return a non-zero exit code after all
of the tests have completed. If no test fails or times out, then exit
with status 0.
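The exit-code mapping follows the kselftest convention (0 = pass, 4 =
skip, anything else = fail); a minimal sketch of the classification
logic:

```python
# Sketch of the runner's status mapping; mirrors selftest.py's handling
# of a finished process (kselftest convention: 0 = pass, 4 = skip).
def classify(returncode):
    if returncode == 0:
        return "PASSED"
    if returncode == 4:
        return "SKIPPED"
    return "FAILED"
```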
Provide some sample test configuration files to demonstrate the
execution of the runner.
The runner can be started from the tools/testing/selftests/kvm directory as:
python3 runner --dirs tests
OR
python3 runner --testcases \
tests/dirty_log_perf_test/no_dirty_log_protect.test
This is a very basic implementation of the runner. Later patches will
enhance the runner with more features like parallelization, dumping
output to the filesystem, time limits, out-of-tree build runs, etc.
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
tools/testing/selftests/kvm/.gitignore | 4 +-
.../testing/selftests/kvm/runner/__main__.py | 94 +++++++++++++++++++
.../testing/selftests/kvm/runner/selftest.py | 64 +++++++++++++
.../selftests/kvm/runner/test_runner.py | 37 ++++++++
.../2slot_5vcpu_10iter.test | 1 +
.../no_dirty_log_protect.test | 1 +
6 files changed, 200 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/kvm/runner/__main__.py
create mode 100644 tools/testing/selftests/kvm/runner/selftest.py
create mode 100644 tools/testing/selftests/kvm/runner/test_runner.py
create mode 100644 tools/testing/selftests/kvm/tests/dirty_log_perf_test/2slot_5vcpu_10iter.test
create mode 100644 tools/testing/selftests/kvm/tests/dirty_log_perf_test/no_dirty_log_protect.test
diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 1d41a046a7bf..95af97b1ff9e 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -3,10 +3,12 @@
!/**/
!*.c
!*.h
+!*.py
!*.S
!*.sh
+!*.test
!.gitignore
!config
!settings
!Makefile
-!Makefile.kvm
\ No newline at end of file
+!Makefile.kvm
diff --git a/tools/testing/selftests/kvm/runner/__main__.py b/tools/testing/selftests/kvm/runner/__main__.py
new file mode 100644
index 000000000000..8d1a78450e41
--- /dev/null
+++ b/tools/testing/selftests/kvm/runner/__main__.py
@@ -0,0 +1,94 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright 2025 Google LLC
+#
+# Author: vipinsh@google.com (Vipin Sharma)
+
+import argparse
+import logging
+import os
+import sys
+
+from test_runner import TestRunner
+from selftest import SelftestStatus
+
+
+def cli():
+ parser = argparse.ArgumentParser(
+ prog="KVM Selftests Runner",
+ formatter_class=argparse.RawTextHelpFormatter,
+ allow_abbrev=False
+ )
+
+ parser.add_argument("-t",
+ "--testcases",
+ nargs="*",
+ default=[],
+ help="Testcases to run. Provide the space separated testcases paths")
+
+ parser.add_argument("-d",
+ "--dirs",
+ nargs="*",
+ default=[],
+ help="Run the testcases present in the given directory and all of its sub directories. Provide the space separated paths to add multiple directories.")
+
+ return parser.parse_args()
+
+
+def setup_logging():
+ class TerminalColorFormatter(logging.Formatter):
+ reset = "\033[0m"
+ red_bold = "\033[31;1m"
+ green = "\033[32m"
+ yellow = "\033[33m"
+ blue = "\033[34m"
+
+ COLORS = {
+ SelftestStatus.PASSED: green,
+ SelftestStatus.NO_RUN: blue,
+ SelftestStatus.SKIPPED: yellow,
+ SelftestStatus.FAILED: red_bold
+ }
+
+ def __init__(self, fmt=None, datefmt=None):
+ super().__init__(fmt, datefmt)
+
+ def format(self, record):
+ return (self.COLORS.get(record.levelno, "") +
+ super().format(record) + self.reset)
+
+ logger = logging.getLogger("runner")
+ logger.setLevel(logging.INFO)
+
+ ch = logging.StreamHandler()
+ ch_formatter = TerminalColorFormatter(fmt="%(asctime)s | %(message)s",
+ datefmt="%H:%M:%S")
+ ch.setFormatter(ch_formatter)
+ logger.addHandler(ch)
+
+
+def fetch_testcases_in_dirs(dirs):
+ testcases = []
+ for dir in dirs:
+ for root, child_dirs, files in os.walk(dir):
+ for file in files:
+ testcases.append(os.path.join(root, file))
+ return testcases
+
+
+def fetch_testcases(args):
+ testcases = args.testcases
+ testcases.extend(fetch_testcases_in_dirs(args.dirs))
+ # Remove duplicates
+ testcases = list(dict.fromkeys(testcases))
+ return testcases
+
+
+def main():
+ args = cli()
+ setup_logging()
+ testcases = fetch_testcases(args)
+ return TestRunner(testcases).start()
+
+
+if __name__ == "__main__":
+ sys.exit(main())
diff --git a/tools/testing/selftests/kvm/runner/selftest.py b/tools/testing/selftests/kvm/runner/selftest.py
new file mode 100644
index 000000000000..34005f83f0c3
--- /dev/null
+++ b/tools/testing/selftests/kvm/runner/selftest.py
@@ -0,0 +1,64 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright 2025 Google LLC
+#
+# Author: vipinsh@google.com (Vipin Sharma)
+
+import pathlib
+import enum
+import os
+import subprocess
+
+class SelftestStatus(enum.IntEnum):
+ """
+ Selftest Status. Integer values sit just above the logging.INFO level (20).
+ """
+
+ PASSED = 21
+ NO_RUN = 22
+ SKIPPED = 23
+ FAILED = 24
+
+ def __str__(self):
+ return str.__str__(self.name)
+
+class Selftest:
+ """
+ Represents a single selftest.
+
+ Extract the test execution command from test file and executes it.
+ """
+
+ def __init__(self, test_path):
+ test_command = pathlib.Path(test_path).read_text().strip()
+ if not test_command:
+ raise ValueError("Empty test command in " + test_path)
+
+ test_command = os.path.join(".", test_command)
+ self.exists = os.path.isfile(test_command.split(maxsplit=1)[0])
+ self.test_path = test_path
+ self.command = test_command
+ self.status = SelftestStatus.NO_RUN
+ self.stdout = ""
+ self.stderr = ""
+
+ def run(self):
+ if not self.exists:
+ self.stderr = "File doesn't exist."
+ return
+
+ run_args = {
+ "universal_newlines": True,
+ "shell": True,
+ "stdout": subprocess.PIPE,
+ "stderr": subprocess.PIPE
+ }
+ proc = subprocess.run(self.command, **run_args)
+ self.stdout = proc.stdout
+ self.stderr = proc.stderr
+
+ if proc.returncode == 0:
+ self.status = SelftestStatus.PASSED
+ elif proc.returncode == 4:
+ self.status = SelftestStatus.SKIPPED
+ else:
+ self.status = SelftestStatus.FAILED
diff --git a/tools/testing/selftests/kvm/runner/test_runner.py b/tools/testing/selftests/kvm/runner/test_runner.py
new file mode 100644
index 000000000000..4418777d75e3
--- /dev/null
+++ b/tools/testing/selftests/kvm/runner/test_runner.py
@@ -0,0 +1,37 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright 2025 Google LLC
+#
+# Author: vipinsh@google.com (Vipin Sharma)
+
+import logging
+from selftest import Selftest
+from selftest import SelftestStatus
+
+logger = logging.getLogger("runner")
+
+
+class TestRunner:
+ def __init__(self, testcases):
+ self.tests = []
+
+ for testcase in testcases:
+ self.tests.append(Selftest(testcase))
+
+ def _log_result(self, test_result):
+ logger.info("*** stdout ***\n" + test_result.stdout)
+ logger.info("*** stderr ***\n" + test_result.stderr)
+ logger.log(test_result.status,
+ f"[{test_result.status.name}] {test_result.test_path}")
+
+ def start(self):
+ ret = 0
+
+ for test in self.tests:
+ test.run()
+ self._log_result(test)
+
+ if (test.status not in [SelftestStatus.PASSED,
+ SelftestStatus.NO_RUN,
+ SelftestStatus.SKIPPED]):
+ ret = 1
+ return ret
diff --git a/tools/testing/selftests/kvm/tests/dirty_log_perf_test/2slot_5vcpu_10iter.test b/tools/testing/selftests/kvm/tests/dirty_log_perf_test/2slot_5vcpu_10iter.test
new file mode 100644
index 000000000000..5b8d56b44a75
--- /dev/null
+++ b/tools/testing/selftests/kvm/tests/dirty_log_perf_test/2slot_5vcpu_10iter.test
@@ -0,0 +1 @@
+dirty_log_perf_test -x 2 -v 5 -i 10
diff --git a/tools/testing/selftests/kvm/tests/dirty_log_perf_test/no_dirty_log_protect.test b/tools/testing/selftests/kvm/tests/dirty_log_perf_test/no_dirty_log_protect.test
new file mode 100644
index 000000000000..ed3490b1d1a1
--- /dev/null
+++ b/tools/testing/selftests/kvm/tests/dirty_log_perf_test/no_dirty_log_protect.test
@@ -0,0 +1 @@
+dirty_log_perf_test -g
--
2.51.0.618.g983fd99d29-goog
* [PATCH v3 2/9] KVM: selftests: Provide executables path option to the KVM selftest runner
2025-09-30 16:36 [PATCH v3 0/9] KVM Selftest Runner Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 1/9] KVM: selftest: Create KVM selftest runner Vipin Sharma
@ 2025-09-30 16:36 ` Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 3/9] KVM: selftests: Add timeout option in selftests runner Vipin Sharma
` (6 subsequent siblings)
8 siblings, 0 replies; 20+ messages in thread
From: Vipin Sharma @ 2025-09-30 16:36 UTC (permalink / raw)
To: kvm, kvmarm, kvm-riscv
Cc: seanjc, pbonzini, borntraeger, frankja, imbrenda, anup,
atish.patra, zhaotianrui, maobibo, chenhuacai, maz, oliver.upton,
ajones, Vipin Sharma
Add a command line option, -p/--path, to specify the directory where the
test binaries exist. If this option is not provided, then default to the
current directory.
Example:
python3 runner --dirs test -p ~/build/selftests
This option enables executing tests from out-of-tree builds.
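Roughly, the runner prefixes the given path onto the command read from
the .test file and checks that the resulting binary exists; a
standalone sketch (the helper name here is illustrative):

```python
import os.path

# Prefix the executable directory onto the test command, as the runner
# does, then report whether the named binary actually exists.
def resolve_command(test_command, path="."):
    command = os.path.join(path, test_command)
    binary = command.split(maxsplit=1)[0]
    return command, os.path.isfile(binary)
```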
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
tools/testing/selftests/kvm/runner/__main__.py | 8 +++++++-
tools/testing/selftests/kvm/runner/selftest.py | 4 ++--
tools/testing/selftests/kvm/runner/test_runner.py | 4 ++--
3 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/kvm/runner/__main__.py b/tools/testing/selftests/kvm/runner/__main__.py
index 8d1a78450e41..943c3bfe2eb6 100644
--- a/tools/testing/selftests/kvm/runner/__main__.py
+++ b/tools/testing/selftests/kvm/runner/__main__.py
@@ -31,6 +31,12 @@ def cli():
default=[],
help="Run the testcases present in the given directory and all of its sub directories. Provide the space separated paths to add multiple directories.")
+ parser.add_argument("-p",
+ "--path",
+ nargs='?',
+ default=".",
+ help="Finds the test executables in the given path. Default is the current directory.")
+
return parser.parse_args()
@@ -87,7 +93,7 @@ def main():
args = cli()
setup_logging()
testcases = fetch_testcases(args)
- return TestRunner(testcases).start()
+ return TestRunner(testcases, args).start()
if __name__ == "__main__":
diff --git a/tools/testing/selftests/kvm/runner/selftest.py b/tools/testing/selftests/kvm/runner/selftest.py
index 34005f83f0c3..a94b6d4cda05 100644
--- a/tools/testing/selftests/kvm/runner/selftest.py
+++ b/tools/testing/selftests/kvm/runner/selftest.py
@@ -28,12 +28,12 @@ class Selftest:
Extract the test execution command from test file and executes it.
"""
- def __init__(self, test_path):
+ def __init__(self, test_path, path):
test_command = pathlib.Path(test_path).read_text().strip()
if not test_command:
raise ValueError("Empty test command in " + test_path)
- test_command = os.path.join(".", test_command)
+ test_command = os.path.join(path, test_command)
self.exists = os.path.isfile(test_command.split(maxsplit=1)[0])
self.test_path = test_path
self.command = test_command
diff --git a/tools/testing/selftests/kvm/runner/test_runner.py b/tools/testing/selftests/kvm/runner/test_runner.py
index 4418777d75e3..acc9fb3dabde 100644
--- a/tools/testing/selftests/kvm/runner/test_runner.py
+++ b/tools/testing/selftests/kvm/runner/test_runner.py
@@ -11,11 +11,11 @@ logger = logging.getLogger("runner")
class TestRunner:
- def __init__(self, testcases):
+ def __init__(self, testcases, args):
self.tests = []
for testcase in testcases:
- self.tests.append(Selftest(testcase))
+ self.tests.append(Selftest(testcase, args.path))
def _log_result(self, test_result):
logger.info("*** stdout ***\n" + test_result.stdout)
--
2.51.0.618.g983fd99d29-goog
* [PATCH v3 3/9] KVM: selftests: Add timeout option in selftests runner
2025-09-30 16:36 [PATCH v3 0/9] KVM Selftest Runner Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 1/9] KVM: selftest: Create KVM selftest runner Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 2/9] KVM: selftests: Provide executables path option to the " Vipin Sharma
@ 2025-09-30 16:36 ` Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 4/9] KVM: selftests: Add option to save selftest runner output to a directory Vipin Sharma
` (5 subsequent siblings)
8 siblings, 0 replies; 20+ messages in thread
From: Vipin Sharma @ 2025-09-30 16:36 UTC (permalink / raw)
To: kvm, kvmarm, kvm-riscv
Cc: seanjc, pbonzini, borntraeger, frankja, imbrenda, anup,
atish.patra, zhaotianrui, maobibo, chenhuacai, maz, oliver.upton,
ajones, Vipin Sharma
Add a command line argument to the KVM selftest runner to limit the
amount of time (in seconds) a test is given to execute. Kill the test
if it exceeds the given timeout. Define a new SelftestStatus.TIMED_OUT
status to denote this final result. Add a terminal color for the status
messages of timed out tests.
Set the default timeout to 120 seconds for all tests.
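The mechanism is subprocess.run()'s timeout support; a minimal
standalone sketch of the behavior this patch adds:

```python
import subprocess

# Run a command with a time limit; map timer expiry to TIMED_OUT and
# otherwise classify by exit code (kselftest convention: 4 = skip).
def run_with_timeout(command, timeout):
    try:
        proc = subprocess.run(command, shell=True, timeout=timeout,
                              stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE,
                              universal_newlines=True)
    except subprocess.TimeoutExpired:
        return "TIMED_OUT"
    if proc.returncode == 0:
        return "PASSED"
    if proc.returncode == 4:
        return "SKIPPED"
    return "FAILED"
```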
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
.../testing/selftests/kvm/runner/__main__.py | 9 ++++-
.../testing/selftests/kvm/runner/selftest.py | 33 ++++++++++++-------
.../selftests/kvm/runner/test_runner.py | 2 +-
3 files changed, 31 insertions(+), 13 deletions(-)
diff --git a/tools/testing/selftests/kvm/runner/__main__.py b/tools/testing/selftests/kvm/runner/__main__.py
index 943c3bfe2eb6..5cedc5098a54 100644
--- a/tools/testing/selftests/kvm/runner/__main__.py
+++ b/tools/testing/selftests/kvm/runner/__main__.py
@@ -37,6 +37,11 @@ def cli():
default=".",
help="Finds the test executables in the given path. Default is the current directory.")
+ parser.add_argument("--timeout",
+ default=120,
+ type=int,
+ help="Timeout, in seconds, before runner kills the running test. (Default: 120 seconds)")
+
return parser.parse_args()
@@ -44,6 +49,7 @@ def setup_logging():
class TerminalColorFormatter(logging.Formatter):
reset = "\033[0m"
red_bold = "\033[31;1m"
+ red = "\033[31m"
green = "\033[32m"
yellow = "\033[33m"
blue = "\033[34m"
@@ -52,7 +58,8 @@ def setup_logging():
SelftestStatus.PASSED: green,
SelftestStatus.NO_RUN: blue,
SelftestStatus.SKIPPED: yellow,
- SelftestStatus.FAILED: red_bold
+ SelftestStatus.FAILED: red_bold,
+ SelftestStatus.TIMED_OUT: red
}
def __init__(self, fmt=None, datefmt=None):
diff --git a/tools/testing/selftests/kvm/runner/selftest.py b/tools/testing/selftests/kvm/runner/selftest.py
index a94b6d4cda05..4783785ca230 100644
--- a/tools/testing/selftests/kvm/runner/selftest.py
+++ b/tools/testing/selftests/kvm/runner/selftest.py
@@ -17,6 +17,7 @@ class SelftestStatus(enum.IntEnum):
NO_RUN = 22
SKIPPED = 23
FAILED = 24
+ TIMED_OUT = 25
def __str__(self):
return str.__str__(self.name)
@@ -28,7 +29,7 @@ class Selftest:
Extract the test execution command from test file and executes it.
"""
- def __init__(self, test_path, path):
+ def __init__(self, test_path, path, timeout):
test_command = pathlib.Path(test_path).read_text().strip()
if not test_command:
raise ValueError("Empty test command in " + test_path)
@@ -37,6 +38,7 @@ class Selftest:
self.exists = os.path.isfile(test_command.split(maxsplit=1)[0])
self.test_path = test_path
self.command = test_command
+ self.timeout = timeout
self.status = SelftestStatus.NO_RUN
self.stdout = ""
self.stderr = ""
@@ -50,15 +52,24 @@ class Selftest:
"universal_newlines": True,
"shell": True,
"stdout": subprocess.PIPE,
- "stderr": subprocess.PIPE
+ "stderr": subprocess.PIPE,
+ "timeout": self.timeout,
}
- proc = subprocess.run(self.command, **run_args)
- self.stdout = proc.stdout
- self.stderr = proc.stderr
- if proc.returncode == 0:
- self.status = SelftestStatus.PASSED
- elif proc.returncode == 4:
- self.status = SelftestStatus.SKIPPED
- else:
- self.status = SelftestStatus.FAILED
+ try:
+ proc = subprocess.run(self.command, **run_args)
+ self.stdout = proc.stdout
+ self.stderr = proc.stderr
+
+ if proc.returncode == 0:
+ self.status = SelftestStatus.PASSED
+ elif proc.returncode == 4:
+ self.status = SelftestStatus.SKIPPED
+ else:
+ self.status = SelftestStatus.FAILED
+ except subprocess.TimeoutExpired as e:
+ self.status = SelftestStatus.TIMED_OUT
+ if e.stdout is not None:
+ self.stdout = e.stdout
+ if e.stderr is not None:
+ self.stderr = e.stderr
diff --git a/tools/testing/selftests/kvm/runner/test_runner.py b/tools/testing/selftests/kvm/runner/test_runner.py
index acc9fb3dabde..bea82c6239cd 100644
--- a/tools/testing/selftests/kvm/runner/test_runner.py
+++ b/tools/testing/selftests/kvm/runner/test_runner.py
@@ -15,7 +15,7 @@ class TestRunner:
self.tests = []
for testcase in testcases:
- self.tests.append(Selftest(testcase, args.path))
+ self.tests.append(Selftest(testcase, args.path, args.timeout))
def _log_result(self, test_result):
logger.info("*** stdout ***\n" + test_result.stdout)
--
2.51.0.618.g983fd99d29-goog
* [PATCH v3 4/9] KVM: selftests: Add option to save selftest runner output to a directory
2025-09-30 16:36 [PATCH v3 0/9] KVM Selftest Runner Vipin Sharma
` (2 preceding siblings ...)
2025-09-30 16:36 ` [PATCH v3 3/9] KVM: selftests: Add timeout option in selftests runner Vipin Sharma
@ 2025-09-30 16:36 ` Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 5/9] KVM: selftests: Run tests concurrently in KVM selftests runner Vipin Sharma
` (4 subsequent siblings)
8 siblings, 0 replies; 20+ messages in thread
From: Vipin Sharma @ 2025-09-30 16:36 UTC (permalink / raw)
To: kvm, kvmarm, kvm-riscv
Cc: seanjc, pbonzini, borntraeger, frankja, imbrenda, anup,
atish.patra, zhaotianrui, maobibo, chenhuacai, maz, oliver.upton,
ajones, Vipin Sharma
Add a command line flag, -o/--output, to the selftest runner which
enables it to save each test's output (stdout & stderr) streams to a
directory in a hierarchical way. Create a folder hierarchy matching the
tests hierarchy given by --testcases and --dirs.
Also, add a command line flag, --append-output-time, which appends a
timestamp (format YYYY.MM.DD.HH.MM.SS) to the directory name given by
the --output flag.
Example:
python3 runner --dirs test -o test_result --append-output-time
This will create a test_result.2025.06.06.08.45.57 directory.
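The timestamp suffix comes from strftime; a minimal sketch of the
directory-name handling (the helper name is illustrative):

```python
import datetime
import pathlib

# Append a .YYYY.MM.DD.HH.MM.SS suffix to the output directory name, as
# --append-output-time does, then create the directory tree.
def make_output_dir(base):
    base += datetime.datetime.now().strftime(".%Y.%m.%d.%H.%M.%S")
    pathlib.Path(base).mkdir(parents=True, exist_ok=True)
    return base
```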
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
.../testing/selftests/kvm/runner/__main__.py | 34 +++++++++++++--
.../testing/selftests/kvm/runner/selftest.py | 42 ++++++++++++++++---
.../selftests/kvm/runner/test_runner.py | 4 +-
3 files changed, 69 insertions(+), 11 deletions(-)
diff --git a/tools/testing/selftests/kvm/runner/__main__.py b/tools/testing/selftests/kvm/runner/__main__.py
index 5cedc5098a54..b27a41e86271 100644
--- a/tools/testing/selftests/kvm/runner/__main__.py
+++ b/tools/testing/selftests/kvm/runner/__main__.py
@@ -7,6 +7,8 @@ import argparse
import logging
import os
import sys
+import datetime
+import pathlib
from test_runner import TestRunner
from selftest import SelftestStatus
@@ -42,10 +44,20 @@ def cli():
type=int,
help="Timeout, in seconds, before runner kills the running test. (Default: 120 seconds)")
+ parser.add_argument("-o",
+ "--output",
+ nargs='?',
+ help="Dumps test runner output which includes each test execution result, their stdouts and stderrs hierarchically in the given directory.")
+
+ parser.add_argument("--append-output-time",
+ action="store_true",
+ default=False,
+ help="Appends timestamp to the output directory.")
+
return parser.parse_args()
-def setup_logging():
+def setup_logging(args):
class TerminalColorFormatter(logging.Formatter):
reset = "\033[0m"
red_bold = "\033[31;1m"
@@ -72,12 +84,26 @@ def setup_logging():
logger = logging.getLogger("runner")
logger.setLevel(logging.INFO)
+ formatter_args = {
+ "fmt": "%(asctime)s | %(message)s",
+ "datefmt": "%H:%M:%S"
+ }
+
ch = logging.StreamHandler()
- ch_formatter = TerminalColorFormatter(fmt="%(asctime)s | %(message)s",
- datefmt="%H:%M:%S")
+ ch_formatter = TerminalColorFormatter(**formatter_args)
ch.setFormatter(ch_formatter)
logger.addHandler(ch)
+ if args.output != None:
+ if (args.append_output_time):
+ args.output += datetime.datetime.now().strftime(".%Y.%m.%d.%H.%M.%S")
+ pathlib.Path(args.output).mkdir(parents=True, exist_ok=True)
+ logging_file = os.path.join(args.output, "log")
+ fh = logging.FileHandler(logging_file)
+ fh_formatter = logging.Formatter(**formatter_args)
+ fh.setFormatter(fh_formatter)
+ logger.addHandler(fh)
+
def fetch_testcases_in_dirs(dirs):
testcases = []
@@ -98,7 +124,7 @@ def fetch_testcases(args):
def main():
args = cli()
- setup_logging()
+ setup_logging(args)
testcases = fetch_testcases(args)
return TestRunner(testcases, args).start()
diff --git a/tools/testing/selftests/kvm/runner/selftest.py b/tools/testing/selftests/kvm/runner/selftest.py
index 4783785ca230..1aedeaeb5e74 100644
--- a/tools/testing/selftests/kvm/runner/selftest.py
+++ b/tools/testing/selftests/kvm/runner/selftest.py
@@ -7,6 +7,7 @@ import pathlib
import enum
import os
import subprocess
+import contextlib
class SelftestStatus(enum.IntEnum):
"""
@@ -29,7 +30,7 @@ class Selftest:
Extract the test execution command from test file and executes it.
"""
- def __init__(self, test_path, path, timeout):
+ def __init__(self, test_path, path, timeout, output_dir):
test_command = pathlib.Path(test_path).read_text().strip()
if not test_command:
raise ValueError("Empty test command in " + test_path)
@@ -39,15 +40,14 @@ class Selftest:
self.test_path = test_path
self.command = test_command
self.timeout = timeout
+ if output_dir is not None:
+ output_dir = os.path.join(output_dir, test_path.lstrip("./"))
+ self.output_dir = output_dir
self.status = SelftestStatus.NO_RUN
self.stdout = ""
self.stderr = ""
- def run(self):
- if not self.exists:
- self.stderr = "File doesn't exist."
- return
-
+ def _run(self, output=None, error=None):
run_args = {
"universal_newlines": True,
"shell": True,
@@ -59,7 +59,12 @@ class Selftest:
try:
proc = subprocess.run(self.command, **run_args)
self.stdout = proc.stdout
+ if output is not None:
+ output.write(proc.stdout)
+
self.stderr = proc.stderr
+ if error is not None:
+ error.write(proc.stderr)
if proc.returncode == 0:
self.status = SelftestStatus.PASSED
@@ -71,5 +76,30 @@ class Selftest:
self.status = SelftestStatus.TIMED_OUT
if e.stdout is not None:
self.stdout = e.stdout
+ if output is not None:
+ output.write(e.stdout)
if e.stderr is not None:
self.stderr = e.stderr
+ if error is not None:
+ error.write(e.stderr)
+
+ def run(self):
+ if not self.exists:
+ self.stderr = "File doesn't exist."
+ return
+
+ if self.output_dir is not None:
+ pathlib.Path(self.output_dir).mkdir(parents=True, exist_ok=True)
+
+ output = None
+ error = None
+ with contextlib.ExitStack() as stack:
+ if self.output_dir is not None:
+ output_path = os.path.join(self.output_dir, "stdout")
+ output = stack.enter_context(
+ open(output_path, encoding="utf-8", mode="w"))
+
+ error_path = os.path.join(self.output_dir, "stderr")
+ error = stack.enter_context(
+ open(error_path, encoding="utf-8", mode="w"))
+ return self._run(output, error)
diff --git a/tools/testing/selftests/kvm/runner/test_runner.py b/tools/testing/selftests/kvm/runner/test_runner.py
index bea82c6239cd..b9101f0e0432 100644
--- a/tools/testing/selftests/kvm/runner/test_runner.py
+++ b/tools/testing/selftests/kvm/runner/test_runner.py
@@ -13,9 +13,11 @@ logger = logging.getLogger("runner")
class TestRunner:
def __init__(self, testcases, args):
self.tests = []
+ self.output_dir = args.output
for testcase in testcases:
- self.tests.append(Selftest(testcase, args.path, args.timeout))
+ self.tests.append(Selftest(testcase, args.path, args.timeout,
+ args.output))
def _log_result(self, test_result):
logger.info("*** stdout ***\n" + test_result.stdout)
--
2.51.0.618.g983fd99d29-goog
* [PATCH v3 5/9] KVM: selftests: Run tests concurrently in KVM selftests runner
2025-09-30 16:36 [PATCH v3 0/9] KVM Selftest Runner Vipin Sharma
` (3 preceding siblings ...)
2025-09-30 16:36 ` [PATCH v3 4/9] KVM: selftests: Add option to save selftest runner output to a directory Vipin Sharma
@ 2025-09-30 16:36 ` Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 6/9] KVM: selftests: Add various print flags to KVM selftest runner Vipin Sharma
` (3 subsequent siblings)
8 siblings, 0 replies; 20+ messages in thread
From: Vipin Sharma @ 2025-09-30 16:36 UTC (permalink / raw)
To: kvm, kvmarm, kvm-riscv
Cc: seanjc, pbonzini, borntraeger, frankja, imbrenda, anup,
atish.patra, zhaotianrui, maobibo, chenhuacai, maz, oliver.upton,
ajones, Vipin Sharma
Add a command line argument, -j/--jobs, to specify how many tests can
execute concurrently. Set the default to 1.
Example:
python3 runner --dirs tests -j 10
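Under the hood this uses concurrent.futures.ProcessPoolExecutor; a
standalone sketch of the execution model (the task callables here are
placeholders):

```python
import concurrent.futures

# Submit each task to a process pool sized by --jobs and collect the
# results as the tasks finish, in completion order.
def run_all(tasks, jobs=1):
    results = []
    with concurrent.futures.ProcessPoolExecutor(max_workers=jobs) as pool:
        futures = [pool.submit(task) for task in tasks]
        for future in concurrent.futures.as_completed(futures):
            results.append(future.result())
    return results
```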
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
.../testing/selftests/kvm/runner/__main__.py | 6 ++++
.../selftests/kvm/runner/test_runner.py | 28 +++++++++++++------
2 files changed, 26 insertions(+), 8 deletions(-)
diff --git a/tools/testing/selftests/kvm/runner/__main__.py b/tools/testing/selftests/kvm/runner/__main__.py
index b27a41e86271..b98f72c9f7ee 100644
--- a/tools/testing/selftests/kvm/runner/__main__.py
+++ b/tools/testing/selftests/kvm/runner/__main__.py
@@ -54,6 +54,12 @@ def cli():
default=False,
help="Appends timestamp to the output directory.")
+ parser.add_argument("-j",
+ "--jobs",
+ default=1,
+ type=int,
+ help="Maximum number of tests that can be run concurrently. (Default: 1)")
+
return parser.parse_args()
diff --git a/tools/testing/selftests/kvm/runner/test_runner.py b/tools/testing/selftests/kvm/runner/test_runner.py
index b9101f0e0432..92eec18fe5c6 100644
--- a/tools/testing/selftests/kvm/runner/test_runner.py
+++ b/tools/testing/selftests/kvm/runner/test_runner.py
@@ -4,6 +4,8 @@
# Author: vipinsh@google.com (Vipin Sharma)
import logging
+import concurrent.futures
+
from selftest import Selftest
from selftest import SelftestStatus
@@ -14,11 +16,16 @@ class TestRunner:
def __init__(self, testcases, args):
self.tests = []
self.output_dir = args.output
+ self.jobs = args.jobs
for testcase in testcases:
self.tests.append(Selftest(testcase, args.path, args.timeout,
args.output))
+ def _run_test(self, test):
+ test.run()
+ return test
+
def _log_result(self, test_result):
logger.info("*** stdout ***\n" + test_result.stdout)
logger.info("*** stderr ***\n" + test_result.stderr)
@@ -28,12 +35,17 @@ class TestRunner:
def start(self):
ret = 0
- for test in self.tests:
- test.run()
- self._log_result(test)
-
- if (test.status not in [SelftestStatus.PASSED,
- SelftestStatus.NO_RUN,
- SelftestStatus.SKIPPED]):
- ret = 1
+ with concurrent.futures.ProcessPoolExecutor(max_workers=self.jobs) as executor:
+ all_futures = []
+ for test in self.tests:
+ future = executor.submit(self._run_test, test)
+ all_futures.append(future)
+
+ for future in concurrent.futures.as_completed(all_futures):
+ test_result = future.result()
+ self._log_result(test_result)
+ if (test_result.status not in [SelftestStatus.PASSED,
+ SelftestStatus.NO_RUN,
+ SelftestStatus.SKIPPED]):
+ ret = 1
return ret
--
2.51.0.618.g983fd99d29-goog
^ permalink raw reply related [flat|nested] 20+ messages in thread
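[Editor's note] The submit/as_completed pattern the patch adopts can be sketched standalone. A thread pool stands in for the patch's ProcessPoolExecutor so the snippet is self-contained (process pools need picklable, module-level work functions); run_test and the test names are illustrative, not the runner's API:

```python
import concurrent.futures

def run_test(name):
    # Stand-in for Selftest.run(); returns the "result" (a string here).
    return f"{name}: PASSED"

def run_all(tests, jobs):
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=jobs) as executor:
        futures = [executor.submit(run_test, t) for t in tests]
        # as_completed() yields each future as soon as it finishes, so
        # results can be logged in completion order, not submission order.
        for future in concurrent.futures.as_completed(futures):
            results.append(future.result())
    return results

print(sorted(run_all(["a", "b", "c"], jobs=2)))
# → ['a: PASSED', 'b: PASSED', 'c: PASSED']
```

Because results arrive in completion order, the caller sorts (or tallies) them rather than relying on submission order, which is also why the patch tracks status per result object.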
* [PATCH v3 6/9] KVM: selftests: Add various print flags to KVM selftest runner
2025-09-30 16:36 [PATCH v3 0/9] KVM Selftest Runner Vipin Sharma
` (4 preceding siblings ...)
2025-09-30 16:36 ` [PATCH v3 5/9] KVM: selftests: Run tests concurrently in KVM selftests runner Vipin Sharma
@ 2025-09-30 16:36 ` Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 7/9] KVM: selftests: Print sticky KVM selftests runner status at bottom Vipin Sharma
` (2 subsequent siblings)
8 siblings, 0 replies; 20+ messages in thread
From: Vipin Sharma @ 2025-09-30 16:36 UTC (permalink / raw)
To: kvm, kvmarm, kvm-riscv
Cc: seanjc, pbonzini, borntraeger, frankja, imbrenda, anup,
atish.patra, zhaotianrui, maobibo, chenhuacai, maz, oliver.upton,
ajones, Vipin Sharma
Add various print flags to selectively print outputs on terminal based
on test execution status (passed, failed, timed out, skipped, no run).
For each status, provide further options (off, full, stderr, stdout,
status) to choose the verbosity of its output. Make "full" the default
choice.
Example: To print stderr for failed tests and only the status for
passed tests:
python3 runner --test-dirs tests --print-failed stderr \
--print-passed status
The above command will disable printing of skipped, timed out, and no
run tests.
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
.../testing/selftests/kvm/runner/__main__.py | 45 +++++++++++++++++++
.../selftests/kvm/runner/test_runner.py | 19 ++++++--
2 files changed, 60 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/kvm/runner/__main__.py b/tools/testing/selftests/kvm/runner/__main__.py
index b98f72c9f7ee..4867e89c30f2 100644
--- a/tools/testing/selftests/kvm/runner/__main__.py
+++ b/tools/testing/selftests/kvm/runner/__main__.py
@@ -9,6 +9,7 @@ import os
import sys
import datetime
import pathlib
+import textwrap
from test_runner import TestRunner
from selftest import SelftestStatus
@@ -60,6 +61,50 @@ def cli():
type=int,
help="Maximum number of tests that can be run concurrently. (Default: 1)")
+ status_choices = ["off", "full", "stdout", "stderr", "status"]
+ status_help_text = textwrap.dedent('''\
+ Control output of the {} test.
+ off : dont print anything.
+ full : print stdout, stderr, and status of the test.
+ stdout: print stdout and status of the test.
+ stderr: print stderr and status of the test.
+ status: only print the status of test execution and no other output.''');
+
+ parser.add_argument("--print-passed",
+ default="full",
+ const="full",
+ nargs='?',
+ choices=status_choices,
+ help = status_help_text.format("passed"))
+
+ parser.add_argument("--print-failed",
+ default="full",
+ const="full",
+ nargs='?',
+ choices=status_choices,
+ help = status_help_text.format("failed"))
+
+ parser.add_argument("--print-skipped",
+ default="full",
+ const="full",
+ nargs='?',
+ choices=status_choices,
+ help = status_help_text.format("skipped"))
+
+ parser.add_argument("--print-timed-out",
+ default="full",
+ const="full",
+ nargs='?',
+ choices=status_choices,
+ help = status_help_text.format("timed-out"))
+
+ parser.add_argument("--print-no-run",
+ default="full",
+ const="full",
+ nargs='?',
+ choices=status_choices,
+ help = status_help_text.format("no-run"))
+
return parser.parse_args()
diff --git a/tools/testing/selftests/kvm/runner/test_runner.py b/tools/testing/selftests/kvm/runner/test_runner.py
index 92eec18fe5c6..e8e8fd91c1ad 100644
--- a/tools/testing/selftests/kvm/runner/test_runner.py
+++ b/tools/testing/selftests/kvm/runner/test_runner.py
@@ -17,6 +17,13 @@ class TestRunner:
self.tests = []
self.output_dir = args.output
self.jobs = args.jobs
+ self.print_stds = {
+ SelftestStatus.PASSED: args.print_passed,
+ SelftestStatus.FAILED: args.print_failed,
+ SelftestStatus.SKIPPED: args.print_skipped,
+ SelftestStatus.TIMED_OUT: args.print_timed_out,
+ SelftestStatus.NO_RUN: args.print_no_run
+ }
for testcase in testcases:
self.tests.append(Selftest(testcase, args.path, args.timeout,
@@ -27,10 +34,14 @@ class TestRunner:
return test
def _log_result(self, test_result):
- logger.info("*** stdout ***\n" + test_result.stdout)
- logger.info("*** stderr ***\n" + test_result.stderr)
- logger.log(test_result.status,
- f"[{test_result.status.name}] {test_result.test_path}")
+ print_level = self.print_stds.get(test_result.status, "full")
+
+ if (print_level == "full" or print_level == "stdout"):
+ logger.info("*** stdout ***\n" + test_result.stdout)
+ if (print_level == "full" or print_level == "stderr"):
+ logger.info("*** stderr ***\n" + test_result.stderr)
+ if (print_level != "off"):
+ logger.log(test_result.status, f"[{test_result.status.name}] {test_result.test_path}")
def start(self):
ret = 0
--
2.51.0.618.g983fd99d29-goog
^ permalink raw reply related [flat|nested] 20+ messages in thread
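[Editor's note] The print-level handling added to _log_result() boils down to a small mapping from the chosen verbosity to the streams that get logged. A minimal sketch, mirroring the patch's if-chain (streams_to_print is a hypothetical helper, not part of the patch):

```python
def streams_to_print(level):
    """Map a --print-* choice to what gets logged.

    "full" logs both streams plus the status line, "stdout"/"stderr"
    log one stream plus the status line, "status" logs only the status
    line, and "off" suppresses everything.
    """
    if level == "off":
        return set()
    selected = {"status"}
    if level in ("full", "stdout"):
        selected.add("stdout")
    if level in ("full", "stderr"):
        selected.add("stderr")
    return selected

print(sorted(streams_to_print("stderr")))
# → ['status', 'stderr']
```

Expressing the choices this way also makes the invariant explicit: every level except "off" always includes the status line.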
* [PATCH v3 7/9] KVM: selftests: Print sticky KVM selftests runner status at bottom
2025-09-30 16:36 [PATCH v3 0/9] KVM Selftest Runner Vipin Sharma
` (5 preceding siblings ...)
2025-09-30 16:36 ` [PATCH v3 6/9] KVM: selftests: Add various print flags to KVM selftest runner Vipin Sharma
@ 2025-09-30 16:36 ` Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 8/9] KVM: selftests: Add rule to generate default tests for KVM selftests runner Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 9/9] KVM: selftests: Provide README.rst " Vipin Sharma
8 siblings, 0 replies; 20+ messages in thread
From: Vipin Sharma @ 2025-09-30 16:36 UTC (permalink / raw)
To: kvm, kvmarm, kvm-riscv
Cc: seanjc, pbonzini, borntraeger, frankja, imbrenda, anup,
atish.patra, zhaotianrui, maobibo, chenhuacai, maz, oliver.upton,
ajones, Vipin Sharma
Print the current state of the KVM selftest runner during its execution.
Show it as the bottommost line; make it sticky and colored. Provide the
following information:
- Total number of tests selected for the run.
- How many have executed.
- Totals for each end state.
Example:
Total: 3/3 Passed: 1 Failed: 1 Skipped: 0 Timed Out: 0 No Run: 1
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
.../testing/selftests/kvm/runner/test_runner.py | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/tools/testing/selftests/kvm/runner/test_runner.py b/tools/testing/selftests/kvm/runner/test_runner.py
index e8e8fd91c1ad..42274e695b76 100644
--- a/tools/testing/selftests/kvm/runner/test_runner.py
+++ b/tools/testing/selftests/kvm/runner/test_runner.py
@@ -15,6 +15,7 @@ logger = logging.getLogger("runner")
class TestRunner:
def __init__(self, testcases, args):
self.tests = []
+ self.status = {x: 0 for x in SelftestStatus}
self.output_dir = args.output
self.jobs = args.jobs
self.print_stds = {
@@ -33,9 +34,18 @@ class TestRunner:
test.run()
return test
+ def _sticky_update(self):
+ print(f"\r\033[1mTotal: {self.tests_ran}/{len(self.tests)}" \
+ f"\033[32;1m Passed: {self.status[SelftestStatus.PASSED]}" \
+ f"\033[31;1m Failed: {self.status[SelftestStatus.FAILED]}" \
+ f"\033[33;1m Skipped: {self.status[SelftestStatus.SKIPPED]}"\
+ f"\033[91;1m Timed Out: {self.status[SelftestStatus.TIMED_OUT]}"\
+ f"\033[34;1m No Run: {self.status[SelftestStatus.NO_RUN]}\033[0m", end="\r")
+
def _log_result(self, test_result):
print_level = self.print_stds.get(test_result.status, "full")
+ print("\033[2K", end="\r")
if (print_level == "full" or print_level == "stdout"):
logger.info("*** stdout ***\n" + test_result.stdout)
if (print_level == "full" or print_level == "stderr"):
@@ -43,8 +53,13 @@ class TestRunner:
if (print_level != "off"):
logger.log(test_result.status, f"[{test_result.status.name}] {test_result.test_path}")
+ self.status[test_result.status] += 1
+ # Sticky bottom line
+ self._sticky_update()
+
def start(self):
ret = 0
+ self.tests_ran = 0
with concurrent.futures.ProcessPoolExecutor(max_workers=self.jobs) as executor:
all_futures = []
@@ -54,9 +69,11 @@ class TestRunner:
for future in concurrent.futures.as_completed(all_futures):
test_result = future.result()
+ self.tests_ran += 1
self._log_result(test_result)
if (test_result.status not in [SelftestStatus.PASSED,
SelftestStatus.NO_RUN,
SelftestStatus.SKIPPED]):
ret = 1
+ print("\n")
return ret
--
2.51.0.618.g983fd99d29-goog
^ permalink raw reply related [flat|nested] 20+ messages in thread
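[Editor's note] The escape sequences in _sticky_update() are plain ANSI SGR codes (1 = bold; 32/31/33/91/34 = green/red/yellow/bright-red/blue), and "\033[2K" plus "\r" clears and rewrites the bottom line in place. A hedged sketch of how such a status line is assembled (sticky_line and the counts dict are illustrative, not the runner's API):

```python
def sticky_line(ran, total, counts):
    """Build a bold, colored status line like the runner's sticky footer.

    Each SGR code restyles the text that follows it; \033[0m at the end
    resets the terminal attributes.
    """
    return ("\033[1m" f"Total: {ran}/{total}"
            "\033[32;1m" f" Passed: {counts['passed']}"
            "\033[31;1m" f" Failed: {counts['failed']}"
            "\033[33;1m" f" Skipped: {counts['skipped']}"
            "\033[91;1m" f" Timed Out: {counts['timed_out']}"
            "\033[34;1m" f" No Run: {counts['no_run']}"
            "\033[0m")

# Demo with the counts from the commit message's example.
line = sticky_line(3, 3, dict(passed=1, failed=1, skipped=0,
                              timed_out=0, no_run=1))
```

Ending every repaint with "\r" (as the patch does) leaves the cursor at column 0, so the next regular log line can first emit "\033[2K" to erase the sticky footer before printing.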
* [PATCH v3 8/9] KVM: selftests: Add rule to generate default tests for KVM selftests runner
2025-09-30 16:36 [PATCH v3 0/9] KVM Selftest Runner Vipin Sharma
` (6 preceding siblings ...)
2025-09-30 16:36 ` [PATCH v3 7/9] KVM: selftests: Print sticky KVM selftests runner status at bottom Vipin Sharma
@ 2025-09-30 16:36 ` Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 9/9] KVM: selftests: Provide README.rst " Vipin Sharma
8 siblings, 0 replies; 20+ messages in thread
From: Vipin Sharma @ 2025-09-30 16:36 UTC (permalink / raw)
To: kvm, kvmarm, kvm-riscv
Cc: seanjc, pbonzini, borntraeger, frankja, imbrenda, anup,
atish.patra, zhaotianrui, maobibo, chenhuacai, maz, oliver.upton,
ajones, Vipin Sharma
Add a 'tests_install' rule to Makefile.kvm to auto-generate default
testcases for the KVM selftests runner. Preserve the hierarchy of test
executables for the autogenerated files. Remove these testcases on
invocation of 'make clean'.
Autogenerating the default test files lets the runner execute default
testcases easily. These default testcases don't need to be checked in,
as they just execute the test without any command line options.
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
tools/testing/selftests/kvm/.gitignore | 1 +
tools/testing/selftests/kvm/Makefile.kvm | 20 ++++++++++++++++++++
2 files changed, 21 insertions(+)
diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 95af97b1ff9e..548d435bde2f 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -7,6 +7,7 @@
!*.S
!*.sh
!*.test
+default.test
!.gitignore
!config
!settings
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index 41b40c676d7f..6bb63f88c0e6 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -306,11 +306,15 @@ $(SPLIT_TEST_GEN_PROGS): $(OUTPUT)/%: $(OUTPUT)/%.o $(OUTPUT)/$(ARCH)/%.o
$(SPLIT_TEST_GEN_OBJ): $(OUTPUT)/$(ARCH)/%.o: $(ARCH)/%.c
$(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c $< -o $@
+# Default testcases for KVM selftests runner will be generated in this directory.
+DEFAULT_TESTCASES = testcases_default_gen
+
EXTRA_CLEAN += $(GEN_HDRS) \
$(LIBKVM_OBJS) \
$(SPLIT_TEST_GEN_OBJ) \
$(TEST_DEP_FILES) \
$(TEST_GEN_OBJ) \
+ $(OUTPUT)/$(DEFAULT_TESTCASES) \
cscope.*
$(LIBKVM_C_OBJ): $(OUTPUT)/%.o: %.c $(GEN_HDRS)
@@ -339,3 +343,19 @@ cscope:
find . -name '*.c' \
-exec realpath --relative-base=$(PWD) {} \;) | sort -u > cscope.files
cscope -b
+
+# Generate testcases in DEFAULT_TESTCASES directory.
+# $(OUTPUT) is either CWD or specified in the make command.
+tests_install: list_progs = $(patsubst $(OUTPUT)/%,%,$(TEST_GEN_PROGS))
+tests_install:
+ $(foreach tc, $(TEST_PROGS), \
+ $(shell mkdir -p $(OUTPUT)/$(DEFAULT_TESTCASES)/$(patsubst %.sh,%,$(tc))))
+ $(foreach tc, $(TEST_PROGS), \
+ $(shell echo $(tc) > $(patsubst %.sh,$(OUTPUT)/$(DEFAULT_TESTCASES)/%/default.test,$(tc))))
+
+ $(foreach tc, $(list_progs), \
+ $(shell mkdir -p $(OUTPUT)/$(DEFAULT_TESTCASES)/$(tc)))
+ $(foreach tc, $(list_progs), \
+ $(shell echo $(tc) > $(patsubst %,$(OUTPUT)/$(DEFAULT_TESTCASES)/%/default.test,$(tc))))
+
+ @:
--
2.51.0.618.g983fd99d29-goog
^ permalink raw reply related [flat|nested] 20+ messages in thread
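[Editor's note] What the tests_install rule produces can be reproduced with a few lines of shell. The layout mirrors the rule's foreach/patsubst output: one directory per test, each holding a default.test file naming the binary relative to the kvm selftests root. The test names below are illustrative:

```shell
#!/bin/sh
# Sketch of "make tests_install" output; test names are illustrative.
OUTPUT=$(mktemp -d)
DEFAULT_TESTCASES="$OUTPUT/testcases_default_gen"

for tc in dirty_log_perf_test arm64/aarch32_id_regs; do
    mkdir -p "$DEFAULT_TESTCASES/$tc"
    # Each default.test just names the test's invocation, no extra args.
    echo "$tc" > "$DEFAULT_TESTCASES/$tc/default.test"
done

cat "$DEFAULT_TESTCASES/dirty_log_perf_test/default.test"
```

The runner can then be pointed at the generated tree with -d, exactly as the later README patch describes.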
* [PATCH v3 9/9] KVM: selftests: Provide README.rst for KVM selftests runner
2025-09-30 16:36 [PATCH v3 0/9] KVM Selftest Runner Vipin Sharma
` (7 preceding siblings ...)
2025-09-30 16:36 ` [PATCH v3 8/9] KVM: selftests: Add rule to generate default tests for KVM selftests runner Vipin Sharma
@ 2025-09-30 16:36 ` Vipin Sharma
2025-10-01 8:44 ` Marc Zyngier
2025-10-10 9:58 ` Brendan Jackman
8 siblings, 2 replies; 20+ messages in thread
From: Vipin Sharma @ 2025-09-30 16:36 UTC (permalink / raw)
To: kvm, kvmarm, kvm-riscv
Cc: seanjc, pbonzini, borntraeger, frankja, imbrenda, anup,
atish.patra, zhaotianrui, maobibo, chenhuacai, maz, oliver.upton,
ajones, Vipin Sharma
Add README.rst for KVM selftest runner and explain how to use the
runner.
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
tools/testing/selftests/kvm/.gitignore | 1 +
tools/testing/selftests/kvm/runner/README.rst | 54 +++++++++++++++++++
2 files changed, 55 insertions(+)
create mode 100644 tools/testing/selftests/kvm/runner/README.rst
diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 548d435bde2f..83aa2fe01bac 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -4,6 +4,7 @@
!*.c
!*.h
!*.py
+!*.rst
!*.S
!*.sh
!*.test
diff --git a/tools/testing/selftests/kvm/runner/README.rst b/tools/testing/selftests/kvm/runner/README.rst
new file mode 100644
index 000000000000..83b071c0a0e6
--- /dev/null
+++ b/tools/testing/selftests/kvm/runner/README.rst
@@ -0,0 +1,54 @@
+KVM Selftest Runner
+===================
+
+KVM selftest runner is highly configurable test executor that allows to run
+tests with different configurations (not just the default), parallely, save
+output to disk hierarchically, control what gets printed on console, provide
+execution status.
+
+To generate default tests use::
+
+ # make tests_install
+
+This will create ``testcases_default_gen`` directory which will have testcases
+in `default.test` files. Each KVM selftest will have a directory in which
+`default.test` file will be created with executable path relative to KVM
+selftest root directory i.e. `/tools/testing/selftests/kvm`. For example, the
+`dirty_log_perf_test` will have::
+
+ # cat testcase_default_gen/dirty_log_perf_test/default.test
+ dirty_log_perf_test
+
+Runner will execute `dirty_log_perf_test`. Testcases files can also provide
+extra arguments to the test::
+
+ # cat tests/dirty_log_perf_test/2slot_5vcpu_10iter.test
+ dirty_log_perf_test -x 2 -v 5 -i 10
+
+In this case runner will execute the `dirty_log_perf_test` with the options.
+
+Example
+=======
+
+To see all of the options::
+
+ # python3 runner -h
+
+To run all of the default tests::
+
+ # python3 runner -d testcases_default_gen
+
+To run tests parallely::
+
+ # python3 runner -d testcases_default_gen -j 40
+
+To print only passed test status and failed test stderr::
+
+ # python3 runner -d testcases_default_gen --print-passed status \
+ --print-failed stderr
+
+To run tests binary which are in some other directory (out of tree builds)::
+
+ # python3 runner -d testcases_default_gen -p /path/to/binaries
+
+
--
2.51.0.618.g983fd99d29-goog
^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [PATCH v3 1/9] KVM: selftest: Create KVM selftest runner
2025-09-30 16:36 ` [PATCH v3 1/9] KVM: selftest: Create KVM selftest runner Vipin Sharma
@ 2025-09-30 22:23 ` Vipin Sharma
2025-10-10 9:47 ` Brendan Jackman
1 sibling, 0 replies; 20+ messages in thread
From: Vipin Sharma @ 2025-09-30 22:23 UTC (permalink / raw)
To: seanjc
Cc: pbonzini, borntraeger, frankja, imbrenda, anup, atish.patra,
zhaotianrui, maobibo, chenhuacai, maz, oliver.upton, ajones, kvm,
kvmarm, kvm-riscv
On 2025-09-30 09:36:27, Vipin Sharma wrote:
> diff --git a/tools/testing/selftests/kvm/runner/__main__.py b/tools/testing/selftests/kvm/runner/__main__.py
> new file mode 100644
> index 000000000000..8d1a78450e41
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/runner/__main__.py
> +def main():
> + args = cli()
> + setup_logging()
> + testcases = fetch_testcases(args)
> + return TestRunner(testcases).start()
> +
> +
> +if __name__ == "__main__":
> + sys.exit(main())
Sean, as discussed offline, the following diff adds a minimum Python
version check. The runner works on Python 3.6; it might go lower, but I
haven't checked. The Linux kernel's minimum Python requirement is 3.9
(Documentation/process/changes.rst)
diff --git a/tools/testing/selftests/kvm/runner/__main__.py b/tools/testing/selftests/kvm/runner/__main__.py
index 8d1a78450e41..db87f426331d 100644
--- a/tools/testing/selftests/kvm/runner/__main__.py
+++ b/tools/testing/selftests/kvm/runner/__main__.py
@@ -91,4 +91,9 @@ def main():
if __name__ == "__main__":
+ PYTHON_VERSION = (3, 6)
+ if sys.version_info < PYTHON_VERSION:
+ print(f"Minimum required python version {PYTHON_VERSION}, found {sys.version}")
+ sys.exit(1)
+
sys.exit(main())
^ permalink raw reply related [flat|nested] 20+ messages in thread
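[Editor's note] The guard above works because sys.version_info compares elementwise against a plain tuple. A small sketch (meets_minimum is a hypothetical helper, not part of the diff):

```python
import sys

def meets_minimum(version_info, minimum=(3, 6)):
    # Tuple comparison is elementwise and left-to-right, so
    # (3, 9) >= (3, 6) holds while (2, 7) >= (3, 6) does not.
    return tuple(version_info[:2]) >= minimum

print(meets_minimum((3, 9, 1)))   # → True
print(meets_minimum((2, 7, 18)))  # → False
```

Comparing only the first two components sidesteps sys.version_info's extra fields (micro, releaselevel, serial), which the shorter minimum tuple would otherwise also be compared against.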
* Re: [PATCH v3 9/9] KVM: selftests: Provide README.rst for KVM selftests runner
2025-09-30 16:36 ` [PATCH v3 9/9] KVM: selftests: Provide README.rst " Vipin Sharma
@ 2025-10-01 8:44 ` Marc Zyngier
2025-10-01 17:32 ` Vipin Sharma
2025-10-10 9:58 ` Brendan Jackman
1 sibling, 1 reply; 20+ messages in thread
From: Marc Zyngier @ 2025-10-01 8:44 UTC (permalink / raw)
To: Vipin Sharma
Cc: kvm, kvmarm, kvm-riscv, seanjc, pbonzini, borntraeger, frankja,
imbrenda, anup, atish.patra, zhaotianrui, maobibo, chenhuacai,
oliver.upton, ajones
On Tue, 30 Sep 2025 17:36:35 +0100,
Vipin Sharma <vipinsh@google.com> wrote:
>
> Add README.rst for KVM selftest runner and explain how to use the
> runner.
>
> Signed-off-by: Vipin Sharma <vipinsh@google.com>
> ---
> tools/testing/selftests/kvm/.gitignore | 1 +
> tools/testing/selftests/kvm/runner/README.rst | 54 +++++++++++++++++++
> 2 files changed, 55 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/runner/README.rst
>
> diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
> index 548d435bde2f..83aa2fe01bac 100644
> --- a/tools/testing/selftests/kvm/.gitignore
> +++ b/tools/testing/selftests/kvm/.gitignore
> @@ -4,6 +4,7 @@
> !*.c
> !*.h
> !*.py
> +!*.rst
> !*.S
> !*.sh
> !*.test
> diff --git a/tools/testing/selftests/kvm/runner/README.rst b/tools/testing/selftests/kvm/runner/README.rst
> new file mode 100644
> index 000000000000..83b071c0a0e6
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/runner/README.rst
> @@ -0,0 +1,54 @@
> +KVM Selftest Runner
> +===================
> +
> +KVM selftest runner is highly configurable test executor that allows to run
> +tests with different configurations (not just the default), parallely, save
s/parallely/in parallel/
> +output to disk hierarchically, control what gets printed on console, provide
> +execution status.
> +
> +To generate default tests use::
> +
> + # make tests_install
> +
> +This will create ``testcases_default_gen`` directory which will have testcases
I don't think using the future tense is correct here. I'd rather see
something written in the present tense, possibly imperative. For
example:
"Create 'blah' directory containing 'foo' files, one per test-case.
> +in `default.test` files. Each KVM selftest will have a directory in which
> +`default.test` file will be created with executable path relative to KVM
> +selftest root directory i.e. `/tools/testing/selftests/kvm`.
Shouldn't this honor the existing build output directives? If it
actually does, then you want to call this out.
> For example, the
> +`dirty_log_perf_test` will have::
> +
> + # cat testcase_default_gen/dirty_log_perf_test/default.test
> + dirty_log_perf_test
> +
> +Runner will execute `dirty_log_perf_test`. Testcases files can also provide
> +extra arguments to the test::
> +
> + # cat tests/dirty_log_perf_test/2slot_5vcpu_10iter.test
> + dirty_log_perf_test -x 2 -v 5 -i 10
> +
> +In this case runner will execute the `dirty_log_perf_test` with the options.
> +
The beginning of the text talks about "non-default' configurations,
but you only seem to talk about the default stuff. How does one deals
with a non-default config?
M.
--
Without deviation from the norm, progress is not possible.
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 9/9] KVM: selftests: Provide README.rst for KVM selftests runner
2025-10-01 8:44 ` Marc Zyngier
@ 2025-10-01 17:32 ` Vipin Sharma
2025-10-02 14:41 ` Marc Zyngier
0 siblings, 1 reply; 20+ messages in thread
From: Vipin Sharma @ 2025-10-01 17:32 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvm, kvmarm, kvm-riscv, seanjc, pbonzini, borntraeger, frankja,
imbrenda, anup, atish.patra, zhaotianrui, maobibo, chenhuacai,
oliver.upton, ajones
On 2025-10-01 09:44:22, Marc Zyngier wrote:
> On Tue, 30 Sep 2025 17:36:35 +0100,
> Vipin Sharma <vipinsh@google.com> wrote:
> >
> > +KVM selftest runner is highly configurable test executor that allows to run
> > +tests with different configurations (not just the default), parallely, save
>
> s/parallely/in parallel/
>
Thanks, I will fix it.
> > +output to disk hierarchically, control what gets printed on console, provide
> > +execution status.
> > +
> > +To generate default tests use::
> > +
> > + # make tests_install
> > +
> > +This will create ``testcases_default_gen`` directory which will have testcases
>
> I don't think using the future tense is correct here. I'd rather see
> something written in the present tense, possibly imperative. For
> example:
>
> "Create 'blah' directory containing 'foo' files, one per test-case.
>
Thanks, I will fix it.
> > +in `default.test` files. Each KVM selftest will have a directory in which
> > +`default.test` file will be created with executable path relative to KVM
> > +selftest root directory i.e. `/tools/testing/selftests/kvm`.
>
> Shouldn't this honor the existing build output directives? If it
> actually does, then you want to call this out.
>
To generate default test files in a specific directory one can use
"OUTPUT" in the make command
make OUTPUT="~/test/directory/path" tests_install
This allows to generate testcases_default_gen in the given output
directory. default.test files will still have test binary path relative
kvm selftest root directory.
$OUTPUT
└── testcases_default_gen
├── access_tracking_perf_test
│ └── default.test
├── arch_timer
│ └── default.test
├── arm64
│ ├── aarch32_id_regs
│ │ └── default.test
│ ├── arch_timer_edge_cases
│ │ └── default.test
│ ├── debug-exceptions
│ │ └── default.test
│ ├── external_aborts
│ │ └── default.test
│ └── ...
├── coalesced_io_test
│ └── default.test
├── demand_paging_test
│ └── default.test
├── ...
So, arm64/aarch32_id_regs/default.test will have 'arm64/aarch32_id_regs'
User can then supply -p/--path with the path of build output directory
to runner.
python3 runner -p ~/path/to/selftest/binaries -d $OUTPUT/testcases_default_gen
If -p not given then current directory is considered for test
executables.
> > For example, the
> > +`dirty_log_perf_test` will have::
> > +
> > + # cat testcase_default_gen/dirty_log_perf_test/default.test
> > + dirty_log_perf_test
> > +
> > +Runner will execute `dirty_log_perf_test`. Testcases files can also provide
> > +extra arguments to the test::
> > +
> > + # cat tests/dirty_log_perf_test/2slot_5vcpu_10iter.test
> > + dirty_log_perf_test -x 2 -v 5 -i 10
> > +
> > +In this case runner will execute the `dirty_log_perf_test` with the options.
> > +
>
> The beginning of the text talks about "non-default' configurations,
> but you only seem to talk about the default stuff. How does one deals
> with a non-default config?
>
In the patch 1, I created two sample tests files,
2slot_5vcpu_10iter.test and no_dirty_log_protect.test, in the directory
tools/testing/selftests/kvm/tests/dirty_log_perf_test.
Contents of those files provide non-default arguments to test, for example,
2slot_5vcpu10iter.test has the command:
dirty_log_perf_test -x 2 -v 5 -i 10
One can run these non-default tests as (assuming current directory is
kvm selftests):
python3 runner -d ./tests
Over the time we will add more of these non-default interesting
testcases. One can then run:
python3 runner -d ./tests ./testcases_default_gen
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 9/9] KVM: selftests: Provide README.rst for KVM selftests runner
2025-10-01 17:32 ` Vipin Sharma
@ 2025-10-02 14:41 ` Marc Zyngier
2025-10-03 1:02 ` Sean Christopherson
2025-10-03 6:39 ` Vipin Sharma
0 siblings, 2 replies; 20+ messages in thread
From: Marc Zyngier @ 2025-10-02 14:41 UTC (permalink / raw)
To: Vipin Sharma
Cc: kvm, kvmarm, kvm-riscv, seanjc, pbonzini, borntraeger, frankja,
imbrenda, anup, atish.patra, zhaotianrui, maobibo, chenhuacai,
oliver.upton, ajones
On Wed, 01 Oct 2025 18:32:25 +0100,
Vipin Sharma <vipinsh@google.com> wrote:
>
> On 2025-10-01 09:44:22, Marc Zyngier wrote:
> > On Tue, 30 Sep 2025 17:36:35 +0100,
> > Vipin Sharma <vipinsh@google.com> wrote:
> > >
> > > +KVM selftest runner is highly configurable test executor that allows to run
> > > +tests with different configurations (not just the default), parallely, save
> >
> > s/parallely/in parallel/
> >
>
> Thanks, I will fix it.
>
> > > +output to disk hierarchically, control what gets printed on console, provide
> > > +execution status.
> > > +
> > > +To generate default tests use::
> > > +
> > > + # make tests_install
> > > +
> > > +This will create ``testcases_default_gen`` directory which will have testcases
> >
> > I don't think using the future tense is correct here. I'd rather see
> > something written in the present tense, possibly imperative. For
> > example:
> >
> > "Create 'blah' directory containing 'foo' files, one per test-case.
> >
>
> Thanks, I will fix it.
>
> > > +in `default.test` files. Each KVM selftest will have a directory in which
> > > +`default.test` file will be created with executable path relative to KVM
> > > +selftest root directory i.e. `/tools/testing/selftests/kvm`.
> >
> > Shouldn't this honor the existing build output directives? If it
> > actually does, then you want to call this out.
> >
>
> To generate default test files in a specific directory one can use
> "OUTPUT" in the make command
The standard way to do this is documented in the top level Makefile:
<quote>
# This does not need to match to the root of the kernel source tree.
#
# For example, you can do this:
#
# cd /dir/to/store/output/files; make -f /dir/to/kernel/source/Makefile
#
# If you want to save output files in a different location, there are
# two syntaxes to specify it.
#
# 1) O=
# Use "make O=dir/to/store/output/files/"
#
# 2) Set KBUILD_OUTPUT
# Set the environment variable KBUILD_OUTPUT to point to the output directory.
# export KBUILD_OUTPUT=dir/to/store/output/files/; make
#
# The O= assignment takes precedence over the KBUILD_OUTPUT environment
# variable.
</quote>
Your new infrastructure should support the existing mechanism (and
avoid introducing a new one).
>
> make OUTPUT="~/test/directory/path" tests_install
>
> This allows to generate testcases_default_gen in the given output
> directory. default.test files will still have test binary path relative
> kvm selftest root directory.
>
> $OUTPUT
> └── testcases_default_gen
> ├── access_tracking_perf_test
> │ └── default.test
> ├── arch_timer
> │ └── default.test
> ├── arm64
> │ ├── aarch32_id_regs
> │ │ └── default.test
> │ ├── arch_timer_edge_cases
> │ │ └── default.test
> │ ├── debug-exceptions
> │ │ └── default.test
> │ ├── external_aborts
> │ │ └── default.test
> │ └── ...
> ├── coalesced_io_test
> │ └── default.test
> ├── demand_paging_test
> │ └── default.test
> ├── ...
>
> So, arm64/aarch32_id_regs/default.test will have 'arm64/aarch32_id_regs'
>
> User can then supply -p/--path with the path of build output directory
> to runner.
>
> python3 runner -p ~/path/to/selftest/binaries -d $OUTPUT/testcases_default_gen
>
> If -p not given then current directory is considered for test
> executables.
>
> > > For example, the
> > > +`dirty_log_perf_test` will have::
> > > +
> > > + # cat testcase_default_gen/dirty_log_perf_test/default.test
> > > + dirty_log_perf_test
> > > +
> > > +Runner will execute `dirty_log_perf_test`. Testcases files can also provide
> > > +extra arguments to the test::
> > > +
> > > + # cat tests/dirty_log_perf_test/2slot_5vcpu_10iter.test
> > > + dirty_log_perf_test -x 2 -v 5 -i 10
> > > +
> > > +In this case runner will execute the `dirty_log_perf_test` with the options.
> > > +
> >
> > The beginning of the text talks about "non-default' configurations,
> > but you only seem to talk about the default stuff. How does one deals
> > with a non-default config?
> >
>
> In the patch 1, I created two sample tests files,
> 2slot_5vcpu_10iter.test and no_dirty_log_protect.test, in the directory
> tools/testing/selftests/kvm/tests/dirty_log_perf_test.
>
> Contents of those files provide non-default arguments to test, for example,
> 2slot_5vcpu10iter.test has the command:
>
> dirty_log_perf_test -x 2 -v 5 -i 10
>
> One can run these non-default tests as (assuming current directory is
> kvm selftests):
>
> python3 runner -d ./tests
>
> Over the time we will add more of these non-default interesting
> testcases. One can then run:
>
> python3 runner -d ./tests ./testcases_default_gen
That's not what I am complaining about. What you call "configuration"
seems to just be "random set of parameters for a random test".
In practice, your runner does not seem configurable at all. You just
treat all possible configurations of a single test as different tests.
My (admittedly very personal) view of what a configuration should be
is "run this single test with these parameters varying in these
ranges, for this long".
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 9/9] KVM: selftests: Provide README.rst for KVM selftests runner
2025-10-02 14:41 ` Marc Zyngier
@ 2025-10-03 1:02 ` Sean Christopherson
2025-10-03 6:39 ` Vipin Sharma
1 sibling, 0 replies; 20+ messages in thread
From: Sean Christopherson @ 2025-10-03 1:02 UTC (permalink / raw)
To: Marc Zyngier
Cc: Vipin Sharma, kvm, kvmarm, kvm-riscv, pbonzini, borntraeger,
frankja, imbrenda, anup, atish.patra, zhaotianrui, maobibo,
chenhuacai, oliver.upton, ajones
On Thu, Oct 02, 2025, Marc Zyngier wrote:
> On Wed, 01 Oct 2025 18:32:25 +0100,
> > One can run these non-default tests as (assuming current directory is
> > kvm selftests):
> >
> > python3 runner -d ./tests
> >
> > Over the time we will add more of these non-default interesting
> > testcases. One can then run:
> >
> > python3 runner -d ./tests ./testcases_default_gen
>
> That's not what I am complaining about. What you call "configuration"
> seems to just be "random set of parameters for a random test".
Hopefully s/random/interesting, but yes, the design of the runner is specifically
to support running tests with different parameters, and not much more (from a
configuration perspective).
> In practice, your runner does not seem configurable at all. You just
> treat all possible configurations of a single test as different tests.
>
> My (admittedly very personal) view of what a configuration should be
> is "run this single test with these parameters varying in these
> ranges, for this long".
Ya, but personal preference is precisely why we kept the runner fairly minimal.
The goal is to provide:
1. A way to upstream non-standard test invocations so that they can be shared
with others, and to improve the coverage provided when developers just run
whatever tests are upstream (which probably covers most contributions?).
2. Provide "basic" functionality so that each developer doesn't have to reinvent
the wheel.
E.g. I have a (horrific) bash script to run selftests in parallel, and while
it works well enough for my purposes, it's far from perfect, e.g. there's no
timeouts, it's super hard to see what tests are still running, the logging is
hacky, etc.
The idea with this runner is to deal with those low-level details that are
painful to implement from scratch, and that generally don't require foisting
a highly opinionated view on anyone. E.g. if someone really doesn't want to
see certain output, or wants to fully serialize tests, it's easy to do so.
3. Tooling that power users (and hopefully CI?) can build on, e.g. via wrapper
scripts, or something even fancier, again without having to be too opinionated.
E.g. thanks to the myriad module params in x86, I run all selftests with 5-6
different versions of KVM (by unloading and reloading KVM modules). We
deliberately chose not to allow specifying module params or sysfs knobs as
part of the runner, because:
(a) Handling system-wide changes in a runner gets nasty because of the
need to express and track dependencies/conflicts.
(b) It's easy (or should be easy) to query dependencies in selftests.
(c) Selftests need to query them anyways, e.g. to avoid failure when
run with a "bad configuration".
(d) Permuting on system-wide things outside of the runner isn't terribly
difficult (and often requires elevated privileges).
So yeah, there are definitely limitations, but for the most part they are self-
imposed. Partly to avoid boiling the ocean in the initial version (e.g. many
tests won't benefit from running with a range of values/parameters), but also so
that we don't end up in a situation where the runner only suits the needs of a
few people, e.g. because it's too opinionated and/or tailored to certain use cases.
I'm definitely not against providing more functionality/flexibility in the future,
but for a first go I'd like to stick to a relatively minimal implementation.
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 9/9] KVM: selftests: Provide README.rst for KVM selftests runner
2025-10-02 14:41 ` Marc Zyngier
2025-10-03 1:02 ` Sean Christopherson
@ 2025-10-03 6:39 ` Vipin Sharma
1 sibling, 0 replies; 20+ messages in thread
From: Vipin Sharma @ 2025-10-03 6:39 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvm, kvmarm, kvm-riscv, seanjc, pbonzini, borntraeger, frankja,
imbrenda, anup, atish.patra, zhaotianrui, maobibo, chenhuacai,
oliver.upton, ajones
On 2025-10-02 15:41:26, Marc Zyngier wrote:
> On Wed, 01 Oct 2025 18:32:25 +0100,
> Vipin Sharma <vipinsh@google.com> wrote:
> >
> > On 2025-10-01 09:44:22, Marc Zyngier wrote:
> >
> > > > +in `default.test` files. Each KVM selftest will have a directory in which
> > > > +`default.test` file will be created with executable path relative to KVM
> > > > +selftest root directory i.e. `/tools/testing/selftests/kvm`.
> > >
> > > Shouldn't this honor the existing build output directives? If it
> > > actually does, then you want to call this out.
> > >
> >
> > To generate default test files in a specific directory one can use
> > "OUTPUT" in the make command
>
> The standard way to do this is documented in the top level Makefile:
>
> <quote>
> # This does not need to match to the root of the kernel source tree.
> #
> # For example, you can do this:
> #
> # cd /dir/to/store/output/files; make -f /dir/to/kernel/source/Makefile
> #
> # If you want to save output files in a different location, there are
> # two syntaxes to specify it.
> #
> # 1) O=
> # Use "make O=dir/to/store/output/files/"
> #
> # 2) Set KBUILD_OUTPUT
> # Set the environment variable KBUILD_OUTPUT to point to the output directory.
> # export KBUILD_OUTPUT=dir/to/store/output/files/; make
> #
> # The O= assignment takes precedence over the KBUILD_OUTPUT environment
> # variable.
> </quote>
>
> Your new infrastructure should support the existing mechanism (and
> avoid introducing a new one).
>
Options "O" and "KBUILD_OUTPUT" are not supported by the KVM makefile. I
tried running the commands below in the KVM selftest directory:
make O=~/some/dir
make KBUILD_OUTPUT=~/some/dir
Both commands generate output in the KVM selftest directory. So my new
Makefile rule "tests_install" is behaving as per the current KVM
behavior.
However, building through the kselftest Makefile does support "O" and
"KBUILD_OUTPUT". For example, from the kernel root directory:
make O=~/some/dir -C tools/testing/selftests TARGETS=kvm
Now the output binaries will be generated in ~/some/dir. To make the testcases
also be generated in the output directory, the change below is needed.
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index 6bb63f88c0e6..a2b5c064d3a3 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0-only
include ../../../build/Build.include
-all:
+all: tests_install
LIBKVM += lib/assert.c
LIBKVM += lib/elf.c
@@ -358,4 +358,7 @@ tests_install:
$(foreach tc, $(list_progs), \
$(shell echo $(tc) > $(patsubst %,$(OUTPUT)/$(DEFAULT_TESTCASES)/%/default.test,$(tc))))
+ @if [ ! -d $(OUTPUT)/runner ]; then \
+ cp -r $(selfdir)/kvm/runner $(OUTPUT); \
+ fi
@:
Since the testcases by themselves are not useful, I have updated the
rule in the above diff to also copy the runner code to the output directory.
I think this brings my changes in line with the existing build output
directives. Let me know if this needs some fixes.
^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [PATCH v3 1/9] KVM: selftest: Create KVM selftest runner
2025-09-30 16:36 ` [PATCH v3 1/9] KVM: selftest: Create KVM selftest runner Vipin Sharma
2025-09-30 22:23 ` Vipin Sharma
@ 2025-10-10 9:47 ` Brendan Jackman
1 sibling, 0 replies; 20+ messages in thread
From: Brendan Jackman @ 2025-10-10 9:47 UTC (permalink / raw)
To: Vipin Sharma, kvm, kvmarm, kvm-riscv
Cc: seanjc, pbonzini, borntraeger, frankja, imbrenda, anup,
atish.patra, zhaotianrui, maobibo, chenhuacai, maz, oliver.upton,
ajones, kvm-riscv
On Tue Sep 30, 2025 at 4:36 PM UTC, Vipin Sharma wrote:
> + def run(self):
> + if not self.exists:
> + self.stderr = "File doesn't exists."
> + return
Nit - typo here s/exists/exist/
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 9/9] KVM: selftests: Provide README.rst for KVM selftests runner
2025-09-30 16:36 ` [PATCH v3 9/9] KVM: selftests: Provide README.rst " Vipin Sharma
2025-10-01 8:44 ` Marc Zyngier
@ 2025-10-10 9:58 ` Brendan Jackman
2025-10-10 18:14 ` Sean Christopherson
1 sibling, 1 reply; 20+ messages in thread
From: Brendan Jackman @ 2025-10-10 9:58 UTC (permalink / raw)
To: Vipin Sharma, kvm, kvmarm, kvm-riscv
Cc: seanjc, pbonzini, borntraeger, frankja, imbrenda, anup,
atish.patra, zhaotianrui, maobibo, chenhuacai, maz, oliver.upton,
ajones, kvm-riscv
On Tue Sep 30, 2025 at 4:36 PM UTC, Vipin Sharma wrote:
> Add README.rst for KVM selftest runner and explain how to use the
> runner.
>
> Signed-off-by: Vipin Sharma <vipinsh@google.com>
> ---
> tools/testing/selftests/kvm/.gitignore | 1 +
> tools/testing/selftests/kvm/runner/README.rst | 54 +++++++++++++++++++
> 2 files changed, 55 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/runner/README.rst
>
> diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
> index 548d435bde2f..83aa2fe01bac 100644
> --- a/tools/testing/selftests/kvm/.gitignore
> +++ b/tools/testing/selftests/kvm/.gitignore
> @@ -4,6 +4,7 @@
> !*.c
> !*.h
> !*.py
> +!*.rst
> !*.S
> !*.sh
> !*.test
> diff --git a/tools/testing/selftests/kvm/runner/README.rst b/tools/testing/selftests/kvm/runner/README.rst
> new file mode 100644
> index 000000000000..83b071c0a0e6
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/runner/README.rst
> @@ -0,0 +1,54 @@
> +KVM Selftest Runner
> +===================
> +
> +KVM selftest runner is highly configurable test executor that allows to run
> +tests with different configurations (not just the default), parallely, save
> +output to disk hierarchically, control what gets printed on console, provide
> +execution status.
> +
> +To generate default tests use::
> +
> + # make tests_install
> +
> +This will create ``testcases_default_gen`` directory which will have testcases
> +in `default.test` files. Each KVM selftest will have a directory in which
> +`default.test` file will be created with executable path relative to KVM
> +selftest root directory i.e. `/tools/testing/selftests/kvm`. For example, the
> +`dirty_log_perf_test` will have::
> +
> + # cat testcase_default_gen/dirty_log_perf_test/default.test
> + dirty_log_perf_test
> +
> +Runner will execute `dirty_log_perf_test`. Testcases files can also provide
> +extra arguments to the test::
> +
> + # cat tests/dirty_log_perf_test/2slot_5vcpu_10iter.test
> + dirty_log_perf_test -x 2 -v 5 -i 10
> +
> +In this case runner will execute the `dirty_log_perf_test` with the options.
> +
> +Example
> +=======
> +
> +To see all of the options::
> +
> + # python3 runner -h
> +
> +To run all of the default tests::
> +
> + # python3 runner -d testcases_default_gen
> +
> +To run tests parallely::
> +
> + # python3 runner -d testcases_default_gen -j 40
> +
> +To print only passed test status and failed test stderr::
> +
> + # python3 runner -d testcases_default_gen --print-passed status \
> + --print-failed stderr
> +
> +To run tests binary which are in some other directory (out of tree builds)::
> +
> + # python3 runner -d testcases_default_gen -p /path/to/binaries
I understand that for reasons of velocity it might make sense to do this
as a KVM-specific thing, but IIUC very little of this has anything to do
with KVM in particular, right? Is there an expectation to evolve in a
more KVM-specific direction?
(One thing that might be KVM-specific is the concurrency. I assume there
are a bunch of KVM tests that are pretty isolated from one another and
reasonable to run in parallel. Testing _the_ mm like that just isn't
gonna work most of the time. I still think this is really specific to
individual sets of tests though, in a more mature system there would be
a metadata mechanism for marking tests as parallelisable wrt each other.
I guess this patchset is part of an effort to have a more mature system
that enables that kind of thing.).
To avoid confusing people and potentially leave the door open to a
cleaner integration, please can you add some bits here about how this
relates to the rest of the kselftest infrastructure? Some questions I
think are worth answering:
- As someone who runs KVM selftests, but doesn't work specifically on
KVM, to what extent do I need to know about this tool? Can I still run
the selftests "the old fashioned way" and if so what do I lose as
compared to using the KVM runner?
- Does this system change the "data model" of the selftests at all, and
if so how? I.e. I think (but honestly I'm not sure) that kselftests
are a 2-tier hierarchy of $suite:$test without any further
parameterisation or nesting (where there is more detail, it's hidden
as implementation details of individual $tests). Do the KVM selftests
have this structure? If it differs, how does that affect the view from
run_kselftest.sh?
- I think (again, not very sure) that in kselftest that each $test is a
command executing a process. And this process communicates its status
by printing KTAP and returning an exit code. Is that stuff the same
for this runner?
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 9/9] KVM: selftests: Provide README.rst for KVM selftests runner
2025-10-10 9:58 ` Brendan Jackman
@ 2025-10-10 18:14 ` Sean Christopherson
2025-10-10 19:38 ` Vipin Sharma
0 siblings, 1 reply; 20+ messages in thread
From: Sean Christopherson @ 2025-10-10 18:14 UTC (permalink / raw)
To: Brendan Jackman
Cc: Vipin Sharma, kvm, kvmarm, kvm-riscv, pbonzini, borntraeger,
frankja, imbrenda, anup, atish.patra, zhaotianrui, maobibo,
chenhuacai, maz, oliver.upton, ajones, kvm-riscv
On Fri, Oct 10, 2025, Brendan Jackman wrote:
> On Tue Sep 30, 2025 at 4:36 PM UTC, Vipin Sharma wrote:
> > @@ -0,0 +1,54 @@
> > +KVM Selftest Runner
> > +===================
> > +
> > +KVM selftest runner is highly configurable test executor that allows to run
> > +tests with different configurations (not just the default), parallely, save
> > +output to disk hierarchically, control what gets printed on console, provide
> > +execution status.
...
> I understand that for reasons of velocity
It's not just velocity, it's also about stability and maintainability. Selftests are
the wild, wild west; there's no central authority, and many subsystems have "needs"
and opinions. E.g. tools/testing/selftests/kselftest_harness.h is quite opinionated,
_and_ it has been fatally broken multiple times due to one subsystem making changes
that broke usage for other subsystems. Obviously those bugs got sorted out, but it's
a painful experience.
I guess you could say those things are all about velocity in the end; but I want
to call out that it's not just about the initial velocity of landing the series,
it's also about the long-term velocity of being able to make changes to fit KVM's
needs without getting bogged down due to other subsystems adding requirements and
use cases that are irrelevant or at odds with KVM's.
> it might make sense to do this as a KVM-specific thing, but IIUC very little
> of this has anything to do with KVM in particular, right?
The actual implementation doesn't have any dependencies on KVM, but the design
and its goal are tailored to the needs of KVM.
> Is there an expectation to evolve in a more KVM-specific direction?
Sort of? I don't think we'll ever pick up direct dependencies, but I do think
we'll continue to tailor the runner to the needs of the KVM community.
> (One thing that might be KVM-specific is the concurrency. I assume there
> are a bunch of KVM tests that are pretty isolated from one another and
> reasonable to run in parallel.
Every KVM selftest should be able to run in parallel. That's actually a very
intentional design property of the runner: any system-level configuration needs
to be done by a "higher" authority, e.g. the human manually running the test, a
wrapper script, some form of CI infrastructure, etc.
> Testing _the_ mm like that just isn't gonna work most of the time. I still
> think this is really specific to individual sets of tests though, in a more
> mature system there would be a metadata mechanism for marking tests as
> parallelisable wrt each other.
Dependency and friendliness tracking is again something we specifically avoided
doing, because the KVM selftests need to be self-contained anyways. E.g. if a
test requires KVM module param X to be enabled, then the test needs to skip.
The runner takes advantage of that behavior in order to simplify the code; it
really is just a "dumb" executor.
> I guess this patchset is part of an effort to have a more mature system that
> enables that kind of thing.).
Sort of? My response to Marc covered more of the goals in detail:
https://lore.kernel.org/all/aN8gkEMHuvIVPcCt@google.com
> To avoid confusing people and potentially leave the door open to a
> cleaner integration, please can you add some bits here about how this
> relates to the rest of the kselftest infrastructure? Some questions I
> think are worth answering:
>
> - As someone who runs KVM selftests, but doesn't work specifically on
> KVM, to what extent do I need to know about this tool? Can I still run
> the selftests "the old fashioned way" and if so what do I lose as
> compared to using the KVM runner?
The runner is purely optional. You'll lose whatever you don't have, that the
runner provides. E.g. I have (hacky) scripts to run KVM selftests in parallel,
but without much of the niceties provided by this runner.
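For the curious, the core of such a script is small. A hedged sketch of parallel execution with per-test timeouts (names are invented for illustration; the runner in this series does much more, e.g. hierarchical logging and console control):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_one(cmd, timeout=300):
    """Run a single test command; return (cmd, status, stdout).

    Exit code 4 is the kselftest convention for a skipped test.
    """
    try:
        res = subprocess.run(cmd, shell=True, capture_output=True,
                             text=True, timeout=timeout)
        if res.returncode == 0:
            status = "PASSED"
        elif res.returncode == 4:
            status = "SKIPPED"
        else:
            status = "FAILED"
        return cmd, status, res.stdout
    except subprocess.TimeoutExpired:
        return cmd, "TIMED_OUT", ""

def run_all(cmds, jobs=4, timeout=300):
    """Run commands with up to 'jobs' tests in flight at once."""
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        return list(pool.map(lambda c: run_one(c, timeout), cmds))
```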
> - Does this system change the "data model" of the selftests at all, and
> if so how? I.e. I think (but honestly I'm not sure) that kselftests
> are a 2-tier hierarchy of $suite:$test without any further
> parameterisation or nesting (where there is more detail, it's hidden
> as implementation details of individual $tests). Do the KVM selftests
> have this structure?
More or less.
> If it differs, how does that affect the view from run_kselftest.sh?
AFAIK, nothing in KVM selftests is at odds with run_kselftest.sh.
> - I think (again, not very sure) that in kselftest that each $test is a
> command executing a process. And this process communicates its status
> by printing KTAP and returning an exit code. Is that stuff the same
> for this runner?
Yes? Except most KVM selftests don't support TAP (yet).
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 9/9] KVM: selftests: Provide README.rst for KVM selftests runner
2025-10-10 18:14 ` Sean Christopherson
@ 2025-10-10 19:38 ` Vipin Sharma
0 siblings, 0 replies; 20+ messages in thread
From: Vipin Sharma @ 2025-10-10 19:38 UTC (permalink / raw)
To: Sean Christopherson, Brendan Jackman
Cc: kvm, kvmarm, kvm-riscv, pbonzini, borntraeger, frankja, imbrenda,
anup, atish.patra, zhaotianrui, maobibo, chenhuacai, maz,
oliver.upton, ajones, kvm-riscv
On Fri, Oct 10, 2025 at 11:14 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Fri, Oct 10, 2025, Brendan Jackman wrote:
> > To avoid confusing people and potentially leave the door open to a
> > cleaner integration, please can you add some bits here about how this
> > relates to the rest of the kselftest infrastructure? Some questions I
> > think are worth answering:
> >
It's a good point, I should write about how it compares with kselftest
infrastructure.
> > - As someone who runs KVM selftests, but doesn't work specifically on
> > KVM, to what extent do I need to know about this tool? Can I still run
> > the selftests "the old fashioned way" and if so what do I lose as
> > compared to using the KVM runner?
>
> The runner is purely optional. You'll lose whatever you don't have, that the
> runner provides. E.g. I have (hacky) scripts to run KVM selftests in parallel,
> but without much of the niceties provided by this runner.
>
For occasional contributors, it provides the opportunity to run the
default tests and to run tests with different combinations of their
command line options. Overall, it gives better test coverage and
confidence before sending patches upstream.
For regular contributors and maintainers, it provides faster execution of
selftests + more coverage. The runner allows them to configure output and
results to fit their workflows.
For everyone, it is a place to upstream the interesting test combinations
they use, so that the whole KVM community benefits from them.
> > - Does this system change the "data model" of the selftests at all, and
> > if so how? I.e. I think (but honestly I'm not sure) that kselftests
> > are a 2-tier hierarchy of $suite:$test without any further
> > parameterisation or nesting (where there is more detail, it's hidden
> > as implementation details of individual $tests). Do the KVM selftests
> > have this structure?
>
> More or less.
>
> > If it differs, how does that effect the view from run_kselftest.sh?
>
> AFAIK, nothing in KVM selftests is at odds with run_kselftest.sh.
>
> > - I think (again, not very sure) that in kselftest that each $test is a
> > command executing a process. And this process communicates its status
> > by printing KTAP and returning an exit code. Is that stuff the same
> > for this runner?
>
> Yes? Except most KVM selftests don't support TAP (yet).
Here each test is executed in its own subprocess. The output of the runner
is not in KTAP format, and the same is true for the majority of KVM tests.
A *.test file contains an arbitrary command line, which is interpreted
as a command to execute in a shell. This allows us to write testcases
which adapt to the resources of the host, like the number of vCPUs to
use, how much memory to allocate, etc.
^ permalink raw reply [flat|nested] 20+ messages in thread
end of thread, other threads:[~2025-10-10 19:38 UTC | newest]
Thread overview: 20+ messages
2025-09-30 16:36 [PATCH v3 0/9] KVM Selftest Runner Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 1/9] KVM: selftest: Create KVM selftest runner Vipin Sharma
2025-09-30 22:23 ` Vipin Sharma
2025-10-10 9:47 ` Brendan Jackman
2025-09-30 16:36 ` [PATCH v3 2/9] KVM: selftests: Provide executables path option to the " Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 3/9] KVM: selftests: Add timeout option in selftests runner Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 4/9] KVM: selftests: Add option to save selftest runner output to a directory Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 5/9] KVM: selftests: Run tests concurrently in KVM selftests runner Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 6/9] KVM: selftests: Add various print flags to KVM selftest runner Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 7/9] KVM: selftests: Print sticky KVM selftests runner status at bottom Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 8/9] KVM: selftests: Add rule to generate default tests for KVM selftests runner Vipin Sharma
2025-09-30 16:36 ` [PATCH v3 9/9] KVM: selftests: Provide README.rst " Vipin Sharma
2025-10-01 8:44 ` Marc Zyngier
2025-10-01 17:32 ` Vipin Sharma
2025-10-02 14:41 ` Marc Zyngier
2025-10-03 1:02 ` Sean Christopherson
2025-10-03 6:39 ` Vipin Sharma
2025-10-10 9:58 ` Brendan Jackman
2025-10-10 18:14 ` Sean Christopherson
2025-10-10 19:38 ` Vipin Sharma