public inbox for linux-rt-users@vger.kernel.org
From: Tomas Glozar <tglozar@redhat.com>
To: John Kacur <jkacur@redhat.com>, Clark Williams <williams@redhat.com>
Cc: Linux RT Users <linux-rt-users@vger.kernel.org>,
	Tomas Glozar <tglozar@redhat.com>
Subject: [PATCH 2/2] rteval: Add README-tests
Date: Tue, 25 Nov 2025 12:02:39 +0100	[thread overview]
Message-ID: <20251125110241.277542-2-tglozar@redhat.com> (raw)
In-Reply-To: <20251125110241.277542-1-tglozar@redhat.com>

Add a README-tests file describing what tests are implemented for
rteval and how to run them.

Signed-off-by: Tomas Glozar <tglozar@redhat.com>
---
 README-tests | 102 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 102 insertions(+)
 create mode 100644 README-tests

diff --git a/README-tests b/README-tests
new file mode 100644
index 0000000..802d4eb
--- /dev/null
+++ b/README-tests
@@ -0,0 +1,102 @@
+There currently exist four kinds of tests for rteval:
+
+- Unit tests
+
+Unit tests reside in the tests/ directory. They are Python modules built on
+the standard Python unittest framework.
+
+The Makefile target "unittest" (or, equivalently, "test") runs the unit
+tests:
+
+$ make test
+Running unit tests...
+./run_tests.sh
+=========================================
+Running rteval Unit Tests
+=========================================
+
+Running: test_measurement_module_selection
+---
+test_argparse_rejects_invalid_module (__main__.TestMeasurementModuleSelection)
+Test that argparse rejects invalid module names ... usage: test_measurement_module_selection.py [--measurement-module {cyclictest,timerlat}]
+test_measurement_module_selection.py: error: argument --measurement-module: invalid choice: 'invalid' (choose from 'cyclictest', 'timerlat')
+ok
+...
+
+----------------------------------------------------------------------
+Ran 6 tests in 0.003s
+
+OK
+✓ PASSED: test_measurement_module_selection
+
+=========================================
+Test Summary
+=========================================
+Total tests run: 1
+Passed: 1
+Failed: 0
+
+✓ All tests passed!
+
+- End-to-end tests
+
+End-to-end tests reside in the tests/e2e subdirectory, in files with
+the .t extension. They are written in bash and produce output conforming to
+the TAP (Test Anything Protocol) standard.
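For illustration, a TAP stream consists of a plan line ("1..N") followed by one "ok"/"not ok" line per test. The sketch below (written in Python here rather than bash, with made-up check names) emits a minimal TAP stream of the kind prove consumes:

```python
# Minimal sketch of the TAP output format consumed by prove/Test::Harness.
# The check descriptions below are made up for illustration.
checks = [
    ("rteval binary exists", True),
    ("report directory created", True),
]

# The plan line announces how many test results will follow
lines = [f"1..{len(checks)}"]

for number, (description, passed) in enumerate(checks, start=1):
    # Each result line: "ok"/"not ok", the test number, and a description
    status = "ok" if passed else "not ok"
    lines.append(f"{status} {number} - {description}")

print("\n".join(lines))
```

This prints:

```
1..2
ok 1 - rteval binary exists
ok 2 - report directory created
```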
+
+To run the end-to-end tests, the rteval kcompile source tarball has to be
+present; see the LOADS variable in the Makefile for the tarballs used by the
+current version of rteval.
+
+The Makefile target "e2e-tests" invokes the end-to-end tests under
+Test::Harness via the "prove" command:
+
+$ sudo make e2e-tests
+PYTHON="python3" RTEVAL="/usr/src/rteval/rteval-cmd" RTEVAL_PKG="/usr/src/rteval" prove -o -f -v tests/e2e/
+...
+All tests successful.
+Files=3, Tests=33, 288 wallclock secs ( 0.02 usr  0.00 sys + 127.59 cusr 155.27 csys = 282.88 CPU)
+Result: PASS
+
+Test::Harness here measures run time, counts the total number of tests, and
+verifies that each test suite reported the expected number of test results.
+
+Note: rteval requires root privileges to run. All tests therefore require
+root, with the exception of some of the unit tests.
+
+- Pre-defined rteval commands for manual testing
+
+These are accessible under the Makefile targets: "runit", "load", and
+"sysreport".
+
+"runit" tests both measurements and loads, "load" tests only loads, and
+"sysreport" runs both measurements and loads while also generating an SOS
+report at the end of the rteval run.
+
+- Legacy unit tests
+
+Legacy unit tests are embedded directly in the rteval source code. The test
+engine that runs them is located in tests/unittest-legacy.py and may be
+invoked with the following commands, starting from the repository root:
+
+$ cd rteval/
+$ sudo python3 ../tests/unittest-legacy.py
+...
+ --------------------
+  ** TEST SUMMARY **
+ --------------------
+
+  - Modules:
+      Declared for test:      4
+      Successfully imported:  4
+      Failed import:          0
+
+  - Tests:
+      Tests scheduled:        4
+      Sucessfully tests:      4
+      Failed tests:           0
+      Missing unit_test()     0
+
+Note that some of the tests require root and will fail if run as a normal
+user.
-- 
2.51.1


Thread overview: 4+ messages
2025-11-25 11:02 [PATCH 1/2] rteval: Move unittest.py to tests/unittest-legacy.py Tomas Glozar
2025-11-25 11:02 ` Tomas Glozar [this message]
2025-11-25 19:05   ` [PATCH 2/2] rteval: Add README-tests John Kacur
2025-11-25 19:04 ` [PATCH 1/2] rteval: Move unittest.py to tests/unittest-legacy.py John Kacur
