public inbox for linux-rt-users@vger.kernel.org
* [PATCH] rteval: Introduce E2E tests with output checking
@ 2025-07-25  9:02 Tomas Glozar
  2025-09-18 17:56 ` John Kacur
  2025-09-18 19:22 ` Crystal Wood
  0 siblings, 2 replies; 8+ messages in thread
From: Tomas Glozar @ 2025-07-25  9:02 UTC (permalink / raw)
  To: John Kacur, Clark Williams; +Cc: Linux RT Users, Tomas Glozar

Currently, rteval has two kinds of tests:
- Unit tests, embedded directly in code, and run by
  unit-tests/unittest.py.
- End-to-end tests, implemented in Makefile targets runit, load, and
  sysreport.

Introduce a new test suite in the e2e-tests/ folder (analogous to
unit-tests) that uses Test::Harness (the "prove" command) together
with a simple test engine adopted from RTLA.
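The engine emits TAP ("Test Anything Protocol") output for prove. TAP's plan line ("1..N") requires knowing the test count up front, which the engine solves with a two-pass scheme; a condensed, self-contained sketch of the idea follows (illustrative names only — the real engine re-executes itself via exec rather than calling a function twice):

```shell
#!/bin/sh
# Condensed sketch of two-pass TAP counting. Pass 1 runs with
# TEST_COUNT unset and only counts checks; pass 2 knows the count,
# so it can emit the TAP plan line "1..N" before the results.
check() {
	ctr=$((ctr + 1))
	[ -z "$TEST_COUNT" ] && return 0      # pass 1: just count
	if "$2" >/dev/null 2>&1; then
		echo "ok $ctr - $1"
	else
		echo "not ok $ctr - $1"
	fi
}

run_suite() {
	ctr=0
	[ -n "$TEST_COUNT" ] && echo "1..$TEST_COUNT"
	check "always passes" true
	check "always fails" false
}

run_suite           # first pass: counting only
TEST_COUNT=$ctr     # plan size now known
run_suite           # second pass: prints "1..2", then the ok/not ok lines
```

Running the sketch prints "1..2", "ok 1 - always passes", "not ok 2 - always fails".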

The test suite runs rteval in a temporary folder for each test case,
collects its exit value and output, and validates both according to the
test specification. grep is used to check the output, optionally with
custom flags. rteval.conf is generated individually based on the test
specification of each case.
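In isolation, the per-case flow looks roughly like this (illustrative only; a placeholder echo stands in for the rteval invocation):

```shell
#!/bin/sh
# Illustrative sketch of one test case: run a command in a temporary
# directory, capture its exit code and output, then validate both.
tmpdir=$(mktemp -d)
olddir=$PWD
cd "$tmpdir" || exit 1

# Placeholder command; in the suite this is the rteval invocation,
# with stderr folded into stdout.
result=$(echo "Run duration: 5.0 seconds" 2>&1); exitcode=$?

# Validate the captured output against an extended regex, as check() does.
printf '%s\n' "$result" | grep -qE "Run duration: 5\.0 seconds"
grep_result=$?

if [ "$exitcode" -eq 0 ] && [ "$grep_result" -eq 0 ]; then
	verdict="ok"
else
	verdict="not ok"
fi

cd "$olddir" || exit 1
rm -r "$tmpdir"
echo "$verdict"   # prints: ok
```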

Three test sets are implemented in the commit:
- "basic", running base checks without any specific modules or module
  options. Replaces "make runit".
- "loads", testing various kinds of loads, including their options.
- "measurement", testing both cyclictest and timerlat.

The latter two check, where possible, that the command line reported in
the rteval output corresponds to the rteval options and to the
specified load or measurement.

"make runit" and "make load" targets are depracated with a warning;
"make sysreport" is kept until sysreport tests are added to the new test
suite.

Signed-off-by: Tomas Glozar <tglozar@redhat.com>
---
 Makefile                |   5 ++
 e2e-tests/basic.t       |  26 +++++++++++
 e2e-tests/engine.sh     |  81 ++++++++++++++++++++++++++++++++
 e2e-tests/loads.t       |  72 ++++++++++++++++++++++++++++
 e2e-tests/measurement.t | 101 ++++++++++++++++++++++++++++++++++++++++
 5 files changed, 285 insertions(+)
 create mode 100644 e2e-tests/basic.t
 create mode 100644 e2e-tests/engine.sh
 create mode 100644 e2e-tests/loads.t
 create mode 100644 e2e-tests/measurement.t

diff --git a/Makefile b/Makefile
index a250b18..dbb2aef 100644
--- a/Makefile
+++ b/Makefile
@@ -18,11 +18,16 @@ KLOAD	:=	$(LOADDIR)/linux-6.12-rc4.tar.gz
 BLOAD	:=	$(LOADDIR)/dbench-4.0.tar.gz
 LOADS	:=	$(KLOAD) $(BLOAD)
 
+check: rteval-cmd
+	PYTHON="$(PYTHON)" RTEVAL="$(HERE)/rteval-cmd" RTEVAL_PKG="$(HERE)" prove -o -f e2e-tests/
+
 runit:
+	$(warning "'make runit' is depracated, please use 'make check'")
 	[ -d $(HERE)/run ] || mkdir run
 	$(PYTHON) rteval-cmd -D -L -v --workdir=$(HERE)/run --loaddir=$(HERE)/loadsource --duration=$(D) -f $(HERE)/rteval.conf -i $(HERE)/rteval $(EXTRA)
 
 load:
+	$(warning "'make load' is depracated, please use 'make check'")
 	[ -d ./run ] || mkdir run
 	$(PYTHON) rteval-cmd --onlyload -D -L -v --workdir=./run --loaddir=$(HERE)/loadsource -f $(HERE)/rteval/rteval.conf -i $(HERE)/rteval
 
diff --git a/e2e-tests/basic.t b/e2e-tests/basic.t
new file mode 100644
index 0000000..c7b7b17
--- /dev/null
+++ b/e2e-tests/basic.t
@@ -0,0 +1,26 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+source e2e-tests/engine.sh
+test_begin
+
+set_timeout 2m
+
+check "help message" \
+  "--help" 0 "usage: rteval-cmd"
+
+check "help message short" \
+  "-h" 0 "usage: rteval-cmd"
+
+check "debug" \
+  "-D -d 1" 0 '\[DEBUG\]'
+
+check "duration" \
+  "-d 5" 0 "Run duration: 5.0 seconds"
+
+check "verbose" \
+  "-v -d 5" 0 '\[INFO\]'
+
+check "quiet" \
+  "-d 5" 0 '(\[INFO\])|(\[DEBUG\])' "-v"
+
+test_end
diff --git a/e2e-tests/engine.sh b/e2e-tests/engine.sh
new file mode 100644
index 0000000..ff1a882
--- /dev/null
+++ b/e2e-tests/engine.sh
@@ -0,0 +1,81 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+load_default_config() {
+	rteval_config=$(<$RTEVAL_PKG/rteval.conf)
+}
+
+test_begin() {
+	# Count tests to allow the test harness to double-check if all were
+	# included correctly.
+	ctr=0
+	[ -z "$PYTHON" ] && PYTHON="python3"
+	[ -z "$RTEVAL" ] && RTEVAL="$PWD/rteval-cmd"
+	[ -z "$RTEVAL_PKG" ] && RTEVAL_PKG="$PWD"
+	[ -n "$TEST_COUNT" ] && echo "1..$TEST_COUNT"
+	load_default_config
+}
+
+check() {
+	test_name=$1
+	tested_command=$2
+	expected_exitcode=${3:-0}
+	expected_output=$4
+	grep_flags=$5
+	# Simple check: run rteval with given arguments and test exit code.
+	# If TEST_COUNT is set, run the test. Otherwise, just count.
+	ctr=$(($ctr + 1))
+	if [ -n "$TEST_COUNT" ]
+	then
+		# Create a temporary directory to contain rteval output
+		tmpdir=$(mktemp -d)
+		pushd $tmpdir >/dev/null
+		cat <<< $rteval_config > rteval.conf
+		# Run rteval; in case of failure, include its output as comment
+		# in the test results.
+		result=$(PYTHONPATH="$RTEVAL_PKG" stdbuf -oL $TIMEOUT $PYTHON "$RTEVAL" $2 2>&1); exitcode=$?
+		# Test if the result matches, if requested
+		if [ -n "$expected_output" ]
+		then
+			grep $grep_flags -E "$expected_output" <<< "$result" > /dev/null; grep_result=$?
+		else
+			grep_result=0
+		fi
+
+		# If expected exitcode is any, allow any exit code
+		[ "$expected_exitcode" == "any" ] && expected_exitcode=$exitcode
+
+		if [ $exitcode -eq $expected_exitcode ] && [ $grep_result -eq 0 ]
+		then
+			echo "ok $ctr - $1"
+		else
+			echo "not ok $ctr - $1"
+			# Add rteval output and exit code as comments in case of failure
+			echo "$result" | col -b | while read line; do echo "# $line"; done
+			printf "#\n# exit code %s\n" $exitcode
+			[ -n "$expected_output" ] && [ $grep_result -ne 0 ] && \
+				printf "# Output match failed: \"%s\"\n" "$expected_output"
+		fi
+
+		# Remove temporary directory
+		popd >/dev/null
+		rm -r $tmpdir
+	fi
+}
+
+set_timeout() {
+	TIMEOUT="timeout -v -k 15s $1"
+}
+
+unset_timeout() {
+	unset TIMEOUT
+}
+
+test_end() {
+	# If running without TEST_COUNT, tests are not actually run, just
+	# counted. In that case, re-run the test with the correct count.
+	[ -z "$TEST_COUNT" ] && TEST_COUNT=$ctr exec bash $0 || true
+}
+
+# Avoid any environmental discrepancies
+export LC_ALL=C
+unset_timeout
diff --git a/e2e-tests/loads.t b/e2e-tests/loads.t
new file mode 100644
index 0000000..b43d957
--- /dev/null
+++ b/e2e-tests/loads.t
@@ -0,0 +1,72 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+source e2e-tests/engine.sh
+test_begin
+
+set_timeout 2m
+
+# stress-ng checks
+rteval_config="[rteval]
+duration:  60.0
+report_interval: 600
+
+[measurement]
+
+[loads]
+stressng:  module
+"
+
+check "stress-ng debug" \
+    "--onlyload -D -d 1" 0 '\[DEBUG\]'
+
+check "stress-ng command" \
+    "--onlyload -D -d 1 --stressng-option procfs --stressng-arg 1" 0 \
+    'starting with stress-ng --procfs 1 --taskset'
+
+check "stress-ng command, with --loads-cpulist" \
+    "--onlyload -D -d 1 --loads-cpulist=0-2 --stressng-option procfs --stressng-arg 1" 0 \
+    'starting with stress-ng --procfs 1 --taskset 0,1,2'
+
+check "stress-ng command, with --stressng-timeout" \
+    "--onlyload -D -d 1 --stressng-option procfs --stressng-arg 1 --stressng-timeout 2" 0 \
+    'starting with stress-ng --procfs 1 --timeout 2'
+
+# hackbench checks
+rteval_config="[rteval]
+duration:  60.0
+report_interval: 600
+
+[measurement]
+
+[loads]
+hackbench:  module
+"
+
+check "hackbench command" \
+    "--onlyload --hackbench-runlowmem=True -D -d 1" 0 \
+    "starting on node 0: args = ['taskset', '-c', '[0-9|,]+', 'hackbench', '-P', '-g', '42', '-l', '1000', '-s', '1000']"
+
+check "hackbench command, with --loads-cpulist" \
+    "--onlyload --hackbench-runlowmem=True --loads-cpulist=0-2 -D -d 1" 0 \
+    "starting on node 0: args = ['taskset', '-c', '0,1,2', 'hackbench', '-P', '-g', '42', '-l', '1000', '-s', '1000']"
+
+# kcompile checks
+rteval_config="[rteval]
+duration:  60.0
+report_interval: 600
+
+[measurement]
+
+[loads]
+kcompile:  module
+"
+
+check "kcompile command" \
+    "--onlyload -D -d 1" 0 \
+    'running on node 0: taskset -c [0-9|,]+ make O=.* -C .* -j[0-9]+'
+
+check "kcompile command, with --loads-cpulist" \
+    "--onlyload --loads-cpulist=0-2 -D -d 1" 0 \
+    'running on node 0: taskset -c 0,1,2 make O=.* -C .* -j6'
+
+test_end
diff --git a/e2e-tests/measurement.t b/e2e-tests/measurement.t
new file mode 100644
index 0000000..3aa24c7
--- /dev/null
+++ b/e2e-tests/measurement.t
@@ -0,0 +1,101 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+source e2e-tests/engine.sh
+test_begin
+
+set_timeout 2m
+
+# cyclictest checks
+rteval_config="[rteval]
+duration:  60.0
+report_interval: 600
+
+[measurement]
+cyclictest: module
+
+[loads]
+"
+
+check "cyclictest debug" \
+  "--noload -D -d 1" 0 '\[DEBUG\]'
+
+check "cyclictest duration" \
+  "--noload -d 5" 0 "Run duration: 5.0 seconds"
+
+check "cyclictest command, no extra options" \
+  "--noload -d 1" 0 'Command: cyclictest -i100 -qmu -h 3500 -p95'
+
+check "cyclictest command, with --measurement-cpulist" \
+  "--noload -d 1 --measurement-cpulist=0-1" 0 \
+  'Command: cyclictest -i100 -qmu -h 3500 -p95 -t2 -a0-1'
+
+check "cyclictest command, with --measurement-run-on-isolcpus" \
+  "--noload -d 1 --measurement-run-on-isolcpus" 0 \
+  'Command: cyclictest -i100 -qmu -h 3500 -p95'
+
+check "cyclictest command, with --cyclictest-priority" \
+  "--noload -d 1 --cyclictest-priority=80" 0 \
+  'Command: cyclictest -i100 -qmu -h 3500 -p80'
+
+check "cyclictest command, with --cyclictest-interval" \
+  "--noload -d 1 --cyclictest-interval=1000" 0 \
+  'Command: cyclictest -i1000 -qmu -h 3500 -p95'
+
+check "cyclictest command, with --cyclictest-buckets" \
+  "--noload -d 1 --cyclictest-buckets=2000" 0 \
+  'Command: cyclictest -i100 -qmu -h 2000 -p95'
+
+check "cyclictest command, with --cyclictest-breaktrace" \
+  "--noload -d 1 --cyclictest-breaktrace=1" any \
+  'Command: cyclictest -i100 -qmu -h 3500 -p95 -t[0-9]+ -a[0-9|-]+ -b1 --tracemark'
+
+check "cyclictest command, with --cyclictest-threshold" \
+  "--noload -d 1 --cyclictest-threshold=1" any \
+  'Command: cyclictest -i100 -qmu -h 3500 -p95 -t[0-9]+ -a[0-9|-]+ -b1'
+
+# timerlat checks
+rteval_config="[rteval]
+duration:  60.0
+report_interval: 600
+
+[measurement]
+timerlat: module
+
+[loads]
+"
+
+check "timerlat debug" \
+  "--noload -D -d 1" 0 '\[DEBUG\]'
+
+check "timerlat duration" \
+  "--noload -d 5" 0 "Run duration: 5.0 seconds"
+
+check "timerlat command, with --measurement-cpulist" \
+  "--noload -d 1 --measurement-cpulist=0-1" 0 \
+  'Command: rtla timerlat hist -p1100 -P f:95 -u -c0-1'
+
+check "timerlat command, with --measurement-run-on-isolcpus" \
+  "--noload -d 1 --measurement-run-on-isolcpus" 0 \
+  'Command: rtla timerlat hist -p1100 -P f:95 -u'
+
+check "timerlat command, with --timerlat-interval" \
+  "--noload -d 1 --timerlat-interval 2000" 0 \
+  'Command: rtla timerlat hist -p2000 -P f:95'
+
+check "timerlat command, with --timerlat-priority" \
+  "--noload -d 1 --timerlat-priority 80" 0 \
+  'Command: rtla timerlat hist -p1100 -P f:80 -u'
+
+check "timerlat command, with --timerlat-buckets" \
+  "--noload -d 1 --timerlat-buckets 4000" 0 \
+  'Command: rtla timerlat hist -p1100 -P f:95 -u -c[0-9|-]+ -E4000'
+
+check "timerlat command, with --timerlat-stoptrace" \
+  "--noload -d 1 --timerlat-stoptrace 1" any \
+  'Command: rtla timerlat hist -p1100 -P f:95 -u -c[0-9|-]+ -E3500 --no-summary -T1'
+
+check "timerlat command, with --timerlat-trace" \
+  "--noload -d 1 --timerlat-stoptrace 1 --timerlat-trace trace.txt" any \
+  'Command: rtla timerlat hist -p1100 -P f:95 -u -c[0-9|-]+ -E3500 --no-summary -T1 -t=trace.txt'
+
+test_end
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH] rteval: Introduce E2E tests with output checking
  2025-07-25  9:02 [PATCH] rteval: Introduce E2E tests with output checking Tomas Glozar
@ 2025-09-18 17:56 ` John Kacur
  2025-09-18 19:24   ` Crystal Wood
                     ` (2 more replies)
  2025-09-18 19:22 ` Crystal Wood
  1 sibling, 3 replies; 8+ messages in thread
From: John Kacur @ 2025-09-18 17:56 UTC (permalink / raw)
  To: Tomas Glozar; +Cc: Clark Williams, Linux RT Users



On Fri, 25 Jul 2025, Tomas Glozar wrote:

> Currently, rteval has two kinds of tests:
> - Unit tests, embedded directly in code, and run by
>   unit-tests/unittest.py.
> - End-to-end tests, implemented in Makefile targets runit, load, and
>   sysreport.
> 
> Introduce a new test suite in the e2e-tests/ folder (analogous to
> unit-tests) that uses Test::Harness (the "prove" command) together
> with a simple test engine adopted from RTLA.
> 
> The test suite runs rteval in a temporary folder for each test case,
> collects its exit value and output, and validates both according to the
> test specification. grep is used to check the output, optionally with
> custom flags. rteval.conf is generated individually based on the test
> specification of each case.
> 
> Three test sets are implemented in the commit:
> - "basic", running base checks without any specific modules or module
>   options. Replaces "make runit".
> - "loads", testing various kinds of loads, including their options.
> - "measurement", testing both cyclictest and timerlat.
> 
> The latter two check, where possible, that the command line reported
> in the rteval output corresponds to the rteval options and to the
> specified load or measurement.
> 
> "make runit" and "make load" targets are depracated with a warning;
> "make sysreport" is kept until sysreport tests are added to the new test
> suite.
> 

Interesting.
Note the correct spelling is "deprecated"

I'm not sure we want to deprecate runit and loads; they have different 
purposes. I often run in a non-rt environment and without tuned if I'm 
just testing functionality rather than performance, and I got this after 
running make check (skipping some of the output before that):

# Output match failed: "Command: rtla timerlat hist -p1100 -P f:95 -u 
-c[0-9|-]+ -E3500 --no-summary -T1 -t=trace.txt"
e2e-tests/measurement.t .. Failed 6/19 subtests 

Test Summary Report
-------------------
e2e-tests/loads.t      (Wstat: 0 Tests: 8 Failed: 1)
  Failed test:  5
e2e-tests/measurement.t (Wstat: 0 Tests: 19 Failed: 6)
  Failed tests:  13-14, 16-19
Files=3, Tests=33, 308 wallclock secs ( 0.06 usr  0.00 sys + 146.20 cusr 
281.79 csys = 428.05 CPU)
Result: FAIL
make: *** [Makefile:22: check] Error 1

How do I identify which test is test number 5?
Am I failing tests because of performance reasons or because the tests 
expect an environment different from mine?

Thanks

John Kacur



* Re: [PATCH] rteval: Introduce E2E tests with output checking
  2025-07-25  9:02 [PATCH] rteval: Introduce E2E tests with output checking Tomas Glozar
  2025-09-18 17:56 ` John Kacur
@ 2025-09-18 19:22 ` Crystal Wood
  2025-09-26 11:09   ` Tomas Glozar
  1 sibling, 1 reply; 8+ messages in thread
From: Crystal Wood @ 2025-09-18 19:22 UTC (permalink / raw)
  To: Tomas Glozar, John Kacur, Clark Williams; +Cc: Linux RT Users

On Fri, 2025-07-25 at 11:02 +0200, Tomas Glozar wrote:
> Currently, rteval has two kinds of tests:
> - Unit tests, embedded directly in code, and run by
>   unit-tests/unittest.py.
> - End-to-end tests, implemented in Makefile targets runit, load, and
>   sysreport.
> 
> Introduce a new test suite in the e2e-tests/ folder (analogous to
> unit-tests) that uses Test::Harness (the "prove" command) together
> with a simple test engine adopted from RTLA.

If we're going to use this, please mention the dependency in the README
along with how to install it.  Don't assume familiarity with the Perl
ecosystem.

Can we just put all the tests in tests/ rather than making this
distinction?  And have the unit tests be something that can be run just
like any other test?

> +		if [ $exitcode -eq $expected_exitcode ] && [ $grep_result -eq 0 ]
> +		then
> +			echo "ok $ctr - $1"
> +		else
> +			echo "not ok $ctr - $1"
> +			# Add rtla output and exit code as comments in case of failure
> +			echo "$result" | col -b | while read line; do echo "# $line"; done
> +			printf "#\n# exit code %s\n" $exitcode
> +			[ -n "$expected_output" ] && [ $grep_result -ne 0 ] && \
> +				printf "# Output match failed: \"%s\"\n" "$expected_output"
> +		fi

Any reason to not take the updated version of the engine from the rtla
consolidation patchset?

It would also be nice if there were a way to just run specific tests by
number, rather than a whole .t file.
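One possible shape for such a filter (purely hypothetical — the RUN_TESTS variable and the TAP SKIP directive shown here are not part of the patch):

```shell
#!/bin/sh
# Hypothetical sketch: let RUN_TESTS="2 5" select individual checks
# by their TAP number; everything else is reported as skipped.
should_run() {
	[ -z "$RUN_TESTS" ] && return 0     # no filter: run everything
	# RUN_TESTS is deliberately unquoted so it splits into numbers.
	for n in $RUN_TESTS; do
		[ "$n" -eq "$1" ] && return 0
	done
	return 1
}

ctr=0
check() {
	ctr=$((ctr + 1))
	if should_run "$ctr"; then
		echo "ok $ctr - $1"
	else
		echo "ok $ctr - $1 # SKIP filtered out"
	fi
}

RUN_TESTS="2"
check "first case"
check "second case"
check "third case"
```

With RUN_TESTS="2", only the second check runs; the first and third are reported with a SKIP directive so the TAP plan still adds up.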

-Crystal



* Re: [PATCH] rteval: Introduce E2E tests with output checking
  2025-09-18 17:56 ` John Kacur
@ 2025-09-18 19:24   ` Crystal Wood
  2025-09-26 11:12   ` Tomas Glozar
  2025-11-04 12:57   ` Tomas Glozar
  2 siblings, 0 replies; 8+ messages in thread
From: Crystal Wood @ 2025-09-18 19:24 UTC (permalink / raw)
  To: John Kacur, Tomas Glozar; +Cc: Clark Williams, Linux RT Users

On Thu, 2025-09-18 at 13:56 -0400, John Kacur wrote:

> Test Summary Report
> -------------------
> e2e-tests/loads.t      (Wstat: 0 Tests: 8 Failed: 1)
>   Failed test:  5
> e2e-tests/measurement.t (Wstat: 0 Tests: 19 Failed: 6)
>   Failed tests:  13-14, 16-19
> Files=3, Tests=33, 308 wallclock secs ( 0.06 usr  0.00 sys + 146.20 cusr 
> 281.79 csys = 428.05 CPU)
> Result: FAIL
> make: *** [Makefile:22: check] Error 1
> 
> How do I identify which test is test number 5?
> Am I failing tests because of performance reasons or because the tests 
> expect an environment different from mine?

Passing -v to prove does this; not sure why we aren't using it in make
check.

-Crystal



* Re: [PATCH] rteval: Introduce E2E tests with output checking
  2025-09-18 19:22 ` Crystal Wood
@ 2025-09-26 11:09   ` Tomas Glozar
  2025-09-30 22:57     ` Crystal Wood
  0 siblings, 1 reply; 8+ messages in thread
From: Tomas Glozar @ 2025-09-26 11:09 UTC (permalink / raw)
  To: Crystal Wood; +Cc: John Kacur, Clark Williams, Linux RT Users

On Thu, Sep 18, 2025 at 9:22 PM Crystal Wood <crwood@redhat.com> wrote:
>
> Can we just put all the tests in tests/ rather than making this
> distinction?  And have the unit tests be something that can be run just
> like any other test?
>

I agree that the organization is not the best. This patch only deals
with end-to-end tests, though. As long as there is a top-level
unit-tests/ directory, adding another one named tests/ would be
confusing.

> > +             if [ $exitcode -eq $expected_exitcode ] && [ $grep_result -eq 0 ]
> > +             then
> > +                     echo "ok $ctr - $1"
> > +             else
> > +                     echo "not ok $ctr - $1"
> > +                     # Add rteval output and exit code as comments in case of failure
> > +                     echo "$result" | col -b | while read line; do echo "# $line"; done
> > +                     printf "#\n# exit code %s\n" $exitcode
> > +                     [ -n "$expected_output" ] && [ $grep_result -ne 0 ] && \
> > +                             printf "# Output match failed: \"%s\"\n" "$expected_output"
> > +             fi
>
> Any reason to not take the updated version of the engine from the rtla
> consolidation patchset?
>

This patch was sent quite some time ago, I think it was before the
updated version of the engine.

> It would also be nice if there were a way to just run specific tests by
> number, rather than a whole .t file.
>

Yes, that can be implemented.

Tomas



* Re: [PATCH] rteval: Introduce E2E tests with output checking
  2025-09-18 17:56 ` John Kacur
  2025-09-18 19:24   ` Crystal Wood
@ 2025-09-26 11:12   ` Tomas Glozar
  2025-11-04 12:57   ` Tomas Glozar
  2 siblings, 0 replies; 8+ messages in thread
From: Tomas Glozar @ 2025-09-26 11:12 UTC (permalink / raw)
  To: John Kacur; +Cc: Clark Williams, Linux RT Users

On Thu, Sep 18, 2025 at 7:56 PM John Kacur <jkacur@redhat.com> wrote:
>
> Interesting.
> Note the correct spelling is "deprecated"
>

Oops. Thanks for noticing that.

> I'm not sure we want to deprecate runit and loads, they have different
> purposes. I often run in a non-rt environment and without tuned if I'm
> just testing functionality and not performance, and I got this after
> running make check (skipping some of the output before that)
>
> # Output match failed: "Command: rtla timerlat hist -p1100 -P f:95 -u
> -c[0-9|-]+ -E3500 --no-summary -T1 -t=trace.txt"
> e2e-tests/measurement.t .. Failed 6/19 subtests
>
> Test Summary Report
> -------------------
> e2e-tests/loads.t      (Wstat: 0 Tests: 8 Failed: 1)
>   Failed test:  5
> e2e-tests/measurement.t (Wstat: 0 Tests: 19 Failed: 6)
>   Failed tests:  13-14, 16-19
> Files=3, Tests=33, 308 wallclock secs ( 0.06 usr  0.00 sys + 146.20 cusr
> 281.79 csys = 428.05 CPU)
> Result: FAIL
> make: *** [Makefile:22: check] Error 1
>

> How do I identify which test is test number 5?
> Am I failing tests because of performance reasons or because the tests
> expect an environment different from mine?
>

The tests should work in any environment; I have to fix that.

Thanks for testing the patch.

Tomas



* Re: [PATCH] rteval: Introduce E2E tests with output checking
  2025-09-26 11:09   ` Tomas Glozar
@ 2025-09-30 22:57     ` Crystal Wood
  0 siblings, 0 replies; 8+ messages in thread
From: Crystal Wood @ 2025-09-30 22:57 UTC (permalink / raw)
  To: Tomas Glozar; +Cc: John Kacur, Clark Williams, Linux RT Users

On Fri, 2025-09-26 at 13:09 +0200, Tomas Glozar wrote:
> čt 18. 9. 2025 v 21:22 odesílatel Crystal Wood <crwood@redhat.com> napsal:
> > 
> > Can we just put all the tests in tests/ rather than making this
> > distinction?  And have the unit tests be something that can be run just
> > like any other test?
> > 
> 
> I agree that the organization is not the best. This patch only deals
> with end-to-end tests, though. As long as there is a top-level
> unit-tests/ directory, adding another one named tests/ would be
> confusing.

I was suggesting putting *all* tests in a single tests/ directory,
including the unit tests.  So, there wouldn't be a unit-tests/
directory.

> 
> > > +             if [ $exitcode -eq $expected_exitcode ] && [ $grep_result -eq 0 ]
> > > +             then
> > > +                     echo "ok $ctr - $1"
> > > +             else
> > > +                     echo "not ok $ctr - $1"
> > > +                     # Add rteval output and exit code as comments in case of failure
> > > +                     echo "$result" | col -b | while read line; do echo "# $line"; done
> > > +                     printf "#\n# exit code %s\n" $exitcode
> > > +                     [ -n "$expected_output" ] && [ $grep_result -ne 0 ] && \
> > > +                             printf "# Output match failed: \"%s\"\n" "$expected_output"
> > > +             fi
> > 
> > Any reason to not take the updated version of the engine from the rtla
> > consolidation patchset?
> > 
> 
> This patch was sent quite some time ago, I think it was before the
> updated version of the engine.

Ah, I thought it was a new patch, not a reply to an older patch.


-Crystal



* Re: [PATCH] rteval: Introduce E2E tests with output checking
  2025-09-18 17:56 ` John Kacur
  2025-09-18 19:24   ` Crystal Wood
  2025-09-26 11:12   ` Tomas Glozar
@ 2025-11-04 12:57   ` Tomas Glozar
  2 siblings, 0 replies; 8+ messages in thread
From: Tomas Glozar @ 2025-11-04 12:57 UTC (permalink / raw)
  To: John Kacur; +Cc: Clark Williams, Linux RT Users

On Thu, Sep 18, 2025 at 7:56 PM John Kacur <jkacur@redhat.com> wrote:
> I got this after running make check (skipping some of the output before that)
>
> # Output match failed: "Command: rtla timerlat hist -p1100 -P f:95 -u
> -c[0-9|-]+ -E3500 --no-summary -T1 -t=trace.txt"
> e2e-tests/measurement.t .. Failed 6/19 subtests
>
> Test Summary Report
> -------------------
> e2e-tests/loads.t      (Wstat: 0 Tests: 8 Failed: 1)
>   Failed test:  5
> e2e-tests/measurement.t (Wstat: 0 Tests: 19 Failed: 6)
>   Failed tests:  13-14, 16-19
> Files=3, Tests=33, 308 wallclock secs ( 0.06 usr  0.00 sys + 146.20 cusr
> 281.79 csys = 428.05 CPU)
> Result: FAIL
> make: *** [Makefile:22: check] Error 1
>
> How do I identify which test is test number 5?
> Am I failing tests because of performance reasons or because the tests
> expect an environment different from mine?
>

It is failing because it expects a timerlat interval of 1100us; see the
"Output match failed" part. The interval has since changed to 100us, so
the patch simply needs updating.

I'm working on a v2 fixing this and incorporating feedback. It will
also enable verbose messages, so that all test names are printed, just
like [1] does for RTLA.

[1] https://lore.kernel.org/linux-trace-kernel/20251027153401.1039217-6-tglozar@redhat.com/

Tomas



end of thread, other threads:[~2025-11-04 12:57 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-07-25  9:02 [PATCH] rteval: Introduce E2E tests with output checking Tomas Glozar
2025-09-18 17:56 ` John Kacur
2025-09-18 19:24   ` Crystal Wood
2025-09-26 11:12   ` Tomas Glozar
2025-11-04 12:57   ` Tomas Glozar
2025-09-18 19:22 ` Crystal Wood
2025-09-26 11:09   ` Tomas Glozar
2025-09-30 22:57     ` Crystal Wood
