* [LTP] [RFC] replacement for runltp + ltp-pan
From: chrubis @ 2013-02-20 11:55 UTC
To: ltp-list
Hi!
Some time ago I started thinking about a replacement for the current
runltp + ltp-pan test execution system. Before I start writing any code,
here are a few points that I currently have in mind. Feel free to
comment on them.
Here is what I think is wrong with the current system:
* The Open POSIX Testsuite is not executed by runltp + ltp-pan
- technically this should be as easy as translating the return
codes and generating runtest files as we build the testsuite
* The current system doesn't support timeouts
- this should be easy to implement but would require a change
to the runtest file format (one timeout for all doesn't
work, as we have tests that run in less than a second and tests
that take more than ten minutes).
* The output to stdout is messy
What I would like is:
[004/243] getrusage04 .................................. [FAILED]
[005/243] getsid01 ..................................... [RUNNING]
* The new tool should be able to search runtest files for particular test
names, blacklist some tests if needed, etc., which is now done by
grepping runtest files in the hacked-together runltp script.
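For illustration, the search/blacklist logic could be a few lines of Python instead of grep. This is only a sketch: the runtest format (one "tag command..." entry per line, '#' starting a comment) follows the existing LTP convention, but the function names here are made up.

```python
# Hypothetical sketch of runtest-file filtering for the new runner.
def parse_runtest(path):
    """Return (tag, cmdline) pairs from a runtest file, skipping comments."""
    entries = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            tag, _, cmdline = line.partition(' ')
            entries.append((tag, cmdline))
    return entries

def select_tests(entries, pattern=None, blacklist=()):
    """Keep entries whose tag matches pattern and is not blacklisted."""
    return [(tag, cmd) for tag, cmd in entries
            if (pattern is None or pattern in tag) and tag not in blacklist]
```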
* There are tests that need a disk partition or a loopback image; we
should handle that and export the right environment variables. This should
be designed, documented and fixed.
* The current system doesn't produce usable logs
Well, you get a list of testcases that succeeded and a list of testcases
that failed, which is better than nothing.
But what I'm thinking about is some more structured format that
would include the test output, possibly the test duration and so on. I
personally think that JSON is a good candidate for this job, as it's
structured, easy to read and widely supported, with libraries for Python,
Perl, etc.
Then we could create a git repo with reference results, generate HTML
pages from the repo with the results, write scripts for result comparison,
etc.
The JSON log may look like:

{
    "LTP Version": 20130109,
    "System Info": {
        "Kernel Version": "3.7.1",
        "Hardware": {
            "Arch": "x86_64",
            "CPUs": 2,
            "Ram": 4051840
            (more fields like this follow)
        }
    },
    "Results": [
        {
            "Test Name": "getrusage04",
            "Test Result": "FAILED",
            "Test Runtime": 0.011532,
            "Test CPU Time": 0.003234,
            "Test Output": [
                "TINFO : Using 1 as multiply factor for max increment",
                "TINFO : utime: 0us; stime 0us",
                "TINFO : utime: 0us; stime 4000us",
                "TFAIL : stime increased > 2000us"
            ]
        }
        (more test results follow)
    ]
}
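A log in this shape would be trivial to post-process. As a sketch (assuming the proposed field names above, which are not part of any existing LTP tool), a per-status summary could be:

```python
import json
from collections import Counter

# Sketch: summarize the proposed JSON log by counting results per status.
# The "Results" / "Test Result" field names follow the proposal above.
def summarize(log_path):
    with open(log_path) as f:
        log = json.load(f)
    return dict(Counter(r["Test Result"] for r in log["Results"]))
```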
--
Cyril Hrubis
chrubis@suse.cz
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list
* Re: [LTP] [RFC] replacement for runltp + ltp-pan
From: chrubis @ 2013-02-21 12:37 UTC
To: Jan Stancek; +Cc: ltp-list
Hi!
> > * The current system doesn't support timeouts
> >
> > - this should be easy to implement but would require a change
> > to the runtest file format (one timeout for all doesn't
> > work, as we have tests that run in less than a second and tests
> > that take more than ten minutes).
>
> Finding the default for some may be tricky, as they can depend on
> drive speed or amount of RAM.
There is a simple solution to this. We can define a multiplier constant
for the timeouts that could be either set manually or measured at
runtime. Tuning this would be a bit of work, but I think it will work
well.
Moreover, the timeouts are more of a safety measure for tests that
deadlock or loop infinitely, so that the test run finishes by morning
even if there are some broken tests. For tests that take less than a
second, a five or ten minute timeout is good enough. It's more
problematic for testcases that do heavy IO and could take anything from
ten minutes to half an hour.
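As a rough sketch of the multiplier idea (all names and the calibration workload here are hypothetical, not existing LTP code):

```python
import time

# Hypothetical sketch: scale per-test base timeouts by a machine-speed
# factor that is either set manually or measured at runtime.
def measure_multiplier(reference=0.05):
    """Time a small fixed workload; machines slower than the reference
    duration get a factor > 1, fast machines are clamped to 1.0."""
    start = time.monotonic()
    sum(i * i for i in range(200000))
    elapsed = time.monotonic() - start
    return max(1.0, elapsed / reference)

def effective_timeout(base_timeout, multiplier):
    """Per-test timeout from the runtest file, scaled for this machine."""
    return base_timeout * multiplier
```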
> Is there a plan for some transition period where both of these
> would work side-by-side?
Sure, I do not plan to remove the current system until the new system is
finished, tested and widely used.
That would unfortunately mean that the runtest files would need to be
duplicated, or split into several files and preprocessed before they are
used in the new environment.
> Looking at the current output, almost all fields look useful.
> Well, except 'contacts'; I'm not sure if I have ever seen it set to a non-empty string.
>
> <<<test_start>>>
> tag=abs01 stime=1361370087
> cmdline="abs01"
> contacts=""
> analysis=exit
> <<<test_output>>>
> abs01 1 TPASS : Test passed
> abs01 2 TPASS : Test passed
> abs01 3 TPASS : Test passed
> <<<execution_status>>>
> initiation_status="ok"
> duration=0 termination_type=exited termination_id=0 corefile=no
> cutime=0 cstime=0
> <<<test_end>>>
I would be happier with less verbose output, but we can easily add a
switch to control the way the output is printed to stdout.
My idea is to make the tests print less information to stdout by
default and store more of it in the logs.
> > "Results": [
> > {
> > "Test Name": "getrusage04",
> > "Test Result": "FAILED",
> > "Test Runtime": 0.011532,
> > "Test CPU Time": 0.003234,
> > "Test Output": [
> > "TINFO : Using 1 as multiply factor for max increment",
> > "TINFO : utime: 0us; stime 0us",
> > "TINFO : utime: 0us; stime 4000us",
> > "TFAIL : stime increased > 2000us",
> > ],
> > },
> > (more test results follows),
> > ]
>
> I don't have objections to JSON. I do not parse the existing logs, I just search/read them
> when there is a failure. And it should be easy to just convert it to plain text if there is
> a need for it.
That should be easy enough with a short Python script.
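For illustration, such a conversion script could look like this (a sketch assuming the proposed JSON field names, which are only a proposal, not an existing format):

```python
import json

# Sketch: flatten the proposed structured log back to grep-friendly
# plain text, roughly resembling today's output.
def json_to_text(log_path):
    with open(log_path) as f:
        log = json.load(f)
    lines = []
    for r in log["Results"]:
        lines.append("%s: %s (%.3fs)" % (r["Test Name"], r["Test Result"],
                                         r["Test Runtime"]))
        # Indent the captured per-test output under its header line.
        lines.extend("    " + out for out in r.get("Test Output", []))
    return "\n".join(lines)
```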
The main motivation for more structured logs is, at least from my point
of view, the ability to compare results. There are still testcases that
fail randomly; in such cases you can do several runs, process the data
and easily spot that the test failed three times out of four.
Moreover, LTP currently has no support for benchmarks; we have no
information on how fast syscalls are. If the logs are done right, it
should be possible to do several runs, reboot into a different kernel, do
another round of runs, process the data and see whether something has
slowed down significantly.
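The failure-rate comparison could be sketched like this (a hypothetical helper, assuming the proposed log format with "Results" / "Test Name" / "Test Result" fields):

```python
from collections import defaultdict

# Sketch: given several parsed JSON run logs, count failures per test so
# that "failed three times out of four" stands out immediately.
def failure_rates(run_logs):
    fails = defaultdict(int)
    for log in run_logs:
        for r in log["Results"]:
            if r["Test Result"] == "FAILED":
                fails[r["Test Name"]] += 1
    n = len(run_logs)
    return {name: (count, n) for name, count in fails.items()}
```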
--
Cyril Hrubis
chrubis@suse.cz