From: wagi@monom.org (Daniel Wagner)
To: cip-dev@lists.cip-project.org
Subject: [cip-dev] RT Testing
Date: Thu, 16 Jan 2020 10:24:41 +0100
Message-ID: <b0abdce4-62f4-400a-8c27-73c44d8ff9ca@monom.org>
In-Reply-To: <TYAPR01MB2285FB4BDC74EB669E362E89B7340@TYAPR01MB2285.jpnprd01.prod.outlook.com>
Hi Chris,
On Tue, Jan 14, 2020 at 05:01:54PM +0000, Chris Paterson wrote:
> Hello Pavel, Hayashi-san, Jan, Daniel,
>
> Addressing this email to all of you as both RT and CIP Core are involved.
>
> I started to look into RT testing in more detail today.
Welcome on board :)
> I've created an RT configuration for the RZ/G1 boards:
> https://gitlab.com/patersonc/cip-kernel-config/blob/chris/add_renesas_rt_configs/4.4.y-cip-rt/arm/renesas_shmobile-rt_defconfig
> I'll do something similar for the RZ/G2 boards soon.
I am using merge_config.sh to build the configuration. There are some
catches, but generally it works well. I've hacked together a small tool
to automate things around it. With this, the configuration is always
generated from scratch using kconfig.
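The core idea behind merge_config.sh can be sketched roughly like this (a simplified illustration in Python, not the real script; `merge_fragments` is a hypothetical helper): later fragments override earlier ones, and the merged result would then be fed through `make olddefconfig` to regenerate a complete .config from scratch.

```python
# Simplified sketch of the merge_config.sh idea: later kconfig
# fragments win on conflicting symbols. The real script additionally
# runs "make olddefconfig" and warns about symbols kconfig rejects.

def merge_fragments(*fragments):
    """Merge kconfig fragment texts; later fragments override earlier ones."""
    merged = {}
    for text in fragments:
        for line in text.splitlines():
            line = line.strip()
            if line.startswith("CONFIG_"):
                sym = line.split("=", 1)[0]
                merged[sym] = line
            elif line.startswith("# CONFIG_") and line.endswith(" is not set"):
                # "# CONFIG_FOO is not set" also defines the symbol FOO
                sym = line[len("# "):-len(" is not set")]
                merged[sym] = line
    return "\n".join(merged.values())

base = "CONFIG_PREEMPT=y\nCONFIG_DEBUG_INFO=y\n"
rt = "# CONFIG_PREEMPT is not set\nCONFIG_PREEMPT_RT_FULL=y\n"
print(merge_fragments(base, rt))
```

The RT fragment's "# CONFIG_PREEMPT is not set" replaces the base's CONFIG_PREEMPT=y, which is exactly the override behaviour merge_config.sh provides.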
> Built it with linux-4.4.y-cip-rt and run cyclic test:
> https://lava.ciplatform.org/scheduler/job/9828
> Times look okay to an rt-untrained eye:
> T: 0 ( 1169) P:98 I:1000 C: 59993 Min: 13 Act: 16 Avg: 16 Max: 33
>
> Compared to a run with linux-4.4.y-cip:
> https://lava.ciplatform.org/scheduler/job/9829
> T: 0 ( 938) P:98 I:1000 C: 6000 Min: 1618 Act: 9604 Avg: 9603 Max: 14550
>
> Pavel, does the above look okay/useful to you? Or is cyclictest not worth running unless there is some load on the system?
Without load, it's not that interesting.
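Either way, the one-line summaries quoted above are easy to post-process; a hypothetical parser (not part of any existing tooling) for the Min/Act/Avg/Max fields could look like this:

```python
import re

def parse_cyclictest_line(line):
    """Extract latency fields (microseconds) from a cyclictest summary
    line such as:
    T: 0 ( 1169) P:98 I:1000 C: 59993 Min: 13 Act: 16 Avg: 16 Max: 33
    """
    fields = {}
    for key in ("Min", "Act", "Avg", "Max"):
        m = re.search(r"%s:\s*(\d+)" % key, line)
        if m:
            fields[key.lower()] = int(m.group(1))
    return fields

line = "T: 0 ( 1169) P:98 I:1000 C: 59993 Min: 13 Act: 16 Avg: 16 Max: 33"
print(parse_cyclictest_line(line))  # → {'min': 13, 'act': 16, 'avg': 16, 'max': 33}
```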
> Currently there is an issue with the way that the cyclic test case
> results are shown (i.e. they aren't) in LAVA due to a change [0]
> made to Linaro's cyclictest.sh.
My current test suite for LAVA contains these:
rt_suites = ['0_jd-hackbench',
             '0_jd-compile',
             '0_jd-stress_ptrace',
             '0_cyclicdeadline',
             '0_cyclictest',
             '0_pi-stress',
             '0_pmqtest',
             '0_ptsematest',
             '0_rt-migrate-test',
             '0_signaltest',
             '0_sigwaittest',
             '0_svsematest']
> That means that the test parsing now depends on Python, which isn't included in the cip-core RFS [1] that is currently being used.
Sorry about that. I have changed this so that a test is marked as
failed if a value higher than the configured maximum is seen. Again, I
am using my hack script to read out the results from the tests, with
coloring:
$ srt-build c2d jobs result
0_jd-hackbench t1-max-latency : pass 11.00
0_jd-hackbench t1-avg-latency : pass 2.37
0_jd-hackbench t1-min-latency : pass 1.00
0_jd-hackbench t0-max-latency : pass 10.00
0_jd-hackbench t0-avg-latency : pass 2.31
0_jd-hackbench t0-min-latency : pass 1.00
0_jd-compile t1-max-latency : pass 14.00
0_jd-compile t1-avg-latency : pass 3.37
0_jd-compile t1-min-latency : pass 1.00
0_jd-compile t0-max-latency : pass 14.00
0_jd-compile t0-avg-latency : pass 3.37
0_jd-compile t0-min-latency : pass 1.00
0_jd-stress_ptrace t1-max-latency : pass 7.00
0_jd-stress_ptrace t1-avg-latency : pass 2.19
0_jd-stress_ptrace t1-min-latency : pass 2.00
0_jd-stress_ptrace t0-max-latency : pass 9.00
0_jd-stress_ptrace t0-avg-latency : pass 2.22
0_jd-stress_ptrace t0-min-latency : pass 2.00
0_cyclicdeadline t1-max-latency : fail 3462.00
0_cyclicdeadline t1-avg-latency : pass 1594.00
0_cyclicdeadline t1-min-latency : pass 1.00
0_cyclicdeadline t0-max-latency : fail 3470.00
0_cyclicdeadline t0-avg-latency : pass 1602.00
0_cyclicdeadline t0-min-latency : pass 8.00
0_cyclictest t1-max-latency : pass 11.00
0_cyclictest t1-avg-latency : pass 2.00
0_cyclictest t1-min-latency : pass 1.00
0_cyclictest t0-max-latency : pass 13.00
0_cyclictest t0-avg-latency : pass 3.00
0_cyclictest t0-min-latency : pass 1.00
0_pi-stress pi-stress : pass 0.00
0_pmqtest t3-2-max-latency : pass 13.00
0_pmqtest t3-2-avg-latency : pass 4.00
0_pmqtest t3-2-min-latency : pass 2.00
0_pmqtest t1-0-max-latency : pass 23.00
0_pmqtest t1-0-avg-latency : pass 4.00
0_pmqtest t1-0-min-latency : pass 2.00
0_ptsematest t3-2-max-latency : pass 11.00
0_ptsematest t3-2-avg-latency : pass 3.00
0_ptsematest t3-2-min-latency : pass 2.00
0_ptsematest t1-0-max-latency : pass 14.00
0_ptsematest t1-0-avg-latency : pass 3.00
0_ptsematest t1-0-min-latency : pass 2.00
0_rt-migrate-test t2-p98-avg : pass 11.00
0_rt-migrate-test t2-p98-tot : pass 581.00
0_rt-migrate-test t2-p98-min : pass 9.00
0_rt-migrate-test t2-p98-max : pass 28.00
0_rt-migrate-test t1-p97-avg : pass 13.00
0_rt-migrate-test t1-p97-tot : pass 654.00
0_rt-migrate-test t1-p97-min : pass 8.00
0_rt-migrate-test t1-p97-max : pass 34.00
0_rt-migrate-test t0-p96-avg : pass 19213.00
0_rt-migrate-test t0-p96-tot : pass 960652.00
0_rt-migrate-test t0-p96-min : pass 13.00
0_rt-migrate-test t0-p96-max : pass 20031.00
0_signaltest t0-max-latency : pass 29.00
0_signaltest t0-avg-latency : pass 8.00
0_signaltest t0-min-latency : pass 4.00
0_sigwaittest t3-2-max-latency : pass 16.00
0_sigwaittest t3-2-avg-latency : pass 4.00
0_sigwaittest t3-2-min-latency : pass 2.00
0_sigwaittest t1-0-max-latency : pass 24.00
0_sigwaittest t1-0-avg-latency : pass 4.00
0_sigwaittest t1-0-min-latency : pass 2.00
0_svsematest t3-2-max-latency : pass 13.00
0_svsematest t3-2-avg-latency : pass 4.00
0_svsematest t3-2-min-latency : pass 2.00
0_svsematest t1-0-max-latency : pass 13.00
0_svsematest t1-0-avg-latency : pass 4.00
0_svsematest t1-0-min-latency : pass 2.00
0_smoke-tests linux-posix-lsb_release: pass
0_smoke-tests linux-posix-lscpu : pass
0_smoke-tests linux-posix-ifconfig: pass
0_smoke-tests linux-posix-vmstat : pass
0_smoke-tests linux-posix-uname : pass
0_smoke-tests linux-posix-pwd : pass
0_smoke-tests linux-posix-lsb_release: pass
0_smoke-tests linux-posix-lscpu : pass
0_smoke-tests linux-posix-ifconfig: pass
0_smoke-tests linux-posix-vmstat : pass
0_smoke-tests linux-posix-uname : pass
0_smoke-tests linux-posix-pwd : pass
0_smoke-tests linux-posix-lsb_release: pass
0_smoke-tests linux-posix-lscpu : pass
0_smoke-tests linux-posix-ifconfig: pass
0_smoke-tests linux-posix-vmstat : pass
0_smoke-tests linux-posix-uname : pass
0_smoke-tests linux-posix-pwd : pass
0_smoke-tests linux-posix-lsb_release: pass
0_smoke-tests linux-posix-lscpu : pass
0_smoke-tests linux-posix-ifconfig: pass
0_smoke-tests linux-posix-vmstat : pass
0_smoke-tests linux-posix-uname : pass
0_smoke-tests linux-posix-pwd : pass
Trying to figure this out from the LAVA web interface is a bit
cumbersome.
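The pass/fail marking described above boils down to comparing each measured value against a configured maximum. A hypothetical sketch of that logic (the threshold values here are made-up examples, and `mark_result` is not the actual script):

```python
# Hypothetical sketch of the pass/fail marking: a result is "fail" as
# soon as the measured latency exceeds the configured maximum for that
# metric. Limits would in practice be tuned per board and per suite.

THRESHOLDS_US = {          # assumed example limits, in microseconds
    "max-latency": 100,
    "avg-latency": 50,
}

def mark_result(metric, value_us):
    """Return 'pass' or 'fail' for one measured latency value."""
    limit = THRESHOLDS_US.get(metric)
    if limit is not None and value_us > limit:
        return "fail"
    return "pass"

print(mark_result("max-latency", 11.0))    # → pass
print(mark_result("max-latency", 3462.0))  # → fail
```

With per-suite limits like these, a run such as the 0_cyclicdeadline one above (max 3462.00) gets flagged while the 0_cyclictest run (max 11.00) passes.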
> Do either of the CIP Core profiles include Python support?
>
> Linaro test-definitions [2] have the following tests marked within the preempt-rt scope:
> cyclicdeadline/cyclicdeadline.yaml
> pmqtest/pmqtest.yaml
> rt-migrate-test/rt-migrate-test.yaml
> cyclictest/cyclictest.yaml
> svsematest/svsematest.yaml
> pi-stress/pi-stress.yaml
> signaltest/signaltest.yaml
> ptsematest/ptsematest.yaml
> sigwaittest/sigwaittest.yaml
> hackbench/hackbench.yaml
> ltp-realtime/ltp-realtime.yaml
See above.
> Which of the above would be valuable to run on CIP RT Kernels?
Basically you want to run all of the tests in rt-tests.
> A while back Daniel Wagner also did some work on a Jitterdebugger
> test [3], but it hasn't been merged yet and I'm not sure what the
> current status is. Any updates Daniel?
Yeah, I was holding back a bit until I am happy with my setup and
workflow. One of the major limitations of the current test-definitions
is the difficulty of setting up background workload. In short, I am
not too happy with my current version, but it works.
> Is anyone able to provide RT config/defconfigs for the x86 and arm
> boards in the Mentor lab? Or BBB, QEMU etc.? (assuming that the
> hardware is suitable).
If you are interested in my configs: I have configs for ARMv7 (BBB),
ARMv8 (RPi3) and x86_64 via my hack tool. It also builds the kernel,
since I didn't set up kernelci for it. In short, I get a complete
config/build/test setup via 'srt-build bbb lava'.
Thanks,
Daniel