From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
Ian Jackson <ian.jackson@eu.citrix.com>,
Ian Campbell <ian.campbell@citrix.com>
Subject: [OSSTEST PATCH v3 2/3] Testing cpupools: recipe for it and job definition
Date: Sat, 03 Oct 2015 02:39:30 +0200
Message-ID: <20151003003929.12311.52265.stgit@Solace.station>
In-Reply-To: <20151003003554.12311.97039.stgit@Solace.station>
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
Changes from v2:
* restricted test generation to xl only.
Changes from v1:
* added an invocation of ts-guest-stop to the recipe, to silence
  leak-check complaints (which went unnoticed during v1
  testing, sorry)
* moved the test before the "ARM cutoff", and removed the
  per-arch filtering, so that the test can run on ARM
  hardware too
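To make the gating logic concrete, here is a minimal standalone sketch of the condition under which the cpupools job should be generated (xl toolstack only, and matching Xen/dom0 architectures). `should_create_cpupools_job` is a hypothetical helper, not part of osstest; it just echoes whether a job would be created:

```shell
# Hypothetical helper mirroring do_cpupools_tests' gating: a job is
# generated only for the xl toolstack AND when xenarch == dom0arch.
should_create_cpupools_job () {
    _toolstack=$1; _xenarch=$2; _dom0arch=$3
    # Skip (no job) unless both conditions hold.
    if [ "x$_toolstack" != xxl ] || [ "$_xenarch" != "$_dom0arch" ]; then
        echo no
        return 0
    fi
    echo yes
}

should_create_cpupools_job xl amd64 amd64       # prints: yes
should_create_cpupools_job libvirt amd64 amd64  # prints: no
should_create_cpupools_job xl amd64 armhf       # prints: no
```

Note the short-circuit `||` form is used here instead of `[ ... -o ... ]` purely for readability; the semantics are the same.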
---
make-flight | 12 ++++++++++++
sg-run-job | 7 +++++++
2 files changed, 19 insertions(+)
diff --git a/make-flight b/make-flight
index 8c75a9c..d27a02c 100755
--- a/make-flight
+++ b/make-flight
@@ -373,6 +373,16 @@ do_multivcpu_tests () {
$debian_runvars all_hostflags=$most_hostflags
}
 
+do_cpupools_tests () {
+  if [ x$toolstack != xxl -o $xenarch != $dom0arch ]; then
+ return
+ fi
+
+ job_create_test test-$xenarch$kern-$dom0arch-xl-cpupools \
+ test-cpupools xl $xenarch $dom0arch \
+ $debian_runvars all_hostflags=$most_hostflags
+}
+
do_passthrough_tests () {
if [ $xenarch != amd64 -o $dom0arch != amd64 -o "$kern" != "" ]; then
return
@@ -498,6 +508,8 @@ test_matrix_do_one () {
do_rtds_tests
do_credit2_tests
 
+ do_cpupools_tests
+
# No further arm tests at the moment
if [ $dom0arch = armhf ]; then
return
diff --git a/sg-run-job b/sg-run-job
index 66145b8..ea48a03 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -296,6 +296,13 @@ proc run-job/test-debianhvm {} {
test-guest debianhvm
}
 
+proc need-hosts/test-cpupools {} { return host }
+proc run-job/test-cpupools {} {
+ install-guest-debian
+ run-ts . = ts-cpupools + host debian
+ run-ts . = ts-guest-stop + host debian
+}
+
proc setup-test-pair {} {
run-ts . = ts-debian-install dst_host
run-ts . = ts-debian-fixup dst_host + debian
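The ordering in the new `run-job/test-cpupools` recipe matters: the guest must be installed before ts-cpupools runs, and stopped afterwards so the leak-check pass does not complain about a still-running guest (the v1 issue noted in the changelog). A hedged illustration of that sequence, rendered in shell for brevity (the real recipe is Tcl, and `run_step` is a hypothetical stand-in for osstest's `run-ts` that merely echoes each step):

```shell
# Hypothetical stand-in for run-ts: prints the step it would execute,
# so the recipe ordering is visible without an osstest environment.
run_step () { echo "run-ts . = $*"; }

run_step ts-debian-install host        # install-guest-debian: set up the guest
run_step ts-cpupools + host debian     # the cpupools test proper
run_step ts-guest-stop + host debian   # shut the guest down; keeps leak-check quiet
```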
Thread overview: 9+ messages
2015-10-03 0:39 [OSSTEST PATCH v3 0/3] Test case for cpupools Dario Faggioli
2015-10-03 0:39 ` [OSSTEST PATCH v3 1/3] ts-cpupools: new test script Dario Faggioli
2015-10-08 16:38 ` Ian Campbell
2015-10-03 0:39 ` Dario Faggioli [this message]
2015-10-09 14:34 ` [OSSTEST PATCH v3 2/3] Testing cpupools: recipe for it and job definition Ian Campbell
2015-10-03 0:39 ` [OSSTEST PATCH v3 3/3] ts-logs-capture: include some cpupools info in the captured logs Dario Faggioli
2015-10-09 14:36 ` Ian Campbell
2015-10-03 0:45 ` [OSSTEST PATCH v3 0/3] Test case for cpupools Dario Faggioli
2015-10-08 15:20 ` Ian Campbell