From: osstest service owner <osstest-admin@xenproject.org>
To: xen-devel@lists.xensource.com, osstest-admin@xenproject.org
Subject: [xen-unstable test] 101590: regressions - FAIL
Date: Sat, 22 Oct 2016 03:54:33 +0000
Message-ID: <osstest-101590-mainreport@xen.org>
flight 101590 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101590/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-armhf-armhf-xl-credit2 15 guest-start/debian.repeat fail REGR. vs. 101563
Regressions which are regarded as allowable (not blocking):
test-armhf-armhf-libvirt 13 saverestore-support-check fail like 101563
test-armhf-armhf-libvirt-qcow2 12 saverestore-support-check fail like 101563
test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 101563
test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 101563
test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 101563
test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 101563
test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail like 101563
test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail like 101563
test-amd64-amd64-xl-rtds 9 debian-install fail like 101563
Tests which did not succeed, but are not blocking:
test-amd64-amd64-rumprun-amd64 1 build-check(1) blocked n/a
test-amd64-i386-rumprun-i386 1 build-check(1) blocked n/a
build-i386-rumprun 5 rumprun-build fail never pass
build-amd64-rumprun 5 rumprun-build fail never pass
test-amd64-amd64-xl-pvh-amd 11 guest-start fail never pass
test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail never pass
test-amd64-i386-libvirt 12 migrate-support-check fail never pass
test-amd64-i386-libvirt-xsm 12 migrate-support-check fail never pass
test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail never pass
test-armhf-armhf-xl-arndale 12 migrate-support-check fail never pass
test-armhf-armhf-xl-arndale 13 saverestore-support-check fail never pass
test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
test-armhf-armhf-xl 12 migrate-support-check fail never pass
test-armhf-armhf-libvirt 12 migrate-support-check fail never pass
test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
test-armhf-armhf-xl 13 saverestore-support-check fail never pass
test-armhf-armhf-xl-credit2 12 migrate-support-check fail never pass
test-armhf-armhf-xl-credit2 13 saverestore-support-check fail never pass
test-armhf-armhf-libvirt-qcow2 11 migrate-support-check fail never pass
test-amd64-amd64-xl-pvh-intel 11 guest-start fail never pass
test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
test-armhf-armhf-xl-xsm 12 migrate-support-check fail never pass
test-armhf-armhf-xl-xsm 13 saverestore-support-check fail never pass
test-armhf-armhf-xl-rtds 12 migrate-support-check fail never pass
test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail never pass
test-armhf-armhf-xl-rtds 13 saverestore-support-check fail never pass
test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail never pass
test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail never pass
test-armhf-armhf-libvirt-raw 11 migrate-support-check fail never pass
test-armhf-armhf-xl-vhd 11 migrate-support-check fail never pass
test-armhf-armhf-xl-vhd 12 saverestore-support-check fail never pass
version targeted for testing:
xen 4a6070ea95b17e5c5f051ebe6886783dd50e911c
baseline version:
xen a05b4d8a13f41bdff3e1242a3888388c9bde4f6f
Last test of basis 101563 2016-10-20 14:17:09 Z 1 days
Failing since 101571 2016-10-21 02:35:48 Z 1 days 2 attempts
Testing same since 101590 2016-10-21 16:15:05 Z 0 days 1 attempts
------------------------------------------------------------
People who touched revisions under test:
Andrew Cooper <andrew.cooper3@citrix.com>
Dario Faggioli <dario.faggioli@citrix.com>
Kyle Huey <khuey@kylehuey.com>
Kyle Huey <me@kylehuey.com>
Meng Xu <mengxu@cis.upenn.edu>
Stefano Stabellini <sstabellini@kernel.org>
Tamas K Lengyel <tamas.lengyel@zentific.com>
Wei Liu <wei.liu2@citrix.com>
jobs:
build-amd64-xsm pass
build-armhf-xsm pass
build-i386-xsm pass
build-amd64-xtf pass
build-amd64 pass
build-armhf pass
build-i386 pass
build-amd64-libvirt pass
build-armhf-libvirt pass
build-i386-libvirt pass
build-amd64-oldkern pass
build-i386-oldkern pass
build-amd64-prev pass
build-i386-prev pass
build-amd64-pvops pass
build-armhf-pvops pass
build-i386-pvops pass
build-amd64-rumprun fail
build-i386-rumprun fail
test-xtf-amd64-amd64-1 pass
test-xtf-amd64-amd64-2 pass
test-xtf-amd64-amd64-3 pass
test-xtf-amd64-amd64-4 pass
test-xtf-amd64-amd64-5 pass
test-amd64-amd64-xl pass
test-armhf-armhf-xl pass
test-amd64-i386-xl pass
test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm pass
test-amd64-i386-xl-qemut-debianhvm-amd64-xsm pass
test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm pass
test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm pass
test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm pass
test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm pass
test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm pass
test-amd64-amd64-libvirt-xsm pass
test-armhf-armhf-libvirt-xsm pass
test-amd64-i386-libvirt-xsm pass
test-amd64-amd64-xl-xsm pass
test-armhf-armhf-xl-xsm pass
test-amd64-i386-xl-xsm pass
test-amd64-amd64-qemuu-nested-amd fail
test-amd64-amd64-xl-pvh-amd fail
test-amd64-i386-qemut-rhel6hvm-amd pass
test-amd64-i386-qemuu-rhel6hvm-amd pass
test-amd64-amd64-xl-qemut-debianhvm-amd64 pass
test-amd64-i386-xl-qemut-debianhvm-amd64 pass
test-amd64-amd64-xl-qemuu-debianhvm-amd64 pass
test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
test-amd64-i386-freebsd10-amd64 pass
test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
test-amd64-i386-xl-qemuu-ovmf-amd64 pass
test-amd64-amd64-rumprun-amd64 blocked
test-amd64-amd64-xl-qemut-win7-amd64 fail
test-amd64-i386-xl-qemut-win7-amd64 fail
test-amd64-amd64-xl-qemuu-win7-amd64 fail
test-amd64-i386-xl-qemuu-win7-amd64 fail
test-armhf-armhf-xl-arndale pass
test-amd64-amd64-xl-credit2 pass
test-armhf-armhf-xl-credit2 fail
test-armhf-armhf-xl-cubietruck pass
test-amd64-i386-freebsd10-i386 pass
test-amd64-i386-rumprun-i386 blocked
test-amd64-amd64-qemuu-nested-intel pass
test-amd64-amd64-xl-pvh-intel fail
test-amd64-i386-qemut-rhel6hvm-intel pass
test-amd64-i386-qemuu-rhel6hvm-intel pass
test-amd64-amd64-libvirt pass
test-armhf-armhf-libvirt pass
test-amd64-i386-libvirt pass
test-amd64-amd64-migrupgrade pass
test-amd64-i386-migrupgrade pass
test-amd64-amd64-xl-multivcpu pass
test-armhf-armhf-xl-multivcpu pass
test-amd64-amd64-pair pass
test-amd64-i386-pair pass
test-amd64-amd64-libvirt-pair pass
test-amd64-i386-libvirt-pair pass
test-amd64-amd64-amd64-pvgrub pass
test-amd64-amd64-i386-pvgrub pass
test-amd64-amd64-pygrub pass
test-armhf-armhf-libvirt-qcow2 pass
test-amd64-amd64-xl-qcow2 pass
test-armhf-armhf-libvirt-raw pass
test-amd64-i386-xl-raw pass
test-amd64-amd64-xl-rtds fail
test-armhf-armhf-xl-rtds pass
test-amd64-i386-xl-qemut-winxpsp3-vcpus1 pass
test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 pass
test-amd64-amd64-libvirt-vhd pass
test-armhf-armhf-xl-vhd pass
test-amd64-amd64-xl-qemut-winxpsp3 pass
test-amd64-i386-xl-qemut-winxpsp3 pass
test-amd64-amd64-xl-qemuu-winxpsp3 pass
test-amd64-i386-xl-qemuu-winxpsp3 pass
------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images
Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs
Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master
Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
Not pushing.
------------------------------------------------------------
commit 4a6070ea95b17e5c5f051ebe6886783dd50e911c
Author: Dario Faggioli <dario.faggioli@citrix.com>
Date: Fri Oct 21 15:49:30 2016 +0200
libxl: avoid considering pCPUs outside of the cpupool during NUMA placement
During automatic NUMA placement, information about how many
vCPUs can run on which NUMA nodes is used in order to spread
the load as evenly as possible.
Such information is derived from vCPU hard and soft affinity,
but that is not enough. In fact, affinity can be set to a
superset of the pCPUs that belong to the cpupool the domain
is in, yet the domain will of course never run on pCPUs
outside of its cpupool.
Take this into account in the placement algorithm.
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reported-by: George Dunlap <george.dunlap@citrix.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Release-acked-by: Wei Liu <wei.liu2@citrix.com>
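A minimal sketch of the idea described in the commit above: when counting how
many of a vCPU's allowed pCPUs sit on each NUMA node, the affinity mask is
first intersected with the cpumap of the domain's cpupool, so pCPUs outside
the pool never contribute to the placement weights. The helper names, the
plain bitmask representation, and the pCPU-to-node table below are
illustrative assumptions, not the actual libxl patch.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NR_PCPUS 8

    /* Hypothetical pCPU-to-NUMA-node mapping for the example. */
    static const int pcpu_to_node[NR_PCPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };

    static bool test_bit(uint32_t mask, int cpu) { return (mask >> cpu) & 1; }

    /*
     * Count how many of a vCPU's allowed pCPUs land on each node, but only
     * after intersecting its affinity with the cpupool's cpumap -- pCPUs
     * outside the pool must not contribute to the placement weights.
     */
    static void vcpus_on_node(uint32_t affinity, uint32_t cpupool_cpus,
                              int nodes_cnt, int *cnt)
    {
        uint32_t usable = affinity & cpupool_cpus;   /* the fix in a nutshell */

        for (int n = 0; n < nodes_cnt; n++)
            cnt[n] = 0;
        for (int cpu = 0; cpu < NR_PCPUS; cpu++)
            if (test_bit(usable, cpu))
                cnt[pcpu_to_node[cpu]]++;
    }

    int main(void)
    {
        int cnt[2];

        /* Affinity says "any pCPU", but the cpupool only owns pCPUs 0-3. */
        vcpus_on_node(0xff, 0x0f, 2, cnt);
        printf("node0=%d node1=%d\n", cnt[0], cnt[1]);  /* node0=4 node1=0 */
        return 0;
    }

Without the intersection, the example above would report four vCPU slots on
node 1 even though the cpupool owns no pCPUs there, skewing the placement.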
commit 7e12213fdc736ec5b9bb978ea95391658e0ad4e7
Author: Meng Xu <mengxu@cis.upenn.edu>
Date: Wed Oct 19 10:48:39 2016 -0400
docs:RTDS: Correct mistakes in feature doc
Correct the mistakes in the example command
Correct a simple typo.
Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
Acked-by: Dario Faggioli <dario.faggioli@citrix.com>
Release-acked-by: Wei Liu <wei.liu2@citrix.com>
commit d93539cc486aa6022195305dbea5fe12f90b69fe
Author: Stefano Stabellini <sstabellini@kernel.org>
Date: Wed Oct 19 12:22:35 2016 -0700
vscsiif.h: replace PAGE_SIZE with VSCSIIF_PAGE_SIZE
Do not reference PAGE_SIZE directly: it could be undefined, or it could
have different values in the frontend or in the backend.
Define VSCSIIF_PAGE_SIZE as 4096, assuming all users of vscsiif.h have
4K page granularity. Replace PAGE_SIZE with VSCSIIF_PAGE_SIZE.
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Release-acked-by: Wei Liu <wei.liu2@citrix.com>
commit 04535bee06858fd949c743cfecc4d7b96333a16c
Author: Stefano Stabellini <sstabellini@kernel.org>
Date: Wed Oct 19 12:22:34 2016 -0700
usbif.h: replace PAGE_SIZE with USBIF_RING_SIZE
Do not reference PAGE_SIZE directly: it could be undefined, or it could
have different values in the frontend or in the backend.
Define USBIF_RING_SIZE as 4096, assuming all users of usbif.h have 4K
page granularity. Replace PAGE_SIZE with USBIF_RING_SIZE.
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Release-acked-by: Wei Liu <wei.liu2@citrix.com>
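Both of the preceding changes follow the same pattern: a PV protocol header
must not depend on the including kernel's PAGE_SIZE, because the frontend and
backend may be built with different page granularities. The following is a
hedged illustration of that pattern with a made-up interface name; only the
4096 constant comes from the commit messages, the ring layout is invented.

    /*
     * Illustrative only: a hypothetical shared-ring header that fixes its
     * own page-sized constant at 4096 instead of relying on whatever
     * PAGE_SIZE the including kernel happens to define (if any).
     */
    #define EXAMPLEIF_RING_SIZE 4096   /* assume 4K granularity for all users */

    struct exampleif_sring {
        unsigned int req_prod, req_cons;
        unsigned int rsp_prod, rsp_cons;
        /* Pad the shared ring to exactly one 4K grant page. */
        unsigned char pad[EXAMPLEIF_RING_SIZE - 4 * sizeof(unsigned int)];
    };

    _Static_assert(sizeof(struct exampleif_sring) == EXAMPLEIF_RING_SIZE,
                   "ring must span exactly one 4K page");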
commit 31ba5a9b92ac00c135e46f54052336945b77f159
Author: Tamas K Lengyel <tamas.lengyel@zentific.com>
Date: Thu Oct 13 18:00:47 2016 -0600
altp2m: don't attempt to unshare pages during change_altp2m_gfn op
Attempting to change gfn mappings with altp2m on a memory-shared page results
in a lock-order violation (mm locking order violation: 282 > 254), which
crashes the hypervisor. Don't attempt to automatically unshare such pages;
just fall back to failing the op if the page type is not correct.
Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Release-acked-by: Wei Liu <wei.liu2@citrix.com>
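A rough, self-contained sketch of the fallback behaviour described above,
using hypothetical type and function names rather than the real altp2m code:
instead of forcing an unshare (which would take the sharing lock out of order
and crash the hypervisor), the operation simply declines when the page is not
ordinary writable RAM.

    #include <errno.h>
    #include <stdio.h>

    /* Hypothetical page types; only ordinary writable RAM may be remapped. */
    enum page_type { PAGE_RAM_RW, PAGE_SHARED, PAGE_OTHER };

    /*
     * Sketch of the fallback: rather than unsharing the page here, decline
     * the remap whenever the page type is not plain RAM, shared pages
     * included.
     */
    static int change_altp2m_gfn_sketch(enum page_type t)
    {
        if (t != PAGE_RAM_RW)
            return -EINVAL;     /* fail the op instead of unsharing */
        return 0;               /* proceed with the remap */
    }

    int main(void)
    {
        printf("shared page -> %d\n", change_altp2m_gfn_sketch(PAGE_SHARED));
        printf("ram page    -> %d\n", change_altp2m_gfn_sketch(PAGE_RAM_RW));
        return 0;
    }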
commit 4abcd521bf460fb3a247a7754698f98526b39635
Author: Kyle Huey <me@kylehuey.com>
Date: Thu Oct 20 06:44:28 2016 -0700
x86/Intel: virtualize support for cpuid faulting
On HVM guests, the cpuid instruction triggers a VM exit, so we can check the
emulated faulting state in vmx_do_cpuid and hvmemul_cpuid. A new function,
hvm_check_cpuid_fault, will check whether cpuid faulting is enabled and the
CPL is greater than 0. When it returns true, the cpuid handling functions will
inject a GP(0). Notably, explicit hardware support for faulting on cpuid is
not necessary to emulate support for an HVM guest.
On PV guests, hardware support is required so that a userspace cpuid will trap
to Xen. Xen already enables cpuid faulting on supported CPUs for PV guests
(those that aren't the control domain; see the comment in
intel_ctxt_switch_levelling). Every PV guest cpuid will trap via a GP(0) to
emulate_privileged_op (via do_general_protection). Once there, we simply
decline to emulate cpuid if the CPL is greater than 0 and faulting is enabled,
leaving the GP(0) for the guest kernel to handle.
Signed-off-by: Kyle Huey <khuey@kylehuey.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Release-acked-by: Wei Liu <wei.liu2@citrix.com>
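A simplified sketch of the HVM-side check the commit describes: the function
it names, hvm_check_cpuid_fault, returns true when the guest has enabled cpuid
faulting and is executing at CPL > 0, in which case the cpuid emulation paths
inject GP(0) instead of completing the instruction. The struct layout and
field names below are assumptions made for illustration, not the actual Xen
code.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical per-vCPU state for the sketch. */
    struct vcpu_sketch {
        bool cpuid_faulting_enabled;   /* guest has turned on cpuid faulting */
        unsigned int cpl;              /* current privilege level */
    };

    /*
     * Mirrors the check described in the commit message: fault cpuid only
     * when faulting is enabled and the guest is not running at CPL 0.
     */
    static bool hvm_check_cpuid_fault_sketch(const struct vcpu_sketch *v)
    {
        return v->cpuid_faulting_enabled && v->cpl > 0;
    }

    int main(void)
    {
        struct vcpu_sketch user = { .cpuid_faulting_enabled = true, .cpl = 3 };
        struct vcpu_sketch kern = { .cpuid_faulting_enabled = true, .cpl = 0 };

        /* The cpuid emulation path would inject GP(0) only in the first case. */
        printf("user cpuid faults: %d\n", hvm_check_cpuid_fault_sketch(&user));
        printf("kernel cpuid faults: %d\n", hvm_check_cpuid_fault_sketch(&kern));
        return 0;
    }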
commit 70c95ecd5c0ec2fc5cf2bb0c5f96814bbd10c5b3
Author: Kyle Huey <me@kylehuey.com>
Date: Thu Oct 20 06:44:27 2016 -0700
x86/Intel: Expose cpuid_faulting_enabled so it can be used elsewhere
While we're here, use bool instead of bool_t.
Signed-off-by: Kyle Huey <khuey@kylehuey.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Release-acked-by: Wei Liu <wei.liu2@citrix.com>
commit 8b4664265bb398db4d5581959566a3f8036696ce
Author: Wei Liu <wei.liu2@citrix.com>
Date: Thu Oct 20 14:00:47 2016 +0100
Config.mk: use non-debug build for 4.8
Set debug ?= n in preparation for late RCs and eventual release.
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
(qemu changes not included)