* [xen-4.8-testing test] 114505: regressions - FAIL
@ 2017-10-15 19:45 osstest service owner
2017-10-16 9:14 ` Andrew Cooper
0 siblings, 1 reply; 7+ messages in thread
From: osstest service owner @ 2017-10-15 19:45 UTC (permalink / raw)
To: xen-devel, osstest-admin
flight 114505 xen-4.8-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/114505/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-xtf-amd64-amd64-2 48 xtf/test-hvm64-lbr-tsx-vmentry fail REGR. vs. 114173
Tests which are failing intermittently (not blocking):
test-xtf-amd64-amd64-5 48 xtf/test-hvm64-lbr-tsx-vmentry fail in 114454 pass in 114505
test-armhf-armhf-xl-rtds 12 guest-start fail pass in 114454
Tests which did not succeed, but are not blocking:
test-xtf-amd64-amd64-3 48 xtf/test-hvm64-lbr-tsx-vmentry fail in 114454 like 114173
test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail in 114454 like 114173
test-armhf-armhf-xl-rtds 13 migrate-support-check fail in 114454 never pass
test-armhf-armhf-xl-rtds 14 saverestore-support-check fail in 114454 never pass
test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 114173
test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 114173
test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 114173
test-amd64-amd64-xl-rtds 10 debian-install fail like 114173
test-amd64-amd64-xl-qemut-ws16-amd64 10 windows-install fail never pass
test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
test-amd64-i386-libvirt 13 migrate-support-check fail never pass
test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install fail never pass
test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
build-i386-prev 7 xen-build/dist-test fail never pass
test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
test-amd64-i386-libvirt-qcow2 12 migrate-support-check fail never pass
build-amd64-prev 7 xen-build/dist-test fail never pass
test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
test-amd64-i386-xl-qemut-ws16-amd64 13 guest-saverestore fail never pass
test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
test-armhf-armhf-libvirt 14 saverestore-support-check fail never pass
test-armhf-armhf-xl 13 migrate-support-check fail never pass
test-armhf-armhf-xl 14 saverestore-support-check fail never pass
test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
test-amd64-i386-xl-qemuu-ws16-amd64 13 guest-saverestore fail never pass
test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail never pass
test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail never pass
test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail never pass
test-amd64-amd64-xl-qemut-win10-i386 10 windows-install fail never pass
test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass
test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
test-armhf-armhf-xl-xsm 13 migrate-support-check fail never pass
test-armhf-armhf-xl-xsm 14 saverestore-support-check fail never pass
version targeted for testing:
xen bdc2ae68e2ecba1c3f55ad953189fe33362d1c51
baseline version:
xen 667f70e658c4c382672056ebaf1505b4c5cdb0aa
Last test of basis 114173 2017-10-09 03:27:38 Z 6 days
Failing since 114313 2017-10-11 00:46:14 Z 4 days 4 attempts
Testing same since 114454 2017-10-13 06:48:53 Z 2 days 2 attempts
------------------------------------------------------------
People who touched revisions under test:
Andrew Cooper <andrew.cooper3@citrix.com>
George Dunlap <george.dunlap@citrix.com>
Jan Beulich <jbeulich@suse.com>
Julien Grall <julien.grall@arm.com>
Stefano Stabellini <sstabellini@kernel.org>
Tim Deegan <tim@xen.org>
Vitaly Kuznetsov <vkuznets@redhat.com>
jobs:
build-amd64-xsm pass
build-armhf-xsm pass
build-i386-xsm pass
build-amd64-xtf pass
build-amd64 pass
build-armhf pass
build-i386 pass
build-amd64-libvirt pass
build-armhf-libvirt pass
build-i386-libvirt pass
build-amd64-prev pass
build-i386-prev pass
build-amd64-pvops pass
build-armhf-pvops pass
build-i386-pvops pass
build-amd64-rumprun pass
build-i386-rumprun pass
test-xtf-amd64-amd64-1 pass
test-xtf-amd64-amd64-2 pass
test-xtf-amd64-amd64-3 pass
test-xtf-amd64-amd64-4 pass
test-xtf-amd64-amd64-5 pass
test-amd64-amd64-xl pass
test-armhf-armhf-xl pass
test-amd64-i386-xl pass
test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm pass
test-amd64-i386-xl-qemut-debianhvm-amd64-xsm pass
test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm pass
test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm pass
test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm pass
test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm pass
test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm pass
test-amd64-amd64-libvirt-xsm pass
test-armhf-armhf-libvirt-xsm pass
test-amd64-i386-libvirt-xsm pass
test-amd64-amd64-xl-xsm pass
test-armhf-armhf-xl-xsm pass
test-amd64-i386-xl-xsm pass
test-amd64-amd64-qemuu-nested-amd fail
test-amd64-i386-qemut-rhel6hvm-amd pass
test-amd64-i386-qemuu-rhel6hvm-amd pass
test-amd64-amd64-xl-qemut-debianhvm-amd64 pass
test-amd64-i386-xl-qemut-debianhvm-amd64 pass
test-amd64-amd64-xl-qemuu-debianhvm-amd64 pass
test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
test-amd64-i386-freebsd10-amd64 pass
test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
test-amd64-i386-xl-qemuu-ovmf-amd64 pass
test-amd64-amd64-rumprun-amd64 pass
test-amd64-amd64-xl-qemut-win7-amd64 fail
test-amd64-i386-xl-qemut-win7-amd64 fail
test-amd64-amd64-xl-qemuu-win7-amd64 pass
test-amd64-i386-xl-qemuu-win7-amd64 fail
test-amd64-amd64-xl-qemut-ws16-amd64 fail
test-amd64-i386-xl-qemut-ws16-amd64 fail
test-amd64-amd64-xl-qemuu-ws16-amd64 fail
test-amd64-i386-xl-qemuu-ws16-amd64 fail
test-armhf-armhf-xl-arndale pass
test-amd64-amd64-xl-credit2 pass
test-armhf-armhf-xl-credit2 pass
test-armhf-armhf-xl-cubietruck pass
test-amd64-i386-freebsd10-i386 pass
test-amd64-i386-rumprun-i386 pass
test-amd64-amd64-xl-qemut-win10-i386 fail
test-amd64-i386-xl-qemut-win10-i386 fail
test-amd64-amd64-xl-qemuu-win10-i386 fail
test-amd64-i386-xl-qemuu-win10-i386 fail
test-amd64-amd64-qemuu-nested-intel pass
test-amd64-i386-qemut-rhel6hvm-intel pass
test-amd64-i386-qemuu-rhel6hvm-intel pass
test-amd64-amd64-libvirt pass
test-armhf-armhf-libvirt pass
test-amd64-i386-libvirt pass
test-amd64-amd64-livepatch pass
test-amd64-i386-livepatch pass
test-amd64-amd64-migrupgrade pass
test-amd64-i386-migrupgrade pass
test-amd64-amd64-xl-multivcpu pass
test-armhf-armhf-xl-multivcpu pass
test-amd64-amd64-pair pass
test-amd64-i386-pair pass
test-amd64-amd64-libvirt-pair pass
test-amd64-i386-libvirt-pair pass
test-amd64-amd64-amd64-pvgrub pass
test-amd64-amd64-i386-pvgrub pass
test-amd64-amd64-pygrub pass
test-amd64-i386-libvirt-qcow2 pass
test-amd64-amd64-xl-qcow2 pass
test-armhf-armhf-libvirt-raw pass
test-amd64-i386-xl-raw pass
test-amd64-amd64-xl-rtds fail
test-armhf-armhf-xl-rtds fail
test-amd64-amd64-libvirt-vhd pass
test-armhf-armhf-xl-vhd pass
------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images
Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs
Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master
Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
Not pushing.
------------------------------------------------------------
commit bdc2ae68e2ecba1c3f55ad953189fe33362d1c51
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Thu Oct 12 15:20:20 2017 +0200
x86/cpu: Fix IST handling during PCPU bringup
Clear IST references in newly allocated IDTs. Nothing good will come of
having them set before the TSS is suitably constructed (although the chances
of the CPU surviving such an IST interrupt/exception are extremely slim).
Uniformly set the IST references after the TSS is in place. This fixes an
issue on AMD hardware, where onlining a PCPU while PCPU0 is in HVM context
will cause IST_NONE to be copied into the new IDT, making that PCPU vulnerable
to privilege escalation from PV guests until it subsequently schedules an HVM
guest.
This is XSA-244.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: cc08c73c8c1f5ba5ed0f8274548db6725e1c3157
master date: 2017-10-12 14:50:31 +0200
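As a rough illustration of the ordering this fix enforces, consider the stand-alone sketch below; the structure layouts and names (init_idt(), enable_ist_after_tss(), IST_DF) are simplified stand-ins, not the real Xen definitions:

    #include <string.h>

    #define IST_NONE 0
    #define IST_DF   1   /* hypothetical IST slot number for #DF */

    struct idt_entry { unsigned char ist; /* other fields elided */ };
    struct tss       { char stack[4096]; };

    static struct idt_entry idt[256];
    static struct tss this_cpu_tss;

    /* Step 1: a freshly allocated IDT keeps every IST reference clear;
     * an IST-based interrupt taken before the TSS exists has no valid
     * stack to switch to. */
    static void init_idt(void)
    {
        memset(idt, 0, sizeof(idt));
        for (unsigned int i = 0; i < 256; i++)
            idt[i].ist = IST_NONE;
    }

    /* Step 2: construct the TSS first, and only then fill in the IST
     * references, uniformly on every CPU bring-up path. */
    static void enable_ist_after_tss(void)
    {
        memset(&this_cpu_tss, 0, sizeof(this_cpu_tss));
        idt[8].ist = IST_DF;   /* e.g. let #DF use its own stack */
    }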
commit 96e6364b5f64cc8b1210a8ab5cb7801162833ebb
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Thu Oct 12 15:19:40 2017 +0200
x86/shadow: Don't create self-linear shadow mappings for 4-level translated guests
When initially creating a monitor table for 4-level translated guests, don't
install a shadow-linear mapping. This mapping is actually self-linear, and
misleads the writeable heuristic logic into following Xen's mappings rather
than the guests' shadows it was expecting to follow.
A consequence of this is that sh_guess_wrmap() needs to cope with there being
no shadow-linear mapping present, which in practice occurs once each time a
vcpu switches to 4-level paging from a different paging mode.
An appropriate shadow-linear slot will be inserted into the monitor table
either while constructing lower level monitor tables, or by sh_update_cr3().
While fixing this, clarify the safety of the other mappings. Despite
appearing unsafe, it is correct to create a guest-linear mapping for
translated domains; this is self-linear and doesn't point into the translated
domain. Drop a dead clause for translate != external guests.
This is XSA-243.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
master commit: bf2b4eadcf379d0361b38de9725ea5f7a18a5205
master date: 2017-10-12 14:50:07 +0200
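The "cope with no shadow-linear mapping" part can be pictured as a presence check before any walk of that area; everything below (the slot variable, the flag, can_walk_shadow_linear()) is a hypothetical simplification, not how sh_guess_wrmap() is actually structured:

    #include <stdbool.h>
    #include <stdint.h>

    #define _PAGE_PRESENT 0x1u

    typedef uint64_t pte_t;

    /* Stand-in for the shadow-linear L4 slot of the current monitor table. */
    static pte_t shadow_linear_l4_slot;

    /* On a vcpu that has just switched to 4-level paging the slot may not
     * have been populated yet (that happens later, while constructing
     * lower-level monitor tables or in sh_update_cr3()), so a wrmap guess
     * must simply fail instead of walking an unmapped or wrong area. */
    static bool can_walk_shadow_linear(void)
    {
        return shadow_linear_l4_slot & _PAGE_PRESENT;
    }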
commit 1a8ad09dd1e13894773944fc2de36d37f14faa68
Author: Jan Beulich <jbeulich@suse.com>
Date: Thu Oct 12 15:19:12 2017 +0200
x86: don't allow page_unlock() to drop the last type reference
Only _put_page_type() does the necessary cleanup, and hence not all
domain pages can be released during guest cleanup (leaving around
zombie domains) if we get this wrong.
This is XSA-242.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
master commit: 6410733a8a0dff2fe581338ff631670cf91889db
master date: 2017-10-12 14:49:46 +0200
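In reference-counting terms the rule reads roughly as below; the atomic counter is a simplification of the real type_info word, which is updated with cmpxchg:

    #include <assert.h>
    #include <stdatomic.h>

    /* Simplified stand-in for a page's type reference count. */
    static atomic_uint type_count = 2;

    /* A page_unlock()-style path must never be the one to drop the last
     * type reference; the final reference has to be released through the
     * full _put_page_type() cleanup, otherwise pages linger and the
     * domain becomes a zombie. */
    static void page_unlock_sketch(void)
    {
        unsigned int old = atomic_fetch_sub(&type_count, 1);
        assert(old > 1);
    }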
commit df8919786f4781139cbd1be7340dd93f3408edee
Author: Jan Beulich <jbeulich@suse.com>
Date: Thu Oct 12 15:18:33 2017 +0200
x86: don't store possibly stale TLB flush time stamp
While the timing window is extremely narrow, it is theoretically
possible for an update to the TLB flush clock and a subsequent flush
IPI to happen between the read and write parts of the update of the
per-page stamp. Exclude this possibility by disabling interrupts
across the update, preventing the IPI from being serviced in the middle.
This is XSA-241.
Reported-by: Jann Horn <jannh@google.com>
Suggested-by: George Dunlap <george.dunlap@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
master commit: 23a183607a427572185fc51c76cc5ab11c00c4cc
master date: 2017-10-12 14:48:25 +0200
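The shape of the fix is the usual one for a read/write pair that must not be interrupted; in this stand-alone sketch irq_save()/irq_restore() and the clock variable are made-up stand-ins for the real per-arch primitives and the TLB flush clock:

    #include <stdint.h>

    static volatile uint32_t tlbflush_clock;          /* global flush clock */

    static unsigned long irq_save(void)  { /* disable interrupts */ return 0; }
    static void irq_restore(unsigned long f) { (void)f; /* re-enable */ }

    /* Stamping a page involves reading the clock and writing the per-page
     * stamp; if a clock update plus flush IPI slips in between, a stale
     * stamp is stored.  Disabling interrupts across the update closes the
     * window by keeping the IPI pending until both halves are done. */
    static void page_set_flush_stamp(uint32_t *stamp)
    {
        unsigned long flags = irq_save();
        *stamp = tlbflush_clock;
        irq_restore(flags);
    }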
commit c4f969d25463586103a70f2bc36624e2287b880c
Author: Jan Beulich <jbeulich@suse.com>
Date: Thu Oct 12 15:17:20 2017 +0200
x86: limit linear page table use to a single level
That's the only way that they're meant to be used. Without such a
restriction arbitrarily long chains of same-level page tables can be
built, tearing down of which may then cause arbitrarily deep recursion,
causing a stack overflow. To facilitate this restriction, a counter is
being introduced to track both the number of same-level entries in a
page table as well as the number of uses of a page table in another
same-level one (counting into positive and negative direction
respectively, utilizing the fact that both counts can't be non-zero at
the same time).
Note that the added accounting introduces a restriction on the number
of times a page can be used in other same-level page tables - more than
32k of such uses are no longer possible.
Note also that some put_page_and_type[_preemptible]() calls are
replaced with open-coded equivalents. This seemed preferable to
adding "parent_table" to the matrix of functions.
Note further that cross-domain same-level page table references are no
longer permitted (they probably never should have been).
This is XSA-240.
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: George Dunlap <george.dunlap@citrix.com>
master commit: 6987fc7558bdbab8119eabf026e3cdad1053f0e5
master date: 2017-10-12 14:44:34 +0200
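The positive/negative counting scheme can be illustrated with a single signed 16-bit field; this is only a sketch of the accounting idea (the real counter lives in struct page_info and is updated with atomic compare-exchange):

    #include <stdbool.h>
    #include <stdint.h>

    /* One signed field per page-table page:
     *   > 0 : the page contains that many same-level (linear) entries,
     *   < 0 : the page is referenced from that many same-level tables,
     *   == 0: neither.
     * A page is never in both roles at once, so one field suffices, and
     * the int16_t range is where the ~32k usage limit comes from. */
    struct pt_page { int16_t linear_pt_count; };

    static bool inc_linear_entries(struct pt_page *pg)
    {
        if (pg->linear_pt_count < 0 || pg->linear_pt_count == INT16_MAX)
            return false;   /* already used as a target, or would overflow */
        pg->linear_pt_count++;
        return true;
    }

    static bool inc_linear_uses(struct pt_page *pg)
    {
        if (pg->linear_pt_count > 0 || pg->linear_pt_count == INT16_MIN)
            return false;   /* already holds linear entries, or would overflow */
        pg->linear_pt_count--;
        return true;
    }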
commit b1f3f1dde1b904160d3ce895a2fbccab21706214
Author: Jan Beulich <jbeulich@suse.com>
Date: Thu Oct 12 15:16:54 2017 +0200
x86/HVM: prefill partially used variable on emulation paths
Certain handlers ignore the access size (vioapic_write() being the
example this was found with), perhaps leading to subsequent reads
seeing data that wasn't actually written by the guest. For
consistency and extra safety also do this on the read path of
hvm_process_io_intercept(), even if this doesn't directly affect what
guests get to see, as we've supposedly already dealt with read handlers
leaving data completely uninitialized.
This is XSA-239.
Reported-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
master commit: 0d4732ac29b63063764c29fa3bd8946daf67d6f3
master date: 2017-10-12 14:43:26 +0200
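The defensive pattern amounts to giving the whole variable a defined value before a handler that may only partially write it gets to see it; the handler and function names below are invented for the illustration:

    #include <stdint.h>

    /* A handler that honours only part of the requested size, the way
     * vioapic_write() ignores the access size. */
    static void partial_handler(void *buf, unsigned int size)
    {
        (void)size;
        *(uint8_t *)buf = 0xab;   /* writes 1 byte even for a 4-byte access */
    }

    static uint32_t emulate_read(unsigned int size)
    {
        uint32_t data = ~0u;      /* prefill: bytes the handler skips are now
                                   * deterministic rather than stale stack data */
        partial_handler(&data, size);
        return data;
    }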
commit 7251c0654004ecfb2c1f831564a95113b97ee51a
Author: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Thu Oct 12 15:16:18 2017 +0200
x86/ioreq server: correctly handle bogus XEN_DMOP_{,un}map_io_range_to_ioreq_server arguments
A misbehaving device model can pass incorrect XEN_DMOP_map/
unmap_io_range_to_ioreq_server arguments, namely end < start when
specifying address range. When this happens we hit ASSERT(s <= e) in
rangeset_contains_range()/rangeset_overlaps_range() with debug builds.
Production builds will not trap right away but may misbehave later
while handling such bogus ranges.
This is XSA-238.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: d59e55b018cfb79d0c4f794041aff4fe1cd0d570
master date: 2017-10-12 14:43:02 +0200
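The fix boils down to validating the device-model-supplied range before it can reach rangeset code that asserts start <= end; the wrapper below is a simplified stand-in for the real DMOP argument handling:

    #include <errno.h>
    #include <stdint.h>

    static int check_ioreq_range(uint64_t start, uint64_t end)
    {
        /* A misbehaving device model may pass end < start; reject it here
         * rather than tripping ASSERT(s <= e) in the rangeset code on debug
         * builds, or silently mishandling the range on production builds. */
        if (start > end)
            return -EINVAL;
        return 0;
    }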
commit 1960ca822091d6c956349f8534f19a6d072a2ece
Author: Jan Beulich <jbeulich@suse.com>
Date: Thu Oct 12 15:15:40 2017 +0200
x86/FLASK: fix unmap-domain-IRQ XSM hook
The caller and the FLASK implementation of xsm_unmap_domain_irq()
disagreed about what the "data" argument points to in the MSI case:
Change both sides to pass/take a PCI device.
This is part of XSA-237.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 6f17f5c43a3bd28d27ed8133b2bf513e2eab7d59
master date: 2017-10-12 14:37:56 +0200
commit 866cfa15751edb9c5cd1d2ad78671a16c31b6316
Author: Jan Beulich <jbeulich@suse.com>
Date: Thu Oct 12 15:15:07 2017 +0200
x86/IRQ: conditionally preserve irq <-> pirq mapping on map error paths
Mappings that had been set up before should not be torn down when
handling unrelated errors.
This is part of XSA-237.
Reported-by: HW42 <hw42@ipsumj.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
master commit: 573ac7b22aba9e5b8d40d9cdccd744af57cd5928
master date: 2017-10-12 14:37:26 +0200
commit ddd6e415b11f41ad9a5ca6f919a294205360fe74
Author: Jan Beulich <jbeulich@suse.com>
Date: Thu Oct 12 15:14:42 2017 +0200
x86/MSI: disallow redundant enabling
At the moment, Xen attempts to allow redundant enabling of MSI by
having pci_enable_msi() return 0, and point to the existing MSI
descriptor, when the MSI already exists.
Unfortunately, if subsequent errors are encountered, the cleanup
paths assume pci_enable_msi() had done full initialization, and
hence undo everything that was assumed to be done by that
function without also undoing other setup that would normally
occur only after that function was called (in map_domain_pirq()
itself).
Rather than try to make the redundant enabling case work properly, just
forbid it entirely by having pci_enable_msi() return -EEXIST when MSI
is already set up.
This is part of XSA-237.
Reported-by: HW42 <hw42@ipsumj.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
master commit: a46126fec20e0cf4f5442352ef45efaea8c89646
master date: 2017-10-12 14:36:58 +0200
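In outline the behavioural change is from "hand back the existing descriptor" to "refuse the second enable"; the structure and function below are simplified stand-ins for msi_desc and pci_enable_msi():

    #include <errno.h>
    #include <stdbool.h>

    struct msi_state { bool enabled; };

    static int pci_enable_msi_sketch(struct msi_state *msi)
    {
        /* Previously a second enable returned 0 and reused the existing
         * descriptor; the caller's error-unwind paths then treated it as a
         * fresh, fully initialised setup and undid too much.  Refusing the
         * redundant enable keeps the cleanup paths' assumptions valid. */
        if (msi->enabled)
            return -EEXIST;
        msi->enabled = true;
        return 0;
    }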
commit 370cc9aa4901a1646e6fbbfe009a09ba7aaddb15
Author: Jan Beulich <jbeulich@suse.com>
Date: Thu Oct 12 15:14:14 2017 +0200
x86: enforce proper privilege when (un)mapping pIRQ-s
(Un)mapping of IRQs, just like other RESOURCE__ADD* / RESOURCE__REMOVE*
actions (in FLASK terms), should be XSM_DM_PRIV rather than XSM_TARGET.
This in turn requires bypassing the XSM check in physdev_unmap_pirq()
for the HVM emuirq case just like is being done in physdev_map_pirq().
The primary goal security-wise, however, is to no longer allow HVM
guests, by specifying their own domain ID instead of DOMID_SELF, to
enter code paths intended for PV guests and the control domains of HVM
guests only.
This is part of XSA-237.
Reported-by: HW42 <hw42@ipsumj.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
master commit: db72faf69c94513e180568006a9d899ed422ff90
master date: 2017-10-12 14:36:30 +0200
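A simplified reading of the policy difference, with invented structures; the essential point is only that the target domain itself no longer passes the check:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef uint16_t domid_t;
    struct dom_sketch { domid_t id; domid_t dm_domid; bool is_control; };

    /* An XSM_TARGET-style check lets the target act on itself; an
     * XSM_DM_PRIV-style check does not, so an HVM guest naming its own
     * domid (instead of DOMID_SELF) can no longer reach the pIRQ
     * (un)mapping paths meant for its device model or control domain. */
    static int dm_priv_check(const struct dom_sketch *src,
                             const struct dom_sketch *target)
    {
        if (src->is_control || src->id == target->dm_domid)
            return 0;
        return -EPERM;
    }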
commit 39e3024360a4c09205b9b85002f68ed9aa6cc034
Author: Jan Beulich <jbeulich@suse.com>
Date: Thu Oct 12 15:13:36 2017 +0200
x86: don't allow MSI pIRQ mapping on unowned device
MSI setup should be permitted only for existing devices owned by the
respective guest (the operation may still be carried out by the domain
controlling that guest).
This is part of XSA-237.
Reported-by: HW42 <hw42@ipsumj.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 3308374b1be7d43e23bd2e9eaf23ec06d7959882
master date: 2017-10-12 14:35:14 +0200
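Expressed as a check, the rule is roughly the following; struct pdev_sketch and msi_target_ok() are invented names for the illustration:

    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef uint16_t domid_t;
    struct pdev_sketch { domid_t owner; };   /* stand-in for the PCI device */

    /* MSI pIRQ setup is only allowed when the device exists and is owned
     * by the guest being operated on; the caller may be that guest's
     * controlling domain, but the mapping target must own the device. */
    static int msi_target_ok(const struct pdev_sketch *pdev, domid_t target)
    {
        if (pdev == NULL)
            return -ENODEV;
        if (pdev->owner != target)
            return -EPERM;
        return 0;
    }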
commit 9f092f57d2829a271233aef1d1df0bff84275122
Author: Julien Grall <julien.grall@arm.com>
Date: Thu Sep 14 16:39:01 2017 +0100
xen/arm: p2m: Read *_mapped_gfn with the p2m lock taken
*_mapped_gfn are currently read before acquiring the lock. However, they
may be modified by the p2m code before the lock is acquired. This means
we will use the wrong values.
Fix it by moving the read inside the section protected by the p2m lock.
Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
(cherry picked from commit 2c2ae1976da06283e923d97720c0bdcbebf04515)
(qemu changes not included)
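The locking pattern being corrected looks roughly like this in isolation (a mutex stands in for the p2m lock, and the field names are simplified):

    #include <pthread.h>
    #include <stdint.h>

    struct p2m_sketch {
        pthread_mutex_t lock;
        uint64_t lowest_mapped_gfn;
        uint64_t max_mapped_gfn;
    };

    /* Read the *_mapped_gfn bounds only after taking the lock; values read
     * beforehand may already have been changed by another CPU by the time
     * the walk starts, so the wrong range would be used. */
    static void walk_p2m(struct p2m_sketch *p2m)
    {
        pthread_mutex_lock(&p2m->lock);

        uint64_t start = p2m->lowest_mapped_gfn;
        uint64_t end   = p2m->max_mapped_gfn;

        /* ... operate on [start, end] while still holding the lock ... */
        (void)start;
        (void)end;

        pthread_mutex_unlock(&p2m->lock);
    }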
* Re: [xen-4.8-testing test] 114505: regressions - FAIL
2017-10-15 19:45 [xen-4.8-testing test] 114505: regressions - FAIL osstest service owner
@ 2017-10-16 9:14 ` Andrew Cooper
2017-10-16 15:16 ` Ian Jackson
2017-10-16 16:12 ` Jan Beulich
0 siblings, 2 replies; 7+ messages in thread
From: Andrew Cooper @ 2017-10-16 9:14 UTC (permalink / raw)
To: osstest service owner, xen-devel; +Cc: Ian Jackson, Jan Beulich
On 15/10/17 20:45, osstest service owner wrote:
> flight 114505 xen-4.8-testing real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/114505/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
> test-xtf-amd64-amd64-2 48 xtf/test-hvm64-lbr-tsx-vmentry fail REGR. vs. 114173
>
> Tests which are failing intermittently (not blocking):
> test-xtf-amd64-amd64-5 48 xtf/test-hvm64-lbr-tsx-vmentry fail in 114454 pass in 114505
Ian: These tests exercise something very machine specific, and the XTF
tests really do need tying to specific hardware when making regression
considerations.
Jan: This highlights that TSX/VMEntry failure fixes probably want
backporting to before Xen 4.9. IIRC, the 6 patches needed are:
e3eb84e33c36 (only as a functional prerequisite)
9b93c6b3695b: x86/vmx: introduce vmx_find_msr()
7f11aa4b2b1f: x86/vmx: optimize vmx_read/write_guest_msr()
d6e9f8d4f35d: x86/vmx: fix vmentry failure with TSX bits in LBR
f97838bbd980: x86: Move microcode loading earlier
20f1976b4419: x86/vmx: Fix vmentry failure because of invalid LER on Broadwell
~Andrew
* Re: [xen-4.8-testing test] 114505: regressions - FAIL
2017-10-16 9:14 ` Andrew Cooper
@ 2017-10-16 15:16 ` Ian Jackson
2017-10-16 16:51 ` Andrew Cooper
2017-10-16 16:12 ` Jan Beulich
1 sibling, 1 reply; 7+ messages in thread
From: Ian Jackson @ 2017-10-16 15:16 UTC (permalink / raw)
To: Andrew Cooper; +Cc: xen-devel, osstest service owner, Jan Beulich
Andrew Cooper writes ("Re: [Xen-devel] [xen-4.8-testing test] 114505: regressions - FAIL"):
> On 15/10/17 20:45, osstest service owner wrote:
> > flight 114505 xen-4.8-testing real [real]
> > http://logs.test-lab.xenproject.org/osstest/logs/114505/
> >
> > Regressions :-(
> >
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> > test-xtf-amd64-amd64-2 48 xtf/test-hvm64-lbr-tsx-vmentry fail REGR. vs. 114173
...
> Ian: These tests exercise something very machine specific, and the XTF
> tests really do need tying to specific hardware when making regression
> considerations.
Is this test new enough that it might have never run on that
hardware ? If so then a force push might be justified.
It is difficult to tie the tests to specific hardware without
insisting that every run uses every host.
Ian.
* Re: [xen-4.8-testing test] 114505: regressions - FAIL
2017-10-16 15:16 ` Ian Jackson
@ 2017-10-16 16:51 ` Andrew Cooper
0 siblings, 0 replies; 7+ messages in thread
From: Andrew Cooper @ 2017-10-16 16:51 UTC (permalink / raw)
To: Ian Jackson; +Cc: xen-devel, osstest service owner, Jan Beulich
On 16/10/17 16:16, Ian Jackson wrote:
> Andrew Cooper writes ("Re: [Xen-devel] [xen-4.8-testing test] 114505: regressions - FAIL"):
>> On 15/10/17 20:45, osstest service owner wrote:
>>> flight 114505 xen-4.8-testing real [real]
>>> http://logs.test-lab.xenproject.org/osstest/logs/114505/
>>>
>>> Regressions :-(
>>>
>>> Tests which did not succeed and are blocking,
>>> including tests which could not be run:
>>> test-xtf-amd64-amd64-2 48 xtf/test-hvm64-lbr-tsx-vmentry fail REGR. vs. 114173
> ...
>> Ian: These tests exercise something very machine specific, and the XTF
>> tests really do need tying to specific hardware when making regression
>> considerations.
> Is this test new enough that it might have never run on that
> hardware ? If so then a force push might be justified.
andrewcoop@andrewcoop:/local/xen-test-framework.git$ git show --format=fuller 36d926fe
commit 36d926fe0e9b7db39965f430cdb4c5f1daf4eef3
Author: Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Wed Oct 12 18:23:42 2016
Commit: Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Tue Apr 25 13:55:42 2017
LBR/TSX VMentry failure test
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
It has been running in OSSTest for a fair while now.
The test will only fail on versions of Xen before the fixes went in
(currently Xen 4.9), on Haswell and Broadwell hardware.
Its also possible
> It is difficult to tie the tests to specific hardware without
> insisting that every run uses every host.
How hard would it be to tag each flight with which host it ran on, and
filter for host == current when determining whether a regression has
occurred?
~Andrew
* Re: [xen-4.8-testing test] 114505: regressions - FAIL
2017-10-16 9:14 ` Andrew Cooper
2017-10-16 15:16 ` Ian Jackson
@ 2017-10-16 16:12 ` Jan Beulich
2017-10-16 16:38 ` Andrew Cooper
1 sibling, 1 reply; 7+ messages in thread
From: Jan Beulich @ 2017-10-16 16:12 UTC (permalink / raw)
To: Andrew Cooper; +Cc: xen-devel, Ian Jackson, osstest service owner
>>> On 16.10.17 at 11:14, <andrew.cooper3@citrix.com> wrote:
> On 15/10/17 20:45, osstest service owner wrote:
>> flight 114505 xen-4.8-testing real [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/114505/
>>
>> Regressions :-(
>>
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>> test-xtf-amd64-amd64-2 48 xtf/test-hvm64-lbr-tsx-vmentry fail REGR. vs.
> 114173
>>
>> Tests which are failing intermittently (not blocking):
>> test-xtf-amd64-amd64-5 48 xtf/test-hvm64-lbr-tsx-vmentry fail in 114454
> pass in 114505
>
> Ian: These tests exercise something very machine specific, and the XTF
> tests really do need tying to specific hardware when making regression
> considerations.
>
> Jan: This highlights that TSX/VMEntry failure fixes probably want
> backporting to before Xen 4.9. IIRC, the 6 patches needed are:
So I'm mildly confused by this request:
> e3eb84e33c36 (only as a functional prerequisite)
> 9b93c6b3695b: x86/vmx: introduce vmx_find_msr()
> 7f11aa4b2b1f: x86/vmx: optimize vmx_read/write_guest_msr()
> d6e9f8d4f35d: x86/vmx: fix vmentry failure with TSX bits in LBR
> f97838bbd980: x86: Move microcode loading earlier
Up to here, everything is in 4.9 already afaict. Considering the
context here is a 4.8 test report, did you perhaps mean to ask
for this on 4.8 (and possibly also 4.7)? If so, I'm not really sure -
these changes taken together look a little large for the gain
they provide.
> 20f1976b4419: x86/vmx: Fix vmentry failure because of invalid LER on
> Broadwell
I'll see to pull this one in for 4.9.1.
Jan
* Re: [xen-4.8-testing test] 114505: regressions - FAIL
2017-10-16 16:12 ` Jan Beulich
@ 2017-10-16 16:38 ` Andrew Cooper
2017-10-17 6:17 ` Jan Beulich
0 siblings, 1 reply; 7+ messages in thread
From: Andrew Cooper @ 2017-10-16 16:38 UTC (permalink / raw)
To: Jan Beulich; +Cc: xen-devel, Ian Jackson, osstest service owner
On 16/10/17 17:12, Jan Beulich wrote:
>>>> On 16.10.17 at 11:14, <andrew.cooper3@citrix.com> wrote:
>> On 15/10/17 20:45, osstest service owner wrote:
>>> flight 114505 xen-4.8-testing real [real]
>>> http://logs.test-lab.xenproject.org/osstest/logs/114505/
>>>
>>> Regressions :-(
>>>
>>> Tests which did not succeed and are blocking,
>>> including tests which could not be run:
>>> test-xtf-amd64-amd64-2 48 xtf/test-hvm64-lbr-tsx-vmentry fail REGR. vs.
>> 114173
>>> Tests which are failing intermittently (not blocking):
>>> test-xtf-amd64-amd64-5 48 xtf/test-hvm64-lbr-tsx-vmentry fail in 114454
>> pass in 114505
>>
>> Ian: These tests exercise something very machine specific, and the XTF
>> tests really do need tying to specific hardware when making regression
>> considerations.
>>
>> Jan: This highlights that TSX/VMEntry failure fixes probably want
>> backporting to before Xen 4.9. IIRC, the 6 patches needed are:
> So I'm mildly confused by this request:
>
>> e3eb84e33c36 (only as a functional prerequisite)
>> 9b93c6b3695b: x86/vmx: introduce vmx_find_msr()
>> 7f11aa4b2b1f: x86/vmx: optimize vmx_read/write_guest_msr()
>> d6e9f8d4f35d: x86/vmx: fix vmentry failure with TSX bits in LBR
>> f97838bbd980: x86: Move microcode loading earlier
> Up to here, everything is in 4.9 already afaict. Considering the
> context here is a 4.8 test report, did you perhaps mean to ask
> for this on 4.8 (and possibly also 4.7)?
Well - I did ask for "backporting to before Xen 4.9".
> If so, I'm not really sure -
> these changes taken together look a little large for the gain
> they provide.
We have had several xen-devel reports of this problem, starting against
Xen 4.6 IIRC. If you really think it's more risk than it's worth then fine.
>
>> 20f1976b4419: x86/vmx: Fix vmentry failure because of invalid LER on
>> Broadwell
> I'll see to pull this one in for 4.9.1.
Oops - I'd not spotted that that change was missing in Xen 4.9. Yes -
please backport that one.
~Andrew
* Re: [xen-4.8-testing test] 114505: regressions - FAIL
2017-10-16 16:38 ` Andrew Cooper
@ 2017-10-17 6:17 ` Jan Beulich
0 siblings, 0 replies; 7+ messages in thread
From: Jan Beulich @ 2017-10-17 6:17 UTC (permalink / raw)
To: andrew.cooper3; +Cc: xen-devel, Ian.Jackson, osstest-admin
>>> Andrew Cooper <andrew.cooper3@citrix.com> 10/16/17 6:39 PM >>>
>On 16/10/17 17:12, Jan Beulich wrote:
>>>>> On 16.10.17 at 11:14, <andrew.cooper3@citrix.com> wrote:
>>> Jan: This highlights that TSX/VMEntry failure fixes probably want
>>> backporting to before Xen 4.9. IIRC, the 6 patches needed are:
>> So I'm mildly confused by this request:
>>
>>> e3eb84e33c36 (only as a functional prerequisite)
>>> 9b93c6b3695b: x86/vmx: introduce vmx_find_msr()
>>> 7f11aa4b2b1f: x86/vmx: optimize vmx_read/write_guest_msr()
>>> d6e9f8d4f35d: x86/vmx: fix vmentry failure with TSX bits in LBR
>>> f97838bbd980: x86: Move microcode loading earlier
>> Up to here, everything is in 4.9 already afaict. Considering the
>> context here is a 4.8 test report, did you perhaps mean to ask
>> for this on 4.8 (and possibly also 4.7)?
>
>Well - I did ask for "backporting to before Xen 4.9".
Oh, I'm sorry - I had read something you didn't write (to do the backporting
before 4.9 has its first stable release go out).
Jan