* [xen-unstable test] 94442: regressions - FAIL
@ 2016-05-16 2:57 osstest service owner
2016-05-16 9:24 ` Wei Liu
0 siblings, 1 reply; 11+ messages in thread
From: osstest service owner @ 2016-05-16 2:57 UTC (permalink / raw)
To: xen-devel, osstest-admin
flight 94442 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/94442/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-armhf-armhf-xl-credit2 15 guest-start/debian.repeat fail REGR. vs. 94368
test-amd64-i386-qemuu-rhel6hvm-intel 9 redhat-install fail REGR. vs. 94368
test-armhf-armhf-libvirt 7 host-ping-check-xen fail REGR. vs. 94368
test-armhf-armhf-xl-multivcpu 9 debian-install fail REGR. vs. 94368
Regressions which are regarded as allowable (not blocking):
build-i386-rumpuserxen 6 xen-build fail like 94368
build-amd64-rumpuserxen 6 xen-build fail like 94368
test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 94368
test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 94368
test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 94368
test-amd64-amd64-xl-rtds 9 debian-install fail like 94368
test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 94368
Tests which did not succeed, but are not blocking:
test-amd64-i386-rumpuserxen-i386 1 build-check(1) blocked n/a
test-amd64-amd64-rumpuserxen-amd64 1 build-check(1) blocked n/a
test-amd64-amd64-xl-pvh-amd 11 guest-start fail never pass
test-amd64-amd64-xl-pvh-intel 11 guest-start fail never pass
test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
test-amd64-i386-libvirt-xsm 12 migrate-support-check fail never pass
test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail never pass
test-amd64-i386-libvirt 12 migrate-support-check fail never pass
test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail never pass
test-armhf-armhf-xl-xsm 13 saverestore-support-check fail never pass
test-armhf-armhf-xl-xsm 12 migrate-support-check fail never pass
test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
test-armhf-armhf-xl 12 migrate-support-check fail never pass
test-armhf-armhf-xl 13 saverestore-support-check fail never pass
test-armhf-armhf-xl-credit2 13 saverestore-support-check fail never pass
test-armhf-armhf-xl-credit2 12 migrate-support-check fail never pass
test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail never pass
test-armhf-armhf-libvirt-xsm 14 guest-saverestore fail never pass
test-armhf-armhf-xl-rtds 13 saverestore-support-check fail never pass
test-armhf-armhf-xl-rtds 12 migrate-support-check fail never pass
test-armhf-armhf-libvirt-qcow2 11 migrate-support-check fail never pass
test-armhf-armhf-libvirt-qcow2 13 guest-saverestore fail never pass
test-armhf-armhf-xl-arndale 12 migrate-support-check fail never pass
test-armhf-armhf-xl-arndale 13 saverestore-support-check fail never pass
test-armhf-armhf-xl-vhd 11 migrate-support-check fail never pass
test-armhf-armhf-xl-vhd 12 saverestore-support-check fail never pass
test-armhf-armhf-libvirt-raw 13 guest-saverestore fail never pass
test-armhf-armhf-libvirt-raw 11 migrate-support-check fail never pass
version targeted for testing:
xen fcab4cec98ae1f56312744c19f608856261b20cf
baseline version:
xen 4f6aea066fe2cf3bf4929d6dac1e558071566f73
Last test of basis 94368 2016-05-15 05:56:52 Z 0 days
Testing same since 94442 2016-05-15 18:46:42 Z 0 days 1 attempts
------------------------------------------------------------
People who touched revisions under test:
Jim Fehlig <jfehlig@suse.com>
Wei Liu <wei.liu2@citrix.com>
jobs:
build-amd64-xsm pass
build-armhf-xsm pass
build-i386-xsm pass
build-amd64 pass
build-armhf pass
build-i386 pass
build-amd64-libvirt pass
build-armhf-libvirt pass
build-i386-libvirt pass
build-amd64-oldkern pass
build-i386-oldkern pass
build-amd64-prev pass
build-i386-prev pass
build-amd64-pvops pass
build-armhf-pvops pass
build-i386-pvops pass
build-amd64-rumpuserxen fail
build-i386-rumpuserxen fail
test-amd64-amd64-xl pass
test-armhf-armhf-xl pass
test-amd64-i386-xl pass
test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm pass
test-amd64-i386-xl-qemut-debianhvm-amd64-xsm pass
test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm pass
test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm pass
test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm pass
test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm pass
test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm pass
test-amd64-amd64-libvirt-xsm pass
test-armhf-armhf-libvirt-xsm fail
test-amd64-i386-libvirt-xsm pass
test-amd64-amd64-xl-xsm pass
test-armhf-armhf-xl-xsm pass
test-amd64-i386-xl-xsm pass
test-amd64-amd64-qemuu-nested-amd fail
test-amd64-amd64-xl-pvh-amd fail
test-amd64-i386-qemut-rhel6hvm-amd pass
test-amd64-i386-qemuu-rhel6hvm-amd pass
test-amd64-amd64-xl-qemut-debianhvm-amd64 pass
test-amd64-i386-xl-qemut-debianhvm-amd64 pass
test-amd64-amd64-xl-qemuu-debianhvm-amd64 pass
test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
test-amd64-i386-freebsd10-amd64 pass
test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
test-amd64-i386-xl-qemuu-ovmf-amd64 pass
test-amd64-amd64-rumpuserxen-amd64 blocked
test-amd64-amd64-xl-qemut-win7-amd64 fail
test-amd64-i386-xl-qemut-win7-amd64 fail
test-amd64-amd64-xl-qemuu-win7-amd64 fail
test-amd64-i386-xl-qemuu-win7-amd64 fail
test-armhf-armhf-xl-arndale pass
test-amd64-amd64-xl-credit2 pass
test-armhf-armhf-xl-credit2 fail
test-armhf-armhf-xl-cubietruck pass
test-amd64-i386-freebsd10-i386 pass
test-amd64-i386-rumpuserxen-i386 blocked
test-amd64-amd64-qemuu-nested-intel pass
test-amd64-amd64-xl-pvh-intel fail
test-amd64-i386-qemut-rhel6hvm-intel pass
test-amd64-i386-qemuu-rhel6hvm-intel fail
test-amd64-amd64-libvirt pass
test-armhf-armhf-libvirt fail
test-amd64-i386-libvirt pass
test-amd64-amd64-migrupgrade pass
test-amd64-i386-migrupgrade pass
test-amd64-amd64-xl-multivcpu pass
test-armhf-armhf-xl-multivcpu fail
test-amd64-amd64-pair pass
test-amd64-i386-pair pass
test-amd64-amd64-libvirt-pair pass
test-amd64-i386-libvirt-pair pass
test-amd64-amd64-amd64-pvgrub pass
test-amd64-amd64-i386-pvgrub pass
test-amd64-amd64-pygrub pass
test-armhf-armhf-libvirt-qcow2 fail
test-amd64-amd64-xl-qcow2 pass
test-armhf-armhf-libvirt-raw fail
test-amd64-i386-xl-raw pass
test-amd64-amd64-xl-rtds fail
test-armhf-armhf-xl-rtds pass
test-amd64-i386-xl-qemut-winxpsp3-vcpus1 pass
test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 pass
test-amd64-amd64-libvirt-vhd pass
test-armhf-armhf-xl-vhd pass
test-amd64-amd64-xl-qemut-winxpsp3 pass
test-amd64-i386-xl-qemut-winxpsp3 pass
test-amd64-amd64-xl-qemuu-winxpsp3 pass
test-amd64-i386-xl-qemuu-winxpsp3 pass
------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images
Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs
Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master
Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
Not pushing.
------------------------------------------------------------
commit fcab4cec98ae1f56312744c19f608856261b20cf
Author: Wei Liu <wei.liu2@citrix.com>
Date: Sun May 15 16:20:02 2016 +0100
Config.mk: update mini-os changeset
There is one commit pulled in:
lib/sys.c: enclose file_types in define guards
This is required to fix stubdom build on Arch Linux.
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
commit d532f45d94c412d6ee0491cd0b44946373ff2268
Author: Jim Fehlig <jfehlig@suse.com>
Date: Thu Apr 28 15:20:46 2016 -0600
libxl: don't add cache mode for qdisk cdrom drives
qemu commit 91a097e7 forbids specifying cache mode for empty
drives. Attempting to create a domain with an empty qdisk cdrom
drive results in
qemu-system-x86_64: -drive if=ide,index=1,readonly=on,media=cdrom,
cache=writeback,id=ide-832: Must specify either driver or file
libxl only allows an empty 'target=' for cdroms. By default, cdroms
are readonly (see the 'access' parameter in xl-disk-configuration.txt)
and forced to readonly by any tools (e.g. xl) using libxlutil's
xlu_disk_parse() function. With cdroms always marked readonly,
explicitly specifying the cache mode for cdrom drives can be dropped.
The drive's 'readonly=on' option can also be set unconditionally.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Release-acked-by: Wei Liu <wei.liu2@citrix.com>
(qemu changes not included)
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
* Re: [xen-unstable test] 94442: regressions - FAIL
2016-05-16 2:57 [xen-unstable test] 94442: regressions - FAIL osstest service owner
@ 2016-05-16 9:24 ` Wei Liu
2016-05-16 9:29 ` Andrew Cooper
2016-05-17 10:57 ` Jan Beulich
0 siblings, 2 replies; 11+ messages in thread
From: Wei Liu @ 2016-05-16 9:24 UTC (permalink / raw)
To: osstest service owner, Andrew Cooper, Jan Beulich; +Cc: xen-devel, Wei Liu

On Mon, May 16, 2016 at 02:57:13AM +0000, osstest service owner wrote:
> flight 94442 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/94442/
[...]
>
> test-amd64-i386-qemuu-rhel6hvm-intel 9 redhat-install fail REGR. vs. 94368

The changes in this flight shouldn't cause failure like this. See below.

It is more likely to be caused by SMEP/SMAP fix, which are now in
master. It seems that previous run didn't discover this.

Log file at:

http://logs.test-lab.xenproject.org/osstest/logs/94442/test-amd64-i386-qemuu-rhel6hvm-intel/serial-italia0.log

May 15 22:07:44.023500 (XEN) Xen BUG at entry.S:221
May 15 22:07:47.455549 (XEN) ----[ Xen-4.7.0-rc x86_64 debug=y Not tainted ]----
May 15 22:07:47.463500 (XEN) CPU: 0
May 15 22:07:47.463531 (XEN) RIP: e008:[<ffff82d0802411c7>] cr4_pv32_restore+0x37/0x40
May 15 22:07:47.463567 (XEN) RFLAGS: 0000000000010287 CONTEXT: hypervisor (d0v3)
May 15 22:07:47.471503 (XEN) rax: 0000000000000000 rbx: 00000000cf195e50 rcx: 0000000000000001
May 15 22:07:47.479496 (XEN) rdx: ffff8300be907ff8 rsi: 0000000000007ff0 rdi: 000000000022287e
May 15 22:07:47.487498 (XEN) rbp: 00007cff416f80c7 rsp: ffff8300be907f08 r8: ffff83023df8a000
May 15 22:07:47.495498 (XEN) r9: ffff83023df8a000 r10: 00000000deadbeef r11: 0000000000800000
May 15 22:07:47.503510 (XEN) r12: ffff8300bed32000 r13: ffff83023df8a000 r14: 0000000000000000
May 15 22:07:47.503549 (XEN) r15: ffff83023df72000 cr0: 0000000080050033 cr4: 00000000001526e0
May 15 22:07:47.511501 (XEN) cr3: 00000002383d7000 cr2: 00000000b71ff000
May 15 22:07:47.519493 (XEN) ds: 007b es: 007b fs: 00d8 gs: 0033 ss: 0000 cs: e008
May 15 22:07:47.527520 (XEN) Xen code around <ffff82d0802411c7> (cr4_pv32_restore+0x37/0x40):
May 15 22:07:47.535491 (XEN) 3b 05 03 87 0a 00 74 02 <0f> 0b 5a 31 c0 c3 0f 1f 00 f6 42 04 01 0f 84 26
May 15 22:07:47.535531 (XEN) Xen stack trace from rsp=ffff8300be907f08:
May 15 22:07:47.543502 (XEN) 0000000000000000 ffff82d080240f22 ffff83023df72000 0000000000000000
May 15 22:07:47.551559 (XEN) ffff83023df8a000 ffff8300bed32000 00000000cf195e6c 00000000cf195e50
May 15 22:07:47.559494 (XEN) 0000000000800000 00000000deadbeef ffff83023df8a000 0000000000000206
May 15 22:07:47.567496 (XEN) 0000000000000001 0000000000000001 0000000000000000 0000000000007ff0
May 15 22:07:47.575503 (XEN) 000000000022287e 0000010000000000 00000000c1001027 0000000000000061
May 15 22:07:47.575543 (XEN) 0000000000000246 00000000cf195e44 0000000000000069 000000000000beef
May 15 22:07:47.583508 (XEN) 000000000000beef 000000000000beef 000000000000beef 0000000000000000
May 15 22:07:47.591503 (XEN) ffff8300bed30000 0000000000000000 00000000001526e0
May 15 22:07:47.599493 (XEN) Xen call trace:
May 15 22:07:47.599522 (XEN) [<ffff82d0802411c7>] cr4_pv32_restore+0x37/0x40
May 15 22:07:47.607493 (XEN)
May 15 22:07:47.607524 (XEN) Xen BUG at entry.S:221
May 15 22:07:47.607552 (XEN) ----[ Xen-4.7.0-rc x86_64 debug=y Not tainted ]----
May 15 22:07:47.615544 (XEN) CPU: 0
May 15 22:07:47.615573 (XEN) RIP: e008:[<ffff82d0802411c7>] cr4_pv32_restore+0x37/0x40
May 15 22:07:47.623503 (XEN) RFLAGS: 0000000000010087 CONTEXT: hypervisor (d0v3)
May 15 22:07:47.631502 (XEN) rax: 0000000000000000 rbx: 0000000000000200 rcx: 0000000000000000
May 15 22:07:47.631540 (XEN) rdx: ffff8300be907ff8 rsi: 000000000000000a rdi: ffff82d0802fb6d8
May 15 22:07:47.639505 (XEN) rbp: 00007cff416f8327 rsp: ffff8300be907ca8 r8: ffff83023df78000
May 15 22:07:47.647508 (XEN) r9: 0000000000000002 r10: 0000000000000040 r11: 0000000000000002
May 15 22:07:47.655507 (XEN) r12: 0000000000000010 r13: ffff8300be907e58 r14: ffff82d0802780e2
May 15 22:07:47.663495 (XEN) r15: ffff82d0802780de cr0: 0000000080050033 cr4: 00000000001526e0
May 15 22:07:47.671487 (XEN) cr3: 00000002383d7000 cr2: 00000000b71ff000
May 15 22:07:47.671520 (XEN) ds: 007b es: 007b fs: 00d8 gs: 0033 ss: 0000 cs: e008
May 15 22:07:47.679565 (XEN) Xen code around <ffff82d0802411c7> (cr4_pv32_restore+0x37/0x40):
May 15 22:07:47.687510 (XEN) 3b 05 03 87 0a 00 74 02 <0f> 0b 5a 31 c0 c3 0f 1f 00 f6 42 04 01 0f 84 26
May 15 22:07:47.695502 (XEN) Xen stack trace from rsp=ffff8300be907ca8:
May 15 22:07:47.695536 (XEN) ffff8300be907fff ffff82d080241c4f ffff82d0802780de ffff82d0802780e2
May 15 22:07:47.703500 (XEN) ffff8300be907e58 0000000000000010 ffff8300be907d78 0000000000000200
May 15 22:07:47.711506 (XEN) 0000000000000002 0000000000000040 0000000000000002 ffff83023df78000
May 15 22:07:47.719538 (XEN) ffff82d08033c0a8 0000000000000000 ffff8300be907fff 000000000000000a
May 15 22:07:47.727494 (XEN) ffff82d0802fb6d8 000000f100000000 ffff82d080145845 000000000000e008
May 15 22:07:47.735506 (XEN) 0000000000000206 ffff8300be907d68 0000000000000000 0000000000000206
May 15 22:07:47.743496 (XEN) ffff82d0802780e2 0000000000000010 ffff8300be907de8 ffff82d080199bab
May 15 22:07:47.743535 (XEN) 00000000be907de8 0000000000000292 00000087ede1f795 0274000a8703053b
May 15 22:07:47.751568 (XEN) 1f0fc3c0315a0b0f 26840f010442f600 000000000022287e ffff8300be907e58
May 15 22:07:47.759506 (XEN) ffff82d080249838 ffff82d080273835 00000000000000dd ffff82d080278528
May 15 22:07:47.767561 (XEN) ffff8300be907e48 ffff82d08019b180 ffff8300be907e58 ffff82d0802411c9
May 15 22:07:47.775499 (XEN) 800000021a51a025 0b0f000000000000 0000000000000001 ffff8300bed30000
May 15 22:07:47.783506 (XEN) ffff8300bed32000 ffff83023df8a000 0000000000000000 ffff83023df72000
May 15 22:07:47.791562 (XEN) 00007cff416f8187 ffff82d080241d58 ffff83023df72000 0000000000000000
May 15 22:07:47.799544 (XEN) ffff83023df8a000 ffff8300bed32000 00007cff416f80c7 00000000cf195e50
May 15 22:07:47.799583 (XEN) 0000000000800000 00000000deadbeef ffff83023df8a000 ffff83023df8a000
May 15 22:07:47.807512 (XEN) 0000000000000000 0000000000000001 ffff8300be907ff8 0000000000007ff0
May 15 22:07:47.815620 (XEN) 000000000022287e 0000000600000000 ffff82d0802411c7 000000000000e008
May 15 22:07:47.823503 (XEN) 0000000000010287 ffff8300be907f08 0000000000000000 0000000000000246
May 15 22:07:47.831506 (XEN) 0000000000000000 ffff82d080240f22 ffff83023df72000 0000000000000000
May 15 22:07:47.839490 (XEN) Xen call trace:
May 15 22:07:47.839520 (XEN) [<ffff82d0802411c7>] cr4_pv32_restore+0x37/0x40
May 15 22:07:47.847496 (XEN) [<ffff82d080145845>] console_unlock_recursive_irqrestore+0x2c/0x33
May 15 22:07:47.855488 (XEN) [<ffff82d080199bab>] show_execution_state+0x197/0x1c9
May 15 22:07:47.855524 (XEN) [<ffff82d08019b180>] do_invalid_op+0x381/0x4a6
May 15 22:07:47.863502 (XEN) [<ffff82d080241d58>] entry.o#handle_exception_saved+0x66/0xa4
May 15 22:07:47.871503 (XEN) [<ffff82d0802411c7>] cr4_pv32_restore+0x37/0x40
May 15 22:07:47.871539 (XEN)
May 15 22:07:47.879490 (XEN)
May 15 22:07:47.879515 (XEN) ****************************************
May 15 22:07:47.879544 (XEN) Panic on CPU 0:
May 15 22:07:47.879570 (XEN) Xen BUG at entry.S:221
May 15 22:07:47.887492 (XEN) ****************************************
May 15 22:07:47.887525 (XEN)
May 15 22:07:47.887547 (XEN) Reboot in five seconds...
May 15 22:07:47.895449 (XEN) Resetting with ACPI MEMORY or I/O RESET_REG.
* Re: [xen-unstable test] 94442: regressions - FAIL
2016-05-16 9:24 ` Wei Liu
@ 2016-05-16 9:29 ` Andrew Cooper
2016-05-16 9:39 ` Wei Liu
2016-05-17 8:59 ` Jan Beulich
0 siblings, 2 replies; 11+ messages in thread
From: Andrew Cooper @ 2016-05-16 9:29 UTC (permalink / raw)
To: Wei Liu, osstest service owner, Jan Beulich; +Cc: xen-devel

On 16/05/16 10:24, Wei Liu wrote:
> On Mon, May 16, 2016 at 02:57:13AM +0000, osstest service owner wrote:
>> flight 94442 xen-unstable real [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/94442/
> [...]
>> test-amd64-i386-qemuu-rhel6hvm-intel 9 redhat-install fail REGR. vs. 94368
> The changes in this flight shouldn't cause failure like this. See below.
>
> It is more likely to be caused by SMEP/SMAP fix, which are now in
> master. It seems that previous run didn't discover this.

Indeed - definitely from the SMEP/SMAP fix. What kind of hardware is
italia0?

In the meantime, I need to fix stack traces to prevent them assuming the
presence of a frame pointer in debug builds. This isn't true for some
of the hand rolled assembly (or for calls through the EFI firmware).

~Andrew
* Re: [xen-unstable test] 94442: regressions - FAIL
2016-05-16 9:29 ` Andrew Cooper
@ 2016-05-16 9:39 ` Wei Liu
2016-05-16 9:42 ` Andrew Cooper
0 siblings, 1 reply; 11+ messages in thread
From: Wei Liu @ 2016-05-16 9:39 UTC (permalink / raw)
To: Andrew Cooper; +Cc: xen-devel, Wei Liu, osstest service owner, Jan Beulich

On Mon, May 16, 2016 at 10:29:41AM +0100, Andrew Cooper wrote:
> On 16/05/16 10:24, Wei Liu wrote:
> > On Mon, May 16, 2016 at 02:57:13AM +0000, osstest service owner wrote:
> >> flight 94442 xen-unstable real [real]
> >> http://logs.test-lab.xenproject.org/osstest/logs/94442/
> > [...]
> >> test-amd64-i386-qemuu-rhel6hvm-intel 9 redhat-install fail REGR. vs. 94368
> > The changes in this flight shouldn't cause failure like this. See below.
> >
> > It is more likely to be caused by SMEP/SMAP fix, which are now in
> > master. It seems that previous run didn't discover this.
>
> Indeed - definitely from the SMEP/SMAP fix. What kind of hardware is
> italia0?
>

I can only tell it is an Intel box from the serial log.

I'm afraid if you need more information we need to wait until Ian comes
back.

> In the meantime, I need to fix stack traces to prevent them assuming the
> presence of a frame pointer in debug builds. This isn't true for some
> of the hand rolled assembly (or for calls through the EFI firmware).
>

Is this related to this bug?

Shall we revert the series now? I don't want it to block pushing to
master for too long.

Wei.

> ~Andrew
* Re: [xen-unstable test] 94442: regressions - FAIL
2016-05-16 9:39 ` Wei Liu
@ 2016-05-16 9:42 ` Andrew Cooper
0 siblings, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2016-05-16 9:42 UTC (permalink / raw)
To: Wei Liu; +Cc: xen-devel, osstest service owner, Jan Beulich

On 16/05/16 10:39, Wei Liu wrote:
> On Mon, May 16, 2016 at 10:29:41AM +0100, Andrew Cooper wrote:
>> On 16/05/16 10:24, Wei Liu wrote:
>>> On Mon, May 16, 2016 at 02:57:13AM +0000, osstest service owner wrote:
>>>> flight 94442 xen-unstable real [real]
>>>> http://logs.test-lab.xenproject.org/osstest/logs/94442/
>>> [...]
>>>> test-amd64-i386-qemuu-rhel6hvm-intel 9 redhat-install fail REGR. vs. 94368
>>> The changes in this flight shouldn't cause failure like this. See below.
>>>
>>> It is more likely to be caused by SMEP/SMAP fix, which are now in
>>> master. It seems that previous run didn't discover this.
>> Indeed - definitely from the SMEP/SMAP fix. What kind of hardware is
>> italia0?
>>
> I can only tell it is an Intel box from the serial log.
>
> I'm afraid if you need more information we need to wait until Ian comes
> back.
>
>> In the meantime, I need to fix stack traces to prevent them assuming the
>> presence of a frame pointer in debug builds. This isn't true for some
>> of the hand rolled assembly (or for calls through the EFI firmware).
>>
> Is this related to this bug?

Not specifically, but it is the reason the first call trace has a single
entry rather than the two expected.

>
> Shall we revert the series now? I don't want it to block pushing to
> master for too long.

Let me see if I can come up with a fix soonish. If not, we should
consider reverting.

~Andrew
* Re: [xen-unstable test] 94442: regressions - FAIL
2016-05-16 9:29 ` Andrew Cooper
2016-05-16 9:39 ` Wei Liu
@ 2016-05-17 8:59 ` Jan Beulich
2016-05-17 9:01 ` Andrew Cooper
2016-05-17 9:06 ` Jan Beulich
1 sibling, 2 replies; 11+ messages in thread
From: Jan Beulich @ 2016-05-17 8:59 UTC (permalink / raw)
To: Andrew Cooper, Wei Liu, osstest service owner; +Cc: xen-devel

>>> On 16.05.16 at 11:29, <andrew.cooper3@citrix.com> wrote:
> On 16/05/16 10:24, Wei Liu wrote:
>> On Mon, May 16, 2016 at 02:57:13AM +0000, osstest service owner wrote:
>>> flight 94442 xen-unstable real [real]
>>> http://logs.test-lab.xenproject.org/osstest/logs/94442/
>> [...]
>>> test-amd64-i386-qemuu-rhel6hvm-intel 9 redhat-install fail REGR. vs. 94368
>> The changes in this flight shouldn't cause failure like this. See below.
>>
>> It is more likely to be caused by SMEP/SMAP fix, which are now in
>> master. It seems that previous run didn't discover this.
>
> Indeed - definitely from the SMEP/SMAP fix. What kind of hardware is
> italia0?

E3-1220V2 according to the copy of the spreadsheet of systems
I have. Aiui v2 should have neither SMEP nor SMAP, and hence we
shouldn't even get into cr4_pv32_restore(). Suggests an issue with
alternative insn patching ...

Jan

> In the meantime, I need to fix stack traces to prevent them assuming the
> presence of a frame pointer in debug builds. This isn't true for some
> of the hand rolled assembly (or for calls through the EFI firmware).
>
> ~Andrew
* Re: [xen-unstable test] 94442: regressions - FAIL
2016-05-17 8:59 ` Jan Beulich
@ 2016-05-17 9:01 ` Andrew Cooper
2016-05-17 9:08 ` Jan Beulich
1 sibling, 1 reply; 11+ messages in thread
From: Andrew Cooper @ 2016-05-17 9:01 UTC (permalink / raw)
To: Jan Beulich, Wei Liu, osstest service owner; +Cc: xen-devel

On 17/05/16 09:59, Jan Beulich wrote:
>>>> On 16.05.16 at 11:29, <andrew.cooper3@citrix.com> wrote:
>> On 16/05/16 10:24, Wei Liu wrote:
>>> On Mon, May 16, 2016 at 02:57:13AM +0000, osstest service owner wrote:
>>>> flight 94442 xen-unstable real [real]
>>>> http://logs.test-lab.xenproject.org/osstest/logs/94442/
>>> [...]
>>>> test-amd64-i386-qemuu-rhel6hvm-intel 9 redhat-install fail REGR. vs. 94368
>>> The changes in this flight shouldn't cause failure like this. See below.
>>>
>>> It is more likely to be caused by SMEP/SMAP fix, which are now in
>>> master. It seems that previous run didn't discover this.
>> Indeed - definitely from the SMEP/SMAP fix. What kind of hardware is
>> italia0?
> E3-1220V2 according to the copy of the spreadsheet of systems
> I have. Aiui v2 should have neither SMEP nor SMAP, and hence we
> shouldn't even get into cr4_pv32_restore(). Suggests an issue with
> alternative insn patching ...

v2 is IvyBridge, and has SMEP but not SMAP.

~Andrew
* Re: [xen-unstable test] 94442: regressions - FAIL
2016-05-17 9:01 ` Andrew Cooper
@ 2016-05-17 9:08 ` Jan Beulich
0 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2016-05-17 9:08 UTC (permalink / raw)
To: Andrew Cooper; +Cc: xen-devel, Wei Liu, osstest service owner

>>> On 17.05.16 at 11:01, <andrew.cooper3@citrix.com> wrote:
> On 17/05/16 09:59, Jan Beulich wrote:
>>>>> On 16.05.16 at 11:29, <andrew.cooper3@citrix.com> wrote:
>>> On 16/05/16 10:24, Wei Liu wrote:
>>>> On Mon, May 16, 2016 at 02:57:13AM +0000, osstest service owner wrote:
>>>>> flight 94442 xen-unstable real [real]
>>>>> http://logs.test-lab.xenproject.org/osstest/logs/94442/
>>>> [...]
>>>>> test-amd64-i386-qemuu-rhel6hvm-intel 9 redhat-install fail REGR. vs. 94368
>>>> The changes in this flight shouldn't cause failure like this. See below.
>>>>
>>>> It is more likely to be caused by SMEP/SMAP fix, which are now in
>>>> master. It seems that previous run didn't discover this.
>>> Indeed - definitely from the SMEP/SMAP fix. What kind of hardware is
>>> italia0?
>> E3-1220V2 according to the copy of the spreadsheet of systems
>> I have. Aiui v2 should have neither SMEP nor SMAP, and hence we
>> shouldn't even get into cr4_pv32_restore(). Suggests an issue with
>> alternative insn patching ...
>
> v2 is IvyBridge, and has SMEP but not SMAP.

Oh, I wrongly thought only v3 had SMEP.

Jan
* Re: [xen-unstable test] 94442: regressions - FAIL
2016-05-17 8:59 ` Jan Beulich
2016-05-17 9:01 ` Andrew Cooper
@ 2016-05-17 9:06 ` Jan Beulich
1 sibling, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2016-05-17 9:06 UTC (permalink / raw)
To: Andrew Cooper; +Cc: xen-devel, Wei Liu, osstest service owner

>>> On 17.05.16 at 10:59, <JBeulich@suse.com> wrote:
>>>> On 16.05.16 at 11:29, <andrew.cooper3@citrix.com> wrote:
>> On 16/05/16 10:24, Wei Liu wrote:
>>> On Mon, May 16, 2016 at 02:57:13AM +0000, osstest service owner wrote:
>>>> flight 94442 xen-unstable real [real]
>>>> http://logs.test-lab.xenproject.org/osstest/logs/94442/
>>> [...]
>>>> test-amd64-i386-qemuu-rhel6hvm-intel 9 redhat-install fail REGR. vs. 94368
>>> The changes in this flight shouldn't cause failure like this. See below.
>>>
>>> It is more likely to be caused by SMEP/SMAP fix, which are now in
>>> master. It seems that previous run didn't discover this.
>>
>> Indeed - definitely from the SMEP/SMAP fix. What kind of hardware is
>> italia0?
>
> E3-1220V2 according to the copy of the spreadsheet of systems
> I have. Aiui v2 should have neither SMEP nor SMAP, and hence we
> shouldn't even get into cr4_pv32_restore(). Suggests an issue with
> alternative insn patching ...

Otoh the dumped CR4 shows SMEP to be in use.

Jan
* Re: [xen-unstable test] 94442: regressions - FAIL
2016-05-16 9:24 ` Wei Liu
2016-05-16 9:29 ` Andrew Cooper
@ 2016-05-17 10:57 ` Jan Beulich
2016-05-17 13:08 ` Andrew Cooper
1 sibling, 1 reply; 11+ messages in thread
From: Jan Beulich @ 2016-05-17 10:57 UTC (permalink / raw)
To: Andrew Cooper, Wei Liu; +Cc: xen-devel, osstest service owner

>>> On 16.05.16 at 11:24, <wei.liu2@citrix.com> wrote:
> On Mon, May 16, 2016 at 02:57:13AM +0000, osstest service owner wrote:
>> flight 94442 xen-unstable real [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/94442/
> [...]
>>
>> test-amd64-i386-qemuu-rhel6hvm-intel 9 redhat-install fail REGR. vs. 94368
>
> The changes in this flight shouldn't cause failure like this. See below.
>
> It is more likely to be caused by SMEP/SMAP fix, which are now in
> master. It seems that previous run didn't discover this.
>
> Log file at:
>
> http://logs.test-lab.xenproject.org/osstest/logs/94442/test-amd64-i386-qemuu-rhel6hvm-intel/serial-italia0.log
>
> May 15 22:07:44.023500 (XEN) Xen BUG at entry.S:221
> May 15 22:07:47.455549 (XEN) ----[ Xen-4.7.0-rc x86_64 debug=y Not tainted ]----
> May 15 22:07:47.463500 (XEN) CPU: 0
> May 15 22:07:47.463531 (XEN) RIP: e008:[<ffff82d0802411c7>] cr4_pv32_restore+0x37/0x40
> May 15 22:07:47.463567 (XEN) RFLAGS: 0000000000010287 CONTEXT: hypervisor (d0v3)
> May 15 22:07:47.471503 (XEN) rax: 0000000000000000 rbx: 00000000cf195e50 rcx: 0000000000000001
> May 15 22:07:47.479496 (XEN) rdx: ffff8300be907ff8 rsi: 0000000000007ff0 rdi: 000000000022287e
> May 15 22:07:47.487498 (XEN) rbp: 00007cff416f80c7 rsp: ffff8300be907f08 r8: ffff83023df8a000
> May 15 22:07:47.495498 (XEN) r9: ffff83023df8a000 r10: 00000000deadbeef r11: 0000000000800000
> May 15 22:07:47.503510 (XEN) r12: ffff8300bed32000 r13: ffff83023df8a000 r14: 0000000000000000
> May 15 22:07:47.503549 (XEN) r15: ffff83023df72000 cr0: 0000000080050033 cr4: 00000000001526e0
> May 15 22:07:47.511501 (XEN) cr3: 00000002383d7000 cr2: 00000000b71ff000
> May 15 22:07:47.519493 (XEN) ds: 007b es: 007b fs: 00d8 gs: 0033 ss: 0000 cs: e008
> May 15 22:07:47.527520 (XEN) Xen code around <ffff82d0802411c7> (cr4_pv32_restore+0x37/0x40):
> May 15 22:07:47.535491 (XEN) 3b 05 03 87 0a 00 74 02 <0f> 0b 5a 31 c0 c3 0f 1f 00 f6 42 04 01 0f 84 26
> May 15 22:07:47.535531 (XEN) Xen stack trace from rsp=ffff8300be907f08:
> May 15 22:07:47.543502 (XEN) 0000000000000000 ffff82d080240f22 ffff83023df72000 0000000000000000
> May 15 22:07:47.551559 (XEN) ffff83023df8a000 ffff8300bed32000 00000000cf195e6c 00000000cf195e50
> May 15 22:07:47.559494 (XEN) 0000000000800000 00000000deadbeef ffff83023df8a000 0000000000000206
> May 15 22:07:47.567496 (XEN) 0000000000000001 0000000000000001 0000000000000000 0000000000007ff0
> May 15 22:07:47.575503 (XEN) 000000000022287e 0000010000000000 00000000c1001027 0000000000000061
> May 15 22:07:47.575543 (XEN) 0000000000000246 00000000cf195e44 0000000000000069 000000000000beef
> May 15 22:07:47.583508 (XEN) 000000000000beef 000000000000beef 000000000000beef 0000000000000000
> May 15 22:07:47.591503 (XEN) ffff8300bed30000 0000000000000000 00000000001526e0
> May 15 22:07:47.599493 (XEN) Xen call trace:
> May 15 22:07:47.599522 (XEN) [<ffff82d0802411c7>] cr4_pv32_restore+0x37/0x40

I think I see the problem the introduction of caching in v3 introduced:
In compat_restore_all_guest we have (getting patched in by altinsn
patching):

.Lcr4_alt:
        testb $3,UREGS_cs(%rsp)
        jpe   .Lcr4_alt_end
        mov   CPUINFO_cr4-CPUINFO_guest_cpu_user_regs(%rsp), %rax
        and   $~XEN_CR4_PV32_BITS, %rax
        mov   %rax, CPUINFO_cr4-CPUINFO_guest_cpu_user_regs(%rsp)
        mov   %rax, %cr4
.Lcr4_alt_end:

If an NMI occurs between the updating of the cached value and the
actual CR4 write, the NMI handling will cause the cached value to get
SMEP+SMAP enabled again (in both cache and CR4), and once we
get back here, we will clear it in just CR4.

We don't want to undo the caching, as that gave us performance back
at least for 64-bit PV guests.

We also can't simply swap the two instructions: If we did, an NMI
between the two would itself trigger the BUG in cr4_pv32_restore
(as the check there assumes that CR4 always has no less of the bits
of interest set than the cached value).

The options I see are:

1) Ditch the debug check altogether, for being false positive in
exactly one corner case.

2) Make the NMI handler recognize the single critical pair of
instructions.

3) Change the code sequence above to

.Lcr4_alt:
        testb $3,UREGS_cs(%rsp)
        jpe   .Lcr4_alt_end
        mov   CPUINFO_cr4-CPUINFO_guest_cpu_user_regs(%rsp), %rax
        and   $~XEN_CR4_PV32_BITS, %rax
1:      mov   %rax, CPUINFO_cr4-CPUINFO_guest_cpu_user_regs(%rsp)
        mov   %rax, %cr4
        /* (suitable comment goes here) */
        cmp   %rax, CPUINFO_cr4-CPUINFO_guest_cpu_user_regs(%rsp)
        jne   1b
.Lcr4_alt_end:

(assuming that an insane flood of NMIs not allowing this loop to be
exited would be sufficiently problematic in other ways).

I dislike 1, and between 2 and 3 I think I'd prefer the latter, unless
someone else sees something wrong with such an approach.

> May 15 22:07:47.607524 (XEN) Xen BUG at entry.S:221

A fix for this recursive occurrence was already sent.

Jan
* Re: [xen-unstable test] 94442: regressions - FAIL
  2016-05-17 10:57 ` Jan Beulich
@ 2016-05-17 13:08   ` Andrew Cooper
  0 siblings, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2016-05-17 13:08 UTC (permalink / raw)
To: Jan Beulich, Wei Liu; +Cc: xen-devel, osstest service owner

On 17/05/16 11:57, Jan Beulich wrote:
>>>> On 16.05.16 at 11:24, <wei.liu2@citrix.com> wrote:
>> On Mon, May 16, 2016 at 02:57:13AM +0000, osstest service owner wrote:
>>> flight 94442 xen-unstable real [real]
>>> http://logs.test-lab.xenproject.org/osstest/logs/94442/
>> [...]
>>> test-amd64-i386-qemuu-rhel6hvm-intel 9 redhat-install fail REGR. vs. 94368
>> The changes in this flight shouldn't cause failure like this. See below.
>>
>> It is more likely to be caused by SMEP/SMAP fix, which are now in
>> master. It seems that previous run didn't discover this.
>>
>> Log file at:
>>
>> http://logs.test-lab.xenproject.org/osstest/logs/94442/test-amd64-i386-qemuu-rhel6hvm-intel/serial-italia0.log
>>
>> May 15 22:07:44.023500 (XEN) Xen BUG at entry.S:221
>> May 15 22:07:47.455549 (XEN) ----[ Xen-4.7.0-rc x86_64 debug=y Not tainted ]----
>> May 15 22:07:47.463500 (XEN) CPU: 0
>> May 15 22:07:47.463531 (XEN) RIP: e008:[<ffff82d0802411c7>] cr4_pv32_restore+0x37/0x40
>> May 15 22:07:47.463567 (XEN) RFLAGS: 0000000000010287 CONTEXT: hypervisor (d0v3)
>> May 15 22:07:47.471503 (XEN) rax: 0000000000000000 rbx: 00000000cf195e50 rcx: 0000000000000001
>> May 15 22:07:47.479496 (XEN) rdx: ffff8300be907ff8 rsi: 0000000000007ff0 rdi: 000000000022287e
>> May 15 22:07:47.487498 (XEN) rbp: 00007cff416f80c7 rsp: ffff8300be907f08 r8: ffff83023df8a000
>> May 15 22:07:47.495498 (XEN) r9: ffff83023df8a000 r10: 00000000deadbeef r11: 0000000000800000
>> May 15 22:07:47.503510 (XEN) r12: ffff8300bed32000 r13: ffff83023df8a000 r14: 0000000000000000
>> May 15 22:07:47.503549 (XEN) r15: ffff83023df72000 cr0: 0000000080050033 cr4: 00000000001526e0
>> May 15 22:07:47.511501 (XEN) cr3: 00000002383d7000 cr2: 00000000b71ff000
>> May 15 22:07:47.519493 (XEN) ds: 007b es: 007b fs: 00d8 gs: 0033 ss: 0000 cs: e008
>> May 15 22:07:47.527520 (XEN) Xen code around <ffff82d0802411c7> (cr4_pv32_restore+0x37/0x40):
>> May 15 22:07:47.535491 (XEN) 3b 05 03 87 0a 00 74 02 <0f> 0b 5a 31 c0 c3 0f 1f 00 f6 42 04 01 0f 84 26
>> May 15 22:07:47.535531 (XEN) Xen stack trace from rsp=ffff8300be907f08:
>> May 15 22:07:47.543502 (XEN) 0000000000000000 ffff82d080240f22 ffff83023df72000 0000000000000000
>> May 15 22:07:47.551559 (XEN) ffff83023df8a000 ffff8300bed32000 00000000cf195e6c 00000000cf195e50
>> May 15 22:07:47.559494 (XEN) 0000000000800000 00000000deadbeef ffff83023df8a000 0000000000000206
>> May 15 22:07:47.567496 (XEN) 0000000000000001 0000000000000001 0000000000000000 0000000000007ff0
>> May 15 22:07:47.575503 (XEN) 000000000022287e 0000010000000000 00000000c1001027 0000000000000061
>> May 15 22:07:47.575543 (XEN) 0000000000000246 00000000cf195e44 0000000000000069 000000000000beef
>> May 15 22:07:47.583508 (XEN) 000000000000beef 000000000000beef 000000000000beef 0000000000000000
>> May 15 22:07:47.591503 (XEN) ffff8300bed30000 0000000000000000 00000000001526e0
>> May 15 22:07:47.599493 (XEN) Xen call trace:
>> May 15 22:07:47.599522 (XEN) [<ffff82d0802411c7>] cr4_pv32_restore+0x37/0x40
> I think I see the problem the introduction of caching in v3 introduced:
> In compat_restore_all_guest we have (getting patched in by altinsn
> patching):
>
> .Lcr4_alt:
>         testb $3,UREGS_cs(%rsp)
>         jpe   .Lcr4_alt_end
>         mov   CPUINFO_cr4-CPUINFO_guest_cpu_user_regs(%rsp), %rax
>         and   $~XEN_CR4_PV32_BITS, %rax
>         mov   %rax, CPUINFO_cr4-CPUINFO_guest_cpu_user_regs(%rsp)
>         mov   %rax, %cr4
> .Lcr4_alt_end:
>
> If an NMI occurs between the updating of the cached value and the
> actual CR4 write, the NMI handling will cause the cached value to get
> SMEP+SMAP enabled again (in both cache and CR4), and once we
> get back here, we will clear it in just CR4.
>
> We don't want to undo the caching, as that gave us performance back
> at least for 64-bit PV guests.
>
> We also can't simply swap the two instructions: If we did, an NMI
> between the two would itself trigger the BUG in cr4_pv32_restore
> (as the check there assumes that CR4 always has no less of the
> bits of interest set than the cached value).
>
> The options I see are:
>
> 1) Ditch the debug check altogether, for being false positive in
> exactly one corner case.
>
> 2) Make the NMI handler recognize the single critical pair of
> instructions.
>
> 3) Change the code sequence above to
>
> .Lcr4_alt:
>         testb $3,UREGS_cs(%rsp)
>         jpe   .Lcr4_alt_end
>         mov   CPUINFO_cr4-CPUINFO_guest_cpu_user_regs(%rsp), %rax
>         and   $~XEN_CR4_PV32_BITS, %rax
> 1:
>         mov   %rax, CPUINFO_cr4-CPUINFO_guest_cpu_user_regs(%rsp)
>         mov   %rax, %cr4
>         /* (suitable comment goes here) */
>         cmp   %rax, CPUINFO_cr4-CPUINFO_guest_cpu_user_regs(%rsp)
>         jne   1b
> .Lcr4_alt_end:
>
> (assuming that an insane flood of NMIs not allowing this loop to
> be exited would be sufficiently problematic in other ways).
>
> I dislike 1, and between 2 and 3 I think I'd prefer the latter, unless
> someone else sees something wrong with such an approach.

+1 for option 3.

If we have a flood of NMIs, we have larger problems than this loop.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 11+ messages in thread
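[Editor's note: option 3's cmp/jne loop can be sketched in the same illustrative Python model used for the race above. All names and bit values are hypothetical; the authoritative version is the assembly in Jan's message.]

```python
# Illustrative model of option 3: redo both stores until the cache
# is observed unchanged after the CR4 write (hypothetical names).
PV32_BITS = (1 << 20) | (1 << 21)  # stand-in for XEN_CR4_PV32_BITS

class Cpu:
    def __init__(self):
        self.cr4 = PV32_BITS
        self.cache = PV32_BITS

def nmi(cpu):
    # NMI entry re-enables the bits in both cache and CR4.
    cpu.cache |= PV32_BITS
    cpu.cr4 |= PV32_BITS

def restore_with_retry(cpu, nmis=0):
    rax = cpu.cache & ~PV32_BITS
    pending = nmis
    while True:
        cpu.cache = rax        # 1:  mov %rax, cache
        if pending:            # an NMI may land between the stores
            nmi(cpu)
            pending -= 1
        cpu.cr4 = rax          #     mov %rax, %cr4
        if cpu.cache == rax:   #     cmp %rax, cache / jne 1b
            return             # cache untouched since step 1: done

def bug_check_passes(cpu):
    return (cpu.cache & PV32_BITS & ~cpu.cr4) == 0
```

However many NMIs land inside the critical pair (short of an unbounded flood), the loop exits only once cache and CR4 agree, so the debug check can no longer fire spuriously.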
end of thread, other threads:[~2016-05-17 13:09 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --
2016-05-16  2:57 [xen-unstable test] 94442: regressions - FAIL osstest service owner
2016-05-16  9:24 ` Wei Liu
2016-05-16  9:29 ` Andrew Cooper
2016-05-16  9:39 ` Wei Liu
2016-05-16  9:42 ` Andrew Cooper
2016-05-17  8:59 ` Jan Beulich
2016-05-17  9:01 ` Andrew Cooper
2016-05-17  9:08 ` Jan Beulich
2016-05-17  9:06 ` Jan Beulich
2016-05-17 10:57 ` Jan Beulich
2016-05-17 13:08 ` Andrew Cooper