* Xen 4.10.0 RC1 test result
@ 2017-10-27  8:28 Hao, Xudong
  2017-10-27  9:19 ` Jan Beulich
  2017-10-27  9:54 ` Andrew Cooper
  0 siblings, 2 replies; 9+ messages in thread
From: Hao, Xudong @ 2017-10-27 8:28 UTC (permalink / raw)
To: xen-devel@lists.xen.org; +Cc: Lars Kurth, Julien Grall

We performed Xen 4.10 RC1 testing on Intel Xeon Skylake and Broadwell
servers and on Intel Atom Denverton platforms, verifying many functional
features, including the new Xen 4.10 features Local MCE, L2 CAT and UMIP.
We'd like to share the results.

Most features passed testing on Xen 4.10 RC1; VT-d, RAS and nested
virtualization have some bugs.

VT-d:
[BUG] win2008 guest cannot get ip through sriov
https://www.mail-archive.com/xen-devel@lists.xen.org/msg127433.html

RAS:
[BUG] xen-mceinj tool testing cause dom0 crash
https://www.mail-archive.com/xen-devel@lists.xen.org/msg108671.html

Nested:
Nested status is better than on Xen 4.9.0: KVM on Xen and HyperV on Xen
work, while Xen on Xen and VMware on Xen fail.
https://wiki.xenproject.org/wiki/Nested_Virtualization_in_Xen

Feature                          Test Result
Local MCE                        Pass
L2 CAT                           Pass
UMIP                             Pass
AVX512                           Pass
Protection keys                  Pass
Altp2m                           Pass
RDT (CMT, CAT, CDP, MBM)         Pass
VT-d PI                          Pass
XSAVES                           Pass
MPX                              Pass
PML (Page-modification Logging)  Pass
Nested                           Buggy
VT-d/SR-IOV                      Buggy
RAS                              Buggy
ACPI                             Pass

Best Regards,
Xudong

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Xen 4.10.0 RC1 test result
  2017-10-27  8:28 Xen 4.10.0 RC1 test result Hao, Xudong
@ 2017-10-27  9:19 ` Jan Beulich
  2017-10-30  2:21   ` Hao, Xudong
  2017-10-27  9:54 ` Andrew Cooper
  1 sibling, 1 reply; 9+ messages in thread
From: Jan Beulich @ 2017-10-27 9:19 UTC (permalink / raw)
To: Xudong Hao; +Cc: Lars Kurth, Julien Grall, xen-devel@lists.xen.org

>>> On 27.10.17 at 10:28, <xudong.hao@intel.com> wrote:
> RAS:
> [BUG] xen-mceinj tool testing cause dom0 crash
> https://www.mail-archive.com/xen-devel@lists.xen.org/msg108671.html

Please can you provide helpful links? This doesn't point to the beginning
of the thread, and the mail archive chosen doesn't appear to have an easy
way to go back to the head of a thread. And when I go through the parts
of the thread which are easily accessible there, it looks like you've
never followed up on the additional information (log) request. This way
I don't see how we can make progress there. Plus, looking over the Cc
lists there, Linux maintainers also don't appear to have been involved
at any time.

Jan

^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Xen 4.10.0 RC1 test result
  2017-10-27  9:19 ` Jan Beulich
@ 2017-10-30  2:21   ` Hao, Xudong
  2017-11-02 13:59     ` Julien Grall
  2017-11-06  8:24     ` Jan Beulich
  0 siblings, 2 replies; 9+ messages in thread
From: Hao, Xudong @ 2017-10-30 2:21 UTC (permalink / raw)
To: Jan Beulich; +Cc: Lars Kurth, Julien Grall, xen-devel@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Friday, October 27, 2017 5:19 PM
> To: Hao, Xudong <xudong.hao@intel.com>
> Cc: Julien Grall <julien.grall@arm.com>; Lars Kurth <lars.kurth@citrix.com>;
> xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] Xen 4.10.0 RC1 test result
>
> >>> On 27.10.17 at 10:28, <xudong.hao@intel.com> wrote:
> > RAS:
> > [BUG] xen-mceinj tool testing cause dom0 crash
> > https://www.mail-archive.com/xen-devel@lists.xen.org/msg108671.html
>
> Please can you provide helpful links? This doesn't point to the beginning
> of the thread, and the mail archive chosen doesn't appear to have an easy
> way to go back to the head of a thread. And when I go through the parts
> of the thread

Unfortunately I didn't find the original link in the mail archive, but I
picked the mail up in my mail client; the original mail is attached.

> which are easily accessible there, it looks like you've never followed up
> on the additional information (log) request.

I've provided the full log, which includes both Xen's and Dom0's output,
even though there was no useful error message from Dom0.

> This way I don't see how we can make
> progress there.

Yes, this is the last mail in the thread:
https://www.mail-archive.com/xen-devel@lists.xen.org/msg108894.html.

> Plus, looking over the Cc lists there, Linux maintainers also don't
> appear to have been involved at any time.
>

I'm not sure whether it's related to Dom0's kernel. My intention is to
discuss it on the Xen list only until we're sure it's a Dom0 issue.
Thanks,
-Xudong

[-- Attachment #2: Type: message/rfc822, Size: 502924 bytes --]

Bug detailed description:
----------------
Xen has an MCE software-injection tool, xen-mceinj, to test RAS; testing
with this tool causes Dom0 to crash and the system to reboot. The whole
log is attached.

Environment:
----------------
HW: Skylake/Broadwell server
Xen: Xen 4.9.0 RC5
Dom0: Linux 4.11.0

Reproduce steps:
----------------
1. Compile xen-mceinj in the Xen tree: xen/tools/tests/mce-test/tools
2. Run the command: xen/tools/tests/mce-test/tools/xen-mceinj -t 0

Current result:
----------------
Dom0 crashes and the machine reboots.

Base error log:
----------------
(XEN) Hardware Dom0 crashed: rebooting machine in 5 seconds.
(XEN) ----[ Xen-4.9-rc x86_64 debug=y Tainted: MCE ]----
(XEN) CPU: 0
(XEN) RIP: e008:[<0000000065eb1e13>] 0000000065eb1e13
(XEN) RFLAGS: 0000000000010246 CONTEXT: hypervisor
(XEN) rax: 0000000000000000 rbx: ffff83005f827bb0 rcx: 00000000682ab000
(XEN) rdx: 0000000000000000 rsi: 0000000000000381 rdi: ffff83005f827b90
(XEN) rbp: ffff83005f827c88 rsp: ffff83005f827ae0 r8: ffff83005f827bb0
(XEN) r9: ffff83005f827b90 r10: 0000000065eb3258 r11: 0000ffff0000ffff
(XEN) r12: 00000000fffffffe r13: 0000000000000000 r14: 0000000000000065
(XEN) r15: ffff83102bca5000 cr0: 0000000080050033 cr4: 00000000003526e0
(XEN) cr3: 000000102c962000 cr2: 00000000682ab009
(XEN) ds: 002b es: 002b fs: 0000 gs: 0000 ss: 0000 cs: e008
(XEN) Xen code around <0000000065eb1e13> (0000000065eb1e13):
(XEN) ff 00 00 48 8b 4c 24 28 <0f> b6 49 09 3b c1 72 18 4c 8d 05 06 20 00 00 ba
(XEN) Xen stack trace from rsp=ffff83005f827ae0:
(XEN)    ffff82d08026dd12 ffff83005f827b38 ffff82d08026e1df 0000000400000093
(XEN)    0000000000000004 00000000682ab000 000000000000000d 0000000000000002
(XEN)    0000000000000017 0000000065eb0ee8 ffff83005f827bb0 0000000000000046
(XEN)    020000000001a0d1 ffff83005f827b98 0000000000000000 0000000065eaf77c
(XEN)    0000000000000000 ffff83005f827bb8 ffff82d08026fe70 0000000000000010
(XEN)    000000000000001e 0000000065e4de0b ffff83102bca5000 ffff83005f827ba8
(XEN)    ffff82d08025f2f2 ffff83005f827bb8 00000000000b0000 682ab00000000200
(XEN)    ffff82d080270b47 0000000065e4e1cc ffff83005f827c00 0000000000000206
(XEN)    ffff83005f827c60 ffff83005f827c40 ffff83102bca5000 0000000065e4d7c9
(XEN)    0000000000000000 0000000000000381 000000102c962000 0000000000000065
(XEN)    0000000000000000 00000000fffffffe 000000102c962000 ffff82d080356618
(XEN)    0000000000000000 0000000000000000 ffff82d080808780 ffff83005f827c68
(XEN)    000000005f819000 ffff83005f827c88 ffff82d08029796c 0000000000000000
(XEN)    0000000000000000 ffff83005f827cd8 ffff82d080297307 ffff83005f827cf8
(XEN)    000013888024855e 000083005f827d08 0000000000000000 0000000000000000
(XEN)    ffff83005f827db8 00000000000000fb ffff83005f827fff ffff83005f827ce8
(XEN)    ffff82d0802973a5 ffff83005f827d08 ffff82d080232e22 ffff83005f827d08
(XEN)    0000000000000000 ffff83005f827d18 ffff82d080297a08 ffff83005f827da8
(XEN)    ffff82d080276efe ffff83005f827db8 ffff82d080276efe 0000000000000286
(XEN)    ffff83005f827d58 ffff83102bc61cd0 ffff83102bc7ae40 80000000000000d4
(XEN) Xen call trace:
(XEN)    [<ffff82d08026dd12>] sync_local_execstate+0x9/0xb
(XEN)    [<ffff82d080297307>] machine_restart+0x1c6/0x259
(XEN)    [<ffff82d0802973a5>] shutdown.c#__machine_restart+0xb/0x16
(XEN)    [<ffff82d080232e22>] smp_call_function_interrupt+0x8f/0xbd
(XEN)    [<ffff82d080297a08>] call_function_interrupt+0x35/0x3d
(XEN)    [<ffff82d080276efe>] do_IRQ+0x8c/0x61e
(XEN)    [<ffff82d0803537b7>] common_interrupt+0x67/0x70
(XEN)    [<ffff82d0802d1a5f>] mce_panic_check+0/0x21
(XEN)    [<ffff82d0802cd950>] mce.c#mce_softirq+0x140/0x183
(XEN)    [<ffff82d08023265f>] softirq.c#__do_softirq+0x7f/0x8a
(XEN)    [<ffff82d0802326b4>] do_softirq+0x13/0x15
(XEN)    [<ffff82d080268b4a>] domain.c#idle_loop+0x55/0x62
(XEN)
(XEN) Pagetable walk from 00000000682ab009:
(XEN)  L4[0x000] = 000000102c961063 ffffffffffffffff
(XEN)  L3[0x001] = 000000005f812063 ffffffffffffffff
(XEN)  L2[0x141] = 0000000000000000 ffffffffffffffff
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0000]
(XEN) Faulting linear address: 00000000682ab009
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
(XEN) Resetting with ACPI MEMORY or I/O RESET_REG.
Force an S5 exit path. [SIO] Current system SIO exist bit:1

Best Regards,
Xudong

[-- Attachment #2.1.2: xen-mceinj.log --]
[-- Type: application/octet-stream, Size: 353479 bytes --]

^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Xen 4.10.0 RC1 test result
  2017-10-30  2:21 ` Hao, Xudong
@ 2017-11-02 13:59   ` Julien Grall
  2017-11-03  8:42     ` Hao, Xudong
  0 siblings, 1 reply; 9+ messages in thread
From: Julien Grall @ 2017-11-02 13:59 UTC (permalink / raw)
To: Hao, Xudong, Jan Beulich
Cc: Lars Kurth, Julien Grall, xen-devel@lists.xen.org

Hi,

On 30/10/17 02:21, Hao, Xudong wrote:
>
>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Friday, October 27, 2017 5:19 PM
>> To: Hao, Xudong <xudong.hao@intel.com>
>> Cc: Julien Grall <julien.grall@arm.com>; Lars Kurth <lars.kurth@citrix.com>;
>> xen-devel@lists.xen.org
>> Subject: Re: [Xen-devel] Xen 4.10.0 RC1 test result
>>
>>>>> On 27.10.17 at 10:28, <xudong.hao@intel.com> wrote:
>>> RAS:
>>> [BUG] xen-mceinj tool testing cause dom0 crash
>>> https://www.mail-archive.com/xen-devel@lists.xen.org/msg108671.html
>>
>> Please can you provide helpful links? This doesn't point to the
>> beginning of the thread, and the mail archive chosen doesn't appear
>> to have an easy way to go back to the head of a thread. And when I go
>> through the parts of the thread
>
> Unfortunately I didn't find the original link in the mail archive, but I
> picked the mail up in my mail client; the original mail is attached.
>
>> which are easily accessible there, it looks like you've never
>> followed up on the additional information (log) request.
>
> I've provided the full log, which includes both Xen's and Dom0's output,
> even though there was no useful error message from Dom0.
>
>> This way I don't see how we can make
>> progress there.
>
> Yes, this is the last mail in the thread:
> https://www.mail-archive.com/xen-devel@lists.xen.org/msg108894.html.
>
>> Plus, looking over the Cc lists there, Linux maintainers also don't
>> appear to have been involved at any time.
>>
>
> I'm not sure whether it's related to Dom0's kernel. My intention is to
> discuss it on the Xen list only until we're sure it's a Dom0 issue.

At the moment the discussion seems to be stuck on the Xen list... Jan
mentioned that for now he is not convinced it is a Xen bug. How about
you CC the Linux maintainers to get more feedback?

Cheers,

--
Julien Grall

^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Xen 4.10.0 RC1 test result
  2017-11-02 13:59 ` Julien Grall
@ 2017-11-03  8:42   ` Hao, Xudong
  0 siblings, 0 replies; 9+ messages in thread
From: Hao, Xudong @ 2017-11-03 8:42 UTC (permalink / raw)
To: Julien Grall, Jan Beulich
Cc: Lars Kurth, Julien Grall, xen-devel@lists.xen.org

> -----Original Message-----
> From: Julien Grall [mailto:julien.grall@linaro.org]
> Sent: Thursday, November 2, 2017 9:59 PM
> To: Hao, Xudong <xudong.hao@intel.com>; Jan Beulich <JBeulich@suse.com>
> Cc: Lars Kurth <lars.kurth@citrix.com>; Julien Grall <julien.grall@arm.com>;
> xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] Xen 4.10.0 RC1 test result
>
> Hi,
>
> On 30/10/17 02:21, Hao, Xudong wrote:
> >
> >> -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: Friday, October 27, 2017 5:19 PM
> >> To: Hao, Xudong <xudong.hao@intel.com>
> >> Cc: Julien Grall <julien.grall@arm.com>; Lars Kurth
> >> <lars.kurth@citrix.com>; xen-devel@lists.xen.org
> >> Subject: Re: [Xen-devel] Xen 4.10.0 RC1 test result
> >>
> >>>>> On 27.10.17 at 10:28, <xudong.hao@intel.com> wrote:
> >>> RAS:
> >>> [BUG] xen-mceinj tool testing cause dom0 crash
> >>> https://www.mail-archive.com/xen-devel@lists.xen.org/msg108671.html
> >>
> >> Please can you provide helpful links? This doesn't point to the
> >> beginning of the thread, and the mail archive chosen doesn't appear
> >> to have an easy way to go back to the head of a thread. And when I go
> >> through the parts of the thread
> >
> > Unfortunately I didn't find the original link in the mail archive, but
> > I picked the mail up in my mail client; the original mail is attached.
> >
> >> which are easily accessible there, it looks like you've never
> >> followed up on the additional information (log) request.
> >
> > I've provided the full log, which includes both Xen's and Dom0's
> > output, even though there was no useful error message from Dom0.
> >
> >> This way I don't see how we can make
> >> progress there.
> >
> > Yes, this is the last mail in the thread:
> > https://www.mail-archive.com/xen-devel@lists.xen.org/msg108894.html
> >
> >> Plus, looking over the Cc lists there, Linux maintainers also don't
> >> appear to have been involved at any time.
> >>
> >
> > I'm not sure whether it's related to Dom0's kernel. My intention is to
> > discuss it on the Xen list only until we're sure it's a Dom0 issue.
>
> At the moment the discussion seems to be stuck on the Xen list... Jan
> mentioned that for now he is not convinced it is a Xen bug. How about
> you CC the Linux maintainers to get more feedback?
>

Hi Jan/Julien,

We did further analysis of this issue and closed it; I've replied with
the details in the bug mail thread:
https://www.mail-archive.com/xen-devel@lists.xen.org/msg127967.html

Thanks,
-Xudong

^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Xen 4.10.0 RC1 test result
  2017-10-30  2:21 ` Hao, Xudong
  2017-11-02 13:59   ` Julien Grall
@ 2017-11-06  8:24   ` Jan Beulich
  2017-11-06  8:44     ` Hao, Xudong
  1 sibling, 1 reply; 9+ messages in thread
From: Jan Beulich @ 2017-11-06 8:24 UTC (permalink / raw)
To: Xudong Hao; +Cc: Lars Kurth, Julien Grall, xen-devel@lists.xen.org

>>> On 30.10.17 at 03:21, <xudong.hao@intel.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Friday, October 27, 2017 5:19 PM
>> >>> On 27.10.17 at 10:28, <xudong.hao@intel.com> wrote:
>> > RAS:
>> > [BUG] xen-mceinj tool testing cause dom0 crash
>> > https://www.mail-archive.com/xen-devel@lists.xen.org/msg108671.html
>>
>> Please can you provide helpful links? This doesn't point to the beginning
>> of the thread, and the mail archive chosen doesn't appear to have an easy
>> way to go back to the head of a thread. And when I go through the parts
>> of the thread
>
> Unfortunately I didn't find the original link in the mail archive, but I
> picked the mail up in my mail client; the original mail is attached.

Did you also check our own archive, rather than just that foreign one?

>> which are easily accessible there, it looks like you've never followed up
>> on the additional information (log) request.
>
> I've provided the full log, which includes both Xen's and Dom0's output,
> even though there was no useful error message from Dom0.

If Dom0 crashes without any log messages, this is very likely a bug by
itself.

>> This way I don't see how we can make
>> progress there.
>
> Yes, this is the last mail in the thread:
> https://www.mail-archive.com/xen-devel@lists.xen.org/msg108894.html.
>
>> Plus, looking over the Cc lists there, Linux maintainers also don't
>> appear to have been involved at any time.
>>
>
> I'm not sure whether it's related to Dom0's kernel. My intention is to
> discuss it on the Xen list only until we're sure it's a Dom0 issue.

The question isn't where to discuss the issue, but who to involve in the
discussion. For a Dom0 kernel issue the Linux maintainers would need to
be pulled in.

Jan

^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Xen 4.10.0 RC1 test result
  2017-11-06  8:24 ` Jan Beulich
@ 2017-11-06  8:44   ` Hao, Xudong
  0 siblings, 0 replies; 9+ messages in thread
From: Hao, Xudong @ 2017-11-06 8:44 UTC (permalink / raw)
To: Jan Beulich; +Cc: Lars Kurth, Julien Grall, xen-devel@lists.xen.org

Hi Jan,

We have an update on this issue; I sent a mail two or three days ago.
Maybe you haven't read it yet, but I think you'll read it soon.

Thanks,
-Xudong

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, November 6, 2017 4:24 PM
> To: Hao, Xudong <xudong.hao@intel.com>
> Cc: Julien Grall <julien.grall@arm.com>; Lars Kurth <lars.kurth@citrix.com>;
> xen-devel@lists.xen.org
> Subject: RE: [Xen-devel] Xen 4.10.0 RC1 test result
>
> >>> On 30.10.17 at 03:21, <xudong.hao@intel.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: Friday, October 27, 2017 5:19 PM
> >> >>> On 27.10.17 at 10:28, <xudong.hao@intel.com> wrote:
> >> > RAS:
> >> > [BUG] xen-mceinj tool testing cause dom0 crash
> >> > https://www.mail-archive.com/xen-devel@lists.xen.org/msg108671.html
> >>
> >> Please can you provide helpful links? This doesn't point to the
> >> beginning of the thread, and the mail archive chosen doesn't appear
> >> to have an easy way to go back to the head of a thread. And when I go
> >> through the parts of the thread
> >
> > Unfortunately I didn't find the original link in the mail archive, but
> > I picked the mail up in my mail client; the original mail is attached.
>
> Did you also check our own archive, rather than just that foreign one?
>
> >> which are easily accessible there, it looks like you've never
> >> followed up on the additional information (log) request.
> >
> > I've provided the full log, which includes both Xen's and Dom0's
> > output, even though there was no useful error message from Dom0.
>
> If Dom0 crashes without any log messages, this is very likely a bug by
> itself.
>
> >> This way I don't see how we can make
> >> progress there.
> >
> > Yes, this is the last mail in the thread:
> > https://www.mail-archive.com/xen-devel@lists.xen.org/msg108894.html.
> >
> >> Plus, looking over the Cc lists there, Linux maintainers also don't
> >> appear to have been involved at any time.
> >>
> >
> > I'm not sure whether it's related to Dom0's kernel. My intention is to
> > discuss it on the Xen list only until we're sure it's a Dom0 issue.
>
> The question isn't where to discuss the issue, but who to involve in the
> discussion. For a Dom0 kernel issue the Linux maintainers would need to
> be pulled in.
>
> Jan

^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Xen 4.10.0 RC1 test result
  2017-10-27  8:28 Xen 4.10.0 RC1 test result Hao, Xudong
  2017-10-27  9:19 ` Jan Beulich
@ 2017-10-27  9:54 ` Andrew Cooper
  2017-10-30  8:51   ` Hao, Xudong
  1 sibling, 1 reply; 9+ messages in thread
From: Andrew Cooper @ 2017-10-27 9:54 UTC (permalink / raw)
To: Hao, Xudong, xen-devel@lists.xen.org; +Cc: Lars Kurth, Julien Grall

On 27/10/17 09:28, Hao, Xudong wrote:
> We performed Xen 4.10 RC1 testing on Intel Xeon Skylake and Broadwell
> servers and on Intel Atom Denverton platforms, verifying many functional
> features, including the new Xen 4.10 features Local MCE, L2 CAT and
> UMIP. We'd like to share the results.
>
> Most features passed testing on Xen 4.10 RC1; VT-d, RAS and nested
> virtualization have some bugs.
>
> VT-d:
> [BUG] win2008 guest cannot get ip through sriov
> https://www.mail-archive.com/xen-devel@lists.xen.org/msg127433.html
>
> RAS:
> [BUG] xen-mceinj tool testing cause dom0 crash
> https://www.mail-archive.com/xen-devel@lists.xen.org/msg108671.html
>
> Nested:
> Nested status is better than on Xen 4.9.0: KVM on Xen and HyperV on Xen
> work, while Xen on Xen and VMware on Xen fail.
> https://wiki.xenproject.org/wiki/Nested_Virtualization_in_Xen

Do you have any further details on your HyperV scenarios, in particular
the versions of HyperV, the hardware involved, and the guests booted
under HyperV?

XenServer's current nested-virt testing status shows a rather bleaker
picture.

More modern versions of Windows Server fail to initialise the HyperV
role, because Xen doesn't advertise Virtual NMI support to L1. (One
version, Server 2012 R2 I believe, indicates the same, but with a BSOD
instead.) Older versions still do actually boot successfully.

When booting Windows guests under nested HyperV, old versions appear to
be stable with a single one-vcpu guest, but unstable with multiple vcpus
or multiple single-vcpu guests. The instability here is a VMEntry
failure trying to inject an NMI, and occurs because HyperV and Xen
disagree on whether to use Virtual NMI, resulting in HyperV thinking
virtual NMI is disabled while it is actually enabled in hardware.

When booting Windows guests under more modern nested HyperV, the guest
crashes because of a pagefault when trying to access the APIC page. We
haven't tracked down the cause of this, but I expect it is something to
do with emulating instructions while in nested vcpu context.

Thanks,

~Andrew

^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Xen 4.10.0 RC1 test result
  2017-10-27  9:54 ` Andrew Cooper
@ 2017-10-30  8:51   ` Hao, Xudong
  0 siblings, 0 replies; 9+ messages in thread
From: Hao, Xudong @ 2017-10-30 8:51 UTC (permalink / raw)
To: Andrew Cooper, xen-devel@lists.xen.org; +Cc: Lars Kurth, Julien Grall

Hi Andrew,

Our L1 is Windows 8 and the HyperV version is 6.2.9200.16384; the
hardware covered Skylake and Broadwell.

Here I want to correct "HyperV on Xen works": "L1 Windows 8 with HyperV
installed boots up successfully" is more accurate. One piece of progress
in Xen 4.10: booting L1 Windows 8 had failed for years, and per our
monitoring this issue is fixed as of Xen commit d23afa63.

We're doing L2 installation on Windows 8 HyperV and I will update the
result here and on the wiki.

Thanks,
-Xudong

From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
Sent: Friday, October 27, 2017 5:55 PM
To: Hao, Xudong <xudong.hao@intel.com>; xen-devel@lists.xen.org
Cc: Lars Kurth <lars.kurth@citrix.com>; Julien Grall <julien.grall@arm.com>
Subject: Re: [Xen-devel] Xen 4.10.0 RC1 test result

On 27/10/17 09:28, Hao, Xudong wrote:

We performed Xen 4.10 RC1 testing on Intel Xeon Skylake and Broadwell
servers and on Intel Atom Denverton platforms, verifying many functional
features, including the new Xen 4.10 features Local MCE, L2 CAT and UMIP.
We'd like to share the results.

Most features passed testing on Xen 4.10 RC1; VT-d, RAS and nested
virtualization have some bugs.

VT-d:
[BUG] win2008 guest cannot get ip through sriov
https://www.mail-archive.com/xen-devel@lists.xen.org/msg127433.html

RAS:
[BUG] xen-mceinj tool testing cause dom0 crash
https://www.mail-archive.com/xen-devel@lists.xen.org/msg108671.html

Nested:
Nested status is better than on Xen 4.9.0: KVM on Xen and HyperV on Xen
work, while Xen on Xen and VMware on Xen fail.
https://wiki.xenproject.org/wiki/Nested_Virtualization_in_Xen

Do you have any further details on your HyperV scenarios, in particular
the versions of HyperV, the hardware involved, and the guests booted
under HyperV?

XenServer's current nested-virt testing status shows a rather bleaker
picture.

More modern versions of Windows Server fail to initialise the HyperV
role, because Xen doesn't advertise Virtual NMI support to L1. (One
version, Server 2012 R2 I believe, indicates the same, but with a BSOD
instead.) Older versions still do actually boot successfully.

When booting Windows guests under nested HyperV, old versions appear to
be stable with a single one-vcpu guest, but unstable with multiple vcpus
or multiple single-vcpu guests. The instability here is a VMEntry
failure trying to inject an NMI, and occurs because HyperV and Xen
disagree on whether to use Virtual NMI, resulting in HyperV thinking
virtual NMI is disabled while it is actually enabled in hardware.

When booting Windows guests under more modern nested HyperV, the guest
crashes because of a pagefault when trying to access the APIC page. We
haven't tracked down the cause of this, but I expect it is something to
do with emulating instructions while in nested vcpu context.

Thanks,

~Andrew

^ permalink raw reply [flat|nested] 9+ messages in thread
end of thread, other threads:[~2017-11-06  8:44 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz  follow: Atom feed
-- links below jump to the message on this page --
2017-10-27  8:28 Xen 4.10.0 RC1 test result Hao, Xudong
2017-10-27  9:19 ` Jan Beulich
2017-10-30  2:21   ` Hao, Xudong
2017-11-02 13:59     ` Julien Grall
2017-11-03  8:42       ` Hao, Xudong
2017-11-06  8:24     ` Jan Beulich
2017-11-06  8:44       ` Hao, Xudong
2017-10-27  9:54 ` Andrew Cooper
2017-10-30  8:51   ` Hao, Xudong