* VMX status report. Xen:24911 & Dom0: d93dc5c4...
@ 2012-03-13 9:18 Zhou, Chao
2012-03-13 11:33 ` Jan Beulich
` (2 more replies)
0 siblings, 3 replies; 16+ messages in thread
From: Zhou, Chao @ 2012-03-13 9:18 UTC (permalink / raw)
To: xen-devel@lists.xensource.com
Hi all,
This is the test report for the xen-unstable tree. We've switched our Dom0 to upstream Linux 3.1-rc7 instead of Jeremy's 2.6.32.x tree.
We've also upgraded our nightly test system from RHEL 5.5 to RHEL 6.2.
We found four new issues, and one old issue has been fixed.
Version Info
=================================================================
xen-changeset: 24911:d7fe4cd831a0
Dom0: linux.git 3.1-rc7 (commit: d93dc5c4...)
=================================================================
New issues (4)
==============
1. When detaching a VF from an HVM guest, "xl dmesg" shows some warning messages
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1809
2. Dom0 hangs when booting a guest with a VF (the guest was previously booted with a different VF)
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1810
3. RHEL 6.2/6.1 guests run quite slowly
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1811
4. After detaching a VF from a guest, shutting the guest down is very slow
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1812
Fixed issue (1)
==============
1. Dom0 crash on power-off
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1740
---- kernel 3.1.0 no longer has this issue
Old issues (5)
==============
1. [ACPI] System can't resume after suspend
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707
2. [XL] "xl vcpu-set" causes a Dom0 crash or panic
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1730
3. [VT-d] Failure to detach a NIC from a guest
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1736
4. Xen sometimes panics on ia32pae Sandy Bridge when restoring a guest
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1747
5. [VT-d] Device reset fails when creating/destroying a guest
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1752
Thanks
Zhou, Chao
^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: VMX status report. Xen:24911 & Dom0: d93dc5c4...
  2012-03-13  9:18 VMX status report. Xen:24911 & Dom0: d93dc5c4 Zhou, Chao
@ 2012-03-13 11:33 ` Jan Beulich
  2012-03-14  6:08   ` Ren, Yongjie
  2012-03-13 15:38 ` VMX status report. Xen:24911 & Dom0: d93dc5c4... Nested VMX testing? Pasi Kärkkäinen
  2012-03-13 16:55 ` VMX status report. Xen:24911 & Dom0: d93dc5c4 Konrad Rzeszutek Wilk
  2 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2012-03-13 11:33 UTC (permalink / raw)
To: Chao Zhou; +Cc: xen-devel

>>> On 13.03.12 at 10:18, "Zhou, Chao" <chao.zhou@intel.com> wrote:
> 1. when detaching a VF from hvm guest, "xl dmesg" will show some warning
> information
> http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1809

Could you give the qemu-traditional patch below a try (and report the
resulting "xl dmesg" regardless of whether this eliminates the warning)?

Jan

--- a/hw/pass-through.c
+++ b/hw/pass-through.c
@@ -1969,11 +1969,9 @@ static void pt_unregister_regions(struct
         if ( type == PCI_ADDRESS_SPACE_MEM ||
              type == PCI_ADDRESS_SPACE_MEM_PREFETCH )
         {
-            ret = xc_domain_memory_mapping(xc_handle, domid,
-                    assigned_device->bases[i].e_physbase >> XC_PAGE_SHIFT,
-                    assigned_device->bases[i].access.maddr >> XC_PAGE_SHIFT,
-                    (e_size+XC_PAGE_SIZE-1) >> XC_PAGE_SHIFT,
-                    DPCI_REMOVE_MAPPING);
+            ret = _pt_iomem_helper(assigned_device, i,
+                                   assigned_device->bases[i].e_physbase,
+                                   e_size, DPCI_REMOVE_MAPPING);
             if ( ret != 0 )
             {
                 PT_LOG("Error: remove old mem mapping failed!\n");

^ permalink raw reply	[flat|nested] 16+ messages in thread
* Re: VMX status report. Xen:24911 & Dom0: d93dc5c4...
  2012-03-13 11:33 ` Jan Beulich
@ 2012-03-14  6:08   ` Ren, Yongjie
  2012-03-15 10:12     ` Jan Beulich
  2012-03-23  9:33     ` Jan Beulich
  0 siblings, 2 replies; 16+ messages in thread
From: Ren, Yongjie @ 2012-03-14  6:08 UTC (permalink / raw)
To: Jan Beulich, Zhou, Chao; +Cc: xen-devel

[-- Attachment #1: Type: text/plain, Size: 1863 bytes --]

> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org
> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Jan Beulich
> Sent: Tuesday, March 13, 2012 7:34 PM
> To: Zhou, Chao
> Cc: xen-devel
> Subject: Re: [Xen-devel] VMX status report. Xen:24911 & Dom0:
> d93dc5c4...
>
> >>> On 13.03.12 at 10:18, "Zhou, Chao" <chao.zhou@intel.com> wrote:
> > 1. when detaching a VF from hvm guest, "xl dmesg" will show some
> warning
> > information
> > http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1809
>
> Could you give the qemu-traditional patch below a try (and report
> the resulting "xl dmesg" regardless of whether this eliminates the
> warning)?
>
Yes, we tried your below patch. The warning still exists.
Attached is the output of 'xl dmesg' after detaching a VF.
Jay

> Jan
>
> --- a/hw/pass-through.c
> +++ b/hw/pass-through.c
> @@ -1969,11 +1969,9 @@ static void pt_unregister_regions(struct
>         if ( type == PCI_ADDRESS_SPACE_MEM ||
>              type == PCI_ADDRESS_SPACE_MEM_PREFETCH )
>         {
> -            ret = xc_domain_memory_mapping(xc_handle, domid,
> -                    assigned_device->bases[i].e_physbase >> XC_PAGE_SHIFT,
> -                    assigned_device->bases[i].access.maddr >> XC_PAGE_SHIFT,
> -                    (e_size+XC_PAGE_SIZE-1) >> XC_PAGE_SHIFT,
> -                    DPCI_REMOVE_MAPPING);
> +            ret = _pt_iomem_helper(assigned_device, i,
> +                                   assigned_device->bases[i].e_physbase,
> +                                   e_size, DPCI_REMOVE_MAPPING);
>             if ( ret != 0 )
>             {
>                 PT_LOG("Error: remove old mem mapping failed!\n");
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

[-- Attachment #2: xl-dmesg.log --]
[-- Type: application/octet-stream, Size: 33977 bytes --]

[Attached 'xl dmesg' output: Xen boot log covering the E820 map, ACPI/SRAT tables, x2APIC setup, VT-d initialisation, Dom0 construction, PCI device and virtual-function enumeration, and HVM guest boot; the dump is truncated in the archive.]
(XEN) HVM1: Booting from 0000:7c00 (XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=81 (XEN) HVM1: int13_harddisk: function 08, unmapped device for ELDL=81 (XEN) HVM1: *** int 15h function AX=00c0, BX=0000 not yet supported! (XEN) HVM1: *** int 15h function AX=ec00, BX=0002 not yet supported! (XEN) HVM1: KBD: unsupported int 16h function 03 (XEN) HVM1: *** int 15h function AX=e980, BX=0000 not yet supported! (XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=81 (XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=81 (XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=82 (XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=82 (XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=83 (XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=83 (XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=84 (XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=84 (XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=85 (XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=85 (XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=86 (XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=86 (XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=87 (XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=87 (XEN) HVM1: int13_harddisk: function 41, ELDL out of range 88 (XEN) HVM1: int13_harddisk: function 02, ELDL out of range 88 (XEN) HVM1: int13_harddisk: function 41, ELDL out of range 89 (XEN) HVM1: int13_harddisk: function 02, ELDL out of range 89 (XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8a (XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8a (XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8b (XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8b (XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8c (XEN) HVM1: int13_harddisk: function 02, ELDL 
out of range 8c (XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8d (XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8d (XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8e (XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8e (XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8f (XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8f (XEN) irq.c:350: Dom1 callback via changed to Direct Vector 0xe9 (XEN) irq.c:270: Dom1 PCI link 0 changed 5 -> 0 (XEN) irq.c:270: Dom1 PCI link 1 changed 10 -> 0 (XEN) irq.c:270: Dom1 PCI link 2 changed 11 -> 0 (XEN) irq.c:270: Dom1 PCI link 3 changed 5 -> 0 (XEN) memory_map:add: dom1 gfn=40000 mfn=fbe60 nr=4 (XEN) memory_map:add: dom1 gfn=40005 mfn=fbe41 nr=3 (XEN) memory_map:remove: dom1 gfn=40000 mfn=fbe60 nr=4 (XEN) memory_map:remove: dom1 gfn=40004 mfn=fbe40 nr=4 (XEN) p2m.c:719:d0 clear_mmio_p2m_entry: gfn_to_mfn failed! gfn=00040004 [-- Attachment #3: Type: text/plain, Size: 126 bytes --] _______________________________________________ Xen-devel mailing list Xen-devel@lists.xen.org http://lists.xen.org/xen-devel ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: VMX status report. Xen:24911 & Dom0: d93dc5c4...
  2012-03-14  6:08       ` Ren, Yongjie
@ 2012-03-15 10:12       ` Jan Beulich
  2012-03-21 11:07         ` Ping: " Jan Beulich
  2012-03-23  9:33       ` Jan Beulich
  1 sibling, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2012-03-15 10:12 UTC (permalink / raw)
To: Ian Jackson, Stefano Stabellini; +Cc: Yongjie Ren, Chao Zhou, xen-devel

>>> On 14.03.12 at 07:08, "Ren, Yongjie" <yongjie.ren@intel.com> wrote:
>> -----Original Message-----
>> From: xen-devel-bounces@lists.xen.org
>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Jan Beulich
>> Sent: Tuesday, March 13, 2012 7:34 PM
>> To: Zhou, Chao
>> Cc: xen-devel
>> Subject: Re: [Xen-devel] VMX status report. Xen:24911 & Dom0:
>> d93dc5c4...
>>
>> >>> On 13.03.12 at 10:18, "Zhou, Chao" <chao.zhou@intel.com> wrote:
>> > 1. when detaching a VF from hvm guest, "xl dmesg" will show some
>> > warning information
>> > http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1809
>>
>> Could you give the qemu-traditional patch below a try (and report
>> the resulting "xl dmesg" regardless of whether this eliminates the
>> warning)?
>>
> Yes, we tried your below patch. The warning still exists.
> Attached is the output of 'xl dmesg' after detaching a VF.

Okay, so this apparently is an ordering problem:

unregister_real_device()
-> pt_config_delete()
   -> pt_msix_delete() (frees [and fails to clear] ->msix)
-> pt_unregister_regions()
   -> _pt_iomem_helper() (with the patch below)
      -> has_msix_mapping() (uses ->msix)

As it is obviously necessary to call _pt_iomem_helper() (rather than
xc_domain_memory_mapping() directly) from pt_unregister_regions(),
it needs to be determined whether
- the calls to pt_config_delete() and pt_unregister_regions() can be
  swapped, or
- the calling of pt_msix_delete() (and for consistency also the freeing
  of ->msi) can be moved into or past the call to
  pt_unregister_regions(), or
- yet something else can be done about this.

Jan

>> --- a/hw/pass-through.c
>> +++ b/hw/pass-through.c
>> @@ -1969,11 +1969,9 @@ static void pt_unregister_regions(struct
>>          if ( type == PCI_ADDRESS_SPACE_MEM ||
>>               type == PCI_ADDRESS_SPACE_MEM_PREFETCH )
>>          {
>> -            ret = xc_domain_memory_mapping(xc_handle, domid,
>> -                assigned_device->bases[i].e_physbase >> XC_PAGE_SHIFT,
>> -                assigned_device->bases[i].access.maddr >> XC_PAGE_SHIFT,
>> -                (e_size+XC_PAGE_SIZE-1) >> XC_PAGE_SHIFT,
>> -                DPCI_REMOVE_MAPPING);
>> +            ret = _pt_iomem_helper(assigned_device, i,
>> +                                   assigned_device->bases[i].e_physbase,
>> +                                   e_size, DPCI_REMOVE_MAPPING);
>>              if ( ret != 0 )
>>              {
>>                  PT_LOG("Error: remove old mem mapping failed!\n");
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
* Ping: Re: VMX status report. Xen:24911 & Dom0: d93dc5c4...
  2012-03-15 10:12         ` Jan Beulich
@ 2012-03-21 11:07           ` Jan Beulich
  2012-03-22 10:59             ` Stefano Stabellini
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2012-03-21 11:07 UTC (permalink / raw)
To: Ian Jackson, Stefano Stabellini; +Cc: Yongjie Ren, Chao Zhou, xen-devel

>>> On 15.03.12 at 11:12, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>> On 14.03.12 at 07:08, "Ren, Yongjie" <yongjie.ren@intel.com> wrote:
>>> -----Original Message-----
>>> From: xen-devel-bounces@lists.xen.org
>>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Jan Beulich
>>> Sent: Tuesday, March 13, 2012 7:34 PM
>>> To: Zhou, Chao
>>> Cc: xen-devel
>>> Subject: Re: [Xen-devel] VMX status report. Xen:24911 & Dom0:
>>> d93dc5c4...
>>>
>>> >>> On 13.03.12 at 10:18, "Zhou, Chao" <chao.zhou@intel.com> wrote:
>>> > 1. when detaching a VF from hvm guest, "xl dmesg" will show some
>>> > warning information
>>> > http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1809
>>>
>>> Could you give the qemu-traditional patch below a try (and report
>>> the resulting "xl dmesg" regardless of whether this eliminates the
>>> warning)?
>>>
>> Yes, we tried your below patch. The warning still exists.
>> Attached is the output of 'xl dmesg' after detaching a VF.
>
> Okay, so this apparently is an ordering problem:
>
> unregister_real_device()
> -> pt_config_delete()
>    -> pt_msix_delete() (frees [and fails to clear] ->msix)
> -> pt_unregister_regions()
>    -> _pt_iomem_helper() (with the patch below)
>       -> has_msix_mapping() (uses ->msix)
>
> As it is obviously necessary to call _pt_iomem_helper() (rather than
> xc_domain_memory_mapping() directly) from pt_unregister_regions(),
> it needs to be determined whether
> - the calls to pt_config_delete() and pt_unregister_regions() can be
>   swapped, or
> - the calling of pt_msix_delete() (and for consistency also the freeing
>   of ->msi) can be moved into or past the call to
>   pt_unregister_regions(), or
> - yet something else can be done about this.

I'd really appreciate some advice here.

Jan

>>> --- a/hw/pass-through.c
>>> +++ b/hw/pass-through.c
>>> @@ -1969,11 +1969,9 @@ static void pt_unregister_regions(struct
>>>          if ( type == PCI_ADDRESS_SPACE_MEM ||
>>>               type == PCI_ADDRESS_SPACE_MEM_PREFETCH )
>>>          {
>>> -            ret = xc_domain_memory_mapping(xc_handle, domid,
>>> -                assigned_device->bases[i].e_physbase >> XC_PAGE_SHIFT,
>>> -                assigned_device->bases[i].access.maddr >> XC_PAGE_SHIFT,
>>> -                (e_size+XC_PAGE_SIZE-1) >> XC_PAGE_SHIFT,
>>> -                DPCI_REMOVE_MAPPING);
>>> +            ret = _pt_iomem_helper(assigned_device, i,
>>> +                                   assigned_device->bases[i].e_physbase,
>>> +                                   e_size, DPCI_REMOVE_MAPPING);
>>>              if ( ret != 0 )
>>>              {
>>>                  PT_LOG("Error: remove old mem mapping failed!\n");
>>>
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
* Re: Ping: Re: VMX status report. Xen:24911 & Dom0: d93dc5c4...
  2012-03-21 11:07             ` Ping: " Jan Beulich
@ 2012-03-22 10:59               ` Stefano Stabellini
  0 siblings, 0 replies; 16+ messages in thread
From: Stefano Stabellini @ 2012-03-22 10:59 UTC (permalink / raw)
To: Jan Beulich
Cc: Yongjie Ren, xen-devel, Ian Jackson, Chao Zhou, Stefano Stabellini

On Wed, 21 Mar 2012, Jan Beulich wrote:
> >>> On 15.03.12 at 11:12, "Jan Beulich" <JBeulich@suse.com> wrote:
> >>>> On 14.03.12 at 07:08, "Ren, Yongjie" <yongjie.ren@intel.com> wrote:
> >>> -----Original Message-----
> >>> From: xen-devel-bounces@lists.xen.org
> >>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Jan Beulich
> >>> Sent: Tuesday, March 13, 2012 7:34 PM
> >>> To: Zhou, Chao
> >>> Cc: xen-devel
> >>> Subject: Re: [Xen-devel] VMX status report. Xen:24911 & Dom0:
> >>> d93dc5c4...
> >>>
> >>> >>> On 13.03.12 at 10:18, "Zhou, Chao" <chao.zhou@intel.com> wrote:
> >>> > 1. when detaching a VF from hvm guest, "xl dmesg" will show some
> >>> > warning information
> >>> > http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1809
> >>>
> >>> Could you give the qemu-traditional patch below a try (and report
> >>> the resulting "xl dmesg" regardless of whether this eliminates the
> >>> warning)?
> >>>
> >> Yes, we tried your below patch. The warning still exists.
> >> Attached is the output of 'xl dmesg' after detaching a VF.
> >
> > Okay, so this apparently is an ordering problem:
> >
> > unregister_real_device()
> > -> pt_config_delete()
> >    -> pt_msix_delete() (frees [and fails to clear] ->msix)
> > -> pt_unregister_regions()
> >    -> _pt_iomem_helper() (with the patch below)
> >       -> has_msix_mapping() (uses ->msix)
> >
> > As it is obviously necessary to call _pt_iomem_helper() (rather than
> > xc_domain_memory_mapping() directly) from pt_unregister_regions(),
> > it needs to be determined whether
> > - the calls to pt_config_delete() and pt_unregister_regions() can be
> >   swapped, or
> > - the calling of pt_msix_delete() (and for consistency also the freeing
> >   of ->msi) can be moved into or past the call to
> >   pt_unregister_regions(), or
> > - yet something else can be done about this.
>
> I'd really appreciate some advice here.

It seems to me that pt_unregister_regions and pt_config_delete could be
swapped without unwanted side effects.
* Re: VMX status report. Xen:24911 & Dom0: d93dc5c4...
  2012-03-14  6:08       ` Ren, Yongjie
  2012-03-15 10:12         ` Jan Beulich
@ 2012-03-23  9:33       ` Jan Beulich
  2012-03-26  7:06         ` Ren, Yongjie
  1 sibling, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2012-03-23 9:33 UTC (permalink / raw)
To: Chao Zhou, Yongjie Ren; +Cc: xen-devel

[-- Attachment #1: Type: text/plain, Size: 984 bytes --]

>>> On 14.03.12 at 07:08, "Ren, Yongjie" <yongjie.ren@intel.com> wrote:
>> -----Original Message-----
>> From: xen-devel-bounces@lists.xen.org
>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Jan Beulich
>> Sent: Tuesday, March 13, 2012 7:34 PM
>> To: Zhou, Chao
>> Cc: xen-devel
>> Subject: Re: [Xen-devel] VMX status report. Xen:24911 & Dom0:
>> d93dc5c4...
>>
>> >>> On 13.03.12 at 10:18, "Zhou, Chao" <chao.zhou@intel.com> wrote:
>> > 1. when detaching a VF from hvm guest, "xl dmesg" will show some
>> > warning information
>> > http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1809
>>
>> Could you give the qemu-traditional patch below a try (and report
>> the resulting "xl dmesg" regardless of whether this eliminates the
>> warning)?
>>
> Yes, we tried your below patch. The warning still exists.
> Attached is the output of 'xl dmesg' after detaching a VF.

Okay, attached a second try (incorporating Stefano's feedback).

Jan

[-- Attachment #2: qemu-fix-bug1809.patch --]
[-- Type: text/plain, Size: 2161 bytes --]

pt_unregister_regions() also needs to use the newly introduced
_pt_iomem_helper() instead of calling xc_domain_memory_mapping()
directly, to take into consideration the hole created for the MSI-X
table.

For this to work, two calls in unregister_real_device() need to be
swapped, since otherwise we'd have

unregister_real_device()
-> pt_config_delete()
   -> pt_msix_delete() (frees [and fails to clear] ->msix)
-> pt_unregister_regions()
   -> _pt_iomem_helper() (with the patch below)
      -> has_msix_mapping() (uses ->msix)

And to be certain to catch further/future use-after-free instances,
let's also clear dev->msix in pt_msix_delete().

--- a/hw/pass-through.c
+++ b/hw/pass-through.c
@@ -1969,11 +1969,9 @@ static void pt_unregister_regions(struct
         if ( type == PCI_ADDRESS_SPACE_MEM ||
              type == PCI_ADDRESS_SPACE_MEM_PREFETCH )
         {
-            ret = xc_domain_memory_mapping(xc_handle, domid,
-                assigned_device->bases[i].e_physbase >> XC_PAGE_SHIFT,
-                assigned_device->bases[i].access.maddr >> XC_PAGE_SHIFT,
-                (e_size+XC_PAGE_SIZE-1) >> XC_PAGE_SHIFT,
-                DPCI_REMOVE_MAPPING);
+            ret = _pt_iomem_helper(assigned_device, i,
+                                   assigned_device->bases[i].e_physbase,
+                                   e_size, DPCI_REMOVE_MAPPING);
             if ( ret != 0 )
             {
                 PT_LOG("Error: remove old mem mapping failed!\n");
@@ -4393,12 +4391,12 @@ static int unregister_real_device(int de
         }
     }
 
-    /* delete all emulated config registers */
-    pt_config_delete(assigned_device);
-
     /* unregister real device's MMIO/PIO BARs */
     pt_unregister_regions(assigned_device);
 
+    /* delete all emulated config registers */
+    pt_config_delete(assigned_device);
+
     pt_iomul_free(assigned_device);
 
     /* mark this devfn as free */
--- a/hw/pt-msi.c
+++ b/hw/pt-msi.c
@@ -627,4 +627,5 @@ void pt_msix_delete(struct pt_dev *dev)
 
     free(dev->msix);
+    dev->msix = NULL;
 }
* Re: VMX status report. Xen:24911 & Dom0: d93dc5c4...
  2012-03-23  9:33         ` Jan Beulich
@ 2012-03-26  7:06           ` Ren, Yongjie
  0 siblings, 0 replies; 16+ messages in thread
From: Ren, Yongjie @ 2012-03-26 7:06 UTC (permalink / raw)
To: Jan Beulich, Zhou, Chao; +Cc: xen-devel

[-- Attachment #1: Type: text/plain, Size: 1390 bytes --]

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Friday, March 23, 2012 5:34 PM
> To: Zhou, Chao; Ren, Yongjie
> Cc: xen-devel
> Subject: RE: [Xen-devel] VMX status report. Xen:24911 & Dom0:
> d93dc5c4...
>
> >>> On 14.03.12 at 07:08, "Ren, Yongjie" <yongjie.ren@intel.com> wrote:
> >> -----Original Message-----
> >> From: xen-devel-bounces@lists.xen.org
> >> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Jan Beulich
> >> Sent: Tuesday, March 13, 2012 7:34 PM
> >> To: Zhou, Chao
> >> Cc: xen-devel
> >> Subject: Re: [Xen-devel] VMX status report. Xen:24911 & Dom0:
> >> d93dc5c4...
> >>
> >> >>> On 13.03.12 at 10:18, "Zhou, Chao" <chao.zhou@intel.com> wrote:
> >> > 1. when detaching a VF from hvm guest, "xl dmesg" will show some
> >> > warning information
> >> > http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1809
> >>
> >> Could you give the qemu-traditional patch below a try (and report
> >> the resulting "xl dmesg" regardless of whether this eliminates the
> >> warning)?
> >>
> > Yes, we tried your below patch. The warning still exists.
> > Attached is the output of 'xl dmesg' after detaching a VF.
>
> Okay, attached a second try (incorporating Stefano's feedback).
>
Yes. This version of the patch fixes the warning (BZ#1809) I reported.
Attached is the output of 'xl dmesg'.

-Jay

[-- Attachment #2: xl_dmesg.log --]
[-- Type: application/octet-stream, Size: 23551 bytes --]
* Re: VMX status report. Xen:24911 & Dom0: d93dc5c4... Nested VMX testing?
  2012-03-13  9:18 VMX status report. Xen:24911 & Dom0: d93dc5c4 Zhou, Chao
  2012-03-13 11:33 ` Jan Beulich
@ 2012-03-13 15:38 ` Pasi Kärkkäinen
  2012-03-14  8:00   ` Ren, Yongjie
  2012-03-13 16:55 ` VMX status report. Xen:24911 & Dom0: d93dc5c4 Konrad Rzeszutek Wilk
  2 siblings, 1 reply; 16+ messages in thread
From: Pasi Kärkkäinen @ 2012-03-13 15:38 UTC (permalink / raw)
To: Zhou, Chao; +Cc: xen-devel@lists.xensource.com

On Tue, Mar 13, 2012 at 09:18:27AM +0000, Zhou, Chao wrote:
> Hi all,
>

Hello,

> This is the test report of xen-unstable tree. We've switched our Dom0 to upstream Linux 3.1-rc7 instead of Jeremy's 2.6.32.x tree.
> We've also upgraded our nightly test system from RHEL5.5 to RHEL6.2.
> We found four new issues and one old issue got fixed.
>

Is Intel planning to start testing Nested VMX ?

It seems AMD has done a lot of testing with Nested SVM with Xen..

Thanks,

-- Pasi

> Version Info
> =================================================================
> xen-changeset: 24911:d7fe4cd831a0
> Dom0: linux.git 3.1-rc7 (commit: d93dc5c4...)
> =================================================================
>
>
> New issues(4)
> ==============
> 1. when detaching a VF from hvm guest, "xl dmesg" will show some warning information
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1809
> 2. Dom0 hang when bootup a guest with a VF(the guest has been bootup with a different VF before)
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1810
> 3. RHEL6.2/6.1 guest runs quite slow
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1811
> 4. after detaching a VF from a guest, shutdown the guest is very slow
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1812
>
> Fixed issue(1)
> ==============
> 1. Dom0 crash on power-off
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1740
>    ----kernel3.1.0 doesn't have this issue now
>
> Old issues(5)
> ==============
> 1. [ACPI] System cann't resume after do suspend
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707
> 2. [XL]"xl vcpu-set" causes dom0 crash or panic
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1730
> 3. [VT-D]fail to detach NIC from guest
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1736
> 4. Sometimes Xen panic on ia32pae Sandybridge when restore guest
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1747
> 5. [VT-D] device reset fail when create/destroy guest
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1752
>
>
> Thanks
> Zhou, Chao
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
* Re: VMX status report. Xen:24911 & Dom0: d93dc5c4... Nested VMX testing?
  2012-03-13 15:38 ` VMX status report. Xen:24911 & Dom0: d93dc5c4... Nested VMX testing? Pasi Kärkkäinen
@ 2012-03-14  8:00   ` Ren, Yongjie
  2012-03-14 11:21     ` Pasi Kärkkäinen
  2012-06-19 20:44     ` Pasi Kärkkäinen
  0 siblings, 2 replies; 16+ messages in thread
From: Ren, Yongjie @ 2012-03-14 8:00 UTC (permalink / raw)
To: Pasi Kärkkäinen, Zhou, Chao; +Cc: xen-devel@lists.xensource.com

> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org
> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Pasi Kärkkäinen
> Sent: Tuesday, March 13, 2012 11:39 PM
> To: Zhou, Chao
> Cc: xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] VMX status report. Xen:24911 & Dom0:
> d93dc5c4... Nested VMX testing?
>
> On Tue, Mar 13, 2012 at 09:18:27AM +0000, Zhou, Chao wrote:
> > Hi all,
> >
>
> Hello,
>
> > This is the test report of xen-unstable tree. We've switched our Dom0 to
> > upstream Linux 3.1-rc7 instead of Jeremy's 2.6.32.x tree.
> > We've also upgraded our nightly test system from RHEL5.5 to RHEL6.2.
> > We found four new issues and one old issue got fixed.
> >
>
> Is Intel planning to start testing Nested VMX ?

Yes, we've made several automated test cases for Nested VMX.
The bad news is there's some bug on Nested VMX.
From my recent test, the following is the status for Nested VMX.
Xen on Xen: failed. L1 Xen guest can't boot up. It hangs at the boot for xen hypervisor.
KVM on Xen: pass. L2 RHEL5.5 guest can boot up on L1 KVM guest.
(We use the same version of dom0 and xen-unstable as mentioned in the report.)
Intel will make more effort on Nested VMX bug fixing this year.

> It seems AMD has done a lot of testing with Nested SVM with Xen..
>
> Thanks,
>
> -- Pasi
>
> > Version Info
> > =================================================================
> > xen-changeset: 24911:d7fe4cd831a0
> > Dom0: linux.git 3.1-rc7 (commit: d93dc5c4...)
> > =================================================================
> >
> >
> > New issues(4)
> > ==============
> > 1. when detaching a VF from hvm guest, "xl dmesg" will show some warning information
> >    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1809
> > 2. Dom0 hang when bootup a guest with a VF(the guest has been bootup with a different VF before)
> >    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1810
> > 3. RHEL6.2/6.1 guest runs quite slow
> >    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1811
> > 4. after detaching a VF from a guest, shutdown the guest is very slow
> >    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1812
> >
> > Fixed issue(1)
> > ==============
> > 1. Dom0 crash on power-off
> >    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1740
> >    ----kernel3.1.0 doesn't have this issue now
> >
> > Old issues(5)
> > ==============
> > 1. [ACPI] System cann't resume after do suspend
> >    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707
> > 2. [XL]"xl vcpu-set" causes dom0 crash or panic
> >    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1730
> > 3. [VT-D]fail to detach NIC from guest
> >    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1736
> > 4. Sometimes Xen panic on ia32pae Sandybridge when restore guest
> >    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1747
> > 5. [VT-D] device reset fail when create/destroy guest
> >    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1752
> >
> >
> > Thanks
> > Zhou, Chao
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
* Re: VMX status report. Xen:24911 & Dom0: d93dc5c4... Nested VMX testing?
  2012-03-14  8:00 ` Ren, Yongjie
@ 2012-03-14 11:21   ` Pasi Kärkkäinen
  2012-06-19 20:44   ` Pasi Kärkkäinen
  1 sibling, 0 replies; 16+ messages in thread
From: Pasi Kärkkäinen @ 2012-03-14 11:21 UTC (permalink / raw)
To: Ren, Yongjie; +Cc: xen-devel@lists.xensource.com, Zhou, Chao

On Wed, Mar 14, 2012 at 08:00:09AM +0000, Ren, Yongjie wrote:
> > Is Intel planning to start testing Nested VMX ?
>
> Yes, we've made several automated test cases for Nested VMX.
>

Great!

> The bad news is there's some bug on Nested VMX.
> From my recent test, the following is the status for Nested VMX.
> Xen on Xen: failed. L1 Xen guest can't boot up. It hangs at the boot for xen hypervisor.
> KVM on Xen: pass. L2 RHEL5.5 guest can boot up on L1 KVM guest.
> (We use the same version of dom0 and xen-unstable as mentioned in the report.)
> Intel will make more effort on Nested VMX bug fixing this year.
>

Ok, thanks for the results.
I'm planning to test Nested VMX myself aswell in the near future..

-- Pasi

> > It seems AMD has done a lot of testing with Nested SVM with Xen..
* Re: VMX status report. Xen:24911 & Dom0: d93dc5c4... Nested VMX testing?
  2012-03-14  8:00 ` Ren, Yongjie
  2012-03-14 11:21   ` Pasi Kärkkäinen
@ 2012-06-19 20:44   ` Pasi Kärkkäinen
  2012-06-20  5:46     ` Ren, Yongjie
  1 sibling, 1 reply; 16+ messages in thread
From: Pasi Kärkkäinen @ 2012-06-19 20:44 UTC (permalink / raw)
To: Ren, Yongjie; +Cc: xen-devel@lists.xensource.com, Zhou, Chao

On Wed, Mar 14, 2012 at 08:00:09AM +0000, Ren, Yongjie wrote:
> > Is Intel planning to start testing Nested VMX ?
>
> Yes, we've made several automated test cases for Nested VMX.
> The bad news is there's some bug on Nested VMX.
> From my recent test, the following is the status for Nested VMX.
> Xen on Xen: failed. L1 Xen guest can't boot up. It hangs at the boot for xen hypervisor.
> KVM on Xen: pass. L2 RHEL5.5 guest can boot up on L1 KVM guest.
> (We use the same version of dom0 and xen-unstable as mentioned in the report.)
> Intel will make more effort on Nested VMX bug fixing this year.
>

Hello again,

I'm wondering.. Does Intel have plans to do more Nested VMX testing (and bugfixes) before the Xen 4.2 release?
There's an action point on the Xen 4.2 status email about describing the status of Nested VMX support.

> > It seems AMD has done a lot of testing with Nested SVM with Xen..

Thanks,

-- Pasi
* Re: VMX status report. Xen:24911 & Dom0: d93dc5c4... Nested VMX testing?

From: Ren, Yongjie @ 2012-06-20 5:46 UTC (permalink / raw)
To: Pasi Kärkkäinen; +Cc: xen-devel@lists.xensource.com, Zhou, Chao

> -----Original Message-----
> From: Pasi Kärkkäinen [mailto:pasik@iki.fi]
> Sent: Wednesday, June 20, 2012 4:44 AM
> To: Ren, Yongjie
> Cc: Zhou, Chao; xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] VMX status report. Xen:24911 & Dom0: d93dc5c4... Nested VMX testing?
>
> On Wed, Mar 14, 2012 at 08:00:09AM +0000, Ren, Yongjie wrote:
> > > Hello,
> > >
> > > > This is the test report of xen-unstable tree. We've switched our Dom0 to
> > > > upstream Linux 3.1-rc7 instead of Jeremy's 2.6.32.x tree.
> > > > We've also upgraded our nightly test system from RHEL5.5 to RHEL6.2.
> > > > We found four new issues and one old issue got fixed.
> > >
> > > Is Intel planning to start testing Nested VMX?
> >
> > Yes, we've made several automated test cases for Nested VMX.
> > The bad news is that there are still bugs in Nested VMX.
> > From my recent testing, the status of Nested VMX is as follows:
> >   Xen on Xen: failed. The L1 Xen guest can't boot up; it hangs while booting the Xen hypervisor.
> >   KVM on Xen: pass. An L2 RHEL5.5 guest can boot up on an L1 KVM guest.
> > (We use the same versions of dom0 and xen-unstable as mentioned in the report.)
> > Intel will put more effort into Nested VMX bug fixing this year.
>
> Hello again,
>
> I'm wondering: does Intel have plans to do more Nested VMX testing (and
> bug fixes) before the Xen 4.2 release? There's an action point in the
> Xen 4.2 status email about describing the status of Nested VMX support.

Hmm, we don't have a specific plan or bug fixes for Nested VMX before the
4.2 release. But we have an engineer who is looking at this issue now, and
we also test nested VMX biweekly. The status is the same as I described
before: Xen on Xen: failed, the L1 guest can't boot up. KVM on Xen: good.

> > > It seems AMD has done a lot of testing of Nested SVM with Xen..
>
> Thanks,
>
> -- Pasi
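[Editor's note] As background for the "Xen on Xen" / "KVM on Xen" cases discussed above: in a nested-virtualization test setup, the L1 guest must have hardware virtualization (VMX) exposed to it so that it can run its own hypervisor with an L2 guest inside. A minimal xl domain-config sketch for such an L1 guest is below. The `nestedhvm` and `hap` options are the relevant xl knobs; the guest name, memory/vCPU sizes, and disk path are purely illustrative and are not Intel's actual test configuration.

```
# Hypothetical xl config for an L1 guest used in nested VMX testing.
# 'nestedhvm=1' asks Xen to expose VMX to the L1 guest so it can run
# its own hypervisor (Xen or KVM) hosting an L2 guest.
builder   = "hvm"
name      = "l1-nested-guest"        # illustrative name
memory    = 4096
vcpus     = 4
hap       = 1          # nested HVM requires hardware-assisted paging
nestedhvm = 1          # expose VMX to the L1 guest
disk      = [ "file:/path/to/l1-guest.img,hda,w" ]   # illustrative path
```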
* Re: VMX status report. Xen:24911 & Dom0: d93dc5c4...

From: Konrad Rzeszutek Wilk @ 2012-03-13 16:55 UTC (permalink / raw)
To: Zhou, Chao; +Cc: xen-devel@lists.xensource.com

On Tue, Mar 13, 2012 at 09:18:27AM +0000, Zhou, Chao wrote:
> Hi all,
>
> This is the test report of xen-unstable tree. We've switched our Dom0 to
> upstream Linux 3.1-rc7 instead of Jeremy's 2.6.32.x tree.

Nice! Thanks for doing that. Though some of the issues reported below I think
are fixed in 3.3.. Especially the 'xl vcpu-set' one - which I think is also
back-ported to the 3.x stable kernels.

> We've also upgraded our nightly test system from RHEL5.5 to RHEL6.2.
> We found four new issues and one old issue got fixed.
>
> Version Info
> =================================================================
> xen-changeset: 24911:d7fe4cd831a0
> Dom0: linux.git 3.1-rc7 (commit: d93dc5c4...)
> =================================================================
>
> New issues (4)
> ==============
> 1. When detaching a VF from an HVM guest, "xl dmesg" shows some warnings
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1809
> 2. Dom0 hangs when booting a guest with a VF (the guest was previously booted with a different VF)
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1810
> 3. RHEL6.2/6.1 guest runs quite slowly
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1811
> 4. After detaching a VF from a guest, shutting down the guest is very slow
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1812
>
> Fixed issue (1)
> ==============
> 1. Dom0 crash on power-off
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1740
>    ---- kernel 3.1.0 doesn't have this issue now
>
> Old issues (5)

These old ones are with 3.1.x?

> ==============
> 1. [ACPI] System can't resume after suspend
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707
> 2. [XL] "xl vcpu-set" causes dom0 crash or panic
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1730
> 3. [VT-D] Fail to detach NIC from guest
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1736
> 4. Sometimes Xen panics on ia32pae Sandy Bridge when restoring a guest
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1747
> 5. [VT-D] Device reset fails when creating/destroying a guest
>    http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1752
>
> Thanks
> Zhou, Chao
* Re: VMX status report. Xen:24911 & Dom0: d93dc5c4...

From: Ren, Yongjie @ 2012-03-14 8:54 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk, Zhou, Chao; +Cc: xen-devel@lists.xensource.com

> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org
> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Konrad Rzeszutek Wilk
> Sent: Wednesday, March 14, 2012 12:55 AM
> To: Zhou, Chao
> Cc: xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] VMX status report. Xen:24911 & Dom0: d93dc5c4...
>
> On Tue, Mar 13, 2012 at 09:18:27AM +0000, Zhou, Chao wrote:
> > Hi all,
> >
> > This is the test report of xen-unstable tree. We've switched our Dom0 to
> > upstream Linux 3.1-rc7 instead of Jeremy's 2.6.32.x tree.
>
> Nice! Thanks for doing that. Though some of the issues reported below I think
> are fixed in 3.3.. Especially the 'xl vcpu-set' one - which I think is also
> back-ported to the 3.x stable kernels.

As some of our internal patches are based on 3.1-rc7, we use 3.1 as the base
Dom0 for testing. We also tried most of these new bugs with the 3.3 kernel
and found they still exist.
As for the 'xl vcpu-set' issue, upstream Linux 3.3 also has a similar issue.
I just gave an update on this in the bugzilla; you may have a look:
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1730

-- Jay
* Re: VMX status report. Xen:24911 & Dom0: d93dc5c4...

From: Konrad Rzeszutek Wilk @ 2012-03-14 21:37 UTC (permalink / raw)
To: Ren, Yongjie; +Cc: xen-devel@lists.xensource.com, Zhou, Chao

On Wed, Mar 14, 2012 at 08:54:50AM +0000, Ren, Yongjie wrote:
> > -----Original Message-----
> > From: xen-devel-bounces@lists.xen.org
> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Konrad Rzeszutek Wilk
> > Sent: Wednesday, March 14, 2012 12:55 AM
> > To: Zhou, Chao
> > Cc: xen-devel@lists.xensource.com
> > Subject: Re: [Xen-devel] VMX status report. Xen:24911 & Dom0: d93dc5c4...
> >
> > On Tue, Mar 13, 2012 at 09:18:27AM +0000, Zhou, Chao wrote:
> > > Hi all,
> > >
> > > This is the test report of xen-unstable tree. We've switched our Dom0 to
> > > upstream Linux 3.1-rc7 instead of Jeremy's 2.6.32.x tree.
> >
> > Nice! Thanks for doing that. Though some of the issues reported below I think
> > are fixed in 3.3.. Especially the 'xl vcpu-set' one - which I think is also
> > back-ported to the 3.x stable kernels.
>
> As some of our internal patches are based on 3.1-rc7, we use 3.1 as the base
> Dom0 for testing. We also tried most of these new bugs with the 3.3 kernel
> and found they still exist.

Which of the BZs below are against 3.3?

> As for the 'xl vcpu-set' issue, upstream Linux 3.3 also has a similar issue.
> I just gave an update on this in the bugzilla; you may have a look:
> http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1730

OK, first off I am really excited that you guys are using the upstream
kernel and testing against the latest. I think I've seen the issue before,
and it was due to a mix-up in the vCPU hotplug code. If I booted the kernel
with maxcpus=16 (on the Linux command line) and did those operations, it
worked. But it is a bug nonetheless. Any ideas for a fix?
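[Editor's note] The reproduction and workaround discussed above can be sketched as a console session. This is an illustrative sketch, not taken from the bug report: the domain name, vCPU counts, and kernel file name are assumptions, and only the `xl vcpu-set` command and the `maxcpus=` boot parameter come from the thread.

```
# Workaround: cap the number of CPUs Dom0 will ever online, so the vCPU
# hotplug path is never exercised past the boot-time count.
# (Dom0 Linux command line in the bootloader entry, e.g.:)
#     module /vmlinuz-3.1.0 root=... maxcpus=16

# Reproduction sketch: shrink and grow Dom0's vCPU count with xl.
xl vcpu-list Domain-0      # show the current vCPU assignment
xl vcpu-set Domain-0 4     # hot-unplug down to 4 vCPUs
xl vcpu-set Domain-0 16    # hot-plug back up -- the step reported to crash dom0
```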
end of thread, other threads: [~2012-06-20 5:46 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --)
2012-03-13  9:18 VMX status report. Xen:24911 & Dom0: d93dc5c4 Zhou, Chao
2012-03-13 11:33 ` Jan Beulich
2012-03-14  6:08 ` Ren, Yongjie
2012-03-15 10:12 ` Jan Beulich
2012-03-21 11:07 ` Ping: " Jan Beulich
2012-03-22 10:59 ` Stefano Stabellini
2012-03-23  9:33 ` Jan Beulich
2012-03-26  7:06 ` Ren, Yongjie
2012-03-13 15:38 ` VMX status report. Xen:24911 & Dom0: d93dc5c4... Nested VMX testing? Pasi Kärkkäinen
2012-03-14  8:00 ` Ren, Yongjie
2012-03-14 11:21 ` Pasi Kärkkäinen
2012-06-19 20:44 ` Pasi Kärkkäinen
2012-06-20  5:46 ` Ren, Yongjie
2012-03-13 16:55 ` VMX status report. Xen:24911 & Dom0: d93dc5c4 Konrad Rzeszutek Wilk
2012-03-14  8:54 ` Ren, Yongjie
2012-03-14 21:37 ` Konrad Rzeszutek Wilk