xen-devel.lists.xenproject.org archive mirror
* PVH Dom0 with latest Linux kernels
@ 2014-02-18 16:48 Roger Pau Monné
  2014-02-19  1:47 ` Mukesh Rathor
  0 siblings, 1 reply; 8+ messages in thread
From: Roger Pau Monné @ 2014-02-18 16:48 UTC (permalink / raw)
  To: Mukesh Rathor, xen-devel

Hello,

I've tried to boot a PVH Dom0 using Linux 3.14.0-rc3 and Mukesh's PVH 
Dom0 Xen tree (https://oss.oracle.com/git/?p=mrathor/xen.git;a=shortlog;
h=dom0pvh-v7), and got the following crash. Do you have any new Xen 
Dom0 series that I could use to test PVH Dom0?

 __  __            _  _   _  _                      _        _     _
 \ \/ /___ _ __   | || | | || |     _   _ _ __  ___| |_ __ _| |__ | | ___
  \  // _ \ '_ \  | || |_| || |_ __| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
  /  \  __/ | | | |__   _|__   _|__| |_| | | | \__ \ || (_| | |_) | |  __/
 /_/\_\___|_| |_|    |_|(_) |_|     \__,_|_| |_|___/\__\__,_|_.__/|_|\___|

(XEN) Xen version 4.4-unstable (root@) (FreeBSD clang version 3.3 (tags/RELEASE_33/final 183502) 20130610) debug=y Tue Feb 18 15:41:18 CET 2014
(XEN) Latest ChangeSet: Tue Feb 18 15:37:28 2014 +0100 git:f574c06-dirty
(XEN) Console output is synchronous.
(XEN) Bootloader: PXELINUX 4.02 debian-20101014
(XEN) Command line: dom0pvh=1 sync_console=true dom0_mem=1024M com1=115200,8n1 guest_loglvl=all loglvl=all console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
(XEN)  EDID info not retrieved because of reasons unknown
(XEN) Disc information:
(XEN)  Found 2 MBR signatures
(XEN)  Found 2 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000092400 (usable)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000dfdf9c00 (usable)
(XEN)  00000000dfdf9c00 - 00000000dfe4bc00 (ACPI NVS)
(XEN)  00000000dfe4bc00 - 00000000dfe4dc00 (ACPI data)
(XEN)  00000000dfe4dc00 - 00000000e0000000 (reserved)
(XEN)  00000000f8000000 - 00000000fd000000 (reserved)
(XEN)  00000000fe000000 - 00000000fed00400 (reserved)
(XEN)  00000000fee00000 - 00000000fef00000 (reserved)
(XEN)  00000000ffb00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 00000001a0000000 (usable)
(XEN) ACPI: RSDP 000FEC30, 0024 (r2 DELL  )
(XEN) ACPI: XSDT 000FCCC7, 007C (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: FACP 000FCDB7, 00F4 (r3 DELL    B10K          15 ASL        61)
(XEN) ACPI: DSDT FFE9E951, 4A74 (r1   DELL    dt_ex     1000 INTL 20050624)
(XEN) ACPI: FACS DFDF9C00, 0040
(XEN) ACPI: SSDT FFEA34D6, 009C (r1   DELL    st_ex     1000 INTL 20050624)
(XEN) ACPI: APIC 000FCEAB, 015E (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: BOOT 000FD009, 0028 (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: ASF! 000FD031, 0096 (r32 DELL    B10K          15 ASL        61)
(XEN) ACPI: MCFG 000FD0C7, 003C (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: HPET 000FD103, 0038 (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: TCPA 000FD35F, 0032 (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: DMAR 000FD391, 00C8 (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: SLIC 000FD13B, 0176 (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: SSDT DFE4DC00, 15C4 (r1  INTEL PPM RCM  80000001 INTL 20061109)
(XEN) System RAM: 6141MB (6288940kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-00000001a0000000
(XEN) Domain heap initialised
(XEN) DMI 2.5 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[dfdf9c0c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
(XEN) Processor #4 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
(XEN) Processor #6 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
(XEN) Processor #1 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
(XEN) Processor #3 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
(XEN) Processor #5 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x09] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x10] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x11] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x12] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x13] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x14] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x15] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x16] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x17] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x18] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x19] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1a] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1b] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1c] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1d] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1e] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1f] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x20] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high level lint[0x1])
(XEN) ACPI: IOAPIC (id[0x08] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: IOAPIC (id[0x09] address[0xfec80000] gsi_base[24])
(XEN) IOAPIC[1]: apic_id 9, version 32, address 0xfec80000, GSI 24-47
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 2 I/O APICs
(XEN) ACPI: HPET id: 0x8086a301 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 32 CPUs (24 hotplug CPUs)
(XEN) IRQ limits: 48 GSI, 1504 MSI/MSI-X
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3066.865 MHz processor.
(XEN) Initing memory sharing.
(XEN) mce_intel.c:717: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
(XEN) PCI: MCFG area at f8000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) mwait-idle: MWAIT substates: 0x1120
(XEN) mwait-idle: v0.4 model 0x1a
(XEN) mwait-idle: lapic_timer_reliable_states 0x2
(XEN) HPET: 0 timers usable for broadcast (4 total)
(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - APIC TPR shadow
(XEN)  - Extended Page Tables (EPT)
(XEN)  - Virtual-Processor Identifiers (VPID)
(XEN)  - Virtual NMI
(XEN)  - MSR direct-access bitmap
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB
(XEN) Brought up 8 CPUs
(XEN) ACPI sleep modes: S3
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xb2b000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0xc4110
(XEN) elf_parse_binary: phdr: paddr=0x1cc5000 memsz=0x14d40
(XEN) elf_parse_binary: phdr: paddr=0x1cda000 memsz=0xdd0000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x2aaa000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81cda1e0
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
(XEN) elf_xen_parse_note: SUPPORTED_FEATURES = 0x90d
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff82aaa000
(XEN)     virt_entry       = 0xffffffff81cda1e0
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x2aaa000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000194000000->0000000198000000 (244199 pages to be allocated)
(XEN)  Init. ramdisk: 000000019f9e7000->000000019ffff200
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff82aaa000
(XEN)  Init. ramdisk: ffffffff82aaa000->ffffffff830c2200
(XEN)  Phys-Mach map: ffffffff830c3000->ffffffff832c3000
(XEN)  Start info:    ffffffff832c3000->ffffffff832c44b4
(XEN)  Page tables:   ffffffff832c5000->ffffffff832e2000
(XEN)  Boot stack:    ffffffff832e2000->ffffffff832e3000
(XEN)  TOTAL:         ffffffff80000000->ffffffff83400000
(XEN)  ENTRY ADDRESS: ffffffff81cda1e0
(XEN) Dom0 has maximum 8 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2b000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81cc4110
(XEN) elf_load_binary: phdr 2 at 0xffffffff81cc5000 -> 0xffffffff81cd9d40
(XEN) elf_load_binary: phdr 3 at 0xffffffff81cda000 -> 0xffffffff81db3000
(XEN) Scrubbing Free RAM: ..................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) **********************************************
(XEN) ******* WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
(XEN) ******* This option is intended to aid debugging of Xen by ensuring
(XEN) ******* that all output is synchronously delivered on the serial line.
(XEN) ******* However it can introduce SIGNIFICANT latencies and affect
(XEN) ******* timekeeping. It is NOT recommended for production use!
(XEN) **********************************************
(XEN) 3... 2... 1...
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 240kB init memory.
mapping kernel into physical memory
about to get started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.14.0-rc3 (root@loki) (gcc version 4.4.5 (Debian 4.4.5-8) ) #0 SMP Wed Jan 8 11:20:24 CET 2014
[    0.000000] Command line: root=/dev/sda1 ro ramdisk_size=1024000 earlyprintk=xenboot loglevel=9 console=hvc0 debug
[    0.000000] Released 110 pages of unused memory
[    0.000000] Set 131701 page(s) to 1-1 mapping
[    0.000000] Populating 40000-4006e pfn range: 110 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000091fff] usable
[    0.000000] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x00000000dfdf8fff] usable
[    0.000000] Xen: [mem 0x00000000dfdf9c00-0x00000000dfe4bbff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000dfe4bc00-0x00000000dfe4dbff] ACPI data
[    0.000000] Xen: [mem 0x00000000dfe4dc00-0x00000000dfffffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fcffffff] reserved
[    0.000000] Xen: [mem 0x00000000fe000000-0x00000000fed003ff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ffb00000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000019fffffff] usable
[    0.000000] bootconsole [xenboot0] enabled
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.5 present.
[    0.000000] DMI: Dell Inc. Precision WorkStation T3500  /09KPNV, BIOS A15 03/28/2012
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] No AGP bridge found
[    0.000000] e820: last_pfn = 0x1a0000 max_arch_pfn = 0x400000000
[    0.000000] e820: last_pfn = 0xdfdf9 max_arch_pfn = 0x400000000
[    0.000000] Base memory trampoline at [ffff88000008c000] 8c000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x3fe00000-0x3fffffff]
[    0.000000]  [mem 0x3fe00000-0x3fffffff] page 4k
[    0.000000] BRK [0x02688000, 0x02688fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x3c000000-0x3fdfffff]
[    0.000000]  [mem 0x3c000000-0x3fdfffff] page 4k
[    0.000000] BRK [0x02689000, 0x02689fff] PGTABLE
[    0.000000] BRK [0x0268a000, 0x0268afff] PGTABLE
[    0.000000] BRK [0x0268b000, 0x0268bfff] PGTABLE
[    0.000000] BRK [0x0268c000, 0x0268cfff] PGTABLE
[    0.000000] BRK [0x0268d000, 0x0268dfff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x3bffffff]
[    0.000000]  [mem 0x00100000-0x3bffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x40000000-0xdfdf8fff]
[    0.000000]  [mem 0x40000000-0xdfdf8fff] page 4k
[    0.000000] init_memory_mapping: [mem 0x100000000-0x19fffffff]
[    0.000000]  [mem 0x100000000-0x19fffffff] page 4k
[    0.000000] RAMDISK: [mem 0x02aaa000-0x030c2fff]
[    0.000000] ACPI: RSDP 00000000000fec30 000024 (v02 DELL  )
[    0.000000] ACPI: XSDT 00000000000fccc7 00007C (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: FACP 00000000000fcdb7 0000F4 (v03 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI BIOS Warning (bug): 32/64X length mismatch in FADT/Gpe0Block: 128/64 (20131218/tbfadt-603)
[    0.000000] ACPI: DSDT 00000000ffe9e951 004A74 (v01   DELL    dt_ex 00001000 INTL 20050624)
[    0.000000] ACPI: FACS 00000000dfdf9c00 000040
[    0.000000] ACPI: SSDT 00000000ffea34d6 00009C (v01   DELL    st_ex 00001000 INTL 20050624)
[    0.000000] ACPI: APIC 00000000000fceab 00015E (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: BOOT 00000000000fd009 000028 (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: ASF! 00000000000fd031 000096 (v32 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: MCFG 00000000000fd0c7 00003C (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: HPET 00000000000fd103 000038 (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: TCPA 00000000000fd35f 000032 (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: DMAR 00000000000fd391 0000C8 (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: SLIC 00000000000fd13b 000176 (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: SSDT 00000000dfe4dc00 0015C4 (v01  INTEL PPM RCM  80000001 INTL 20061109)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   [mem 0x100000000-0x19fffffff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00091fff]
[    0.000000]   node   0: [mem 0x00100000-0xdfdf8fff]
[    0.000000]   node   0: [mem 0x100000000-0x19fffffff]
[    0.000000] On node 0 totalpages: 1572234
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3985 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 12481 pages used for memmap
[    0.000000]   DMA32 zone: 912889 pages, LIFO batch:31
[    0.000000]   Normal zone: 8960 pages used for memmap
[    0.000000]   Normal zone: 655360 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x09] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x10] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x11] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x12] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x13] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x14] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x15] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x16] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x17] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x18] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x19] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1a] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1b] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1c] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1d] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1e] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1f] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x20] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high level lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x08] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: IOAPIC (id[0x09] address[0xfec80000] gsi_base[24])
[    0.000000] IOAPIC[1]: apic_id 9, version 32, address 0xfec80000, GSI 24-47
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a301 base: 0xfed00000
[    0.000000] smpboot: 32 Processors exceeds NR_CPUS limit of 8
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 64
[    0.000000] e820: [mem 0xe0000000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel with PVH extensions on Xen
[    0.000000] Xen version: 4.4-unstable
[    0.000000] setup_percpu: NR_CPUS:8 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff88003f000000 s85312 r8192 d21184 u262144
[    0.000000] pcpu-alloc: s85312 r8192 d21184 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 1550716
[    0.000000] Kernel command line: root=/dev/sda1 ro ramdisk_size=1024000 earlyprintk=xenboot loglevel=9 console=hvc0 debug
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    0.000000] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes)
[    0.000000] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes)
[    0.000000] software IO TLB [mem 0x34c00000-0x38c00000] (64MB) mapped at [ffff880034c00000-ffff880038bfffff]
[    0.000000] Memory: 838552K/6288936K available (6268K kernel code, 782K rwdata, 3244K rodata, 928K init, 9040K bss, 5450384K reserved)
[    0.000000] Hierarchical RCU implementation.
[    0.000000] 	CONFIG_RCU_FANOUT set to non-default value of 32
[    0.000000] 	RCU dyntick-idle grace-period acceleration is enabled.
[    0.000000] NR_IRQS:4352 nr_irqs:1152 16
[    0.000000] xen:events: Using FIFO-based ABI
[    0.000000] xen: sci override: global_irq=9 trigger=0 polarity=0
[    0.000000] xen: registering gsi 9 triggering 0 polarity 0
[    0.000000] xen: --> pirq=9 -> irq=9 (gsi=9)
[    0.000000] xen: acpi sci 9
[    0.000000] xen: --> pirq=1 -> irq=1 (gsi=1)
[    0.000000] xen: --> pirq=2 -> irq=2 (gsi=2)
[    0.000000] xen: --> pirq=3 -> irq=3 (gsi=3)
[    0.000000] xen: --> pirq=4 -> irq=4 (gsi=4)
[    0.000000] xen: --> pirq=5 -> irq=5 (gsi=5)
[    0.000000] xen: --> pirq=6 -> irq=6 (gsi=6)
[    0.000000] xen: --> pirq=7 -> irq=7 (gsi=7)
[    0.000000] xen: --> pirq=8 -> irq=8 (gsi=8)
[    0.000000] xen: --> pirq=10 -> irq=10 (gsi=10)
[    0.000000] xen: --> pirq=11 -> irq=11 (gsi=11)
[    0.000000] xen: --> pirq=12 -> irq=12 (gsi=12)
[    0.000000] xen: --> pirq=13 -> irq=13 (gsi=13)
[    0.000000] xen: --> pirq=14 -> irq=14 (gsi=14)
[    0.000000] xen: --> pirq=15 -> irq=15 (gsi=15)
(XEN) irq.c:375: Dom0 callback via changed to Direct Vector 0xf3
[    0.000000] xen:events: Xen HVM callback vector for event delivery is enabled
[    0.000000] ACPI: Core revision 20131218
(XEN) ----[ Xen-4.4-unstable  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d0801b43c0>] hvm_do_resume+0x50/0x150
(XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor
(XEN) rax: 000000000018a852   rbx: 000000000018a852   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: ffff8300dfb0e000
(XEN) rbp: ffff82d0802c7dd8   rsp: ffff82d0802c7dc8   r8:  ffff82d0803087d0
(XEN) r9:  0000000000000005   r10: ffff82d080304890   r11: 0000000000000000
(XEN) r12: 0000000182a5ea8d   r13: ffff82d0803087d0   r14: ffff8300dfb0e000
(XEN) r15: ffff8300dfb0e000   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 000000019f96e000   cr2: 000000000018a870
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff82d0802c7dc8:
(XEN)    ffff82d0803093e0 ffff830199a1e000 ffff82d0802c7e08 ffff82d0801d63c8
(XEN)    0000000000000000 ffff8300dfb0e000 ffff8300dfb0e000 0000000001c9c380
(XEN)    ffff82d0802c7e18 ffff82d08015fbaa ffff82d0802c7ec8 ffff82d0801254fb
(XEN)    ffff82d080308880 000000003fd870ec 0000000182a5ea8d ffff8300dfb0e000
(XEN)    ffff82d0803087b8 0000000000000001 ffff82d0802c7e88 ffff82d080158ea9
(XEN)    0000000000000000 ffff82d080308900 ffff82d080308880 0000000182e2fa0e
(XEN)    ffff8300dfb0e000 0000000001c9c380 0000000000000000 ffff82d0802dfe00
(XEN)    0000000000000002 ffff82d0802dfe00 ffffffffffffffff 0000000000000001
(XEN)    ffff82d0802c7f08 ffff82d080126726 0000000000000001 ffff8300dfb0e000
(XEN)    ffffffff81d6a900 ffffffff81d730a0 0000000000000059 0000000000000000
(XEN)    ffffffff81c01e78 ffff82d0801e18da 0000000000000000 0000000000000059
(XEN)    ffffffff81d730a0 ffffffff81d6a900 ffffffff81c01e78 ffff88003480f400
(XEN)    0000000000000000 0000000000000000 000000000000001d 0000000000000000
(XEN)    ffffc900000020e1 ffffc90000000951 ffffffff81c13490 ffffc900000053c5
(XEN)    ffffc90000000951 000000fa0000beef ffffffff812b8380 000000bf0000beef
(XEN)    0000000000000006 ffffffff81c01e78 000000000000beef 000000000000beef
(XEN)    000000000000beef 000000000000beef 000000000000beef 0000000000000000
(XEN)    ffff8300dfb0e000 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82d0801b43c0>] hvm_do_resume+0x50/0x150
(XEN)    [<ffff82d0801d63c8>] vmx_do_resume+0x118/0x150
(XEN)    [<ffff82d08015fbaa>] continue_running+0xa/0x10
(XEN)    [<ffff82d0801254fb>] schedule+0x22b/0x310
(XEN)    [<ffff82d080126726>] __do_softirq+0x46/0xa0
(XEN)    [<ffff82d0801e18da>] vmx_asm_do_vmentry+0x2a/0x50
(XEN)
(XEN) Pagetable walk from 000000000018a870:
(XEN)  L4[0x000] = 0000000000000000 ffffffffffffffff
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0000]
(XEN) Faulting linear address: 000000000018a870
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
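For anyone picking apart the dump above: the faulting linear address in cr2 sits a small fixed offset past the garbage value held in rax/rbx, which is consistent with hvm_do_resume dereferencing a field of a corrupted structure pointer. A quick sanity check of that arithmetic (an illustrative sketch, not part of the original report):

```python
# Register values copied from the Xen crash dump above.
cr2 = 0x000000000018A870  # faulting linear address
rbx = 0x000000000018A852  # bogus pointer value held in rax/rbx

# The fault address is a small fixed offset past the bogus pointer,
# i.e. a field dereference through a corrupted structure pointer.
offset = cr2 - rbx
print(hex(offset))  # 0x1e

# The bogus value is far below any hypervisor virtual address (Xen
# text lives around 0xffff82d080000000 per the call trace), so this
# is a stray low pointer rather than a wild hypervisor address.
assert rbx < 0x100000000
```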

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: PVH Dom0 with latest Linux kernels
  2014-02-18 16:48 PVH Dom0 with latest Linux kernels Roger Pau Monné
@ 2014-02-19  1:47 ` Mukesh Rathor
  2014-03-01  1:15   ` Mukesh Rathor
  0 siblings, 1 reply; 8+ messages in thread
From: Mukesh Rathor @ 2014-02-19  1:47 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel

On Tue, 18 Feb 2014 17:48:02 +0100
Roger Pau Monné <roger.pau@citrix.com> wrote:

> Hello,
> 
> I've tried to boot a PVH Dom0 using Linux 3.14.0-rc3 and Mukesh's PVH 
> Dom0 Xen tree
> (https://oss.oracle.com/git/?p=mrathor/xen.git;a=shortlog;
> h=dom0pvh-v7), and got the following crash. Do you have any new Xen
> Dom0 series that I could use to test PVH Dom0?
> 

It won't work. The final linux patches were changed a bit, and it makes
an hcall that will cause mem corruption in xen. So, just hang in there
a bit, new patches coming up in few days.

thanks
Mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: PVH Dom0 with latest Linux kernels
@ 2014-02-19  2:08 Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 8+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-02-19  2:08 UTC (permalink / raw)
  To: Mukesh Rathor; +Cc: xen-devel, roger.pau


On Feb 18, 2014 8:47 PM, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
>
> On Tue, 18 Feb 2014 17:48:02 +0100
> Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> > Hello,
> > 
> > I've tried to boot a PVH Dom0 using Linux 3.14.0-rc3 and Mukesh's PVH 
> > Dom0 Xen tree
> > (https://oss.oracle.com/git/?p=mrathor/xen.git;a=shortlog;
> > h=dom0pvh-v7), and got the following crash. Do you have any new Xen
> > Dom0 series that I could use to test PVH Dom0?
> >
>
> It won't work. The final linux patches were changed a bit, and it makes
> an hcall that will cause mem corruption in xen. So, just hang in there
> a bit, new patches coming up in few days.

Do they also crash/corrupt Xen as DomU?
>
> thanks
> Mukesh
>


* Re: PVH Dom0 with latest Linux kernels
       [not found] <201402190208.s1J28lw5002932@mantra.us.oracle.com>
@ 2014-02-19  2:15 ` Mukesh Rathor
  0 siblings, 0 replies; 8+ messages in thread
From: Mukesh Rathor @ 2014-02-19  2:15 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, roger.pau

On Tue, 18 Feb 2014 21:08:30 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> 
> On Feb 18, 2014 8:47 PM, Mukesh Rathor <mukesh.rathor@oracle.com>
> wrote:
> >
> > On Tue, 18 Feb 2014 17:48:02 +0100
> > Roger Pau Monné <roger.pau@citrix.com> wrote:
> >
> > > Hello,
> > > 
> > > I've tried to boot a PVH Dom0 using Linux 3.14.0-rc3 and Mukesh's
> > > PVH Dom0 Xen tree
> > > (https://oss.oracle.com/git/?p=mrathor/xen.git;a=shortlog;
> > > h=dom0pvh-v7), and got the following crash. Do you have any new
> > > Xen Dom0 series that I could use to test PVH Dom0?
> > >
> >
> > It won't work. The final linux patches were changed a bit, and it
> > makes an hcall that will cause mem corruption in xen. So, just hang
> > in there a bit, new patches coming up in few days.
> 
> Do they also crash/corrupt Xen as DomU?

Nope, I'd have mentioned it on the list ASAP otherwise :)...

mukesh




* Re: PVH Dom0 with latest Linux kernels
  2014-02-19  1:47 ` Mukesh Rathor
@ 2014-03-01  1:15   ` Mukesh Rathor
  2014-03-14  2:09     ` Mukesh Rathor
  0 siblings, 1 reply; 8+ messages in thread
From: Mukesh Rathor @ 2014-03-01  1:15 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel

On Tue, 18 Feb 2014 17:47:04 -0800
Mukesh Rathor <mukesh.rathor@oracle.com> wrote:

> On Tue, 18 Feb 2014 17:48:02 +0100
> Roger Pau Monné <roger.pau@citrix.com> wrote:
> 
> > Hello,
> > 
> > I've tried to boot a PVH Dom0 using Linux 3.14.0-rc3 and Mukesh's
> > PVH Dom0 Xen tree
> > (https://oss.oracle.com/git/?p=mrathor/xen.git;a=shortlog;
> > h=dom0pvh-v7), and got the following crash. Do you have any new Xen
> > Dom0 series that I could use to test PVH Dom0?
> > 
> 
> It won't work. The final linux patches were changed a bit, and it
> makes an hcall that will cause mem corruption in xen. So, just hang
> in there a bit, new patches coming up in few days.
> 
> thanks
> Mukesh

Hey Roger,

Sorry, still don't have the new version out. Just running into one
issue after another... I had to refresh the tree, that broke my tools
as they needed newer glibc, which caused me to reinstall, which caused
dom0 to break causing me to rebuild that entire thing... anyways, I've
got stuff up, but guests are not starting right now. Debugging. Will
keep you posted.

thanks
Mukesh





* Re: PVH Dom0 with latest Linux kernels
  2014-03-01  1:15   ` Mukesh Rathor
@ 2014-03-14  2:09     ` Mukesh Rathor
  2014-03-14 17:48       ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 8+ messages in thread
From: Mukesh Rathor @ 2014-03-14  2:09 UTC (permalink / raw)
  To: Mukesh Rathor; +Cc: xen-devel, Roger Pau Monné

On Fri, 28 Feb 2014 17:15:08 -0800
Mukesh Rathor <mukesh.rathor@oracle.com> wrote:

> On Tue, 18 Feb 2014 17:47:04 -0800
> Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> 
> > On Tue, 18 Feb 2014 17:48:02 +0100
> > Roger Pau Monné <roger.pau@citrix.com> wrote:
> > 
> > > Hello,
> > > 
> > > I've tried to boot a PVH Dom0 using Linux 3.14.0-rc3 and Mukesh's
> > > PVH Dom0 Xen tree
> > > (https://oss.oracle.com/git/?p=mrathor/xen.git;a=shortlog;
> > > h=dom0pvh-v7), and got the following crash. Do you have any new
> > > Xen Dom0 series that I could use to test PVH Dom0?
> > > 
> > 
> > It won't work. The final linux patches were changed a bit, and it
> > makes an hcall that will cause mem corruption in xen. So, just hang
> > in there a bit, new patches coming up in few days.
> > 
> > thanks
> > Mukesh
> 
> Hey Roger,
> 
> Sorry, still don't have the new version out. Just running into one
> issue after another... I had to refresh the tree, which broke my tools
> because they needed a newer glibc, which caused me to reinstall, which
> broke dom0, causing me to rebuild that entire thing... anyway, I've
> got things up, but guests are not starting right now. Debugging.
> Will keep you posted.

JFYI... I am able to boot dom0 in PVH mode, then start PV and PVH guests.
But for HVM guests, during boot I see page faults in different places while
loading the acpi-cpufreq module. It's very hard to debug since the faults
come from a different place every time. I'm now trying to figure out what
load_module does throughout.

[    3.804141] BUG: unable to handle kernel paging request at 00000014bf0d13c8
[    3.805127] IP: [<ffffffff8104c87c>] apply_relocate_add+0x7c/0x150
[    3.805127] PGD 0

------
[    3.779263] BUG: unable to handle kernel NULL pointer dereference at           (null)
[    3.780250] IP: [<ffffffff810eadf3>] add_unformed_module+0x23/0x150
[    3.780250] PGD 0
[    3.780250] Oops: 0002 [#1] SMP
--------
[    3.975346] BUG: unable to handle kernel paging request at 0000000000003100
[    3.976333] IP: [<ffffffff8130276d>] memcpy+0xd/0x110
[    3.976333] PGD 0
[    3.976333]  [<ffffffff810ede36>] ? layout_and_allocate+0x756/0xc00
[    3.976333]  [<ffffffff810ee3f1>] load_module+0x111/0x18b0


Strange, not sure why the problem shows up on PVH dom0 only... anyway,
still debugging. The problem is intermittent and sometimes it boots just
fine...

thanks
Mukesh




* Re: PVH Dom0 with latest Linux kernels
  2014-03-14  2:09     ` Mukesh Rathor
@ 2014-03-14 17:48       ` Konrad Rzeszutek Wilk
  2014-03-14 22:56         ` Mukesh Rathor
  0 siblings, 1 reply; 8+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-03-14 17:48 UTC (permalink / raw)
  To: Mukesh Rathor; +Cc: Roger Pau Monné, xen-devel

On Thu, Mar 13, 2014 at 07:09:54PM -0700, Mukesh Rathor wrote:
> On Fri, 28 Feb 2014 17:15:08 -0800
> Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> 
> > On Tue, 18 Feb 2014 17:47:04 -0800
> > Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > 
> > > On Tue, 18 Feb 2014 17:48:02 +0100
> > > Roger Pau Monné <roger.pau@citrix.com> wrote:
> > > 
> > > > Hello,
> > > > 
> > > > I've tried to boot a PVH Dom0 using Linux 3.14.0-rc3 and Mukesh's
> > > > PVH Dom0 Xen tree
> > > > (https://oss.oracle.com/git/?p=mrathor/xen.git;a=shortlog;
> > > > h=dom0pvh-v7), and got the following crash. Do you have any new
> > > > Xen Dom0 series that I could use to test PVH Dom0?
> > > > 
> > > 
> > > It won't work. The final linux patches were changed a bit, and it
> > > makes an hcall that will cause mem corruption in xen. So, just hang
> > > in there a bit, new patches coming up in few days.
> > > 
> > > thanks
> > > Mukesh
> > 
> > Hey Roger,
> > 
> > Sorry, still don't have the new version out. Just running into one
> > issue after another... I had to refresh the tree, which broke my
> > tools because they needed a newer glibc, which caused me to
> > reinstall, which broke dom0, causing me to rebuild that entire
> > thing... anyway, I've got things up, but guests are not starting
> > right now. Debugging. Will keep you posted.
> 
> JFYI... I am able to boot dom0 in PVH mode, then start PV and PVH
> guests. But for HVM guests, during boot I see page faults in different
> places while loading the acpi-cpufreq module. It's very hard to debug
> since the faults come from a different place every time. I'm now
> trying to figure out what load_module does throughout.
> 
> [    3.804141] BUG: unable to handle kernel paging request at 00000014bf0d13c8
> [    3.805127] IP: [<ffffffff8104c87c>] apply_relocate_add+0x7c/0x150
> [    3.805127] PGD 0
> 
> ------
> [    3.779263] BUG: unable to handle kernel NULL pointer dereference at           (null)
> [    3.780250] IP: [<ffffffff810eadf3>] add_unformed_module+0x23/0x150
> [    3.780250] PGD 0
> [    3.780250] Oops: 0002 [#1] SMP
> --------
> [    3.975346] BUG: unable to handle kernel paging request at 0000000000003100
> [    3.976333] IP: [<ffffffff8130276d>] memcpy+0xd/0x110
> [    3.976333] PGD 0
> [    3.976333]  [<ffffffff810ede36>] ? layout_and_allocate+0x756/0xc00
> [    3.976333]  [<ffffffff810ee3f1>] load_module+0x111/0x18b0
> 
> 
> Strange, not sure why the problem is on PVH dom0 only... anyways,
> debugging. The problem is intermittent and sometimes it boots just
> fine...

The other thing that might be worth considering: the virtual addresses
that modules are loaded at are way, way at the end of the pagetables.
Could it be that the EPT is limited and said L4 entry (or L3) is shared
with Xen or hadn't been properly set up?

Oh wait, this is for HVM guests! And of course the problem you
see only occurs _after_ you have started a PV and a PVH guest, right? If
you start the HVM guests first, are there no problems?

Could you describe the steps and perhaps post a link to your updated git
trees for both Linux and Xen? More folks reproducing it could help with
this. (Like it helped with the other WP flag.)
> 
> thanks
> Mukesh


* Re: PVH Dom0 with latest Linux kernels
  2014-03-14 17:48       ` Konrad Rzeszutek Wilk
@ 2014-03-14 22:56         ` Mukesh Rathor
  0 siblings, 0 replies; 8+ messages in thread
From: Mukesh Rathor @ 2014-03-14 22:56 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: Roger Pau Monné, xen-devel

On Fri, 14 Mar 2014 13:48:22 -0400
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Thu, Mar 13, 2014 at 07:09:54PM -0700, Mukesh Rathor wrote:
> > On Fri, 28 Feb 2014 17:15:08 -0800
> > Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > 
> > > On Tue, 18 Feb 2014 17:47:04 -0800
> > > Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > > 
> > > > On Tue, 18 Feb 2014 17:48:02 +0100
> > > > Roger Pau Monné <roger.pau@citrix.com> wrote:
> > > > 
> > > > > Hello,
> > > > > 
> > > > > I've tried to boot a PVH Dom0 using Linux 3.14.0-rc3 and
> > > > > Mukesh's PVH Dom0 Xen tree
> > > > > (https://oss.oracle.com/git/?p=mrathor/xen.git;a=shortlog;
> > > > > h=dom0pvh-v7), and got the following crash. Do you have any
> > > > > new Xen Dom0 series that I could use to test PVH Dom0?
> > > > > 
> > > > 
> > > > It won't work. The final linux patches were changed a bit, and
> > > > it makes an hcall that will cause mem corruption in xen. So,
> > > > just hang in there a bit, new patches coming up in few days.
> > > > 
> > > > thanks
> > > > Mukesh
> > > 
> > > Hey Roger,
> > > 
> > > Sorry, still don't have the new version out. Just running into one
> > > issue after another... I had to refresh the tree, which broke my
> > > tools because they needed a newer glibc, which caused me to
> > > reinstall, which broke dom0, causing me to rebuild that entire
> > > thing... anyway, I've got things up, but guests are not starting
> > > right now. Debugging. Will keep you posted.
> > 
> > JFYI... I am able to boot dom0 in PVH mode, then start PV and PVH
> > guests. But for HVM guests, during boot I see page faults in
> > different places while loading the acpi-cpufreq module. It's very
> > hard to debug since the faults come from a different place every
> > time. I'm now trying to figure out what load_module does throughout.
> > 
> > [    3.804141] BUG: unable to handle kernel paging request at 00000014bf0d13c8
> > [    3.805127] IP: [<ffffffff8104c87c>] apply_relocate_add+0x7c/0x150
> > [    3.805127] PGD 0
> > 
> > ------
> > [    3.779263] BUG: unable to handle kernel NULL pointer dereference at           (null)
> > [    3.780250] IP: [<ffffffff810eadf3>] add_unformed_module+0x23/0x150
> > [    3.780250] PGD 0
> > [    3.780250] Oops: 0002 [#1] SMP
> > --------
> > [    3.975346] BUG: unable to handle kernel paging request at 0000000000003100
> > [    3.976333] IP: [<ffffffff8130276d>] memcpy+0xd/0x110
> > [    3.976333] PGD 0
> > [    3.976333]  [<ffffffff810ede36>] ? layout_and_allocate+0x756/0xc00
> > [    3.976333]  [<ffffffff810ee3f1>] load_module+0x111/0x18b0
> > 
> > 
> > Strange, not sure why the problem is on PVH dom0 only... anyways,
> > debugging. The problem is intermittent and sometimes it boots just
> > fine...
> 
> The other thing that might be worth thinking - the virtual address
> that modules are loaded on are way way at the end of the pagetables.
> Could it be that the EPT is limited and said L4 entry (or L3) is
> shared with Xen or hadn't been properly setup?
> 
> Oh wait, this is for HVM guests! And of course this problem you
> see only occurs _after_ you have started PV and PVH guest right? If
> you start the HVM guests first there are no problems?

No, after refreshing to the latest fc19 base, it's not dependent on PV/PVH.

> Could you describe the steps and perhaps post a link to your updated
> git tree for both Linux and Xen? More folks reproducing it could help
> with this. (Like it helped with the other WP flag).

Yeah, that's what I was thinking about, but just now I was able to
reproduce it by running insmod in a loop. So I now need to understand
the module-loading code and debug it. Unfortunately, anything that
affects timing, like printk or stepping with a debugger, makes it go
away. I think I can solve it soon once I get back to it.

thanks
mukesh




end of thread, other threads:[~2014-03-14 22:56 UTC | newest]

Thread overview: 8+ messages
-- links below jump to the message on this page --
2014-02-18 16:48 PVH Dom0 with latest Linux kernels Roger Pau Monné
2014-02-19  1:47 ` Mukesh Rathor
2014-03-01  1:15   ` Mukesh Rathor
2014-03-14  2:09     ` Mukesh Rathor
2014-03-14 17:48       ` Konrad Rzeszutek Wilk
2014-03-14 22:56         ` Mukesh Rathor
  -- strict thread matches above, loose matches on Subject: below --
2014-02-19  2:08 Konrad Rzeszutek Wilk
     [not found] <201402190208.s1J28lw5002932@mantra.us.oracle.com>
2014-02-19  2:15 ` Mukesh Rathor
