xen-devel.lists.xenproject.org archive mirror
* Virtio on Xen
@ 2014-02-07 21:19 Peter X. Gao
  2014-02-10 10:04 ` Ian Campbell
  2014-02-10 10:42 ` Wei Liu
  0 siblings, 2 replies; 11+ messages in thread
From: Peter X. Gao @ 2014-02-07 21:19 UTC (permalink / raw)
  To: Xen-devel



Hi,

       I am new to Xen and I am trying to run Intel DPDK inside a domU with
virtio on Xen 4.2. Is it possible to do this?

Regards
Peter


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: Virtio on Xen
  2014-02-07 21:19 Virtio on Xen Peter X. Gao
@ 2014-02-10 10:04 ` Ian Campbell
  2014-02-10 10:42 ` Wei Liu
  1 sibling, 0 replies; 11+ messages in thread
From: Ian Campbell @ 2014-02-10 10:04 UTC (permalink / raw)
  To: Peter X. Gao; +Cc: xen-users

These sorts of questions are more appropriate to the users list, so
moving there.

On Fri, 2014-02-07 at 13:19 -0800, Peter X. Gao wrote:

>        I am new to Xen and I am trying to run Intel DPDK inside a domU
> with virtio on Xen 4.2. Is it possible to do this? 

There is no mainline support for virtio under Xen.

You can find info on the wiki about a GSoC project from a few years back
which prototyped this for some (but not all) configurations, but this
was only a prototype and is not ongoing work.

Why do you want virtio? The Xen PV drivers have been present in mainline
kernels for many years and are enabled in most distros these days.
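For what it's worth, a quick way to check from inside a guest whether the
Xen PV frontends are actually in use (the driver and sysfs names below are
the usual mainline ones; a sketch, not exhaustive):

```shell
# Look for the Xen PV frontend drivers in the guest's boot log.
dmesg | grep -iE 'blkfront|netfront|xen'

# PV devices also appear under the Xen bus in sysfs (vif-* = network,
# vbd-* = block), if the kernel exposes it:
ls /sys/bus/xen/devices 2>/dev/null
```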

Ian.


* Re: Virtio on Xen
  2014-02-07 21:19 Virtio on Xen Peter X. Gao
  2014-02-10 10:04 ` Ian Campbell
@ 2014-02-10 10:42 ` Wei Liu
  2014-02-10 11:19   ` Fabio Fantoni
  1 sibling, 1 reply; 11+ messages in thread
From: Wei Liu @ 2014-02-10 10:42 UTC (permalink / raw)
  To: Peter X. Gao; +Cc: Xen-devel, wei.liu2

On Fri, Feb 07, 2014 at 01:19:45PM -0800, Peter X. Gao wrote:
> Hi,
> 
>        I am new to Xen and I am trying to run Intel DPDK inside a domU with
> virtio on Xen 4.2. Is it possible to do this?
> 

DPDK doesn't seem to be tightly coupled with VirtIO, does it?

Could you look at Xen's PV network protocol instead? VirtIO has no
mainline support on Xen while Xen's PV protocol has been in mainline for
years. And it's very likely to be enabled by default nowadays.
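For reference, using the PV network protocol from a guest needs nothing
more than a vif entry in the domain configuration; a sketch (the names and
paths below are placeholders, not from this thread):

```
# Hypothetical xl domU config fragment: a PV network interface
# (netfront in the guest, netback in dom0) -- no virtio involved.
name   = "dpdk-guest"
memory = 4096
vcpus  = 8
disk   = [ "file:/var/images/guest.img,xvda,w" ]
vif    = [ "bridge=xenbr0" ]
```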

Wei.

> Regards
> Peter



* Re: Virtio on Xen
  2014-02-10 10:42 ` Wei Liu
@ 2014-02-10 11:19   ` Fabio Fantoni
  2014-02-10 18:07     ` Peter X. Gao
  0 siblings, 1 reply; 11+ messages in thread
From: Fabio Fantoni @ 2014-02-10 11:19 UTC (permalink / raw)
  To: Wei Liu, Peter X. Gao; +Cc: Xen-devel

On 10/02/2014 11:42, Wei Liu wrote:
> On Fri, Feb 07, 2014 at 01:19:45PM -0800, Peter X. Gao wrote:
>> Hi,
>>
>>         I am new to Xen and I am trying to run Intel DPDK inside a domU with
>> virtio on Xen 4.2. Is it possible to do this?
>>

Based on my tests with virtio:
- virtio-serial seems to work out of the box with Windows domUs, also 
with the Xen PV drivers. On Linux domUs it also works out of the box 
with old kernels (tested 2.6.32), but newer kernels (tested >=3.2) 
require pci=nomsi to work correctly; it also works with the Xen PVHVM 
drivers. I have not found a solution for the MSI problem yet; there 
are some posts about it.
- virtio-net used to work out of the box, but with recent qemu versions 
it is broken due to a qemu regression. I narrowed it down with bisect 
to one commit between 4 Jul 2013 and 22 Jul 2013, but I was unable to 
find the exact commit because there are other critical problems with 
xen in that range.
- I have not tested virtio-disk and I don't know whether it works with 
recent xen and qemu versions.
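For a guest that hits the MSI problem above, pci=nomsi goes on the domU
kernel command line; for a PV or direct-kernel-boot guest that can be done
via the domain config (a sketch; the root device is a placeholder):

```
# Domain config fragment: append pci=nomsi to the guest kernel
# command line so virtio-serial works on >=3.2 kernels.
extra = "root=/dev/xvda2 ro pci=nomsi"
```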

> DPDK doesn't seem to be tightly coupled with VirtIO, does it?
>
> Could you look at Xen's PV network protocol instead? VirtIO has no
> mainline support on Xen while Xen's PV protocol has been in mainline for
> years. And it's very likely to be enabled by default nowadays.
>
> Wei.
>
>> Regards
>> Peter


* Re: Virtio on Xen
  2014-02-10 11:19   ` Fabio Fantoni
@ 2014-02-10 18:07     ` Peter X. Gao
  2014-02-11 13:01       ` Ian Campbell
  2014-02-11 13:06       ` Application using hugepages crashes PV guest Ian Campbell
  0 siblings, 2 replies; 11+ messages in thread
From: Peter X. Gao @ 2014-02-10 18:07 UTC (permalink / raw)
  To: Fabio Fantoni; +Cc: Xen-devel, Wei Liu, xen-users



Thanks for your reply. I am now using virtio-net and it seems to be
working. However, Intel DPDK also requires hugepages. When a DPDK
application initializes hugepages, I get the following error. Do I need
to configure something in Xen to support hugepages?
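(For context, the in-guest hugepage setup DPDK expects is the standard
one; roughly the following, run as root inside the domU, with an
illustrative page count:)

```shell
# Reserve 2 MB hugepages and mount hugetlbfs -- the usual DPDK
# prerequisite inside the guest.
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
grep Huge /proc/meminfo   # check HugePages_Total / HugePages_Free
```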



[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.2.0-58-generic (buildd@allspice) (gcc
version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #88-Ubuntu SMP Tue Dec 3
17:37:58 UTC 2013 (Ubuntu 3.2.0-58.88-generic 3.2.53)
[    0.000000] Command line: root=/dev/xvda2 ro root=/dev/xvda2 ro
ip=:127.0.255.255::::eth0:dhcp iommu=soft
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   AMD AuthenticAMD
[    0.000000]   Centaur CentaurHauls
[    0.000000] ACPI in unprivileged domain disabled
[    0.000000] Released 0 pages of unused memory
[    0.000000] Set 0 page(s) to 1-1 mapping
[    0.000000] BIOS-provided physical RAM map:
[    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)
[    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)
[    0.000000]  Xen: 0000000000100000 - 0000000100800000 (usable)
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] DMI not present or invalid.
[    0.000000] No AGP bridge found
[    0.000000] last_pfn = 0x100800 max_arch_pfn = 0x400000000
[    0.000000] last_pfn = 0x100000 max_arch_pfn = 0x400000000
[    0.000000] init_memory_mapping: 0000000000000000-0000000100000000
[    0.000000] init_memory_mapping: 0000000100000000-0000000100800000
[    0.000000] RAMDISK: 02060000 - 045e3000
[    0.000000] NUMA turned off
[    0.000000] Faking a node at 0000000000000000-0000000100800000
[    0.000000] Initmem setup node 0 0000000000000000-0000000100800000
[    0.000000]   NODE_DATA [00000000ffff5000 - 00000000ffff9fff]
[    0.000000] Zone PFN ranges:
[    0.000000]   DMA      0x00000010 -> 0x00001000
[    0.000000]   DMA32    0x00001000 -> 0x00100000
[    0.000000]   Normal   0x00100000 -> 0x00100800
[    0.000000] Movable zone start PFN for each node
[    0.000000] early_node_map[2] active PFN ranges
[    0.000000]     0: 0x00000010 -> 0x000000a0
[    0.000000]     0: 0x00000100 -> 0x00100800
[    0.000000] SFI: Simple Firmware Interface v0.81
http://simplefirmware.org
[    0.000000] SMP: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] No local APIC present
[    0.000000] APIC: disable apic facility
[    0.000000] APIC: switched to apic NOOP
[    0.000000] PM: Registered nosave memory: 00000000000a0000 -
0000000000100000
[    0.000000] PCI: Warning: Cannot find a gap in the 32bit address range
[    0.000000] PCI: Unassigned devices with 32bit resource registers may
break!
[    0.000000] Allocating PCI resources starting at 100900000 (gap:
100900000:400000)
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.2.1 (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:256 nr_cpumask_bits:256 nr_cpu_ids:8
nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff8800ffc00000 s83136 r8192
d23360 u262144
[    0.000000] Built 1 zonelists in Node order, mobility grouping on.
 Total pages: 1032084
[    0.000000] Policy zone: Normal
[    0.000000] Kernel command line: root=/dev/xvda2 ro root=/dev/xvda2 ro
ip=:127.0.255.255::::eth0:dhcp iommu=soft
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    0.000000] Placing 64MB software IO TLB between ffff8800f7400000 -
ffff8800fb400000
[    0.000000] software IO TLB at phys 0xf7400000 - 0xfb400000
[    0.000000] Memory: 3988436k/4202496k available (6588k kernel code, 448k
absent, 213612k reserved, 6617k data, 924k init)
[    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0,
CPUs=8, Nodes=1
[    0.000000] Hierarchical RCU implementation.
[    0.000000] RCU dyntick-idle grace-period acceleration is enabled.
[    0.000000] NR_IRQS:16640 nr_irqs:336 16
[    0.000000] Console: colour dummy device 80x25
[    0.000000] console [tty0] enabled
[    0.000000] console [hvc0] enabled
[    0.000000] allocated 34603008 bytes of page_cgroup
[    0.000000] please try 'cgroup_disable=memory' option if you don't want
memory cgroups
[    0.000000] installing Xen timer for CPU 0
[    0.000000] Detected 2793.098 MHz processor.
[    0.004000] Calibrating delay loop (skipped), value calculated using
timer frequency.. 5586.19 BogoMIPS (lpj=11172392)
[    0.004000] pid_max: default: 32768 minimum: 301
[    0.004000] Security Framework initialized
[    0.004000] AppArmor: AppArmor initialized
[    0.004000] Yama: becoming mindful.
[    0.004000] Dentry cache hash table entries: 524288 (order: 10, 4194304
bytes)
[    0.004000] Inode-cache hash table entries: 262144 (order: 9, 2097152
bytes)
[    0.004000] Mount-cache hash table entries: 256
[    0.004000] Initializing cgroup subsys cpuacct
[    0.004000] Initializing cgroup subsys memory
[    0.004000] Initializing cgroup subsys devices
[    0.004000] Initializing cgroup subsys freezer
[    0.004000] Initializing cgroup subsys blkio
[    0.004000] Initializing cgroup subsys perf_event
[    0.004000] CPU: Physical Processor ID: 0
[    0.004000] CPU: Processor Core ID: 0
[    0.004000] SMP alternatives: switching to UP code
[    0.031040] ftrace: allocating 26602 entries in 105 pages
[    0.032055] cpu 0 spinlock event irq 17
[    0.032115] Performance Events: unsupported p6 CPU model 26 no PMU
driver, software events only.
[    0.032244] NMI watchdog disabled (cpu0): hardware events not enabled
[    0.032350] installing Xen timer for CPU 1
[    0.032363] cpu 1 spinlock event irq 23
[    0.032623] SMP alternatives: switching to SMP code
[    0.057953] NMI watchdog disabled (cpu1): hardware events not enabled
[    0.058085] installing Xen timer for CPU 2
[    0.058103] cpu 2 spinlock event irq 29
[    0.058542] NMI watchdog disabled (cpu2): hardware events not enabled
[    0.058696] installing Xen timer for CPU 3
[    0.058724] cpu 3 spinlock event irq 35
[    0.059115] NMI watchdog disabled (cpu3): hardware events not enabled
[    0.059227] installing Xen timer for CPU 4
[    0.059246] cpu 4 spinlock event irq 41
[    0.059423] NMI watchdog disabled (cpu4): hardware events not enabled
[    0.059544] installing Xen timer for CPU 5
[    0.059562] cpu 5 spinlock event irq 47
[    0.059724] NMI watchdog disabled (cpu5): hardware events not enabled
[    0.059833] installing Xen timer for CPU 6
[    0.059852] cpu 6 spinlock event irq 53
[    0.060003] NMI watchdog disabled (cpu6): hardware events not enabled
[    0.060037] installing Xen timer for CPU 7
[    0.060056] cpu 7 spinlock event irq 59
[    0.060209] NMI watchdog disabled (cpu7): hardware events not enabled
[    0.060243] Brought up 8 CPUs
[    0.060494] devtmpfs: initialized
[    0.061531] EVM: security.selinux
[    0.061537] EVM: security.SMACK64
[    0.061542] EVM: security.capability
[    0.061711] Grant table initialized
[    0.061711] print_constraints: dummy:
[    0.083057] RTC time: 165:165:165, date: 165/165/65
[    0.083093] NET: Registered protocol family 16
[    0.083159] Trying to unpack rootfs image as initramfs...
[    0.084665] PCI: setting up Xen PCI frontend stub
[    0.086003] bio: create slab <bio-0> at 0
[    0.086003] ACPI: Interpreter disabled.
[    0.086003] xen/balloon: Initialising balloon driver.
[    0.088136] xen-balloon: Initialising balloon driver.
[    0.088139] vgaarb: loaded
[    0.088184] i2c-core: driver [aat2870] using legacy suspend method
[    0.088192] i2c-core: driver [aat2870] using legacy resume method
[    0.088283] SCSI subsystem initialized
[    0.088341] usbcore: registered new interface driver usbfs
[    0.088341] usbcore: registered new interface driver hub
[    0.088341] usbcore: registered new device driver usb
[    0.088341] PCI: System does not support PCI
[    0.088341] PCI: System does not support PCI
[    0.088341] NetLabel: Initializing
[    0.088341] NetLabel:  domain hash size = 128
[    0.184026] NetLabel:  protocols = UNLABELED CIPSOv4
[    0.184051] NetLabel:  unlabeled traffic allowed by default
[    0.184159] Switching to clocksource xen
[    0.188203] Freeing initrd memory: 38412k freed
[    0.202280] AppArmor: AppArmor Filesystem Enabled
[    0.202308] pnp: PnP ACPI: disabled
[    0.205341] NET: Registered protocol family 2
[    0.205661] IP route cache hash table entries: 131072 (order: 8, 1048576
bytes)
[    0.207989] TCP established hash table entries: 524288 (order: 11,
8388608 bytes)
[    0.209497] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
[    0.209644] TCP: Hash tables configured (established 524288 bind 65536)
[    0.209650] TCP reno registered
[    0.209674] UDP hash table entries: 2048 (order: 4, 65536 bytes)
[    0.209704] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
[    0.209817] NET: Registered protocol family 1
[    0.210139] platform rtc_cmos: registered platform RTC device (no PNP
device found)
[    0.211002] audit: initializing netlink socket (disabled)
[    0.211015] type=2000 audit(1392055157.599:1): initialized
[    0.229178] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[    0.230818] VFS: Disk quotas dquot_6.5.2
[    0.230873] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.231462] fuse init (API version 7.17)
[    0.231605] msgmni has been set to 7864
[    0.232267] Block layer SCSI generic (bsg) driver version 0.4 loaded
(major 253)
[    0.232382] io scheduler noop registered
[    0.232417] io scheduler deadline registered
[    0.232449] io scheduler cfq registered (default)
[    0.232511] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[    0.232529] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[    0.233195] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[    0.437179] Linux agpgart interface v0.103
[    0.439329] brd: module loaded
[    0.440557] loop: module loaded
[    0.442439] blkfront device/vbd/51714 num-ring-pages 1 nr_ents 32.
[    0.445706] blkfront device/vbd/51713 num-ring-pages 1 nr_ents 32.
[    0.447233] blkfront: xvda2: flush diskcache: enabled
[    0.447810] Fixed MDIO Bus: probed
[    0.447856] tun: Universal TUN/TAP device driver, 1.6
[    0.447864] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
[    0.447945] PPP generic driver version 2.4.2
[    0.448029] Initialising Xen virtual ethernet driver.
[    0.453923] blkfront: xvda1: flush diskcache: enabled
[    0.455000] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    0.455031] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[    0.455048] uhci_hcd: USB Universal Host Controller Interface driver
[    0.455100] usbcore: registered new interface driver libusual
[    0.455134] i8042: PNP: No PS/2 controller found. Probing ports directly.
[    1.455791] i8042: No controller found
[    1.456071] mousedev: PS/2 mouse device common for all mice
[    1.496241] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0
[    1.496489] rtc_cmos: probe of rtc_cmos failed with error -38
[    1.496624] device-mapper: uevent: version 1.0.3
...............
...............
...............
...............


[  135.957086] BUG: unable to handle kernel paging request at
ffff8800f36c0960
[  135.957105] IP: [<ffffffff81008efe>] xen_set_pte_at+0x3e/0x210
[  135.957122] PGD 1c06067 PUD dd1067 PMD f6d067 PTE 80100000f36c0065
[  135.957134] Oops: 0003 [#1] SMP
[  135.957141] CPU 0
[  135.957144] Modules linked in: igb_uio(O) uio
[  135.957155]
[  135.957160] Pid: 659, comm: helloworld Tainted: G           O
3.2.0-58-generic #88-Ubuntu
[  135.957171] RIP: e030:[<ffffffff81008efe>]  [<ffffffff81008efe>]
xen_set_pte_at+0x3e/0x210
[  135.957183] RSP: e02b:ffff8800037ddc88  EFLAGS: 00010297
[  135.957189] RAX: 0000000000000000 RBX: 800000008c6000e7 RCX:
800000008c6000e7
[  135.957197] RDX: 0000000000000000 RSI: 00007f4a65800000 RDI:
ffff880003044980
[  135.957205] RBP: ffff8800037ddcd8 R08: 0000000000000000 R09:
dead000000100100
[  135.957212] R10: dead000000200200 R11: 00007f4a64f7e02a R12:
ffffea0003c48000
[  135.957220] R13: 800000008c6000e7 R14: ffff8800f36c0960 R15:
0000000000000001
[  135.957232] FS:  00007f4a656e8800(0000) GS:ffff8800ffc00000(0000)
knlGS:0000000000000000
[  135.957241] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[  135.957247] CR2: ffff8800f36c0960 CR3: 0000000002d08000 CR4:
0000000000002660
[  135.957255] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[  135.957263] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
0000000000000400
[  135.957271] Process helloworld (pid: 659, threadinfo ffff8800037dc000,
task ffff8800034c1700)
[  135.957279] Stack:
[  135.957283]  00007f4a65800000 ffff880003044980 dead000000200200
dead000000100100
[  135.957297]  0000000000000000 0000000000000000 ffffea0003c48000
800000008c6000e7
[  135.957310]  ffff8800030449ec 0000000000000001 ffff8800037ddd68
ffffffff81158453
[  135.957322] Call Trace:
[  135.957333]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
[  135.957342]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
[  135.957351]  [<ffffffff810053b5>] ?
__raw_callee_save_xen_pud_val+0x11/0x1e
[  135.957361]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
[  135.957370]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
[  135.957378]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
[  135.957391]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
[  135.957399]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
[  135.957408]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
[  135.957417]  [<ffffffff816624f5>] page_fault+0x25/0x30
[  135.957424] Code: e8 4c 89 75 f0 4c 89 7d f8 66 66 66 66 90 48 89 7d b8
48 89 75 b0 49 89 d6 48 89 cb 66 66 66 66 90 e8 77 5a 03 00 83 f8 01 74 75
<49> 89 1e 48 8b 5d d8 4c 8b 65 e0 4c 8b 6d e8 4c 8b 75 f0 4c 8b
[  135.957507] RIP  [<ffffffff81008efe>] xen_set_pte_at+0x3e/0x210
[  135.957517]  RSP <ffff8800037ddc88>
[  135.957521] CR2: ffff8800f36c0960
[  135.957528] ---[ end trace f6a013072f2aee83 ]---
[  160.032062] BUG: soft lockup - CPU#0 stuck for 23s! [helloworld:659]
[  160.032129] Modules linked in: igb_uio(O) uio
[  160.032140] CPU 0
[  160.032143] Modules linked in: igb_uio(O) uio
[  160.032153]
[  160.032159] Pid: 659, comm: helloworld Tainted: G      D    O
3.2.0-58-generic #88-Ubuntu
[  160.032170] RIP: e030:[<ffffffff810013aa>]  [<ffffffff810013aa>]
hypercall_page+0x3aa/0x1000
[  160.032190] RSP: e02b:ffff8800037dd730  EFLAGS: 00000202
[  160.032197] RAX: 0000000000000000 RBX: 0000000000000000 RCX:
ffffffff810013aa
[  160.032204] RDX: 0000000000000000 RSI: ffff8800037dd748 RDI:
0000000000000003
[  160.032212] RBP: ffff8800037dd778 R08: ffff8800f7008000 R09:
ffff8800f6c000a0
[  160.032220] R10: 0000000000007ff0 R11: 0000000000000202 R12:
0000000000000011
[  160.032227] R13: 0000000000000201 R14: ffff880003044901 R15:
ffff880003044900
[  160.032239] FS:  00007f4a656e8800(0000) GS:ffff8800ffc00000(0000)
knlGS:0000000000000000
[  160.032248] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[  160.032255] CR2: ffff8800f36c0960 CR3: 0000000001c05000 CR4:
0000000000002660
[  160.032263] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[  160.032271] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
0000000000000400
[  160.032279] Process helloworld (pid: 659, threadinfo ffff8800037dc000,
task ffff8800034c1700)
[  160.032287] Stack:
[  160.032291]  0000000000000011 00000000fffffffa ffffffff813adade
ffff8800037dd764
[  160.032304]  ffffffff00000001 0000000000000000 00000004813ad17e
ffff8800037dd778
[  160.032317]  ffff8800030449ec ffff8800037dd788 ffffffff813af5e0
ffff8800037dd7d8
[  160.032329] Call Trace:
[  160.032341]  [<ffffffff813adade>] ? xen_poll_irq_timeout+0x3e/0x50
[  160.032350]  [<ffffffff813af5e0>] xen_poll_irq+0x10/0x20
[  160.032360]  [<ffffffff81646686>] xen_spin_lock_slow+0x98/0xf4
[  160.032370]  [<ffffffff810124ba>] xen_spin_lock+0x4a/0x50
[  160.032381]  [<ffffffff81661d8e>] _raw_spin_lock+0xe/0x20
[  160.032390]  [<ffffffff81007d9a>] xen_exit_mmap+0x2a/0x60
[  160.032400]  [<ffffffff81146408>] exit_mmap+0x58/0x140
[  160.032408]  [<ffffffff8166275a>] ? error_exit+0x2a/0x60
[  160.032416]  [<ffffffff8166227c>] ? retint_restore_args+0x5/0x6
[  160.032425]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032433]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032442]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032452]  [<ffffffff81065e22>] mmput.part.16+0x42/0x130
[  160.032460]  [<ffffffff81065f39>] mmput+0x29/0x30
[  160.032470]  [<ffffffff8106c943>] exit_mm+0x113/0x130
[  160.032479]  [<ffffffff810e58c5>] ? taskstats_exit+0x45/0x240
[  160.032488]  [<ffffffff81662075>] ? _raw_spin_lock_irq+0x15/0x20
[  160.032496]  [<ffffffff8106cace>] do_exit+0x16e/0x450
[  160.032504]  [<ffffffff81662f20>] oops_end+0xb0/0xf0
[  160.032513]  [<ffffffff8164812f>] no_context+0x150/0x15d
[  160.032520]  [<ffffffff81648307>] __bad_area_nosemaphore+0x1cb/0x1ea
[  160.032529]  [<ffffffff816622ad>] ? restore_args+0x30/0x30
[  160.032537]  [<ffffffff8164795b>] ? pte_offset_kernel+0xe/0x37
[  160.032545]  [<ffffffff81648339>] bad_area_nosemaphore+0x13/0x15
[  160.032555]  [<ffffffff81665bab>] do_page_fault+0x46b/0x540
[  160.032564]  [<ffffffff8115c3f8>] ? mpol_shared_policy_init+0x48/0x160
[  160.032575]  [<ffffffff811667bd>] ? kmem_cache_alloc+0x11d/0x140
[  160.032588]  [<ffffffff8126d5fb>] ? hugetlbfs_alloc_inode+0x5b/0xa0
[  160.032597]  [<ffffffff816624f5>] page_fault+0x25/0x30
[  160.032605]  [<ffffffff81008efe>] ? xen_set_pte_at+0x3e/0x210
[  160.032613]  [<ffffffff81008ef9>] ? xen_set_pte_at+0x39/0x210
[  160.032622]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
[  160.032630]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
[  160.032638]  [<ffffffff810053b5>] ?
__raw_callee_save_xen_pud_val+0x11/0x1e
[  160.032648]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
[  160.032656]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
[  160.032664]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
[  160.032673]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
[  160.032681]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
[  160.032689]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
[  160.032697]  [<ffffffff816624f5>] page_fault+0x25/0x30
[  160.032703] Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc cc
cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00 0f 05
<41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
[  160.032781] Call Trace:
[  160.032787]  [<ffffffff813adade>] ? xen_poll_irq_timeout+0x3e/0x50
[  160.032795]  [<ffffffff813af5e0>] xen_poll_irq+0x10/0x20
[  160.032803]  [<ffffffff81646686>] xen_spin_lock_slow+0x98/0xf4
[  160.032811]  [<ffffffff810124ba>] xen_spin_lock+0x4a/0x50
[  160.032818]  [<ffffffff81661d8e>] _raw_spin_lock+0xe/0x20
[  160.032826]  [<ffffffff81007d9a>] xen_exit_mmap+0x2a/0x60
[  160.032833]  [<ffffffff81146408>] exit_mmap+0x58/0x140
[  160.032841]  [<ffffffff8166275a>] ? error_exit+0x2a/0x60
[  160.032849]  [<ffffffff8166227c>] ? retint_restore_args+0x5/0x6
[  160.032857]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032866]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032874]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032882]  [<ffffffff81065e22>] mmput.part.16+0x42/0x130
[  160.032889]  [<ffffffff81065f39>] mmput+0x29/0x30
[  160.032896]  [<ffffffff8106c943>] exit_mm+0x113/0x130
[  160.032904]  [<ffffffff810e58c5>] ? taskstats_exit+0x45/0x240
[  160.032912]  [<ffffffff81662075>] ? _raw_spin_lock_irq+0x15/0x20
[  160.032920]  [<ffffffff8106cace>] do_exit+0x16e/0x450
[  160.032928]  [<ffffffff81662f20>] oops_end+0xb0/0xf0
[  160.032935]  [<ffffffff8164812f>] no_context+0x150/0x15d
[  160.032943]  [<ffffffff81648307>] __bad_area_nosemaphore+0x1cb/0x1ea
[  160.032951]  [<ffffffff816622ad>] ? restore_args+0x30/0x30
[  160.032959]  [<ffffffff8164795b>] ? pte_offset_kernel+0xe/0x37
[  160.032967]  [<ffffffff81648339>] bad_area_nosemaphore+0x13/0x15
[  160.032975]  [<ffffffff81665bab>] do_page_fault+0x46b/0x540
[  160.036054]  [<ffffffff8115c3f8>] ? mpol_shared_policy_init+0x48/0x160
[  160.036054]  [<ffffffff811667bd>] ? kmem_cache_alloc+0x11d/0x140
[  160.036054]  [<ffffffff8126d5fb>] ? hugetlbfs_alloc_inode+0x5b/0xa0
[  160.036054]  [<ffffffff816624f5>] page_fault+0x25/0x30
[  160.036054]  [<ffffffff81008efe>] ? xen_set_pte_at+0x3e/0x210
[  160.036054]  [<ffffffff81008ef9>] ? xen_set_pte_at+0x39/0x210
[  160.036054]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
[  160.036054]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
[  160.036054]  [<ffffffff810053b5>] ?
__raw_callee_save_xen_pud_val+0x11/0x1e
[  160.036054]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
[  160.036054]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
[  160.036054]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
[  160.036054]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
[  160.036054]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
[  160.036054]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
[  160.036054]  [<ffffffff816624f5>] page_fault+0x25/0x30



On Mon, Feb 10, 2014 at 3:19 AM, Fabio Fantoni <fabio.fantoni@m2r.biz> wrote:

> On 10/02/2014 11:42, Wei Liu wrote:
>
>  On Fri, Feb 07, 2014 at 01:19:45PM -0800, Peter X. Gao wrote:
>>
>>> Hi,
>>>
>>>         I am new to Xen and I am trying to run Intel DPDK inside a domU
>>> with
>>> virtio on Xen 4.2. Is it possible to do this?
>>>
>>>
> Based on my tests with virtio:
> - virtio-serial seems to work out of the box with Windows domUs, also with
> the Xen PV drivers. On Linux domUs it also works out of the box with old
> kernels (tested 2.6.32), but newer kernels (tested >=3.2) require pci=nomsi
> to work correctly; it also works with the Xen PVHVM drivers. I have not
> found a solution for the MSI problem yet; there are some posts about it.
> - virtio-net used to work out of the box, but with recent qemu versions it
> is broken due to a qemu regression. I narrowed it down with bisect to one
> commit between 4 Jul 2013 and 22 Jul 2013, but I was unable to find the
> exact commit because there are other critical problems with xen in that
> range.
> - I have not tested virtio-disk and I don't know whether it works with
> recent xen and qemu versions.
>
>
>  DPDK doesn't seem to be tightly coupled with VirtIO, does it?
>>
>> Could you look at Xen's PV network protocol instead? VirtIO has no
>> mainline support on Xen while Xen's PV protocol has been in mainline for
>> years. And it's very likely to be enabled by default nowadays.
>>
>> Wei.
>>
>>  Regards
>>> Peter




* Re: Virtio on Xen
  2014-02-10 18:07     ` Peter X. Gao
@ 2014-02-11 13:01       ` Ian Campbell
  2014-03-05 18:02         ` Konrad Rzeszutek Wilk
  2014-02-11 13:06       ` Application using hugepages crashes PV guest Ian Campbell
  1 sibling, 1 reply; 11+ messages in thread
From: Ian Campbell @ 2014-02-11 13:01 UTC (permalink / raw)
  To: Peter X. Gao; +Cc: Xen-devel, Fabio Fantoni, Wei Liu, xen-users

On Mon, 2014-02-10 at 10:07 -0800, Peter X. Gao wrote:
> Thanks for your reply. I am now using virtio-net and it seems to be
> working. However, Intel DPDK also requires hugepages. When a DPDK
> application initializes hugepages, I get the following error. Do I need
> to configure something in Xen to support hugepages?

I'm not sure about the status of superpage support in mainline kernels
for PV Xen guests. IIRC there was a requirement to add a Xen command
line flag to enable it at the hypervisor level.

Or you could just use an HVM guest, since no special support is needed
for hugepages there.
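A minimal HVM guest needs only builder= set in the domain config; a sketch
with placeholder paths and sizes:

```
# Hypothetical xl config fragment for an HVM guest, where hugepages
# inside the guest need no special Xen support:
builder = "hvm"
memory  = 4096
vcpus   = 4
disk    = [ "file:/var/images/guest.img,hda,w" ]
vif     = [ "bridge=xenbr0" ]
boot    = "c"
```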

But maybe I'm confused, because your use of virtio-net would necessarily
require an HVM rather than a PV guest.

But then looking at your logs I see Xen PV block and net but no sign of
virtio -- so I suspect you are actually doing PV and not using
virtio-net at all.
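The distinction is visible in the guest's boot log; for instance (a
sketch):

```shell
# A PV guest shows the Xen frontends, e.g. "blkfront ..." and
# "Initialising Xen virtual ethernet driver."; a guest really using
# virtio would instead show virtio/virtio-pci probe messages.
dmesg | grep -iE 'virtio|blkfront|netfront'
```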

Ian.


* Application using hugepages crashes PV guest
  2014-02-10 18:07     ` Peter X. Gao
  2014-02-11 13:01       ` Ian Campbell
@ 2014-02-11 13:06       ` Ian Campbell
  1 sibling, 0 replies; 11+ messages in thread
From: Ian Campbell @ 2014-02-11 13:06 UTC (permalink / raw)
  To: Peter X. Gao
  Cc: Wei Liu, Fabio Fantoni, David Vrabel, Xen-devel, Boris Ostrovsky

On Mon, 2014-02-10 at 10:07 -0800, Peter X. Gao wrote:
> Thanks for your reply. I am now using virtio-net and it seems to be
> working. However, Intel DPDK also requires hugepages. When a DPDK
> application initializes hugepages, I get the following error. Do I need
> to configure something in Xen to support hugepages?

Retitling the thread and adding the Linux Xen maintainers, since
regardless of whether it is supported trying to use hugepages shouldn't
crash the kernel.

That said, this is with 3.2.0 so it may well be fixed already. Are you
able to try a more recent mainline kernel?

> 
> 
> 
> 
> [    0.000000] Initializing cgroup subsys cpuset
> [    0.000000] Initializing cgroup subsys cpu
> [    0.000000] Linux version 3.2.0-58-generic (buildd@allspice) (gcc
> version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #88-Ubuntu SMP Tue Dec
> 3 17:37:58 UTC 2013 (Ubuntu 3.2.0-58.88-generic 3.2.53)
> [    0.000000] Command line: root=/dev/xvda2 ro root=/dev/xvda2 ro
> ip=:127.0.255.255::::eth0:dhcp iommu=soft
> [    0.000000] KERNEL supported cpus:
> [    0.000000]   Intel GenuineIntel
> [    0.000000]   AMD AuthenticAMD
> [    0.000000]   Centaur CentaurHauls
> [    0.000000] ACPI in unprivileged domain disabled
> [    0.000000] Released 0 pages of unused memory
> [    0.000000] Set 0 page(s) to 1-1 mapping
> [    0.000000] BIOS-provided physical RAM map:
> [    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)
> [    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)
> [    0.000000]  Xen: 0000000000100000 - 0000000100800000 (usable)
> [    0.000000] NX (Execute Disable) protection: active
> [    0.000000] DMI not present or invalid.
> [    0.000000] No AGP bridge found
> [    0.000000] last_pfn = 0x100800 max_arch_pfn = 0x400000000
> [    0.000000] last_pfn = 0x100000 max_arch_pfn = 0x400000000
> [    0.000000] init_memory_mapping: 0000000000000000-0000000100000000
> [    0.000000] init_memory_mapping: 0000000100000000-0000000100800000
> [    0.000000] RAMDISK: 02060000 - 045e3000
> [    0.000000] NUMA turned off
> [    0.000000] Faking a node at 0000000000000000-0000000100800000
> [    0.000000] Initmem setup node 0 0000000000000000-0000000100800000
> [    0.000000]   NODE_DATA [00000000ffff5000 - 00000000ffff9fff]
> [    0.000000] Zone PFN ranges:
> [    0.000000]   DMA      0x00000010 -> 0x00001000
> [    0.000000]   DMA32    0x00001000 -> 0x00100000
> [    0.000000]   Normal   0x00100000 -> 0x00100800
> [    0.000000] Movable zone start PFN for each node
> [    0.000000] early_node_map[2] active PFN ranges
> [    0.000000]     0: 0x00000010 -> 0x000000a0
> [    0.000000]     0: 0x00000100 -> 0x00100800
> [    0.000000] SFI: Simple Firmware Interface v0.81
> http://simplefirmware.org
> [    0.000000] SMP: Allowing 8 CPUs, 0 hotplug CPUs
> [    0.000000] No local APIC present
> [    0.000000] APIC: disable apic facility
> [    0.000000] APIC: switched to apic NOOP
> [    0.000000] PM: Registered nosave memory: 00000000000a0000 -
> 0000000000100000
> [    0.000000] PCI: Warning: Cannot find a gap in the 32bit address
> range
> [    0.000000] PCI: Unassigned devices with 32bit resource registers
> may break!
> [    0.000000] Allocating PCI resources starting at 100900000 (gap:
> 100900000:400000)
> [    0.000000] Booting paravirtualized kernel on Xen
> [    0.000000] Xen version: 4.2.1 (preserve-AD)
> [    0.000000] setup_percpu: NR_CPUS:256 nr_cpumask_bits:256
> nr_cpu_ids:8 nr_node_ids:1
> [    0.000000] PERCPU: Embedded 28 pages/cpu @ffff8800ffc00000 s83136
> r8192 d23360 u262144
> [    0.000000] Built 1 zonelists in Node order, mobility grouping on.
>  Total pages: 1032084
> [    0.000000] Policy zone: Normal
> [    0.000000] Kernel command line: root=/dev/xvda2 ro root=/dev/xvda2
> ro ip=:127.0.255.255::::eth0:dhcp iommu=soft
> [    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
> [    0.000000] Placing 64MB software IO TLB between ffff8800f7400000 -
> ffff8800fb400000
> [    0.000000] software IO TLB at phys 0xf7400000 - 0xfb400000
> [    0.000000] Memory: 3988436k/4202496k available (6588k kernel code,
> 448k absent, 213612k reserved, 6617k data, 924k init)
> [    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0,
> CPUs=8, Nodes=1
> [    0.000000] Hierarchical RCU implementation.
> [    0.000000] RCU dyntick-idle grace-period acceleration is enabled.
> [    0.000000] NR_IRQS:16640 nr_irqs:336 16
> [    0.000000] Console: colour dummy device 80x25
> [    0.000000] console [tty0] enabled
> [    0.000000] console [hvc0] enabled
> [    0.000000] allocated 34603008 bytes of page_cgroup
> [    0.000000] please try 'cgroup_disable=memory' option if you don't
> want memory cgroups
> [    0.000000] installing Xen timer for CPU 0
> [    0.000000] Detected 2793.098 MHz processor.
> [    0.004000] Calibrating delay loop (skipped), value calculated
> using timer frequency.. 5586.19 BogoMIPS (lpj=11172392)
> [    0.004000] pid_max: default: 32768 minimum: 301
> [    0.004000] Security Framework initialized
> [    0.004000] AppArmor: AppArmor initialized
> [    0.004000] Yama: becoming mindful.
> [    0.004000] Dentry cache hash table entries: 524288 (order: 10,
> 4194304 bytes)
> [    0.004000] Inode-cache hash table entries: 262144 (order: 9,
> 2097152 bytes)
> [    0.004000] Mount-cache hash table entries: 256
> [    0.004000] Initializing cgroup subsys cpuacct
> [    0.004000] Initializing cgroup subsys memory
> [    0.004000] Initializing cgroup subsys devices
> [    0.004000] Initializing cgroup subsys freezer
> [    0.004000] Initializing cgroup subsys blkio
> [    0.004000] Initializing cgroup subsys perf_event
> [    0.004000] CPU: Physical Processor ID: 0
> [    0.004000] CPU: Processor Core ID: 0
> [    0.004000] SMP alternatives: switching to UP code
> [    0.031040] ftrace: allocating 26602 entries in 105 pages
> [    0.032055] cpu 0 spinlock event irq 17
> [    0.032115] Performance Events: unsupported p6 CPU model 26 no PMU
> driver, software events only.
> [    0.032244] NMI watchdog disabled (cpu0): hardware events not
> enabled
> [    0.032350] installing Xen timer for CPU 1
> [    0.032363] cpu 1 spinlock event irq 23
> [    0.032623] SMP alternatives: switching to SMP code
> [    0.057953] NMI watchdog disabled (cpu1): hardware events not
> enabled
> [    0.058085] installing Xen timer for CPU 2
> [    0.058103] cpu 2 spinlock event irq 29
> [    0.058542] NMI watchdog disabled (cpu2): hardware events not
> enabled
> [    0.058696] installing Xen timer for CPU 3
> [    0.058724] cpu 3 spinlock event irq 35
> [    0.059115] NMI watchdog disabled (cpu3): hardware events not
> enabled
> [    0.059227] installing Xen timer for CPU 4
> [    0.059246] cpu 4 spinlock event irq 41
> [    0.059423] NMI watchdog disabled (cpu4): hardware events not
> enabled
> [    0.059544] installing Xen timer for CPU 5
> [    0.059562] cpu 5 spinlock event irq 47
> [    0.059724] NMI watchdog disabled (cpu5): hardware events not
> enabled
> [    0.059833] installing Xen timer for CPU 6
> [    0.059852] cpu 6 spinlock event irq 53
> [    0.060003] NMI watchdog disabled (cpu6): hardware events not
> enabled
> [    0.060037] installing Xen timer for CPU 7
> [    0.060056] cpu 7 spinlock event irq 59
> [    0.060209] NMI watchdog disabled (cpu7): hardware events not
> enabled
> [    0.060243] Brought up 8 CPUs
> [    0.060494] devtmpfs: initialized
> [    0.061531] EVM: security.selinux
> [    0.061537] EVM: security.SMACK64
> [    0.061542] EVM: security.capability
> [    0.061711] Grant table initialized
> [    0.061711] print_constraints: dummy: 
> [    0.083057] RTC time: 165:165:165, date: 165/165/65
> [    0.083093] NET: Registered protocol family 16
> [    0.083159] Trying to unpack rootfs image as initramfs...
> [    0.084665] PCI: setting up Xen PCI frontend stub
> [    0.086003] bio: create slab <bio-0> at 0
> [    0.086003] ACPI: Interpreter disabled.
> [    0.086003] xen/balloon: Initialising balloon driver.
> [    0.088136] xen-balloon: Initialising balloon driver.
> [    0.088139] vgaarb: loaded
> [    0.088184] i2c-core: driver [aat2870] using legacy suspend method
> [    0.088192] i2c-core: driver [aat2870] using legacy resume method
> [    0.088283] SCSI subsystem initialized
> [    0.088341] usbcore: registered new interface driver usbfs
> [    0.088341] usbcore: registered new interface driver hub
> [    0.088341] usbcore: registered new device driver usb
> [    0.088341] PCI: System does not support PCI
> [    0.088341] PCI: System does not support PCI
> [    0.088341] NetLabel: Initializing
> [    0.088341] NetLabel:  domain hash size = 128
> [    0.184026] NetLabel:  protocols = UNLABELED CIPSOv4
> [    0.184051] NetLabel:  unlabeled traffic allowed by default
> [    0.184159] Switching to clocksource xen
> [    0.188203] Freeing initrd memory: 38412k freed
> [    0.202280] AppArmor: AppArmor Filesystem Enabled
> [    0.202308] pnp: PnP ACPI: disabled
> [    0.205341] NET: Registered protocol family 2
> [    0.205661] IP route cache hash table entries: 131072 (order: 8,
> 1048576 bytes)
> [    0.207989] TCP established hash table entries: 524288 (order: 11,
> 8388608 bytes)
> [    0.209497] TCP bind hash table entries: 65536 (order: 8, 1048576
> bytes)
> [    0.209644] TCP: Hash tables configured (established 524288 bind
> 65536)
> [    0.209650] TCP reno registered
> [    0.209674] UDP hash table entries: 2048 (order: 4, 65536 bytes)
> [    0.209704] UDP-Lite hash table entries: 2048 (order: 4, 65536
> bytes)
> [    0.209817] NET: Registered protocol family 1
> [    0.210139] platform rtc_cmos: registered platform RTC device (no
> PNP device found)
> [    0.211002] audit: initializing netlink socket (disabled)
> [    0.211015] type=2000 audit(1392055157.599:1): initialized
> [    0.229178] HugeTLB registered 2 MB page size, pre-allocated 0
> pages
> [    0.230818] VFS: Disk quotas dquot_6.5.2
> [    0.230873] Dquot-cache hash table entries: 512 (order 0, 4096
> bytes)
> [    0.231462] fuse init (API version 7.17)
> [    0.231605] msgmni has been set to 7864
> [    0.232267] Block layer SCSI generic (bsg) driver version 0.4
> loaded (major 253)
> [    0.232382] io scheduler noop registered
> [    0.232417] io scheduler deadline registered
> [    0.232449] io scheduler cfq registered (default)
> [    0.232511] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
> [    0.232529] pciehp: PCI Express Hot Plug Controller Driver version:
> 0.4
> [    0.233195] Serial: 8250/16550 driver, 32 ports, IRQ sharing
> enabled
> [    0.437179] Linux agpgart interface v0.103
> [    0.439329] brd: module loaded
> [    0.440557] loop: module loaded
> [    0.442439] blkfront device/vbd/51714 num-ring-pages 1 nr_ents 32.
> [    0.445706] blkfront device/vbd/51713 num-ring-pages 1 nr_ents 32.
> [    0.447233] blkfront: xvda2: flush diskcache: enabled
> [    0.447810] Fixed MDIO Bus: probed
> [    0.447856] tun: Universal TUN/TAP device driver, 1.6
> [    0.447864] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
> [    0.447945] PPP generic driver version 2.4.2
> [    0.448029] Initialising Xen virtual ethernet driver.
> [    0.453923] blkfront: xvda1: flush diskcache: enabled
> [    0.455000] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI)
> Driver
> [    0.455031] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
> [    0.455048] uhci_hcd: USB Universal Host Controller Interface
> driver
> [    0.455100] usbcore: registered new interface driver libusual
> [    0.455134] i8042: PNP: No PS/2 controller found. Probing ports
> directly.
> [    1.455791] i8042: No controller found
> [    1.456071] mousedev: PS/2 mouse device common for all mice
> [    1.496241] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as
> rtc0
> [    1.496489] rtc_cmos: probe of rtc_cmos failed with error -38
> [    1.496624] device-mapper: uevent: version 1.0.3
> ...............
> ...............
> ...............
> ...............
> 
> 
> 
> 
> [  135.957086] BUG: unable to handle kernel paging request at
> ffff8800f36c0960
> [  135.957105] IP: [<ffffffff81008efe>] xen_set_pte_at+0x3e/0x210
> [  135.957122] PGD 1c06067 PUD dd1067 PMD f6d067 PTE 80100000f36c0065
> [  135.957134] Oops: 0003 [#1] SMP 
> [  135.957141] CPU 0 
> [  135.957144] Modules linked in: igb_uio(O) uio
> [  135.957155] 
> [  135.957160] Pid: 659, comm: helloworld Tainted: G           O
> 3.2.0-58-generic #88-Ubuntu  
> [  135.957171] RIP: e030:[<ffffffff81008efe>]  [<ffffffff81008efe>]
> xen_set_pte_at+0x3e/0x210
> [  135.957183] RSP: e02b:ffff8800037ddc88  EFLAGS: 00010297
> [  135.957189] RAX: 0000000000000000 RBX: 800000008c6000e7 RCX:
> 800000008c6000e7
> [  135.957197] RDX: 0000000000000000 RSI: 00007f4a65800000 RDI:
> ffff880003044980
> [  135.957205] RBP: ffff8800037ddcd8 R08: 0000000000000000 R09:
> dead000000100100
> [  135.957212] R10: dead000000200200 R11: 00007f4a64f7e02a R12:
> ffffea0003c48000
> [  135.957220] R13: 800000008c6000e7 R14: ffff8800f36c0960 R15:
> 0000000000000001
> [  135.957232] FS:  00007f4a656e8800(0000) GS:ffff8800ffc00000(0000)
> knlGS:0000000000000000
> [  135.957241] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> [  135.957247] CR2: ffff8800f36c0960 CR3: 0000000002d08000 CR4:
> 0000000000002660
> [  135.957255] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [  135.957263] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
> 0000000000000400
> [  135.957271] Process helloworld (pid: 659, threadinfo
> ffff8800037dc000, task ffff8800034c1700)
> [  135.957279] Stack:
> [  135.957283]  00007f4a65800000 ffff880003044980 dead000000200200
> dead000000100100
> [  135.957297]  0000000000000000 0000000000000000 ffffea0003c48000
> 800000008c6000e7
> [  135.957310]  ffff8800030449ec 0000000000000001 ffff8800037ddd68
> ffffffff81158453
> [  135.957322] Call Trace:
> [  135.957333]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
> [  135.957342]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
> [  135.957351]  [<ffffffff810053b5>] ? __raw_callee_save_xen_pud_val
> +0x11/0x1e
> [  135.957361]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
> [  135.957370]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
> [  135.957378]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
> [  135.957391]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
> [  135.957399]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
> [  135.957408]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
> [  135.957417]  [<ffffffff816624f5>] page_fault+0x25/0x30
> [  135.957424] Code: e8 4c 89 75 f0 4c 89 7d f8 66 66 66 66 90 48 89
> 7d b8 48 89 75 b0 49 89 d6 48 89 cb 66 66 66 66 90 e8 77 5a 03 00 83
> f8 01 74 75 <49> 89 1e 48 8b 5d d8 4c 8b 65 e0 4c 8b 6d e8 4c 8b 75 f0
> 4c 8b 
> [  135.957507] RIP  [<ffffffff81008efe>] xen_set_pte_at+0x3e/0x210
> [  135.957517]  RSP <ffff8800037ddc88>
> [  135.957521] CR2: ffff8800f36c0960
> [  135.957528] ---[ end trace f6a013072f2aee83 ]---
> [  160.032062] BUG: soft lockup - CPU#0 stuck for 23s!
> [helloworld:659]
> [  160.032129] Modules linked in: igb_uio(O) uio
> [  160.032140] CPU 0 
> [  160.032143] Modules linked in: igb_uio(O) uio
> [  160.032153] 
> [  160.032159] Pid: 659, comm: helloworld Tainted: G      D    O
> 3.2.0-58-generic #88-Ubuntu  
> [  160.032170] RIP: e030:[<ffffffff810013aa>]  [<ffffffff810013aa>]
> hypercall_page+0x3aa/0x1000
> [  160.032190] RSP: e02b:ffff8800037dd730  EFLAGS: 00000202
> [  160.032197] RAX: 0000000000000000 RBX: 0000000000000000 RCX:
> ffffffff810013aa
> [  160.032204] RDX: 0000000000000000 RSI: ffff8800037dd748 RDI:
> 0000000000000003
> [  160.032212] RBP: ffff8800037dd778 R08: ffff8800f7008000 R09:
> ffff8800f6c000a0
> [  160.032220] R10: 0000000000007ff0 R11: 0000000000000202 R12:
> 0000000000000011
> [  160.032227] R13: 0000000000000201 R14: ffff880003044901 R15:
> ffff880003044900
> [  160.032239] FS:  00007f4a656e8800(0000) GS:ffff8800ffc00000(0000)
> knlGS:0000000000000000
> [  160.032248] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> [  160.032255] CR2: ffff8800f36c0960 CR3: 0000000001c05000 CR4:
> 0000000000002660
> [  160.032263] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [  160.032271] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
> 0000000000000400
> [  160.032279] Process helloworld (pid: 659, threadinfo
> ffff8800037dc000, task ffff8800034c1700)
> [  160.032287] Stack:
> [  160.032291]  0000000000000011 00000000fffffffa ffffffff813adade
> ffff8800037dd764
> [  160.032304]  ffffffff00000001 0000000000000000 00000004813ad17e
> ffff8800037dd778
> [  160.032317]  ffff8800030449ec ffff8800037dd788 ffffffff813af5e0
> ffff8800037dd7d8
> [  160.032329] Call Trace:
> [  160.032341]  [<ffffffff813adade>] ? xen_poll_irq_timeout+0x3e/0x50
> [  160.032350]  [<ffffffff813af5e0>] xen_poll_irq+0x10/0x20
> [  160.032360]  [<ffffffff81646686>] xen_spin_lock_slow+0x98/0xf4
> [  160.032370]  [<ffffffff810124ba>] xen_spin_lock+0x4a/0x50
> [  160.032381]  [<ffffffff81661d8e>] _raw_spin_lock+0xe/0x20
> [  160.032390]  [<ffffffff81007d9a>] xen_exit_mmap+0x2a/0x60
> [  160.032400]  [<ffffffff81146408>] exit_mmap+0x58/0x140
> [  160.032408]  [<ffffffff8166275a>] ? error_exit+0x2a/0x60
> [  160.032416]  [<ffffffff8166227c>] ? retint_restore_args+0x5/0x6
> [  160.032425]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032433]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032442]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032452]  [<ffffffff81065e22>] mmput.part.16+0x42/0x130
> [  160.032460]  [<ffffffff81065f39>] mmput+0x29/0x30
> [  160.032470]  [<ffffffff8106c943>] exit_mm+0x113/0x130
> [  160.032479]  [<ffffffff810e58c5>] ? taskstats_exit+0x45/0x240
> [  160.032488]  [<ffffffff81662075>] ? _raw_spin_lock_irq+0x15/0x20
> [  160.032496]  [<ffffffff8106cace>] do_exit+0x16e/0x450
> [  160.032504]  [<ffffffff81662f20>] oops_end+0xb0/0xf0
> [  160.032513]  [<ffffffff8164812f>] no_context+0x150/0x15d
> [  160.032520]  [<ffffffff81648307>] __bad_area_nosemaphore
> +0x1cb/0x1ea
> [  160.032529]  [<ffffffff816622ad>] ? restore_args+0x30/0x30
> [  160.032537]  [<ffffffff8164795b>] ? pte_offset_kernel+0xe/0x37
> [  160.032545]  [<ffffffff81648339>] bad_area_nosemaphore+0x13/0x15
> [  160.032555]  [<ffffffff81665bab>] do_page_fault+0x46b/0x540
> [  160.032564]  [<ffffffff8115c3f8>] ? mpol_shared_policy_init
> +0x48/0x160
> [  160.032575]  [<ffffffff811667bd>] ? kmem_cache_alloc+0x11d/0x140
> [  160.032588]  [<ffffffff8126d5fb>] ? hugetlbfs_alloc_inode+0x5b/0xa0
> [  160.032597]  [<ffffffff816624f5>] page_fault+0x25/0x30
> [  160.032605]  [<ffffffff81008efe>] ? xen_set_pte_at+0x3e/0x210
> [  160.032613]  [<ffffffff81008ef9>] ? xen_set_pte_at+0x39/0x210
> [  160.032622]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
> [  160.032630]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
> [  160.032638]  [<ffffffff810053b5>] ? __raw_callee_save_xen_pud_val
> +0x11/0x1e
> [  160.032648]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
> [  160.032656]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
> [  160.032664]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
> [  160.032673]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
> [  160.032681]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
> [  160.032689]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
> [  160.032697]  [<ffffffff816624f5>] page_fault+0x25/0x30
> [  160.032703] Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc
> cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00
> 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
> cc cc 
> [  160.032781] Call Trace:
> [  160.032787]  [<ffffffff813adade>] ? xen_poll_irq_timeout+0x3e/0x50
> [  160.032795]  [<ffffffff813af5e0>] xen_poll_irq+0x10/0x20
> [  160.032803]  [<ffffffff81646686>] xen_spin_lock_slow+0x98/0xf4
> [  160.032811]  [<ffffffff810124ba>] xen_spin_lock+0x4a/0x50
> [  160.032818]  [<ffffffff81661d8e>] _raw_spin_lock+0xe/0x20
> [  160.032826]  [<ffffffff81007d9a>] xen_exit_mmap+0x2a/0x60
> [  160.032833]  [<ffffffff81146408>] exit_mmap+0x58/0x140
> [  160.032841]  [<ffffffff8166275a>] ? error_exit+0x2a/0x60
> [  160.032849]  [<ffffffff8166227c>] ? retint_restore_args+0x5/0x6
> [  160.032857]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032866]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032874]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032882]  [<ffffffff81065e22>] mmput.part.16+0x42/0x130
> [  160.032889]  [<ffffffff81065f39>] mmput+0x29/0x30
> [  160.032896]  [<ffffffff8106c943>] exit_mm+0x113/0x130
> [  160.032904]  [<ffffffff810e58c5>] ? taskstats_exit+0x45/0x240
> [  160.032912]  [<ffffffff81662075>] ? _raw_spin_lock_irq+0x15/0x20
> [  160.032920]  [<ffffffff8106cace>] do_exit+0x16e/0x450
> [  160.032928]  [<ffffffff81662f20>] oops_end+0xb0/0xf0
> [  160.032935]  [<ffffffff8164812f>] no_context+0x150/0x15d
> [  160.032943]  [<ffffffff81648307>] __bad_area_nosemaphore
> +0x1cb/0x1ea
> [  160.032951]  [<ffffffff816622ad>] ? restore_args+0x30/0x30
> [  160.032959]  [<ffffffff8164795b>] ? pte_offset_kernel+0xe/0x37
> [  160.032967]  [<ffffffff81648339>] bad_area_nosemaphore+0x13/0x15
> [  160.032975]  [<ffffffff81665bab>] do_page_fault+0x46b/0x540
> [  160.036054]  [<ffffffff8115c3f8>] ? mpol_shared_policy_init
> +0x48/0x160
> [  160.036054]  [<ffffffff811667bd>] ? kmem_cache_alloc+0x11d/0x140
> [  160.036054]  [<ffffffff8126d5fb>] ? hugetlbfs_alloc_inode+0x5b/0xa0
> [  160.036054]  [<ffffffff816624f5>] page_fault+0x25/0x30
> [  160.036054]  [<ffffffff81008efe>] ? xen_set_pte_at+0x3e/0x210
> [  160.036054]  [<ffffffff81008ef9>] ? xen_set_pte_at+0x39/0x210
> [  160.036054]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
> [  160.036054]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
> [  160.036054]  [<ffffffff810053b5>] ? __raw_callee_save_xen_pud_val
> +0x11/0x1e
> [  160.036054]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
> [  160.036054]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
> [  160.036054]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
> [  160.036054]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
> [  160.036054]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
> [  160.036054]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
> [  160.036054]  [<ffffffff816624f5>] page_fault+0x25/0x30
> 
> 
> 
> 
> On Mon, Feb 10, 2014 at 3:19 AM, Fabio Fantoni <fabio.fantoni@m2r.biz>
> wrote:
>         Il 10/02/2014 11:42, Wei Liu ha scritto:
>         
>                 On Fri, Feb 07, 2014 at 01:19:45PM -0800, Peter X. Gao
>                 wrote:
>                         Hi,
>                         
>                                 I am new to Xen and I am trying to run
>                         Intel DPDK inside a domU with
>                         virtio on Xen 4.2. Is it possible to do this?
>                         
>         
>         
>         Based on my tests with virtio:
>         - virtio-serial seems to work out of the box with Windows
>         domUs (also alongside the Xen PV drivers). On Linux domUs it
>         works out of the box with older kernels (tested 2.6.32), but
>         newer kernels (tested >=3.2) need pci=nomsi to work
>         correctly; it also works with the Xen PVHVM drivers. I have
>         not yet found a solution for the MSI problem; there are some
>         posts about it.
>         - virtio-net used to work out of the box, but it is broken
>         with recent QEMU versions due to a QEMU regression. I
>         narrowed it down with bisect to one commit between 4 Jul
>         2013 and 22 Jul 2013, but I was unable to find the exact
>         commit because there are other critical Xen problems in
>         that range.
>         - I have not tested virtio-disk, so I do not know whether
>         it works with recent Xen and QEMU versions.
>         
>         
>                 DPDK doesn't seem to be tightly coupled with VirtIO,
>                 does it?
>                 
>                 Could you look at Xen's PV network protocol instead?
>                 VirtIO has no
>                 mainline support on Xen while Xen's PV protocol has
>                 been in mainline for
>                 years. And it's very likely to be enabled by default
>                 nowadays.
>                 
>                 Wei.
>                 
>                         Regards
>                         Peter
>                         _______________________________________________
>                         Xen-devel mailing list
>                         Xen-devel@lists.xen.org
>                         http://lists.xen.org/xen-devel
>                 
>         
>         
> 
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Virtio on Xen
  2014-02-11 13:01       ` Ian Campbell
@ 2014-03-05 18:02         ` Konrad Rzeszutek Wilk
  2014-03-05 22:33           ` Peter X. Gao
  0 siblings, 1 reply; 11+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-03-05 18:02 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Peter X. Gao, Xen-devel, Fabio Fantoni, Wei Liu, xen-users

On Tue, Feb 11, 2014 at 01:01:40PM +0000, Ian Campbell wrote:
> On Mon, 2014-02-10 at 10:07 -0800, Peter X. Gao wrote:
> > Thanks for your reply. I am now using virtio-net and it seems to be
> > working. However, Intel DPDK also requires hugepages. When a DPDK
> > application initializes its hugepages, I get the following error. Do I
> > need to configure something in Xen to support hugepages?
> 
> I'm not sure about the status of superpage support in mainline kernels
> for PV Xen guests. IIRC there was a requirement to add a Xen command
> line flag to enable it at the hypervisor level.
> 
> Or you could just use an HVM guest, since no special support is needed
> for hugepages there.
> 
> But maybe I'm confused, because I think your use of virtio-net would
> necessarily require an HVM guest, not a PV one.
> 
> But then looking at your logs I see Xen PV block and net but no sign of
> virtio -- so I suspect you are actually doing PV and not using
> virtio-net at all.

DPDK 1.6 is out - and you can do Xen. You need to use HVM guests and
a special module in dom0 to set up 2MB contiguous pages that
are shared with the guest.

The protocol that DPDK uses is VirtIO.

See:
http://dpdk.org/browse/dpdk/commit/?id=47bd46112b710dc59b1becfb67e18da319c5debe
http://dpdk.org/browse/dpdk/commit/?id=148f963fb5323c1c6b6d5cea95084deb25cc73f8
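For readers finding this thread later, the dom0-side flow documented for
DPDK 1.6's Xen support looks roughly like the session below. Treat the
module name, sysfs path, and sizes as assumptions taken from that era's
DPDK Getting Started Guide rather than verified instructions:

```
# Build DPDK with its Xen dom0 support enabled (CONFIG_RTE_LIBRTE_XEN_DOM0=y),
# then, as root in dom0 (paths and names may differ on your build):
insmod x86_64-native-linuxapp-gcc/kmod/rte_dom0_mm.ko rsv_memsize=4096
# Tell the module how much contiguous 2MB-page memory to reserve (in MB):
echo 2048 > /sys/kernel/mm/dom0-mm/memsize-mB/memsize
# DPDK applications are then started with the --xen-dom0 EAL option.
```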

> 
> Ian.
> 
> 
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Virtio on Xen
  2014-03-05 18:02         ` Konrad Rzeszutek Wilk
@ 2014-03-05 22:33           ` Peter X. Gao
  0 siblings, 0 replies; 11+ messages in thread
From: Peter X. Gao @ 2014-03-05 22:33 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Xen-devel, Fabio Fantoni, Wei Liu, Ian Campbell, xen-users

Hi Konrad,

     Just to make sure I understand everything: I need to set up
2MB hugepages in Dom0 and use a module to share them with an HVM
guest. How do I compile that module?
     Inside the DomU, how can I see the shared 2MB hugepages? Do I
need another frontend driver? Many thanks.

Regards
Peter

On Wed, Mar 5, 2014 at 10:02 AM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Tue, Feb 11, 2014 at 01:01:40PM +0000, Ian Campbell wrote:
>> On Mon, 2014-02-10 at 10:07 -0800, Peter X. Gao wrote:
>> > Thanks for your reply. I am now using virtio-net and it seems to be
>> > working. However, Intel DPDK also requires hugepages. When a DPDK
>> > application initializes its hugepages, I get the following error. Do I
>> > need to configure something in Xen to support hugepages?
>>
>> I'm not sure about the status of superpage support in mainline kernels
>> for PV Xen guests. IIRC there was a requirement to add a Xen command
>> line flag to enable it at the hypervisor level.
>>
>> Or you could just use an HVM guest, since no special support is needed
>> for hugepages there.
>>
>> But maybe I'm confused, because I think your use of virtio-net would
>> necessarily require an HVM guest, not a PV one.
>>
>> But then looking at your logs I see Xen PV block and net but no sign of
>> virtio -- so I suspect you are actually doing PV and not using
>> virtio-net at all.
>
> DPDK 1.6 is out - and you can do Xen. You need to use HVM guests and
> a special module in dom0 to set up 2MB contiguous pages that
> are shared with the guest.
>
> The protocol that DPDK uses is VirtIO.
>
> See:
> http://dpdk.org/browse/dpdk/commit/?id=47bd46112b710dc59b1becfb67e18da319c5debe
> http://dpdk.org/browse/dpdk/commit/?id=148f963fb5323c1c6b6d5cea95084deb25cc73f8
>
>>
>> Ian.
>>
>>
>>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Virtio on Xen
@ 2014-08-11 13:54 Divya Durgadas
  2014-08-11 14:55 ` Wei Liu
  0 siblings, 1 reply; 11+ messages in thread
From: Divya Durgadas @ 2014-08-11 13:54 UTC (permalink / raw)
  To: xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 288 bytes --]

Hi,

Is there updated information on the Virtio on Xen project?

I see that the working prototype detailed on wiki.xen.org/wiki/Virtio_On_Xen is outdated.

For my requirement, I need virtio to work for an HVM guest, and I am
looking for instructions on how to set this up.

Thanks

Divya

[-- Attachment #1.2: Type: text/html, Size: 2082 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Virtio on Xen
  2014-08-11 13:54 Virtio on Xen Divya Durgadas
@ 2014-08-11 14:55 ` Wei Liu
  0 siblings, 0 replies; 11+ messages in thread
From: Wei Liu @ 2014-08-11 14:55 UTC (permalink / raw)
  To: Divya Durgadas; +Cc: xen-devel, wei.liu2

On Mon, Aug 11, 2014 at 06:54:05AM -0700, Divya Durgadas wrote:
> Hi,
> 
> Is there updated information on the virtio on xen project?
> 
> I see that the working prototype detailed on wiki.xen.org/wiki/Virtio_On_Xen is outdated.
> 
>  
> 
> For my requirement, I need virtio to work for HVM guest. Looking for instructions for the same.
> 

Have you tried to set it up? I see no reason why basic virtio for HVM
won't work out of the box; it's just a PCI device from the guest's
point of view, after all.

Wei.

>  
> 
> Thanks
> 
> Divya


^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2014-08-11 14:55 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-02-07 21:19 Virtio on Xen Peter X. Gao
2014-02-10 10:04 ` Ian Campbell
2014-02-10 10:42 ` Wei Liu
2014-02-10 11:19   ` Fabio Fantoni
2014-02-10 18:07     ` Peter X. Gao
2014-02-11 13:01       ` Ian Campbell
2014-03-05 18:02         ` Konrad Rzeszutek Wilk
2014-03-05 22:33           ` Peter X. Gao
2014-02-11 13:06       ` Application using hugepages crashes PV guest Ian Campbell
  -- strict thread matches above, loose matches on Subject: below --
2014-08-11 13:54 Virtio on Xen Divya Durgadas
2014-08-11 14:55 ` Wei Liu

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).