From: Takeshi HASEGAWA
Subject: Re: Introduction to VirtIO on Xen project
Date: Wed, 27 Apr 2011 23:05:28 +0900
To: Wei Liu
Cc: xen-devel@lists.xensource.com

That's why I am trying to run Fedora 14 on upstream-qemu + xen-unstable.

On an HVM domain, since SPICE worked with some libxl patches, I guess
virtio-pci should work if the xl command launches qemu with the
appropriate command-line arguments; virtio-pci is just a virtual PCI
device.

Takeshi

2011/4/27 Wei Liu :
> Hi, all.
>
> I'm Wei Liu, a graduate student from Wuhan University, Hubei, China.
> I have been accepted to GSoC 2011 for Xen and am responsible for the
> VirtIO on Xen project. It's my honor to be accepted and involved in
> this wonderful community. I have been doing Xen development for my lab
> since late 2009.
>
> As you all know, VirtIO is a generic paravirtualized I/O framework,
> currently used mainly in KVM. But it should not be too hard to port
> VirtIO to Xen. When that is done, Xen will have access to the Linux
> kernel's VirtIO interfaces, and developers will have an alternative
> way to deliver PV drivers besides the original ring-buffer flavor.
> This project requires:
>
> 1. modifying upstream QEMU to replace the KVM-specific interfaces
>    with generic QEMU functions;
> 2. modifying Xen / the Xen tools to support VirtIO;
> 3. modifying the Linux kernel's VirtIO interfaces.
>
> We must take two usage scenarios into consideration:
>
> 1. PV-on-HVM;
> 2. Normal PV.
>
> These two scenarios require working on different sets of functions:
>
> 1. XenBus vs. virtual PCI: how the channel is created;
> 2. PV vs. HVM: how events are handled.
>
> Most of the VirtIO code will be left as it is, but the notification
> mechanism should be replaced with Xen's event channels. The same
> applies to the QEMU port.
>
> In the PV-on-HVM case, QEMU needs to use event channels to send and
> receive notifications, and the foreign-mapping / grant-table functions
> in libxc / libxl to map memory pages. A virtual PCI bus will be used
> to establish a channel between Dom0 and DomU. In some sense, this
> makes no difference on the Linux kernel side.
>
> In the normal PV case, QEMU needs to use event channels to send and
> receive notifications, and the foreign-mapping functions in libxc /
> libxl to map memory pages. XenBus / Xenstore will be used to establish
> a channel between Dom0 and DomU. The Linux VirtIO driver should use
> Xen's event channel as its kick / notify function.
>
> When the porting is finished, I will carry out performance tests with
> standard tools such as ioperf, netperf and kernbench. The test suites
> will be run on five configurations:
>
> 1. Native Linux
> 2. Xen with PV-on-HVM VirtIO support
> 3. Xen with normal PV VirtIO support
> 4. Xen with the original PV driver support
> 5. KVM with VirtIO support
>
> A short report will be written based on the results.
>
> This is a brief introduction to the project. Any comments are welcome.
>
>
> --
> Best regards
> Wei Liu
> Twitter: @iliuw
> Site: http://liuw.name
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel
>

--
Takeshi HASEGAWA
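
For reference, a minimal sketch of the kind of guest configuration the
reply above alludes to: an HVM domain pointed at an upstream qemu build,
with extra virtio arguments passed straight through to the device model.
The option names (device_model_override, device_model_args), the qemu
path, the image paths and the device choices are assumptions that vary
across Xen versions; nothing here is taken from the thread itself.

# Hedged sketch of an HVM guest config for xl; names and paths are
# placeholders, not taken from the thread above.
builder = "hvm"
name    = "fedora14-virtio"
memory  = 1024
vcpus   = 2
disk    = [ "/var/lib/xen/images/fedora14.img,raw,hda,rw" ]
vif     = [ "bridge=xenbr0" ]

# Point xl at an upstream qemu build instead of the traditional qemu-dm.
device_model_override = "/usr/local/bin/qemu-system-x86_64"

# Extra arguments handed directly to qemu; here a virtio block device.
device_model_args = [ "-drive", "file=/var/lib/xen/images/data.img,if=virtio" ]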
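
To make the event-channel / foreign-mapping point in the quoted proposal
more concrete, here is a minimal userspace sketch of what a Dom0 device
model could do with libxc: bind an interdomain event channel, map one
guest page, and send a notification. The domain id, remote port and
frame number are hypothetical placeholders, and the handle-based libxc
calls shown match Xen 4.1-era headers; this illustrates the plumbing,
it is not code from the project.

/*
 * Hedged sketch: dom0-side event-channel and foreign-mapping plumbing
 * via libxc.  Build with -lxenctrl.  All identifiers marked as
 * hypothetical are placeholders, not values from the thread.
 */
#include <stdio.h>
#include <sys/mman.h>
#include <xenctrl.h>

int main(void)
{
    uint32_t domid = 1;              /* hypothetical guest domain id */
    evtchn_port_t remote_port = 10;  /* hypothetical port allocated by the guest */
    unsigned long frame = 0x1234;    /* hypothetical guest frame holding the vring */

    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    xc_evtchn *xce = xc_evtchn_open(NULL, 0);
    if (!xch || !xce) {
        fprintf(stderr, "cannot open libxc handles\n");
        return 1;
    }

    /* Bind our end of the interdomain event channel (the "kick" path). */
    int local_port = xc_evtchn_bind_interdomain(xce, domid, remote_port);
    if (local_port < 0) {
        fprintf(stderr, "bind_interdomain failed\n");
        return 1;
    }

    /* Map one guest page read/write, e.g. the page carrying the virtqueue. */
    void *ring = xc_map_foreign_range(xch, domid, XC_PAGE_SIZE,
                                      PROT_READ | PROT_WRITE, frame);
    if (!ring) {
        fprintf(stderr, "xc_map_foreign_range failed\n");
        return 1;
    }

    /* ... process descriptors in the mapped ring, then notify the guest ... */
    xc_evtchn_notify(xce, local_port);

    munmap(ring, XC_PAGE_SIZE);
    xc_evtchn_unbind(xce, local_port);
    xc_evtchn_close(xce);
    xc_interface_close(xch);
    return 0;
}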
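
Similarly, for the normal PV case mentioned in the proposal, a small
sketch of how a backend could use libxenstore to advertise its
event-channel port and ring reference so a frontend can pick them up.
The xenstore paths, the "vvirtio" device name and the values are made
up for illustration; only the xs_* calls themselves are the real
libxenstore API of that era.

/*
 * Hedged sketch: publishing and reading backend parameters through
 * xenstore.  Build with -lxenstore.  Paths and values are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <xs.h>

int main(void)
{
    struct xs_handle *xsh = xs_daemon_open();
    if (!xsh) {
        fprintf(stderr, "cannot connect to xenstored\n");
        return 1;
    }

    /* Hypothetical backend nodes for a virtio-over-xenbus device. */
    const char *port = "10";
    const char *ring = "1234";
    xs_write(xsh, XBT_NULL, "backend/vvirtio/1/0/event-channel",
             port, strlen(port));
    xs_write(xsh, XBT_NULL, "backend/vvirtio/1/0/ring-ref",
             ring, strlen(ring));

    /* A frontend would read the same kind of nodes to find its peer. */
    unsigned int len;
    char *val = xs_read(xsh, XBT_NULL,
                        "backend/vvirtio/1/0/event-channel", &len);
    if (val) {
        printf("event-channel = %s\n", val);
        free(val);
    }

    xs_daemon_close(xsh);
    return 0;
}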