From: Paul Brook
Subject: Re: [Qemu-devel] PCI access virtualization
Date: Thu, 5 Jan 2006 18:10:54 +0000
To: Mark Williamson
Cc: qemu-devel@nongnu.org

On Thursday 05 January 2006 17:40, Mark Williamson wrote:
> > - IRQ sharing. Sharing host IRQs between native and virtualized devices
> > is hard because the host needs to ack the interrupt in the IRQ handler,
> > but doesn't really know how to do that until after it's run the guest to
> > see what that does.
>
> Could maybe have the (inevitable) kernel portion of the code grab the
> interrupt, and not ack it until userspace does an ioctl on a special file
> (or something like that?). There are patches floating around for userspace
> IRQ handling, so I guess that could work.

This still requires cooperation from both sides (ie. both the host and
guest drivers). A rough sketch of what the userspace half might look like
is at the end of this mail.

> > - DMA. qemu needs to rewrite DMA requests (in both directions) because
> > the guest physical memory won't be at the same address on the host.
> > Unlike ISA where there's a fixed DMA engine, I don't think there's any
> > general way of
>
> I was under the impression that you could get reasonably far by emulating
> a few of the most popular commercial DMA engine chips and reissuing
> address-corrected commands to the host. I'm not sure how common it is for
> PCI cards to use custom DMA chips instead, though...

IIUC PCI cards don't really have "DMA engines" as such. The PCI bridge just
maps PCI address space onto physical memory. A busmaster PCI device can then
make arbitrary accesses whenever it wants. I expect the default mapping is a
1:1 mapping of the first 4G of physical RAM.

> > There are patches that allow virtualization of PCI devices that don't
> > use either of the above features. It's sufficient to get some network
> > cards working, but that's about it.
>
> I guess PIO should be easy to get working in any case.

Yes.

> > > I vaguely heard of a feature present in Xen, which allows PCI devices
> > > to be assigned to one of the guests. I understand Xen works
> > > differently than QEMU, but maybe it would be possible to implement
> > > something similar.
> >
> > Xen is much easier because it cooperates with the host system (ie. Xen),
> > so both the above problems can be solved by tweaking the guest OS
> > drivers/PCI subsystem setup.
>
> Yep, XenLinux redefines various macros that were already present to do
> guest-physical <-> host-physical address translations, so DMA Just Works
> (TM).
>
> > If you're testing specific drivers you could probably augment these
> > drivers to pass the extra required information to qemu. ie. effectively
> > use a special qemu pseudo-PCI interface rather than the normal piix PCI
> > interface.
>
> How about something like this?:
> I'd imagine you could get away with a special header file with different
> macro defines (as for Xen, above), just in the driver in question, and a
> special "translation device / service" available to the QEmu virtual
> machine - it could be as simple as "write the guest physical address to an
> IO port, read back the real physical address on the next read". The
> virt_to_bus (etc) macros would use the translation service to perform the
> appropriate translation at runtime.

That's exactly the sort of thing I meant. Ideally you'd just implement it as
a different type of PCI bridge, and everything would just work. I don't know
if Linux supports such heterogeneous configurations though. A rough sketch
of the guest side of such a translation service is also at the end of this
mail.

Paul
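
Below are the two sketches referred to above.

First, the userspace half of the deferred-ack idea. Everything here - the
/dev/pci-irq-10 node, the read() protocol and the PCI_IRQ_ACK ioctl - is an
invented interface, purely for illustration; the userspace-IRQ patches
mentioned above may look quite different.

/* Hypothetical userspace side of deferred IRQ acknowledgement. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define PCI_IRQ_ACK 0x5001              /* made-up ioctl number */

int main(void)
{
    int fd = open("/dev/pci-irq-10", O_RDONLY);   /* host IRQ 10, say */
    uint32_t pending;

    if (fd < 0) {
        perror("open /dev/pci-irq-10");
        return 1;
    }
    for (;;) {
        /* Blocks until the kernel stub has seen and masked the interrupt. */
        if (read(fd, &pending, sizeof(pending)) != sizeof(pending))
            break;

        /* Here qemu would raise the interrupt on the emulated PCI device
         * and run the guest until its driver has serviced the hardware. */

        /* Only now is it safe to ack/unmask the line on the host. */
        ioctl(fd, PCI_IRQ_ACK);
    }
    close(fd);
    return 0;
}

The matching kernel stub would mask the line in its own handler and only
complete the ack when it sees the ioctl, so a shared host IRQ stays quiet
while the guest driver is still working on it.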
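Second, the guest side of the translation device Mark describes. The port
number, the helper name and the latch-then-read protocol are all assumptions
(nothing like this exists in qemu today), and it assumes a 32-bit guest
where virt_to_phys() already yields a guest-physical address.

/* Hypothetical guest-side helper: hand qemu a guest-physical address via
 * an I/O port and read back the corresponding "real" (host) physical
 * address. */
#include <asm/io.h>
#include <linux/types.h>

#define QEMU_XLATE_PORT 0x510           /* made-up I/O port */

static inline u32 qemu_gphys_to_hphys(u32 gphys)
{
    outl(gphys, QEMU_XLATE_PORT);       /* latch the guest-physical address */
    return inl(QEMU_XLATE_PORT);        /* qemu answers with the host address */
}

/* The driver under test would then pick this up from a private header
 * instead of the stock definition: */
#undef virt_to_bus
#define virt_to_bus(addr) qemu_gphys_to_hphys(virt_to_phys(addr))

On the qemu side, answering that port read still needs help from the host
kernel: the guest's pages have to be pinned and their real physical
addresses looked up before the card busmasters into them, which is where
the "inevitable kernel portion" comes back in.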