From: Paul Brook <paul@codesourcery.com>
To: Mark Williamson <mark.williamson@cl.cam.ac.uk>
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] PCI access virtualization
Date: Thu, 5 Jan 2006 18:10:54 +0000 [thread overview]
Message-ID: <200601051810.54954.paul@codesourcery.com> (raw)
In-Reply-To: <200601051740.49151.mark.williamson@cl.cam.ac.uk>
On Thursday 05 January 2006 17:40, Mark Williamson wrote:
> > - IRQ sharing. Sharing host IRQs between native and virtualized devices
> > is hard because the host needs to ack the interrupt in the IRQ handler,
> > but doesn't really know how to do that until after it's run the guest to
> > see what that does.
>
> Could maybe have the (inevitable) kernel portion of the code grab the
> interrupt, and not ack it until userspace does an ioctl on a special file
> (or something like that?). There are patches floating around for userspace
> IRQ handling, so I guess that could work.
This still requires cooperation from both sides (ie. both the host and guest
drivers).
> > - DMA. qemu needs to rewrite DMA requests (in both directions) because
> > the guest physical memory won't be at the same address on the host.
> > Unlike ISA where there's a fixed DMA engine, I don't think there's any
> > general way of
>
> I was under the impression that you could get reasonably far by emulating a
> few of the most popular commercial DMA engine chips and reissuing
> address-corrected commands to the host. I'm not sure how common it is for
> PCI cards to use custom DMA chips instead, though...
IIUC PCI cards don't really have "DMA engines" as such. The PCI bridge just
maps PCI address space onto physical memory. A Busmaster PCI device can then
make arbitrary accesses whenever it wants. I expect the default mapping is a
1:1 mapping of the first 4G of physical RAM.
> > There are patches that allow virtualization of PCI devices that don't use
> > either of the above features. It's sufficient to get some Network cards
> > working, but that's about it.
>
> I guess PIO should be easy to work in any case.
Yes.
> > > I vaguely heard of a feature present in Xen, which allows to assign PCI
> > > devices to one of the guests. I understand Xen works differently from
> > > QEMU, but maybe it would be possible to implement something similar.
> >
> > Xen is much easier because it cooperates with the host system (ie. xen),
> > so both the above problems can be solved by tweaking the guest OS
> > drivers/PCI subsystem setup.
>
> Yep, XenLinux redefines various macros that were already present to do
> guest-physical <-> host-physical address translations, so DMA Just Works
> (TM).
>
> > If you're testing specific drivers you could probably augment these
> > drivers to pass the extra required information to qemu. ie. effectively
> > use a special qemu pseudo-PCI interface rather than the normal piix PCI
> > interface.
>
> How about something like this?:
> I'd imagine you could get away with a special header file with different
> macro defines (as for Xen, above), just in the driver in question, and a
> special "translation device / service" available to the QEmu virtual
> machine - could be as simple as "write the guest physical address to an IO
> port, returns the real physical address on next read". The virt_to_bus
> (etc) macros would use the translation service to perform the appropriate
> translation at runtime.
That's exactly the sort of thing I meant. Ideally you'd just implement it as a
different type of PCI bridge, and everything would just work. I don't know if
Linux supports such heterogeneous configurations though.
Paul