public inbox for kvm@vger.kernel.org
* BUG: virtio_mmio multi-queue completely broken -- virtio *registers* considered harmful
@ 2013-05-02  3:40 Tom Lyon
  2013-05-02  5:25 ` Michael S. Tsirkin
  2013-05-02 11:37 ` Pawel Moll
  0 siblings, 2 replies; 5+ messages in thread
From: Tom Lyon @ 2013-05-02  3:40 UTC (permalink / raw)
  To: rusty, mst, pawel.moll, virtualization, kvm

Virtio_mmio attempts to mimic the layout of some control registers from 
virtio_pci.  These registers, in particular VIRTIO_MMIO_QUEUE_SEL and 
VIRTIO_PCI_QUEUE_SEL, are active in nature, not just passive like a 
normal memory location.  That is, the host side must react immediately 
upon a write to these registers, remapping some other registers (queue 
address, size, etc.) to queue-specific locations.  This is just not 
possible for mmio, and, I would argue, not desirable for PCI either.

Because the queue selector register doesn't work in mmio, it is clear 
that only single-queue virtio devices can work.  This means no 
virtio_net - I've seen a few messages complaining that it doesn't 
work, but nothing so far on why.

It seems from some messages back in March that a register re-layout is 
in the works for virtio_pci.  I think that virtio_pci could become just 
one of the various ways to configure a virtio_mmio device, and there 
would be no need for any "registers", just memory locations acting like 
memory.  The one gotcha is figuring out the kick/notify mechanism for 
the guest to notify the host when there is work on a queue.  For notify, 
using a hypervisor call could unify the pci and mmio cases, but comes 
with the cost of leaving the pure pci domain.

I got into this code because I am looking at the possibility of using an 
off-the-shelf embedded processor sitting on a PCIe port to emulate the 
virtio pci interface.  The notion of active registers makes this a 
non-starter, whereas if there were a purely memory-based system like 
mmio (with mq fixes), a real PCI device could easily emulate it.  
Excepting, of course, whatever the notify mechanism is.  If it were 
hypercall based, then the hypervisor could call a transport- or 
device-specific way of notifying, and a small notify driver could poke 
the PCI device in some way.


end of thread, other threads:[~2013-05-02 15:24 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-05-02  3:40 BUG: virtio_mmio multi-queue completely broken -- virtio *registers* considered harmful Tom Lyon
2013-05-02  5:25 ` Michael S. Tsirkin
2013-05-02 15:05   ` Tom Lyon
2013-05-02 11:37 ` Pawel Moll
2013-05-02 15:17   ` Tom Lyon
