From: Tom Lyon <pugs@lyon-about.com>
To: rusty@rustcorp.com.au, mst@redhat.com, pawel.moll@arm.com,
virtualization@lists.linux-foundation.org, kvm@vger.kernel.org
Subject: BUG: virtio_mmio multi-queue completely broken -- virtio *registers* considered harmful
Date: Wed, 01 May 2013 20:40:54 -0700
Message-ID: <5181E046.7000309@lyon-about.com>
Virtio_mmio attempts to mimic the layout of some control registers from
virtio_pci. These registers, in particular VIRTIO_MMIO_QUEUE_SEL and
VIRTIO_PCI_QUEUE_SEL, are active in nature, not just passive like a
normal memory location. The host side must react immediately upon a
write of these registers by mapping some other registers (queue address,
size, etc.) to queue-specific locations. This is just not possible for
mmio, and, I would argue, not desirable for PCI either.
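To make the "active register" problem concrete, here is a minimal sketch (hypothetical names and structures, not the actual kernel or QEMU code) of what a host-side device model has to do when the guest writes the selector: the write must immediately re-bank which queue every subsequent QUEUE_* access refers to.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical device-model state: one bank of queue registers per queue. */
#define MAX_QUEUES 8

struct queue_state {
    uint32_t num_max;   /* max ring size the device supports */
    uint32_t pfn;       /* guest page frame of the ring */
};

struct mmio_dev {
    uint32_t queue_sel; /* models VIRTIO_MMIO_QUEUE_SEL */
    struct queue_state queues[MAX_QUEUES];
};

/* The selector write is "active": it changes which queue every
 * subsequent QUEUE_* access hits. The device must react at write
 * time; a plain passive memory location cannot do this. */
static void mmio_write_queue_sel(struct mmio_dev *d, uint32_t val)
{
    assert(val < MAX_QUEUES);
    d->queue_sel = val;
}

/* Accesses to QUEUE_PFN are indirected through the selector. */
static void mmio_write_queue_pfn(struct mmio_dev *d, uint32_t pfn)
{
    d->queues[d->queue_sel].pfn = pfn;
}

static uint32_t mmio_read_queue_pfn(struct mmio_dev *d)
{
    return d->queues[d->queue_sel].pfn;
}
```

If the host treats QUEUE_SEL as ordinary memory and never runs the selector logic, every QUEUE_PFN access lands on the same queue -- which is exactly the single-queue-only behavior described below.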
Because the queue selector register doesn't work under mmio, it is clear
that only single-queue virtio devices can work. This means no
virtio_net - I've seen a few messages complaining that it doesn't work,
but nothing so far on why.
It seems from some messages back in March that a register re-layout is
in the works for virtio_pci. I think virtio_pci could become just one of
the various ways to configure a virtio_mmio device, and there would be
no need for any "registers", just memory locations acting like memory.
The one gotcha is figuring out the kick/notify mechanism for the guest
to notify the host when there is work on a queue. For notify, using a
hypervisor call could unify the pci and mmio cases, but it comes with
the cost of leaving the pure pci domain.
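One way to picture the "just memory locations" alternative (my own sketch of the idea, not any existing spec or layout): give each queue its own block of plain memory, so there is no selector and no write side effects. The host can inspect the whole layout at its leisure after a notify; only the notify itself needs an active mechanism.

```c
#include <stdint.h>

/* Hypothetical passive layout: a per-queue block of plain memory
 * locations instead of selector-banked registers. Nothing here has
 * write side effects, so ordinary RAM -- or a dumb PCI device's
 * BAR -- can back it with no "active register" logic at all. */
struct vq_config {
    uint64_t desc_addr;  /* guest address of the descriptor ring */
    uint32_t num;        /* ring size chosen by the guest */
    uint32_t ready;      /* guest sets non-zero when configured */
};

struct passive_dev_config {
    uint32_t device_features;
    uint32_t driver_features;
    uint32_t num_queues;
    struct vq_config queues[8]; /* one block per queue, no QUEUE_SEL */
};

/* The guest writes queue i's block directly; no selector state is
 * shared between queues, so multi-queue setup just works. */
static void setup_queue(struct passive_dev_config *c, unsigned i,
                        uint64_t desc_addr, uint32_t num)
{
    c->queues[i].desc_addr = desc_addr;
    c->queues[i].num = num;
    c->queues[i].ready = 1;
}
```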
I got into this code because I am looking at the possibility of using an
off-the-shelf embedded processor sitting on a PCIe port to emulate the
virtio pci interface. The notion of active registers makes this a
non-starter, whereas if there were a purely memory-based layout like
mmio (with mq fixes), a real PCI device could easily emulate it.
Excepting, of course, whatever the notify mechanism is. If it were
hypercall based, then the hypervisor could call a transport- or
device-specific notify routine, and a small notify driver could poke the
PCI device in some way.
Thread overview: 5+ messages
2013-05-02 3:40 Tom Lyon [this message]
2013-05-02 5:25 ` BUG: virtio_mmio multi-queue completely broken -- virtio *registers* considered harmful Michael S. Tsirkin
2013-05-02 15:05 ` Tom Lyon
2013-05-02 11:37 ` Pawel Moll
2013-05-02 15:17 ` Tom Lyon