From: "Michael S. Tsirkin" <mst@redhat.com>
To: Tom Lyon <pugs@lyon-about.com>
Cc: kvm@vger.kernel.org, pawel.moll@arm.com,
virtualization@lists.linux-foundation.org
Subject: Re: BUG: virtio_mmio multi-queue completely broken -- virtio *registers* considered harmful
Date: Thu, 2 May 2013 08:25:38 +0300
Message-ID: <20130502052538.GA27500@redhat.com>
In-Reply-To: <5181E046.7000309@lyon-about.com>
On Wed, May 01, 2013 at 08:40:54PM -0700, Tom Lyon wrote:
> Virtio_mmio attempts to mimic the layout of some control registers
> from virtio_pci. These registers, in particular
> VIRTIO_MMIO_QUEUE_SEL and VIRTIO_PCI_QUEUE_SEL, are active in
> nature, not just passive like a normal memory location. Thus, the
> host side must react immediately upon a write to these registers to
> remap some other registers (queue address, size, etc.) to
> queue-specific locations. This is just not possible for mmio and, I
> would argue, not desirable for PCI either.
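>
> For example (a rough sketch of the guest side, using the register
> offsets from the legacy linux/virtio_mmio.h layout; 'base', 'index',
> 'num' and 'pfn' are just illustrative variables):
>
> 	/* every access below is implicitly "for the selected queue" */
> 	writel(index, base + VIRTIO_MMIO_QUEUE_SEL);       /* 0x030, active */
> 	num = readl(base + VIRTIO_MMIO_QUEUE_NUM_MAX);     /* 0x034 */
> 	writel(num, base + VIRTIO_MMIO_QUEUE_NUM);         /* 0x038 */
> 	writel(PAGE_SIZE, base + VIRTIO_MMIO_QUEUE_ALIGN); /* 0x03c */
> 	writel(pfn, base + VIRTIO_MMIO_QUEUE_PFN);         /* 0x040 */
>
> The host must decode the QUEUE_SEL write before the very next
> access, or the PFN lands on the wrong queue.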
>
> Because the queue selector register doesn't work in mmio, it is
> clear that only single-queue virtio devices can work. This means no
> virtio_net - I've seen a few messages complaining that it doesn't
> work, but nothing so far on why.
>
> It seems from some messages back in March that there is a register
> re-layout in the works for virtio_pci. I think that virtio_pci
> could become just one of the various ways to configure a
> virtio_mmio device, and there would be no need for any "registers",
> just memory locations acting like memory. The one gotcha is
> figuring out the kick/notify mechanism for the guest to notify the
> host when there is work on a queue. For notify, using a hypervisor
> call could unify the pci and mmio cases, but it comes with the cost
> of leaving the pure pci domain.
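>
> (For reference, the existing kick is already just a single posted
> write of the queue index - a sketch, same legacy mmio layout and
> illustrative 'base' as above:
>
> 	writel(vq_index, base + VIRTIO_MMIO_QUEUE_NOTIFY); /* 0x050 */
>
> so whatever does the notifying only has to trap or receive that one
> write.)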
>
> I got into this code because I am looking at the possibility of
> using an off-the-shelf embedded processor sitting on a PCIe port to
> emulate the virtio pci interface. The notion of active registers
> makes this a non-starter, whereas if there were a purely
> memory-based system like mmio (with mq fixes), a real PCI device
> could easily emulate it. Excepting, of course, whatever the notify
> mechanism is. If it were hypercall based, then the hypervisor
> could call a transport- or device-specific way of notifying, and a
> small notify driver could poke the PCI device in some way.
This was discussed on this thread:
'[PATCH 16/22] virtio_pci: use separate notification offsets for each vq'
Please take a look there and confirm that this addresses your concern.
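The idea there, roughly (this is the proposed layout, so treat the
names as a sketch of the concept rather than final ABI; 'nbase',
'q_off' and 'mult' are illustrative): each vq gets its own
notification address, derived from a per-device multiplier and a
per-queue offset, with no selector register anywhere in the path:

	/* sketch: per-vq doorbell address, plain memory, no selector */
	void __iomem *vq_notify_addr(void __iomem *notify_base,
				     u32 queue_notify_off,
				     u32 notify_off_multiplier)
	{
		return notify_base +
		       (size_t)queue_notify_off * notify_off_multiplier;
	}

	/* kick vq N: one 16-bit write of the vq index */
	writew(vq_index, vq_notify_addr(nbase, q_off, mult));

A device can space the doorbells apart (e.g. one per page), and a
real PCI card can decode these as ordinary memory writes.
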
I'm working on making memory I/O as fast as PIO on x86. It's
implemented on Intel; once I do it on AMD too, and assuming it is
indeed as fast as PIO, we'll do MMIO everywhere.
Then, with a PCI card, you won't have exits for notification, just
normal passthrough.
--
MST
Thread overview: 5+ messages
2013-05-02 3:40 BUG: virtio_mmio multi-queue completely broken -- virtio *registers* considered harmful Tom Lyon
2013-05-02 5:25 ` Michael S. Tsirkin [this message]
2013-05-02 15:05 ` Tom Lyon
2013-05-02 11:37 ` Pawel Moll
2013-05-02 15:17 ` Tom Lyon