From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tom Lyon
Subject: Re: BUG: virtio_mmio multi-queue completely broken -- virtio
 *registers* considered harmful
Date: Thu, 02 May 2013 08:05:09 -0700
Message-ID: <518280A5.8000804@lyon-about.com>
References: <5181E046.7000309@lyon-about.com> <20130502052538.GA27500@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: rusty@rustcorp.com.au, pawel.moll@arm.com,
 virtualization@lists.linux-foundation.org, kvm@vger.kernel.org
To: "Michael S. Tsirkin"
Return-path:
Received: from qmta06.emeryville.ca.mail.comcast.net ([76.96.30.56]:34203
 "EHLO qmta06.emeryville.ca.mail.comcast.net" rhost-flags-OK-OK-OK-OK)
 by vger.kernel.org with ESMTP id S1753409Ab3EBPMY (ORCPT );
 Thu, 2 May 2013 11:12:24 -0400
In-Reply-To: <20130502052538.GA27500@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 05/01/2013 10:25 PM, Michael S. Tsirkin wrote:
> On Wed, May 01, 2013 at 08:40:54PM -0700, Tom Lyon wrote:
>> Virtio_mmio attempts to mimic the layout of some control registers
>> from virtio_pci. These registers, in particular
>> VIRTIO_MMIO_QUEUE_SEL and VIRTIO_PCI_QUEUE_SEL, are active in
>> nature, not just passive like a normal memory location. The host
>> side must therefore react immediately upon a write to these
>> registers and map other registers (queue address, size, etc.) to
>> queue-specific locations. This is simply not possible for mmio,
>> and, I would argue, not desirable for PCI either.
>>
>> Because the queue selector register doesn't work in mmio, it is
>> clear that only single-queue virtio devices can work. This means no
>> virtio_net - I've seen a few messages complaining that it doesn't
>> work, but nothing so far on why.
>>
>> It seems from some messages back in March that a register
>> re-layout is in the works for virtio_pci.
>> I think that virtio_pci could become just one of the various ways
>> to configure a virtio_mmio device, and there would be no need for
>> any "registers", just memory locations acting like memory. The one
>> gotcha is figuring out the kick/notify mechanism for the guest to
>> notify the host when there is work on a queue. For notify, a
>> hypervisor call could unify the pci and mmio cases, but it comes
>> at the cost of leaving the pure pci domain.
>>
>> I got into this code because I am looking at the possibility of
>> using an off-the-shelf embedded processor sitting on a PCIe port
>> to emulate the virtio pci interface. The notion of active
>> registers makes this a non-starter, whereas with a purely
>> memory-based layout like mmio (with mq fixes), a real PCI device
>> could easily emulate it -- excepting, of course, whatever the
>> notify mechanism is. If it were hypercall based, the hypervisor
>> could call a transport- or device-specific notification routine,
>> and a small notify driver could poke the PCI device in some way.
>
> This was discussed on this thread:
> '[PATCH 16/22] virtio_pci: use separate notification offsets for each vq'
> Please take a look there and confirm that this addresses your concern.
> I'm working on making memory io as fast as pio on x86,
> implemented on intel, once I do it on AMD too and assuming it's
> as fast as PIO, we'll do mmio everywhere.
> Then with a PCI card, you won't have exits for notification, just
> normal passthrough.
>

Yes, I had seen that thread. It addresses my concerns for pci, but not
mmio, although I slightly favor a hypercall notify mechanism over a
pci write.