From: Anthony Liguori
Subject: Re: [RFC] virtio-blk PCI backend
Date: Thu, 08 Nov 2007 13:02:12 -0600
Message-ID: <47335D34.8030608@us.ibm.com>
In-Reply-To: <47332BB7.2000900-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
References: <11944902733951-git-send-email-aliguori@us.ibm.com> <4732ABA0.5090603@qumranet.com> <473315DB.9030803@us.ibm.com> <4733170B.70206@qumranet.com> <473326B4.2080307@us.ibm.com> <47332BB7.2000900@qumranet.com>
To: Avi Kivity
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
List-Id: kvm.vger.kernel.org

Avi Kivity wrote:
> Anthony Liguori wrote:
>>>>>
>>>>>> +    case VIRTIO_PCI_QUEUE_NOTIFY:
>>>>>> +        if (val < VIRTIO_PCI_QUEUE_MAX)
>>>>>> +            virtio_ring_kick(vdev, &vdev->vq[val]);
>>>>>> +        break;
>>>>>
>>>>> I see you're not using hypercalls for this, presumably for
>>>>> compatibility with -no-kvm.
>>>>
>>>> More than just that. By sticking to PIO, we are compatible with just
>>>> about any VMM. For instance, we get Xen support for free. If we
>>>> used hypercalls, even if we agreed on a way to determine which
>>>> number to use and how to make those calls, it would still be
>>>> difficult to implement in something like Xen.
>>>
>>> But PIO through the config space basically means you're committed to
>>> handling it in qemu. We want a more flexible mechanism.
>>
>> There's no reason that the PIO operations couldn't be handled in the
>> kernel. You'll already need some level of cooperation in userspace
>> unless you plan on implementing the PCI bus in kernel space too.
>> It's easy enough in the pci_map function in QEMU to just notify the
>> kernel that it should listen on a particular PIO range.
>
> With my new understanding of what this is all about, I suggest each
> virtqueue having an ID filled in by the host. This ID is globally
> unique, and is used as an argument for kick. It would map into a Xen
> domain id + event channel number, a number to be written into a pio
> port for kvm-lite or non-hypercall kvm, the argument for a kick
> hypercall on kvm, or whatever.

Yeah, right now I maintain a virtqueue "selector" within virtio-pci and
use that for notification. This index is also exposed in the
config->find_vq() within virtio. Changing that to an opaque ID would
require introducing another mechanism to enumerate the virtqueues, since
you couldn't just start from 0 and keep going until you hit an invalid
virtqueue.

I'm not sure I'm convinced that you couldn't just hide this "id" notion
in the virtio-xen implementation if you needed to.

Regards,

Anthony Liguori

> This is independent of virtio-pci, which is good.