From: Avi Kivity
Subject: Re: [RFC] virtio-blk PCI backend
Date: Sun, 11 Nov 2007 11:23:35 +0200
Message-ID: <4736CA17.1020502@qumranet.com>
In-Reply-To: <473337B9.8040503@us.ibm.com>
References: <11944902733951-git-send-email-aliguori@us.ibm.com> <4732ABA0.5090603@qumranet.com> <473315DB.9030803@us.ibm.com> <4733170B.70206@qumranet.com> <473326B4.2080307@us.ibm.com> <473328EC.4090905@qumranet.com> <473337B9.8040503@us.ibm.com>
To: Anthony Liguori
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
List-Id: kvm.vger.kernel.org

Anthony Liguori wrote:
> Avi Kivity wrote:
>>> There's no reason that the PIO operations couldn't be handled in the
>>> kernel.  You'll already need some level of cooperation in userspace
>>> unless you plan on implementing the PCI bus in kernel space too.
>>> It's easy enough in the pci_map function in QEMU to just notify the
>>> kernel that it should listen on a particular PIO range.
>>>
>>
>> This is a config space write, right?  If so, the range is the regular
>> 0xcf8-0xcff and it has to be very specially handled.
>
> This is a per-device IO slot and, as best as I can tell, the PCI
> device advertises the size of the region and the OS then identifies a
> range of PIO space to use and tells the PCI device about it.  So we
> would just need to implement a generic userspace virtio PCI device in
> QEMU that did an ioctl to the kernel when this happened, to tell the
> kernel what region to listen on for a particular device.
>

I'll just go and read the patches more carefully before making any more
stupid remarks about the code.

>>> vmcalls will certainly get faster, but I doubt that the cost
>>> difference between vmcall and pio will ever be greater than a few
>>> hundred cycles.  The only performance-sensitive operation here would
>>> be the kick, and I don't think a few hundred cycles in the kick path
>>> is ever going to be that significant for overall performance.
>>>
>>
>> Why do you think the difference will be a few hundred cycles?
>
> The only difference in hardware between a PIO exit and a vmcall is
> that you don't have to write out an exit reason in the VMC[SB].  So
> the performance difference between pio and vmcall shouldn't be that
> great (and if it were, the difference would probably be obvious
> today).  That's different from, say, a PF exit, because with a PF you
> also have to attempt to resolve it by walking the guest page table
> before determining that you do in fact need to exit.
>

With pio you also have to look at the pio bitmaps.  Point taken,
though.

>
>>> So why introduce the extra complexity?
>>>
>>
>> Overall I think it reduces complexity if we have in-kernel devices.
>> Anyway, we can add additional signalling methods later.
>
> In-kernel virtio backends add quite a lot of complexity.  Just the
> mechanism to set up the device is complicated enough.  I suspect that
> it'll be necessary down the road for performance, but I certainly
> don't think it's a simplification.

I didn't mean that in-kernel devices simplify things (they don't), but
that using hypercalls is simpler for in-kernel devices than pio.
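To make the registration idea above concrete, here is a rough sketch of
what the QEMU side could look like.  Nothing in it exists in the
current kvm ABI: the ioctl number, the struct layout, and the helper
name are all invented for illustration, and the real interface would
have to be designed together with the in-kernel backend.

/*
 * Hypothetical sketch only: KVM_SET_PIO_BACKEND, struct kvm_pio_range
 * and register_pio_backend() are made up for illustration; no such
 * interface exists in the kvm ABI.
 */
#include <stdint.h>
#include <sys/ioctl.h>

struct kvm_pio_range {
	uint16_t base;          /* PIO base the guest programmed into the BAR */
	uint16_t len;           /* region size the device advertises */
	uint32_t backend_id;    /* which in-kernel device should answer here */
};

#define KVM_SET_PIO_BACKEND _IOW('k', 0x60, struct kvm_pio_range)

/*
 * Would be called from the qemu pci_map callback once the guest has
 * assigned the BAR, so that kicks to this range are handled in the
 * kernel instead of bouncing out to userspace.
 */
static int register_pio_backend(int vm_fd, uint16_t base, uint16_t len,
				uint32_t backend_id)
{
	struct kvm_pio_range r = {
		.base = base,
		.len = len,
		.backend_id = backend_id,
	};

	return ioctl(vm_fd, KVM_SET_PIO_BACKEND, &r);
}

The guest-side kick would stay a plain out instruction to the port the
BAR was assigned either way; the only thing that changes is who fields
the resulting exit.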
-- 
error compiling committee.c: too many arguments to function