From: Alex Williamson
Date: Thu, 11 Oct 2012 07:38:54 -0600
To: "Michael S. Tsirkin"
Cc: Jan Kiszka, "qemu-devel@nongnu.org"
Subject: Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
Message-ID: <1349962734.2759.334.camel@ul30vt.home>
In-Reply-To: <20121011103718.GC5552@redhat.com>
References: <20121002191609.31100.77382.stgit@bling.home>
 <1349711912.2759.96.camel@ul30vt.home>
 <20121008201539.GC17303@redhat.com>
 <1349724453.2759.163.camel@ul30vt.home>
 <20121008214052.GC17820@redhat.com>
 <1349730697.2759.211.camel@ul30vt.home>
 <5073CD9A.3060702@siemens.com>
 <1349897512.2759.331.camel@ul30vt.home>
 <20121011103718.GC5552@redhat.com>

On Thu, 2012-10-11 at 12:37 +0200, Michael S. Tsirkin wrote:
> On Wed, Oct 10, 2012 at 01:31:52PM -0600, Alex Williamson wrote:
> > On Tue, 2012-10-09 at 09:09 +0200, Jan Kiszka wrote:
> > > On 2012-10-08 23:11, Alex Williamson wrote:
> > > > On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> > > >> On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> > > >>> On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> > > >>>> On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > > >>>>> Michael, Jan,
> > > >>>>>
> > > >>>>> Any comments on these? I'd like to make the PCI changes before
> > > >>>>> I update vfio-pci to make use of the new resampling irqfd in
> > > >>>>> KVM. We don't have anyone officially listed as maintainer of
> > > >>>>> pci-assign since it's been moved to QEMU. I could include the
> > > >>>>> pci-assign patches in my tree if you prefer. Thanks,
> > > >>>>>
> > > >>>>> Alex
> > > >>>>
> > > >>>> The patches themselves look fine, but I'd like to better
> > > >>>> understand why we want the INTx fallback. Isn't it easier to
> > > >>>> add INTx routing support?
> > > >>>
> > > >>> vfio-pci can work with or without INTx routing support. Its
> > > >>> presence is just one requirement for enabling KVM-accelerated
> > > >>> INTx support. Regardless of whether INTx routing is easy or
> > > >>> hard to implement in a given chipset, I currently can't probe
> > > >>> for it and make a useful decision about whether or not to
> > > >>> enable KVM support without potentially hitting an assert. It's
> > > >>> arguable how important INTx acceleration is for specific
> > > >>> applications, so while I'd like all chipsets to implement
> > > >>> routing, I don't know that it should be a gating factor for
> > > >>> chipset integration. Thanks,
> > > >>>
> > > >>> Alex
> > > >>
> > > >> Yes, but there's nothing KVM-specific in the routing API, and
> > > >> IIRC it actually works fine without KVM.
> > > >
> > > > Correct, but INTx routing isn't very useful without KVM.
> > >
> > > Right now: yes. Long-term: no.
> > > The concept in general is
> > > also required for decoupling I/O paths lock-wise from our main
> > > thread. We need to explore the IRQ path and cache it in order to
> > > avoid taking lots of locks on each delivery, possibly even the
> > > BQL. But we will likely need something smarter at that point,
> > > i.e. something PCI-independent.
> >
> > That sounds great long term, but in the interim I think this
> > trivial extension to the API is more than justified. I hope it can
> > go in soon so we can get vfio-pci KVM INTx acceleration in before
> > the freeze deadlines get much closer. Thanks,
> >
> > Alex
>
> Simply reorder the patches:
> 1. add vfio acceleration with no fallback
> 2. add a way for INTx routing to fail
> 3. add the vfio fallback if INTx routing fails
>
> Then we can apply 1 and argue about the need for 2/3 afterwards.

And what about patches 2-6 of this series? Are they also too
controversial to consider applying now?
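To make the point of contention concrete, below is a minimal sketch of
the probe-then-fallback idea being argued over. It assumes a
hypothetical helper, pci_device_intx_route_supported(), named here
purely for illustration and not an existing QEMU interface;
PCIINTxRoute and pci_device_route_intx_to_irq() are the routing API
the thread refers to.

/*
 * Minimal sketch only. pci_device_intx_route_supported() is a
 * hypothetical helper used for illustration; it is not an existing
 * QEMU interface. PCIINTxRoute and pci_device_route_intx_to_irq()
 * are the INTx routing API referred to in this thread.
 */
#include <stdbool.h>
#include "hw/pci.h"   /* PCI core header in the 1.2-era tree */

static bool vfio_probe_kvm_intx(PCIDevice *pdev)
{
    PCIINTxRoute route;

    if (!pci_device_intx_route_supported(pdev)) {
        /* Chipset does not implement INTx routing; the caller keeps
         * using QEMU-handled (userspace) INTx instead of hitting the
         * assert in the routing path. */
        return false;
    }

    route = pci_device_route_intx_to_irq(pdev, 0 /* INTA */);
    if (route.mode != PCI_INTX_ENABLED) {
        return false;
    }

    /* route.irq is a stable GSI that an irqfd could be bound to. */
    return true;
}

Whether such a probe lives in the PCI core, or the routing call itself
simply becomes able to report failure (point 2 in the reordering
above), is exactly what the proposal leaves open.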