From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jan Kiszka
Subject: Re: [PATCH v2 0/9] qemu-kvm: Clean up and enhance MSI irqchip support
Date: Wed, 27 Apr 2011 11:06:19 +0200
Message-ID: <4DB7DC8B.1030402@siemens.com>
References: <4DB7C56D.8040503@redhat.com> <4DB7DB24.8060403@siemens.com>
 <4DB7DC11.1010308@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Cc: Marcelo Tosatti, "kvm@vger.kernel.org", "Michael S. Tsirkin"
To: Avi Kivity
Return-path:
Received: from thoth.sbs.de ([192.35.17.2]:23425 "EHLO thoth.sbs.de"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
 id S1754987Ab1D0JG1 (ORCPT ); Wed, 27 Apr 2011 05:06:27 -0400
In-Reply-To: <4DB7DC11.1010308@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 2011-04-27 11:04, Avi Kivity wrote:
> On 04/27/2011 12:00 PM, Jan Kiszka wrote:
>> On 2011-04-27 09:27, Avi Kivity wrote:
>>> On 04/26/2011 04:19 PM, Jan Kiszka wrote:
>>>> I still have plans to consolidate MSI-X mask notifiers and KVM hooks,
>>>> but that can wait until we go upstream.
>>>>
>>>> This version still makes classic MSI usable in irqchip mode, now not
>>>> only for PCI devices (AHCI, HDA) but also for the HPET (with msi=on).
>>>> Moreover, it contains an additional patch to refresh the MSI IRQ routes
>>>> after vmload.
>>>>
>>>
>>> Patches 1-8 applied, thanks. I'm not sure about 9 (hpet kvm msi
>>> integration) - it seems very intrusive to do this to every
>>> msi-supporting device. At least for pci we get all pci devices done in
>>> one shot.
>>
>> Right, it is a bit intrusive, but I do not see any real alternative.
>>
>>>
>>> We could do this transparently in hw/apic.c. When the message is sent
>>> for the first time we look it up, fail, and update the kvm routing
>>> entry. Next time the lookup succeeds and we just use KVM_IRQ_LINE,
>>> until the message changes and we need to update the irq entry again.
>>
>> I thought about this, also for PCI devices that aren't assigned or
>> vhost-driven, but we would quickly end up with unused and never freed
>> IRQ routing entries. We still need to track the vector configurations.
>
> We can simply drop all route entries that are used exclusively in qemu
> (i.e. not bound to an irqfd) and let the cache rebuild itself.

When should they be dropped?

>
>> What would help at least in the HPET case is a new DELIVER_MSI syscall
>> that completely skips the IRQ routing thing.
>
> It would only help users of 2.6.40 kernels.

Exactly.

>
>> Actually, we only need
>> routing for IRQs that shall be injected directly at kernel level. OTOH,
>> this service would not be available on existing kernels, and we would
>> not be able to simplify the PCI code that way (due to vhost
>> requirements). So I dropped this idea as well and accepted that IRQ
>> routing is the way to go.
>
> I think that with the cache cleanup as outlined above it can work, no?
>

I don't yet see how.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
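
P.S.: To make the transparent cache idea concrete, here is roughly how I
picture it in hw/apic.c. This is only a sketch under assumed interfaces:
kvm_msi_route_add(), kvm_msi_route_del() and kvm_irq_line() are made-up
names standing in for whatever routing/injection services we would
actually use, and the eviction policy is deliberately naive.

/* Sketch of a transparent MSI route cache. All kvm_* helpers below are
 * assumed interfaces, not existing qemu-kvm API. */

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MSI_ROUTE_CACHE_SIZE 16

typedef struct MSIRoute {
    uint64_t addr;  /* MSI address as written by the device */
    uint32_t data;  /* MSI data payload */
    int      gsi;   /* GSI bound to this message in the kernel table */
    bool     used;
} MSIRoute;

static MSIRoute msi_route_cache[MSI_ROUTE_CACHE_SIZE];

/* Assumed services: */
int  kvm_msi_route_add(uint64_t addr, uint32_t data); /* returns a GSI */
void kvm_msi_route_del(int gsi);
void kvm_irq_line(int gsi, int level);                /* KVM_IRQ_LINE */

static MSIRoute *msi_route_lookup(uint64_t addr, uint32_t data)
{
    int i;

    for (i = 0; i < MSI_ROUTE_CACHE_SIZE; i++) {
        if (msi_route_cache[i].used &&
            msi_route_cache[i].addr == addr &&
            msi_route_cache[i].data == data) {
            return &msi_route_cache[i];
        }
    }
    return NULL;
}

void apic_deliver_msi(uint64_t addr, uint32_t data)
{
    MSIRoute *route = msi_route_lookup(addr, data);
    int i;

    if (!route) {
        /* Cache miss: take a free slot if there is one. */
        for (i = 0; i < MSI_ROUTE_CACHE_SIZE; i++) {
            if (!msi_route_cache[i].used) {
                route = &msi_route_cache[i];
                break;
            }
        }
        if (!route) {
            /* Naive eviction of slot 0 - a real version must only
             * recycle entries that are not bound to an irqfd. */
            route = &msi_route_cache[0];
            kvm_msi_route_del(route->gsi);
        }
        route->addr = addr;
        route->data = data;
        route->gsi  = kvm_msi_route_add(addr, data);
        route->used = true;
    }

    /* Hit (or freshly installed route): edge-inject via the cached GSI. */
    kvm_irq_line(route->gsi, 1);
    kvm_irq_line(route->gsi, 0);
}

This still leaves my question open, though: the miss path can recycle
qemu-only entries, but nothing here decides when to drop them
proactively before the kernel routing table fills up.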