Date: Thu, 26 Nov 2009 15:34:09 +0200
From: "Michael S. Tsirkin"
Subject: Re: [Qemu-devel] Re: [PATCH 0/4] pci: interrupt status/interrupt disable support
Message-ID: <20091126133409.GB31817@redhat.com>
References: <20091125165834.GA24783@redhat.com> <200911261241.04148.paul@codesourcery.com> <20091126125910.GA31731@redhat.com> <200911261321.39347.paul@codesourcery.com>
In-Reply-To: <200911261321.39347.paul@codesourcery.com>
List-Id: qemu-devel.nongnu.org
To: Paul Brook
Cc: Isaku Yamahata, qemu-devel@nongnu.org

On Thu, Nov 26, 2009 at 01:21:39PM +0000, Paul Brook wrote:
> >> It's really not that much of a fast path. Unless you're doing something
> >> particularly obscure, even under heavy load you're unlikely to exceed
> >> a few kHz.
> >
> > I think with kvm, a heavy disk-stressing benchmark can get higher.
>
> I'd still expect this to be the least of your problems.
>
> If nothing else you've at least one host signal delivery and/or thread
> context switch in there.

The iothread, which does the signalling, might be running in parallel
with the guest CPU.

> Not to mention the overhead of forwarding the interrupt to the guest CPU.

This is often mitigated, as KVM knows to inject the interrupt on the
next vmexit.

> > > Compared to the average PIC implementation, and the overhead of the
> > > actual CPU interrupt, I find it hard to believe that looping over
> > > precisely 4 entries has any real performance hit.
> >
> > I don't think it is major, but I have definitely seen, in the past,
> > that extra branches and memory accesses have a small but measurable
> > effect when taken in interrupt handler routines in drivers, and the
> > same should apply here.
> >
> > OTOH, keeping the sum around is trivial.
>
> Not entirely. You now have two different bits of information that you
> have to keep consistent.

This is inherent in the PCI spec definition: the Interrupt Status bit in
config space duplicates the interrupt state.

> Unless you can show that this is performance-critical code, I strongly
> recommend keeping it as simple as possible.
>
> Paul

I don't see that there is anything left to show: interrupt delivery is
*obviously* performance critical. People are running *latency benchmarks*
measuring how fast a packet can get from an external interface into the
guest, in microseconds. We definitely want to remove obvious waste there.

-- 
MST
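
To make the "keeping the sum around" idea being debated concrete, here is
a minimal, self-contained C sketch. The names (PCIDevState, pci_set_intx)
are hypothetical illustrations, not the actual qemu code; only
PCI_STATUS_INTERRUPT (bit 3 of the config-space Status register) comes
from the PCI spec. The point is to keep a running count of asserted INTx
pins next to the per-pin levels, so "is any interrupt pending?" and the
Interrupt Status bit are O(1) rather than a loop over the four pins:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PCI_NUM_PINS         4    /* INTA#..INTD# */
#define PCI_STATUS_INTERRUPT 0x8  /* Interrupt Status, bit 3 of Status reg */

/* Hypothetical device state: one level per INTx pin, plus a running
 * count of asserted pins (the "sum") kept alongside them. */
typedef struct {
    bool     irq_level[PCI_NUM_PINS];
    int      irq_count;   /* number of pins currently asserted */
    uint16_t status;      /* config-space Status register */
} PCIDevState;

/* Change one pin's level and keep irq_count and the config-space
 * Interrupt Status bit consistent with it in the same place. */
static void pci_set_intx(PCIDevState *d, int pin, bool level)
{
    if (d->irq_level[pin] == level) {
        return;   /* no edge, nothing to update */
    }
    d->irq_level[pin] = level;
    d->irq_count += level ? 1 : -1;

    if (d->irq_count) {
        d->status |= PCI_STATUS_INTERRUPT;
    } else {
        d->status &= ~PCI_STATUS_INTERRUPT;
    }
}

int main(void)
{
    PCIDevState dev = {0};

    pci_set_intx(&dev, 0, true);   /* assert INTA# */
    pci_set_intx(&dev, 2, true);   /* assert INTC# */
    pci_set_intx(&dev, 0, false);  /* deassert INTA#; INTC# keeps status set */
    printf("pending=%d status=0x%04x\n", dev.irq_count, (unsigned)dev.status);
    /* prints: pending=1 status=0x0008 */
    return 0;
}

In this sketch the consistency concern raised above is confined to a
single function: every pin change goes through pci_set_intx(), so the
count and the duplicated status bit cannot drift apart.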