From: Alex Williamson
Subject: Re: [RFT] IRQ sharing for assigned devices - method selection
Date: Fri, 07 Jan 2011 11:57:56 -0700
Message-ID: <1294426676.3214.20.camel@x201>
References: <4D26DDD8.5040707@web.de>
In-Reply-To: <4D26DDD8.5040707@web.de>
To: Jan Kiszka
Cc: kvm, "Michael S. Tsirkin", Avi Kivity

On Fri, 2011-01-07 at 10:33 +0100, Jan Kiszka wrote:
> Hi,
>
> to finally select the approach for adding overdue IRQ sharing support
> for PCI pass-through, I hacked up two versions based on Thomas' patches
> and his suggestion to use a timeout-based mode transition:
>
> git://git.kiszka.org/linux-kvm.git queues/dev-assign.notify
> git://git.kiszka.org/linux-kvm.git queues/dev-assign.timeout
>
> git://git.kiszka.org/qemu-kvm.git queues/dev-assign
>
> Both approaches work, but I'm either lacking a sufficiently stressing
> test environment to tease out a relevant delta, even between masking at
> the irqchip vs. the PCI config space level - or there is none... Yes,
> there are differences at the micro level, but they do not manifest in a
> measurable (i.e. above the noise level) load increase or
> throughput/latency decrease in my limited tests here. If that actually
> turns out to be true, I would happily bury all this dynamic mode
> switching again.
>
> So, if you have a good high-bandwidth test case at hand, I would
> appreciate it if you could give this a try and report your findings.
> Does switching from exclusive to shared IRQ mode decrease the
> throughput or increase the host load? Is there a difference to current
> kvm?
I think any sufficiently high-bandwidth device will be using MSI and/or
NAPI, so I wouldn't expect we're going to see much change there. Perhaps
you can simply force a 1GbE device to use INTx and do some netperf
TCP_RR tests to try to expose any latency differences.

Thanks,
Alex
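For reference, the test suggested above could be run along these lines (a
sketch only; the device BDF 0000:01:00.0 and the peer address 192.168.1.1
are placeholders, and note that the pci=nomsi kernel parameter disables
MSI system-wide, not just for the assigned device):

```shell
# Boot the kernel that owns the NIC with MSI disabled so the device
# falls back to line-based INTx (add to the kernel command line):
#   pci=nomsi

# Confirm the device is not using MSI/MSI-X -- the MSI capability
# should report "Enable-" (0000:01:00.0 is a placeholder BDF):
lspci -vvv -s 0000:01:00.0 | grep -A1 'MSI:'

# With netserver running on the peer, measure request/response
# latency for 30 seconds; lower transaction rates in shared-IRQ
# mode would indicate added latency (192.168.1.1 is a placeholder):
netperf -H 192.168.1.1 -t TCP_RR -l 30
```

Comparing the TCP_RR transaction rate with and without IRQ sharing, and
against an unpatched kernel, should expose any per-interrupt overhead
that bulk-throughput tests would hide behind NAPI coalescing.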