From: Jan Kiszka
Subject: Re: [RFT] IRQ sharing for assigned devices - method selection
Date: Fri, 07 Jan 2011 20:03:19 +0100
Message-ID: <4D276377.2090000@web.de>
In-Reply-To: <1294426676.3214.20.camel@x201>
References: <4D26DDD8.5040707@web.de> <1294426676.3214.20.camel@x201>
To: Alex Williamson
Cc: kvm, "Michael S. Tsirkin", Avi Kivity

On 07.01.2011 19:57, Alex Williamson wrote:
> On Fri, 2011-01-07 at 10:33 +0100, Jan Kiszka wrote:
>> Hi,
>>
>> to finally select the approach for adding overdue IRQ sharing support
>> for PCI pass-through, I hacked up two versions based on Thomas' patches
>> and his suggestion to use a timeout-based mode transition:
>>
>> git://git.kiszka.org/linux-kvm.git queues/dev-assign.notify
>> git://git.kiszka.org/linux-kvm.git queues/dev-assign.timeout
>>
>> git://git.kiszka.org/qemu-kvm.git queues/dev-assign
>>
>> Both approaches work, but I'm either lacking a sufficiently stressing
>> test environment to tease out a relevant delta, even between masking at
>> irqchip vs. PCI config space level - or there is none... Yes, there are
>> differences at the micro level, but they do not manifest in measurable
>> (i.e. above the noise level) load increases or throughput/latency
>> decreases in my limited tests here. If that actually turns out to be
>> true, I would happily bury all this dynamic mode switching again.
>>
>> So, if you have a good high-bandwidth test case at hand, I would
>> appreciate it if you could give this a try and report your findings.
>> Does switching from exclusive to shared IRQ mode decrease the
>> throughput or increase the host load? Is there a difference compared
>> to current kvm?
>
> I think any sufficiently high-bandwidth device will be using MSI and/or
> NAPI, so I wouldn't expect we're going to see much change there.

That's also why I'm no longer sure it's worth worrying about irq_disable
vs. PCI disable. Anyone who cares about performance in a large
pass-through scenario will try to use MSI-capable hardware anyway (or
was so far unable to use tons of legacy-IRQ-driven devices due to IRQ
conflicts).

> Perhaps you can simply force a 1GbE device to use INTx and do some
> netperf TCP_RR tests to try to expose any latency differences. Thanks,

I had the same idea, but I'm lacking a 1GbE peer here. :(

Jan
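
PS: For reference, the INTx latency test Alex suggests could look roughly
like this - just a sketch, assuming netperf/netserver is installed on both
ends; the PCI address, interface name, and peer hostname below are
placeholders:

  # Disable MSI host-wide so the 1GbE NIC falls back to legacy INTx
  # (append to the kernel command line and reboot):
  #   pci=nomsi

  # Verify the device is actually using INTx, not MSI
  # (00:19.0 and eth1 are placeholder names):
  lspci -vvv -s 00:19.0 | grep -i msi
  grep eth1 /proc/interrupts

  # Request/response latency test against the 1GbE peer for 60 seconds
  # (the peer must be running netserver; "peer" is a placeholder hostname):
  netperf -H peer -t TCP_RR -l 60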