From mboxrd@z Thu Jan 1 00:00:00 1970
From: Avi Kivity
Subject: Re: [RFT] IRQ sharing for assigned devices - method selection
Date: Sun, 09 Jan 2011 11:42:34 +0200
Message-ID: <4D29830A.403@redhat.com>
References: <4D26DDD8.5040707@web.de> <1294426676.3214.20.camel@x201> <4D276377.2090000@web.de>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Alex Williamson , kvm , "Michael S. Tsirkin"
To: Jan Kiszka
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:29342 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751000Ab1AIJmj
	(ORCPT ); Sun, 9 Jan 2011 04:42:39 -0500
In-Reply-To: <4D276377.2090000@web.de>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 01/07/2011 09:03 PM, Jan Kiszka wrote:
> >>
> >> So, if you have a good high-bandwidth test case at hand, I would
> >> appreciate if you could give this a try and report your findings. Does
> >> switching from exclusive to shared IRQ mode decrease the throughput or
> >> increase the host load? Is there a difference to current kvm?
> >
> > I think any sufficiently high bandwidth device will be using MSI and or
> > NAPI, so I wouldn't expect we're going to see much change there.
>
> That's also why I'm no longer sure it's worth to worry about irq_disable
> vs. PCI disable. Anyone who cares about performance in a large
> pass-through scenario will try to use MSI-capable hardware anyway (or
> was so far unable to use tons of legacy IRQ driven devices due to IRQ
> conflicts).

PCI disable is probably only ridiculously slow with cf8/cfc config space
access, and significantly faster (though still slow) with mmconfig. Needs
to be taken into account as well.

-- 
error compiling committee.c: too many arguments to function