From: Alex Williamson
Subject: Re: The smp_affinity cannot work correctly on guest os when PCI passthrough device using msi/msi-x with KVM
Date: Mon, 26 Nov 2012 08:28:06 -0700
Message-ID: <1353943686.1809.32.camel@bling.home>
To: yi li
Cc: kvm@vger.kernel.org

On Fri, 2012-11-23 at 11:06 +0800, yi li wrote:
> Hi Guys,
>
> There is an issue where smp_affinity cannot work correctly on a guest
> OS when a PCI passthrough device uses MSI/MSI-X with KVM.
>
> My reasoning:
> The pCPU raises a lot of IPI interrupts to find the vCPU that should
> handle the IRQ, so the guest OS takes VM_EXITs frequently. Right?
>
> If smp_affinity worked correctly in the guest OS, the best setup would
> be to pin (cputune) the vCPU that handles the IRQ to the same pCPU
> that handles the kvm:pci-bus IRQ on the host. But unfortunately, I
> find that smp_affinity does not work correctly in the guest OS with
> MSI/MSI-X.
>
> How to reproduce:
> 1: Pass through a NIC (Broadcom BCM5716S) to the guest OS.
>
> 2: ifup the NIC (the card uses MSI-X interrupts by default), and stop
>    the irqbalance service.
>
> 3: echo 4 > /proc/irq/NETCARDIRQ/smp_affinity, so we expect vcpu2 to
>    handle the IRQ.
>
> 4: We have set [...] and set the irq kvm:pci-bus to pcpu1 on the
>    host.
>
> We think this configuration will reduce the IPI interrupts when
> injecting the interrupt into the guest OS, but this IRQ is not handled
> only on vcpu2. That may not be what we expect.

What version of qemu-kvm/qemu are you using?  There's been some work
recently specifically to enable this.  Thanks,

Alex
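[Editor's sketch of the affinity steps quoted above. The IRQ numbers ($GUEST_IRQ for the NIC's MSI-X vector inside the guest, $HOST_IRQ for the host's kvm:pci-bus interrupt) are placeholders, not values from the thread; read them from /proc/interrupts on each side.]

```shell
# /proc/irq/N/smp_affinity takes a hexadecimal CPU bitmask:
# bit N set means "CPU N may handle this IRQ".
cpu=2
mask=$(printf '%x' $((1 << cpu)))   # CPU2 -> bit 2 -> mask "4"
echo "$mask"                        # prints: 4

# In the guest (as root), steer the NIC's MSI-X vector to vcpu2:
#   echo "$mask" > /proc/irq/$GUEST_IRQ/smp_affinity

# On the host, steer the kvm:pci-bus interrupt to pcpu1 (bit 1 -> "2"):
#   echo 2 > /proc/irq/$HOST_IRQ/smp_affinity
```

Note that irqbalance must stay stopped (as in step 2), or it may rewrite these masks behind your back.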