From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([208.118.235.92]:37057) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1UH9Zb-00082g-6f for qemu-devel@nongnu.org; Sun, 17 Mar 2013 05:08:13 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1UH9ZX-0001b8-5P for qemu-devel@nongnu.org; Sun, 17 Mar 2013 05:08:11 -0400
Received: from mx1.redhat.com ([209.132.183.28]:17817) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1UH9ZW-0001at-U4 for qemu-devel@nongnu.org; Sun, 17 Mar 2013 05:08:07 -0400
Date: Sun, 17 Mar 2013 11:08:17 +0200
From: "Michael S. Tsirkin"
Message-ID: <20130317090816.GB28528@redhat.com>
References: <6ce47933-c4df-4a64-9b94-69922ea3ef9a@mailpro> <20130314175003.GD29411@redhat.com> <72939A33-1A1D-4D5B-88CB-22D2F6803428@unidata.it> <14BEFB3E-0F36-44C3-943A-F4EAE92AACA6@dlhnet.de> <36D0FF56-686D-47DC-80FD-8DA6A5F48309@unidata.it> <5142CC80.503@dlhnet.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <5142CC80.503@dlhnet.de>
Subject: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: Peter Lieven
Cc: Stefan Hajnoczi, qemu-devel@nongnu.org, Alexandre DERUMIER, Davide Guerri, Jan Kiszka, Peter Lieven, Dietmar Maurer

On Fri, Mar 15, 2013 at 08:23:44AM +0100, Peter Lieven wrote:
> On 15.03.2013 00:04, Davide Guerri wrote:
> > Yes, this is definitely an option :)
> >
> > Just for curiosity, what is the effect of "in-kernel irqchip"?
>
> It emulates the irqchip in-kernel (in the KVM kernel module), which
> avoids userspace exits to qemu. In your particular case I remember
> that it made all IRQs delivered to vcpu0 only. So I think this is a
> workaround and not the real fix. I think Michael is right that it is
> a guest kernel bug.
> It would be good to find out what it is and ask the 2.6.32
> maintainers to include it. I have further seen that with more recent
> kernels and in-kernel irqchip the IRQs are delivered to vcpu0 only
> again (without multiqueue).
>
> > Is it possible to disable it on a "live" domain?
>
> Try it, I don't know. You definitely have to do a live migration for
> it, but I have no clue if the VM will survive this.
>
> Peter

I doubt you can migrate VMs between irqchip/non-irqchip configurations.

> > Cheers,
> >    Davide
> >
> >
> > On 14/mar/2013, at 19:21, Peter Lieven wrote:
> >
> >>
> >> On 14.03.2013 at 19:15, Davide Guerri wrote:
> >>
> >>> Of course I can do some test but a kernel upgrade is not an option here :(
> >>
> >> disabling the in-kernel irqchip (default since 1.2.0) should also help, maybe this is an option.
> >>
> >> Peter
> >>
> >>
> >
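
[For readers of the archive: the workaround discussed above is controlled by the `kernel_irqchip` machine property on the QEMU command line. A minimal sketch; the disk image, memory size, and network options are placeholders, not any poster's actual configuration:]

```shell
# Disabling the in-kernel irqchip routes interrupt emulation through
# userspace QEMU instead of the KVM kernel module -- slower per IRQ,
# but it works around the guest-side affinity problem in the thread.
# All guest-specific values below are placeholders.
QEMU_CMD="qemu-system-x86_64 \
  -enable-kvm \
  -machine pc,kernel_irqchip=off \
  -smp 4 -m 2048 \
  -netdev tap,id=net0,vhost=on \
  -device virtio-net-pci,netdev=net0 \
  guest-disk.img"

# Print the command rather than launching it, so the sketch can be
# inspected without a guest image present.
echo "$QEMU_CMD"
```

[Note that, as Michael points out above, you cannot expect to live-migrate between irqchip and non-irqchip configurations, so this is a cold-restart change.]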
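
[The symptom Peter describes, all virtio IRQs landing on vcpu0, can be verified from inside a Linux guest by summing the per-CPU columns of /proc/interrupts. A sketch; the sample data below is fabricated for illustration, and on a real guest you would feed it `cat /proc/interrupts` instead:]

```shell
# Fabricated /proc/interrupts excerpt: both virtio queues pinned to CPU0.
sample='           CPU0       CPU1       CPU2       CPU3
 24:     183421          0          0          0   PCI-MSI-edge      virtio0-input
 25:      97314          0          0          0   PCI-MSI-edge      virtio0-output'

# Sum the virtio interrupt counts per CPU. If one CPU carries nearly
# all of them, delivery is concentrated as described in the thread.
summary=$(printf '%s\n' "$sample" | awk '
  NR == 1  { ncpu = NF; next }                       # header row: count CPUs
  /virtio/ { for (i = 2; i <= ncpu + 1; i++)         # columns 2..ncpu+1 are counts
               per_cpu[i-2] += $i }
  END      { for (c = 0; c < ncpu; c++)
               printf "CPU%d: %d\n", c, per_cpu[c] }')
printf '%s\n' "$summary"
```

[With the fabricated numbers this reports all 280721+ interrupts on CPU0 and zero elsewhere; spreading them would then be a matter of irqbalance or writing masks to /proc/irq/N/smp_affinity in the guest.]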