Date: Fri, 29 May 2009 19:54:18 +0300
From: Gleb Natapov
Subject: [Qemu-devel] Re: Lost interrupts with upstream KVM
Message-ID: <20090529165418.GA917@redhat.com>
References: <4A1F9B7C.4020201@siemens.com> <20090529130806.GB28542@redhat.com> <4A1FF6B9.9050502@siemens.com> <20090529162015.GA29579@redhat.com> <4A201177.2090103@siemens.com>
In-Reply-To: <4A201177.2090103@siemens.com>
To: Jan Kiszka
Cc: qemu-devel, kvm-devel

On Fri, May 29, 2009 at 06:46:47PM +0200, Jan Kiszka wrote:
> Gleb Natapov wrote:
> > On Fri, May 29, 2009 at 04:52:41PM +0200, Jan Kiszka wrote:
> >> Gleb Natapov wrote:
> >>> On Fri, May 29, 2009 at 10:23:24AM +0200, Jan Kiszka wrote:
> >>>> Hi Gleb,
> >>>>
> >>>> with latest kernel modules, namely beginning with 6bc0a1a235 (Remove
> >>>> irq_pending bitmap), I'm losing interrupts with upstream's KVM support.
> >>>> After some bisecting, hair-pulling, and a bit of meditation I added a
> >>>> WARN_ON(kvm_cpu_has_interrupt(vcpu)) to kvm_vcpu_ioctl_interrupt, and it
> >>>> actually triggered right before the guest got stuck.
> >>>>
> >>>> This didn't trigger with qemu-kvm (and -no-kvm-irqchip) yet, but on the
> >>>> other hand, I currently do not see a potential bug in upstream's
> >>>> kvm_arch_pre_run. Could you have a look, check if you can reproduce it,
> >>>> and specifically whether this isn't a KVM kernel issue in the end?
> >>>>
> >>> In kvm_cpu_exec(), after calling kvm_arch_pre_run(), env->exit_request is
> >>> tested and the function can exit without calling kvm_vcpu_ioctl(KVM_RUN).
> >>> Can you check if this is what happens in your case?
> >> This path is executed quite frequently here. No obvious correlation with
> >> the lost IRQ.
> >>
> > If kvm_arch_pre_run() injected an interrupt, kvm_vcpu_ioctl(KVM_RUN) has
> > to be executed before injecting another interrupt. So if on the first
> > call of kvm_cpu_exec() kvm_arch_pre_run() injected an interrupt, but
> > kvm_vcpu_ioctl(KVM_RUN) was not executed because of env->exit_request,
> > and on the next kvm_cpu_exec() another interrupt is injected, the
> > previous one will be lost.
>
> ...and kvm_run->ready_for_interrupt_injection is not updated either in
> that case, right? That makes me wonder if KVM_INTERRUPT shouldn't better
> return an error in case the queue is already full.
>
If kvm_vcpu_ioctl(KVM_RUN) is called, but the exit happens before the
interrupt is injected, kvm_run->ready_for_interrupt_injection should be
updated to reflect that fact.

--
			Gleb.