Message-ID: <4A201177.2090103@siemens.com>
Date: Fri, 29 May 2009 18:46:47 +0200
From: Jan Kiszka
MIME-Version: 1.0
References: <4A1F9B7C.4020201@siemens.com> <20090529130806.GB28542@redhat.com> <4A1FF6B9.9050502@siemens.com> <20090529162015.GA29579@redhat.com>
In-Reply-To: <20090529162015.GA29579@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Subject: [Qemu-devel] Re: Lost interrupts with upstream KVM
List-Id: qemu-devel.nongnu.org
To: Gleb Natapov
Cc: qemu-devel, kvm-devel

Gleb Natapov wrote:
> On Fri, May 29, 2009 at 04:52:41PM +0200, Jan Kiszka wrote:
>> Gleb Natapov wrote:
>>> On Fri, May 29, 2009 at 10:23:24AM +0200, Jan Kiszka wrote:
>>>> Hi Gleb,
>>>>
>>>> with latest kernel modules, namely beginning with 6bc0a1a235 (Remove
>>>> irq_pending bitmap), I'm losing interrupts with upstream's KVM support.
>>>> After some bisecting, hair-pulling, and a bit of meditation, I added a
>>>> WARN_ON(kvm_cpu_has_interrupt(vcpu)) to kvm_vcpu_ioctl_interrupt, and
>>>> it actually triggered right before the guest got stuck.
>>>>
>>>> This didn't trigger with qemu-kvm (and -no-kvm-irqchip) yet, but, on
>>>> the other hand, I currently do not see a potential bug in upstream's
>>>> kvm_arch_pre_run. Could you have a look whether you can reproduce it,
>>>> and specifically whether this isn't a KVM kernel issue in the end?
>>>>
>>> In kvm_cpu_exec(), after calling kvm_arch_pre_run(), env->exit_request
>>> is tested, and the function can exit without calling
>>> kvm_vcpu_ioctl(KVM_RUN). Can you check if this is what happens in your
>>> case?
>> This path is executed quite frequently here. No obvious correlation
>> with the lost IRQ.
>>
> If kvm_arch_pre_run() injected an interrupt, kvm_vcpu_ioctl(KVM_RUN) has
> to be executed before injecting another interrupt. So if, on the first
> call of kvm_cpu_exec(), kvm_arch_pre_run() injected an interrupt but
> kvm_vcpu_ioctl(KVM_RUN) was not executed because of env->exit_request,
> and on the next kvm_cpu_exec() another interrupt is injected, the
> previous one will be lost.

...and kvm_run->ready_for_interrupt_injection is not updated either in
that case, right? That makes me wonder whether KVM_INTERRUPT shouldn't
rather return an error when the queue is already full.

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux