From: Radim Krčmář
Subject: Re: [PATCH v3] KVM: x86: Fix nmi injection failure when vcpu got blocked
Date: Tue, 30 May 2017 15:36:05 +0200
Message-ID: <20170530133605.GA18926@potion>
In-Reply-To: <1495775808-10396-1-git-send-email-ann.zhuangyanying@huawei.com>
To: Zhuangyanying
Cc: pbonzini@redhat.com, herongguang.he@huawei.com, qemu-devel@nongnu.org, arei.gonglei@huawei.com, oscar.zhangbo@huawei.com, kvm@vger.kernel.org

2017-05-26 13:16+0800, Zhuangyanying:
> From: ZhuangYanying
>
> When spin_lock_irqsave() deadlock occurs inside the guest, vcpu threads
> other than the lock-holding one enter the S state because of pvspinlock.
> If an NMI is then injected via the libvirt API "inject-nmi", the NMI
> cannot be injected into the VM.
>
> The reason is:
> 1. QEMU's KVM_NMI ioctl sets nmi_queued to 1, and do_inject_external_nmi()
>    sets cpu->kvm_vcpu_dirty to true at the same time.
> 2. Because cpu->kvm_vcpu_dirty is true, process_nmi() sets nmi_queued
>    back to 0 before entering the guest.
>
> Checking nmi_queued alone is therefore not enough to decide whether to
> stay in vcpu_block(): the NMI should be injected immediately in any
> situation. Additionally check nmi_pending, and test KVM_REQ_NMI instead
> of nmi_queued in kvm_vcpu_has_events().
>
> Do the same change for SMIs.
>
> Signed-off-by: Zhuang Yanying
> ---
> v1->v2
> - simplify message. The complete description is here:
>   http://www.spinics.net/lists/kvm/msg150380.html
> - Testing KVM_REQ_NMI replaces nmi_pending.
> - Add testing kvm_x86_ops->nmi_allowed(vcpu).
> v2->v3
> - Testing KVM_REQ_NMI replaces nmi_queued, not nmi_pending.
> - Do the same change for SMIs.
> ---
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> @@ -8394,10 +8394,13 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
>  	if (vcpu->arch.pv.pv_unhalted)
>  		return true;
>
> -	if (atomic_read(&vcpu->arch.nmi_queued))
> +	if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
> +	    (vcpu->arch.nmi_pending &&

I think the logic should be

  if ((kvm_test_request(KVM_REQ_NMI, vcpu) || vcpu->arch.nmi_pending) &&
      kvm_x86_ops->nmi_allowed(vcpu))

because there is no reason to resume the VCPU if we cannot inject.

> +	     kvm_x86_ops->nmi_allowed(vcpu)))
>  		return true;
>
> -	if (kvm_test_request(KVM_REQ_SMI, vcpu))
> +	if (kvm_test_request(KVM_REQ_SMI, vcpu) ||
> +	    (vcpu->arch.smi_pending && !is_smm(vcpu)))

Ditto.

>  		return true;
>
>  	if (kvm_arch_interrupt_allowed(vcpu) &&

We'll then be consistent with other interrupts,

thanks.