From: Avi Kivity
Subject: Re: KVM: x86: vcpu state writeback should be aware of REQ_NMI
Date: Thu, 24 Mar 2011 15:26:31 +0200
Message-ID: <4D8B4687.8040104@redhat.com>
In-Reply-To: <20110324124700.GA26882@amt.cnet>
References: <20110324124700.GA26882@amt.cnet>
To: Marcelo Tosatti
Cc: kvm, Gleb Natapov
List-ID: kvm@vger.kernel.org

On 03/24/2011 02:47 PM, Marcelo Tosatti wrote:
> Since "Fix race between nmi injection and enabling nmi window", a pending
> NMI can be represented by the KVM_REQ_NMI bit in vcpu->requests.
>
> When setting vcpu state via SET_VCPU_EVENTS, for example during reset,
> the REQ_NMI bit should be cleared, otherwise the pending NMI is
> transferred to nmi_pending upon vcpu entry.
>
> The runnable condition should also consider the requests bit.
>
> BZ: http://bugzilla.redhat.com/show_bug.cgi?id=684719
>
> Signed-off-by: Marcelo Tosatti
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index f1e4025..d7f4c4f 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -2677,8 +2677,11 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
>  					      events->interrupt.shadow);
>
>  	vcpu->arch.nmi_injected = events->nmi.injected;
> -	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING)
> +	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING) {
>  		vcpu->arch.nmi_pending = events->nmi.pending;
> +		if (!vcpu->arch.nmi_pending)
> +			clear_bit(KVM_REQ_NMI, &vcpu->requests);
> +	}
>  	kvm_x86_ops->set_nmi_mask(vcpu, events->nmi.masked);
>
>  	if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR)
> @@ -6149,7 +6152,8 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
>  		 !vcpu->arch.apf.halted)
>  		|| !list_empty_careful(&vcpu->async_pf.done)
>  		|| vcpu->arch.mp_state == KVM_MP_STATE_SIPI_RECEIVED
> -		|| vcpu->arch.nmi_pending ||
> +		|| vcpu->arch.nmi_pending
> +		|| test_bit(KVM_REQ_NMI, &vcpu->requests) ||
>  		(kvm_arch_interrupt_allowed(vcpu) &&
>  		 kvm_cpu_has_interrupt(vcpu));
>  }
>

Ouch, right. But shouldn't we have similar processing when getting vcpu events? Otherwise a pending NMI can be lost during live migration.

-- 
error compiling committee.c: too many arguments to function