From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gleb Natapov
Subject: Re: KVM: x86: vcpu state writeback should be aware of REQ_NMI
Date: Thu, 24 Mar 2011 15:27:16 +0200
Message-ID: <20110324132716.GA13195@redhat.com>
References: <20110324124700.GA26882@amt.cnet>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kvm , Avi Kivity
To: Marcelo Tosatti
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:63554 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751930Ab1CXN1S
	(ORCPT ); Thu, 24 Mar 2011 09:27:18 -0400
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23]) by mx1.redhat.com
	(8.14.4/8.14.4) with ESMTP id p2ODRISM031524
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for ; Thu, 24 Mar 2011 09:27:18 -0400
Content-Disposition: inline
In-Reply-To: <20110324124700.GA26882@amt.cnet>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Thu, Mar 24, 2011 at 09:47:00AM -0300, Marcelo Tosatti wrote:
> 
> Since "Fix race between nmi injection and enabling nmi window", a pending
> NMI can be represented by the KVM_REQ_NMI bit in vcpu->requests.
> 
> When setting vcpu state via SET_VCPU_EVENTS, for example during reset,
> the REQ_NMI bit should be cleared, otherwise a pending NMI is transferred
> to nmi_pending upon vcpu entry.
> 
> The runnable condition should also consider the requests bit.
> 
> BZ: http://bugzilla.redhat.com/show_bug.cgi?id=684719
> 
Looks like we need to clear the request bit on cpu reset too. KVM_REQ_NMI
is starting to become more complicated than it was initially.
Maybe we can replace it with something like this:

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1b8b16a..6a66d19 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5151,6 +5151,7 @@ static void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
 static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 {
 	int r;
+	int nmi_pending;
 	bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
 		vcpu->run->request_interrupt_window;
@@ -5188,19 +5189,19 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 			r = 1;
 			goto out;
 		}
-		if (kvm_check_request(KVM_REQ_NMI, vcpu))
-			vcpu->arch.nmi_pending = true;
 	}
 
 	r = kvm_mmu_reload(vcpu);
 	if (unlikely(r))
 		goto out;
 
+	nmi_pending = vcpu->arch.nmi_pending;
+
 	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win) {
 		inject_pending_event(vcpu);
 
 		/* enable NMI/IRQ window open exits if needed */
-		if (vcpu->arch.nmi_pending)
+		if (nmi_pending)
 			kvm_x86_ops->enable_nmi_window(vcpu);
 		else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
 			kvm_x86_ops->enable_irq_window(vcpu);

> Signed-off-by: Marcelo Tosatti
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index f1e4025..d7f4c4f 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -2677,8 +2677,11 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
> 					     events->interrupt.shadow);
> 
> 	vcpu->arch.nmi_injected = events->nmi.injected;
> -	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING)
> +	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING) {
> 		vcpu->arch.nmi_pending = events->nmi.pending;
> +		if (!vcpu->arch.nmi_pending)
> +			clear_bit(KVM_REQ_NMI, &vcpu->requests);
> +	}
> 	kvm_x86_ops->set_nmi_mask(vcpu, events->nmi.masked);
> 
> 	if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR)
> @@ -6149,7 +6152,8 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
> 		 !vcpu->arch.apf.halted)
> 		|| !list_empty_careful(&vcpu->async_pf.done)
> 		|| vcpu->arch.mp_state == KVM_MP_STATE_SIPI_RECEIVED
> -		|| vcpu->arch.nmi_pending ||
> +		|| vcpu->arch.nmi_pending
> +		|| test_bit(KVM_REQ_NMI, &vcpu->requests) ||
> 		(kvm_arch_interrupt_allowed(vcpu) &&
> 		 kvm_cpu_has_interrupt(vcpu));
> }
> 

--
			Gleb.