From: Gleb Natapov
Subject: Re: KVM: x86: vcpu state writeback should be aware of REQ_NMI
Date: Thu, 24 Mar 2011 17:19:35 +0200
Message-ID: <20110324151935.GB14544@redhat.com>
References: <20110324124700.GA26882@amt.cnet> <20110324132716.GA13195@redhat.com> <20110324145910.GB26882@amt.cnet>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20110324145910.GB26882@amt.cnet>
To: Marcelo Tosatti
Cc: kvm, Avi Kivity

On Thu, Mar 24, 2011 at 11:59:11AM -0300, Marcelo Tosatti wrote:
> On Thu, Mar 24, 2011 at 03:27:16PM +0200, Gleb Natapov wrote:
> > On Thu, Mar 24, 2011 at 09:47:00AM -0300, Marcelo Tosatti wrote:
> > > 
> > > Since "Fix race between nmi injection and enabling nmi window", a pending
> > > NMI can be represented by the KVM_REQ_NMI bit in vcpu->requests.
> > > 
> > > When setting vcpu state via SET_VCPU_EVENTS, for example during reset,
> > > the REQ_NMI bit should be cleared, otherwise the pending NMI is transferred
> > > to nmi_pending upon vcpu entry.
> > > 
> > > The requests bit should also be considered in the runnable conditional.
> > > 
> > > BZ: http://bugzilla.redhat.com/show_bug.cgi?id=684719
> > > 
> > Looks like we need to clear the request bit on cpu reset too. KVM_REQ_NMI
> > is starting to become more complicated than it was initially. Maybe we
> > should replace it with something like this:
> > 
> > 
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 1b8b16a..6a66d19 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -5151,6 +5151,7 @@ static void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
> >  static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> >  {
> >  	int r;
> > +	int nmi_pending;
> >  	bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
> >  		vcpu->run->request_interrupt_window;
> > 
> > @@ -5188,19 +5189,19 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> >  			r = 1;
> >  			goto out;
> >  		}
> > -		if (kvm_check_request(KVM_REQ_NMI, vcpu))
> > -			vcpu->arch.nmi_pending = true;
> >  	}
> > 
> >  	r = kvm_mmu_reload(vcpu);
> >  	if (unlikely(r))
> >  		goto out;
> > +	nmi_pending = vcpu->arch.nmi_pending;
> > +
> >  	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win) {
> 
> Yep, that's nicer.
> 
> Race: a remote CPU sets nmi_pending = true here, NMI injection is not
> allowed, and we fail to open the NMI window when we should. Unless I'm
> mistaken, this should be rare enough to be irrelevant.
> 
Yes, that's what Avi pointed out too. I think the worst thing that could
happen is delaying the NMI until the next vcpu entry.

> Two patches, one to revert REQ_NMI and another to fix the original
> problem, would make backporting easier.
If we agree this is the way to go, I will do that.

--
			Gleb.
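
For readers outside the thread, a minimal sketch of the approach Marcelo
describes at the top (clear the request bit when userspace rewrites the event
state, and count it in the runnable check) could look like the fragments
below. This is an illustration against the 2.6.38-era arch/x86/kvm/x86.c, not
the patch actually posted; treating vcpu->requests as a plain unsigned long
bitmask handled with clear_bit()/test_bit() reflects that era's code and is an
assumption here.

	/*
	 * Illustrative sketch only -- not the patch under discussion.
	 * Assumes the 2.6.38-era arch/x86/kvm/x86.c.
	 */

	/* In kvm_vcpu_ioctl_x86_set_vcpu_events(): userspace (reset, for
	 * example) rewrites the NMI state, so an NMI recorded only as a
	 * request bit must not be carried over into nmi_pending on the
	 * next vcpu entry. */
	vcpu->arch.nmi_pending = events->nmi.pending;
	clear_bit(KVM_REQ_NMI, &vcpu->requests);

	/* In kvm_arch_vcpu_runnable(): an NMI that still lives only in the
	 * request bit must also count as a reason to wake a halted vcpu. */
	if (test_bit(KVM_REQ_NMI, &vcpu->requests))
		return 1;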