From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gleb Natapov
Subject: Re: KVM: x86: better fix for race between nmi injection and enabling
 nmi window
Date: Thu, 31 Mar 2011 11:47:57 +0200
Message-ID: <20110331094757.GK7766@redhat.com>
References: <20110330163028.GA27365@amt.cnet>
 <4D936572.3060801@redhat.com>
 <20110330184703.GC7741@redhat.com>
 <4D944810.3070702@redhat.com>
 <20110331092445.GA1964@redhat.com>
 <4D94489A.9060705@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Marcelo Tosatti, kvm
To: Avi Kivity
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:60576 "EHLO mx1.redhat.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752289Ab1CaJr7
 (ORCPT ); Thu, 31 Mar 2011 05:47:59 -0400
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
 (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) by mx1.redhat.com
 (8.14.4/8.14.4) with ESMTP id p2V9lxpQ006406
 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
 for ; Thu, 31 Mar 2011 05:47:59 -0400
Content-Disposition: inline
In-Reply-To: <4D94489A.9060705@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Thu, Mar 31, 2011 at 11:25:46AM +0200, Avi Kivity wrote:
> On 03/31/2011 11:24 AM, Gleb Natapov wrote:
> >On Thu, Mar 31, 2011 at 11:23:28AM +0200, Avi Kivity wrote:
> >> On 03/30/2011 08:47 PM, Gleb Natapov wrote:
> >> >On Wed, Mar 30, 2011 at 07:16:34PM +0200, Avi Kivity wrote:
> >> >> On 03/30/2011 06:30 PM, Marcelo Tosatti wrote:
> >> >> >Based on Gleb's idea, fix race between nmi injection and enabling
> >> >> >nmi window in a simpler way.
> >> >> >
> >> >> >Signed-off-by: Marcelo Tosatti
> >> >> >
> >> >> >diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> >> >> >index a6a129f..9a7cc1be 100644
> >> >> >--- a/arch/x86/kvm/x86.c
> >> >> >+++ b/arch/x86/kvm/x86.c
> >> >> >@@ -5152,6 +5152,7 @@ static void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
> >> >> > static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> >> >> > {
> >> >> > 	int r;
> >> >> >+	int nmi_pending;
> >> >> > 	bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
> >> >> > 		vcpu->run->request_interrupt_window;
> >> >> >
> >> >> >@@ -5195,11 +5196,13 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> >> >> > 	if (unlikely(r))
> >> >> > 		goto out;
> >> >> >
> >> >> >+	nmi_pending = ACCESS_ONCE(vcpu->arch.nmi_pending);
> >> >> >+
> >> >> > 	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win) {
> >> >> > 		inject_pending_event(vcpu);
> >> >> >
> >> >> > 		/* enable NMI/IRQ window open exits if needed */
> >> >> >-		if (vcpu->arch.nmi_pending)
> >> >> >+		if (nmi_pending)
> >> >> > 			kvm_x86_ops->enable_nmi_window(vcpu);
> >> >> > 		else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
> >> >> > 			kvm_x86_ops->enable_irq_window(vcpu);
> >> >> >
> >> >>
> >> >> What about the check in inject_pending_events()?
> >> >>
> >> >Didn't we decide that this check is not a problem? The worst that can
> >> >happen is that NMI injection will be delayed till the next exit.
> >>
> >> Could be very far in the future.
> >>
> >Next host interrupt. But with a tickless host and guest, yeah.
>
> esp. important with NMI, which may be used in a situation where your
> tick (and everything else) are dead.
>
If the host is alive, the CPU should eventually receive a reschedule IPI.

--
Gleb.