From mboxrd@z Thu Jan  1 00:00:00 1970
From: Gleb Natapov
Subject: Re: [PATCH v4 5/6] KVM: nVMX: Fix conditions for NMI injection
Date: Sun, 14 Apr 2013 19:43:21 +0300
Message-ID: <20130414164321.GB6542@redhat.com>
References: <523b4691a695525a22102c496a078fda6a13d392.1365934368.git.jan.kiszka@web.de>
 <20130414152359.GT17919@redhat.com>
 <516AD0E1.5000000@web.de>
 <20130414161806.GA6542@redhat.com>
 <516ADAC2.7090306@web.de>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Marcelo Tosatti, kvm, Paolo Bonzini, "Nadav Har'El"
To: Jan Kiszka
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:34606 "EHLO mx1.redhat.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
 id S1752908Ab3DNQn1 (ORCPT );
 Sun, 14 Apr 2013 12:43:27 -0400
Content-Disposition: inline
In-Reply-To: <516ADAC2.7090306@web.de>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Sun, Apr 14, 2013 at 06:35:14PM +0200, Jan Kiszka wrote:
> On 2013-04-14 18:18, Gleb Natapov wrote:
> > On Sun, Apr 14, 2013 at 05:53:05PM +0200, Jan Kiszka wrote:
> >> On 2013-04-14 17:23, Gleb Natapov wrote:
> >>> On Sun, Apr 14, 2013 at 12:12:49PM +0200, Jan Kiszka wrote:
> >>>> From: Jan Kiszka
> >>>>
> >>>> The logic for checking if interrupts can be injected also has to be
> >>>> applied to NMIs. The difference is that, if NMI interception is on,
> >>>> these events are consumed and blocked by the VM exit.
> >>>>
> >>>> Signed-off-by: Jan Kiszka
> >>>> ---
> >>>>  arch/x86/kvm/vmx.c |   28 ++++++++++++++++++++++++++++
> >>>>  1 files changed, 28 insertions(+), 0 deletions(-)
> >>>>
> >>>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> >>>> index 56e7519..ad9b4bc 100644
> >>>> --- a/arch/x86/kvm/vmx.c
> >>>> +++ b/arch/x86/kvm/vmx.c
> >>>> @@ -4190,6 +4190,12 @@ static bool nested_exit_on_intr(struct kvm_vcpu *vcpu)
> >>>>  		PIN_BASED_EXT_INTR_MASK;
> >>>>  }
> >>>>
> >>>> +static bool nested_exit_on_nmi(struct kvm_vcpu *vcpu)
> >>>> +{
> >>>> +	return get_vmcs12(vcpu)->pin_based_vm_exec_control &
> >>>> +		PIN_BASED_NMI_EXITING;
> >>>> +}
> >>>> +
> >>>>  static void enable_irq_window(struct kvm_vcpu *vcpu)
> >>>>  {
> >>>>  	u32 cpu_based_vm_exec_control;
> >>>> @@ -4315,6 +4321,28 @@ static void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
> >>>>
> >>>>  static int vmx_nmi_allowed(struct kvm_vcpu *vcpu)
> >>>>  {
> >>>> +	if (is_guest_mode(vcpu)) {
> >>>> +		struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
> >>>> +
> >>>> +		if (to_vmx(vcpu)->nested.nested_run_pending ||
> >>>> +		    vmcs_read32(GUEST_ACTIVITY_STATE) ==
> >>>> +			GUEST_ACTIVITY_WAIT_SIPI)
> >>> The same is true for interrupts too,
> >>
> >> Yes, but aren't we already waiting with interrupts disabled in that state?
> >>
> > Why? L1 can do vmcs_write(GUEST_ACTIVITY_STATE,
> > GUEST_ACTIVITY_WAIT_SIPI) at any point.
>
> Hmm, ok.
>
> >
> >>> but I do not think that we should allow
> >>> the nested guest to directly enter the GUEST_ACTIVITY_WAIT_SIPI state,
> >>> or any other state except ACTIVE. They should be emulated instead.
> >>
> >> Well, they aren't emulated yet but directly applied. So I think the
> >> patch is correct in the current context at least.
> >>
> > If my understanding is correct, the fact that it is directly applied is
> > a serious bug and should be fixed.
>
> Yeah, L1 could put the vCPU in wait-for-SIPI, and only that physical
> signal will wake it up again. But L0 will not send it...
> >
> >> What negative effects do you expect from entering those states with L2?
> >>
> > As I wrote below (and you misunderstood me :)) it looks like L0 external
> > interrupts do not generate a vmexit while the active VMCS is in the
> > Wait-For-SIPI state. In any case we should carefully examine the
> > implications of this state, since KVM never uses it and does not know
> > how to handle it.
>
> OK, but this should be conceptually unrelated to this patch. So, given
> that you applied patch 4, should I simply remove the activity state
> check from this one in order to proceed with it?
>
Yes. The non-ACTIVE VMCS state issue should be handled separately. But
until it is fixed we should remember not to enable vmx nesting by
default :)

--
			Gleb.