From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jan Kiszka
Subject: Re: [PATCH] KVM: nVMX: Rework event injection and recovery
Date: Wed, 20 Feb 2013 17:48:40 +0100
Message-ID: <5124FE68.5030101@siemens.com>
References: <5124C93B.50902@siemens.com> <20130220164634.GR3600@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Cc: Marcelo Tosatti , kvm , "Nadav Har'El" , "Nakajima, Jun"
To: Gleb Natapov
Return-path: 
Received: from goliath.siemens.de ([192.35.17.28]:26027 "EHLO goliath.siemens.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751318Ab3BTQsw
	(ORCPT ); Wed, 20 Feb 2013 11:48:52 -0500
In-Reply-To: <20130220164634.GR3600@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID: 

On 2013-02-20 17:46, Gleb Natapov wrote:
> On Wed, Feb 20, 2013 at 02:01:47PM +0100, Jan Kiszka wrote:
>> This aligns VMX more with SVM regarding event injection and recovery
>> for nested guests. The changes allow injecting interrupts directly
>> from L0 to L2.
>>
>> One difference to SVM is that we always transfer the pending event
>> injection into the architectural state of the VCPU and then drop it
>> from there if it turns out that we left L2 to enter L1.
>>
>> VMX and SVM are now identical in how they recover event injections
>> from unperformed vmlaunch/vmresume: We detect that
>> VM_ENTRY_INTR_INFO_FIELD still contains a valid event and, if yes,
>> transfer the content into L1's idt_vectoring_info_field.
>>
>> To avoid incorrectly leaking an event into the architectural VCPU
>> state that L1 wants to inject, we skip cancellation on nested run.
>>
>> Signed-off-by: Jan Kiszka
>> ---
>>
>> Survived moderate testing here and (currently) makes sense to me, but
>> please review very carefully. I wouldn't be surprised if I'm still
>> missing some subtle corner case.
>>
>>  arch/x86/kvm/vmx.c | 57 +++++++++++++++++++++++----------------------------
>>  1 files changed, 26 insertions(+), 31 deletions(-)
>>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index dd3a8a0..7d2fbd2 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -6489,8 +6489,6 @@ static void __vmx_complete_interrupts(struct vcpu_vmx *vmx,
>>  
>>  static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
>>  {
>> -	if (is_guest_mode(&vmx->vcpu))
>> -		return;
>>  	__vmx_complete_interrupts(vmx, vmx->idt_vectoring_info,
>>  				  VM_EXIT_INSTRUCTION_LEN,
>>  				  IDT_VECTORING_ERROR_CODE);
>> @@ -6498,7 +6496,7 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
>>  
>>  static void vmx_cancel_injection(struct kvm_vcpu *vcpu)
>>  {
>> -	if (is_guest_mode(vcpu))
>> +	if (to_vmx(vcpu)->nested.nested_run_pending)
>>  		return;
> Why is this needed here?

Please check if my reply to Nadav explains this sufficiently.

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux