From: Gleb Natapov
Subject: Re: [PATCH] KVM: nVMX: Rework event injection and recovery
Date: Wed, 20 Feb 2013 18:51:19 +0200
Message-ID: <20130220165119.GS3600@redhat.com>
References: <5124C93B.50902@siemens.com> <20130220164634.GR3600@redhat.com> <5124FE68.5030101@siemens.com>
In-Reply-To: <5124FE68.5030101@siemens.com>
To: Jan Kiszka
Cc: Marcelo Tosatti, kvm, "Nadav Har'El", "Nakajima, Jun"

On Wed, Feb 20, 2013 at 05:48:40PM +0100, Jan Kiszka wrote:
> On 2013-02-20 17:46, Gleb Natapov wrote:
> > On Wed, Feb 20, 2013 at 02:01:47PM +0100, Jan Kiszka wrote:
> >> This aligns VMX more closely with SVM regarding event injection and
> >> recovery for nested guests. The changes allow injecting interrupts
> >> directly from L0 to L2.
> >>
> >> One difference from SVM is that we always transfer the pending event
> >> injection into the architectural state of the VCPU and then drop it
> >> from there if it turns out that we left L2 to enter L1.
> >>
> >> VMX and SVM are now identical in how they recover event injections
> >> from unperformed vmlaunch/vmresume: We detect that
> >> VM_ENTRY_INTR_INFO_FIELD still contains a valid event and, if so,
> >> transfer its content into L1's idt_vectoring_info_field.
> >>
> >> To avoid incorrectly leaking an event that L1 wants to inject into
> >> the architectural VCPU state, we skip cancellation on nested run.
> >>
> >> Signed-off-by: Jan Kiszka
> >>
> >> ---
> >>
> >> Survived moderate testing here and (currently) makes sense to me, but
> >> please review very carefully. I wouldn't be surprised if I'm still
> >> missing some subtle corner case.
> >>
> >>  arch/x86/kvm/vmx.c | 57 +++++++++++++++++++++++----------------------------
> >>  1 files changed, 26 insertions(+), 31 deletions(-)
> >>
> >> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> >> index dd3a8a0..7d2fbd2 100644
> >> --- a/arch/x86/kvm/vmx.c
> >> +++ b/arch/x86/kvm/vmx.c
> >> @@ -6489,8 +6489,6 @@ static void __vmx_complete_interrupts(struct vcpu_vmx *vmx,
> >>
> >>  static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
> >>  {
> >> -	if (is_guest_mode(&vmx->vcpu))
> >> -		return;
> >>  	__vmx_complete_interrupts(vmx, vmx->idt_vectoring_info,
> >>  				  VM_EXIT_INSTRUCTION_LEN,
> >>  				  IDT_VECTORING_ERROR_CODE);
> >> @@ -6498,7 +6496,7 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
> >>
> >>  static void vmx_cancel_injection(struct kvm_vcpu *vcpu)
> >>  {
> >> -	if (is_guest_mode(vcpu))
> >> +	if (to_vmx(vcpu)->nested.nested_run_pending)
> >>  		return;
> > Why is this needed here?
>
> Please check if my reply to Nadav explains this sufficiently.
>
Ah, sorry. Will follow up there if it is not.

--
			Gleb.