From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gleb Natapov
Subject: Re: KVM: x86: fix maintenance of guest/host xcr0 state
Date: Wed, 8 May 2013 13:16:58 +0300
Message-ID: <20130508101658.GR12349@redhat.com>
References: <20130416023013.GA3943@amt.cnet>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kvm@vger.kernel.org, Paolo Bonzini, Ulrich Obergfell
To: Marcelo Tosatti
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:33488 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752798Ab3EHKQ7
	(ORCPT ); Wed, 8 May 2013 06:16:59 -0400
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id r48AGxiI027318
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for ; Wed, 8 May 2013 06:16:59 -0400
Content-Disposition: inline
In-Reply-To: <20130416023013.GA3943@amt.cnet>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Mon, Apr 15, 2013 at 11:30:13PM -0300, Marcelo Tosatti wrote:
> 
> ** Untested **.
> 
> Emulation of xcr0 writes zeroes the guest_xcr0_loaded variable so that
> a subsequent VM-entry reloads the CPU's xcr0 with the guest's xcr0
> value.
> 
> However, this is incorrect, because the guest_xcr0_loaded variable is
> also read to decide whether to reload the host's xcr0.
> 
> If the vcpu thread is scheduled out after the guest_xcr0_loaded = 0
> assignment, and the scheduler decides to preload the FPU, the switch
> path executes:
> 
> switch_to
> {
>     __switch_to
>         __math_state_restore
>             restore_fpu_checking
>                 fpu_restore_checking
>                     if (use_xsave())
>                         fpu_xrstor_checking
>                             xrstor64 with the CPU's xcr0 == guest's xcr0
> 
> Fix by properly restoring the host's xcr0 during emulation of xcr0
> writes.
> 
> Analyzed-by: Ulrich Obergfell
> Signed-off-by: Marcelo Tosatti
> 
Applied, thanks.
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 999d124..222926a 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -555,6 +555,25 @@ void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw)
>  }
>  EXPORT_SYMBOL_GPL(kvm_lmsw);
>  
> +static void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
> +{
> +	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
> +			!vcpu->guest_xcr0_loaded) {
> +		/* kvm_set_xcr() also depends on this */
> +		xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
> +		vcpu->guest_xcr0_loaded = 1;
> +	}
> +}
> +
> +static void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
> +{
> +	if (vcpu->guest_xcr0_loaded) {
> +		if (vcpu->arch.xcr0 != host_xcr0)
> +			xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
> +		vcpu->guest_xcr0_loaded = 0;
> +	}
> +}
> +
>  int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
>  {
>  	u64 xcr0;
> @@ -571,8 +590,8 @@ int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
>  		return 1;
>  	if (xcr0 & ~host_xcr0)
>  		return 1;
> +	kvm_put_guest_xcr0(vcpu);
>  	vcpu->arch.xcr0 = xcr0;
> -	vcpu->guest_xcr0_loaded = 0;
>  	return 0;
>  }
>  
> @@ -5600,25 +5619,6 @@ static void inject_pending_event(struct kvm_vcpu *vcpu)
>  	}
>  }
>  
> -static void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
> -{
> -	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
> -			!vcpu->guest_xcr0_loaded) {
> -		/* kvm_set_xcr() also depends on this */
> -		xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
> -		vcpu->guest_xcr0_loaded = 1;
> -	}
> -}
> -
> -static void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
> -{
> -	if (vcpu->guest_xcr0_loaded) {
> -		if (vcpu->arch.xcr0 != host_xcr0)
> -			xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
> -		vcpu->guest_xcr0_loaded = 0;
> -	}
> -}
> -
>  static void process_nmi(struct kvm_vcpu *vcpu)
>  {
>  	unsigned limit = 2;

-- 
			Gleb.