From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jan Kiszka
Subject: Re: [PATCH] KVM: nVMX: Replace kvm_set_cr0 with vmx_set_cr0 in load_vmcs12_host_state
Date: Tue, 30 Apr 2013 14:42:20 +0200
Message-ID: <517FBC2C.6040607@siemens.com>
References: <5129361A.7090608@web.de> <517CF7F6.3070405@web.de> <20130430114604.GA22125@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Cc: Marcelo Tosatti, kvm, "Nadav Har'El", "Nakajima, Jun", Avi Kivity
To: Gleb Natapov
Return-path:
Received: from goliath.siemens.de ([192.35.17.28]:30041 "EHLO goliath.siemens.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1760555Ab3D3Mmh
	(ORCPT ); Tue, 30 Apr 2013 08:42:37 -0400
In-Reply-To: <20130430114604.GA22125@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 2013-04-30 13:46, Gleb Natapov wrote:
> On Sun, Apr 28, 2013 at 12:20:38PM +0200, Jan Kiszka wrote:
>> On 2013-02-23 22:35, Jan Kiszka wrote:
>>> From: Jan Kiszka
>>>
>>> Likely a typo, but a fatal one as kvm_set_cr0 performs checks on the
>>> state transition that may prevent loading L1's cr0.
>>>
>>> Signed-off-by: Jan Kiszka
>>> ---
>>>  arch/x86/kvm/vmx.c | 2 +-
>>>  1 files changed, 1 insertions(+), 1 deletions(-)
>>>
>>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>>> index 26d47e9..94f3b66 100644
>>> --- a/arch/x86/kvm/vmx.c
>>> +++ b/arch/x86/kvm/vmx.c
>>> @@ -7429,7 +7429,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
>>>  	 * fpu_active (which may have changed).
>>>  	 * Note that vmx_set_cr0 refers to efer set above.
>>>  	 */
>>> -	kvm_set_cr0(vcpu, vmcs12->host_cr0);
>>> +	vmx_set_cr0(vcpu, vmcs12->host_cr0);
>>>  	/*
>>>  	 * If we did fpu_activate()/fpu_deactivate() during L2's run, we need
>>>  	 * to apply the same changes to L1's vmcs. We just set cr0 correctly,
>>>
>>
>> This one still applies, is necessary for nested unrestricted guest mode,
>> and I'm still convinced it's an appropriate way to fix the bug. How to
>> proceed?
>>
> What check that is done by kvm_set_cr0() fails?

I would have to reproduce the bug to confirm, but off the top of my head
and from looking at the code again:

	if (!is_paging(vcpu) && (cr0 & X86_CR0_PG)) {
		if ((vcpu->arch.efer & EFER_LME)) {
			int cs_db, cs_l;
			if (!is_pae(vcpu))
				return 1;
			kvm_x86_ops->get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
			if (cs_l)
				return 1;

I seem to remember that this last check triggered. When we come from the
guest with paging off, we may run through this check and incorrectly
bail out here when the host state fulfills the conditions (PG, EFER_LME,
and L bit set).

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux
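[Editorial illustration] The failure mode described above can be modeled in a
minimal standalone sketch. All names below (the flat `vcpu_model` struct and
`model_set_cr0`) are hypothetical stand-ins, not the real KVM types; only the
branch structure mirrors the kvm_set_cr0() excerpt quoted in the mail. It shows
that a vCPU currently running with paging off, but whose target state is a
legitimate 64-bit host (CR0.PG, EFER.LME, CS.L all set), is rejected by the
last check:

```c
#include <stdbool.h>

/* Illustrative constants matching the x86 bit positions. */
#define X86_CR0_PG (1UL << 31)
#define EFER_LME   (1UL << 8)

/* Hypothetical flattened view of the vCPU state consulted by the check. */
struct vcpu_model {
	bool paging;        /* is_paging(): L2 ran with paging off -> false */
	bool pae;           /* is_pae() */
	bool cs_l;          /* CS.L bit reported by get_cs_db_l_bits() */
	unsigned long efer; /* vcpu->arch.efer */
};

/* Mirrors the quoted excerpt: returns 1 when the CR0 write is rejected. */
static int model_set_cr0(const struct vcpu_model *vcpu, unsigned long cr0)
{
	if (!vcpu->paging && (cr0 & X86_CR0_PG)) {
		if (vcpu->efer & EFER_LME) {
			if (!vcpu->pae)
				return 1;
			if (vcpu->cs_l)
				return 1; /* the check Jan suspects fires */
		}
	}
	return 0;
}
```

With `paging = false` (L2 exited with paging disabled), `pae = true`,
`cs_l = true`, and `efer = EFER_LME`, loading a host CR0 with PG set makes
`model_set_cr0()` return 1, i.e. the otherwise valid L1 host state is refused;
`vmx_set_cr0()` bypasses this transition check, which is the point of the patch.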