From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0556717AAD
	for ; Wed, 9 Aug 2023 11:37:50 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7A10FC433C8;
	Wed, 9 Aug 2023 11:37:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1691581069;
	bh=yEvXydHl2jTUvjIB/gRwlyi4tzjAXV64TMTvqRAUTe8=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=OD7bj24+h9dJw2Jafk3euZV5sWkB8noxlqySRTMLjBBEE2XKcZLt5shTTOnLoU9y7
	 14/PowdcarJ8xF3tUPnDC4Umg7N+wnag/o9HQtd/oJpeYV1LrCTfXwItqkydDpMvBQ
	 Hq3y5Co4pa7VfR7F3wW//Jq53m9yG+jbjcd8oALM=
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman ,
	patches@lists.linux.dev,
	Sean Christopherson ,
	Paolo Bonzini ,
	Sasha Levin
Subject: [PATCH 5.10 095/201] KVM: VMX: Fold ept_update_paging_mode_cr0() back into vmx_set_cr0()
Date: Wed, 9 Aug 2023 12:41:37 +0200
Message-ID: <20230809103647.001726655@linuxfoundation.org>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230809103643.799166053@linuxfoundation.org>
References: <20230809103643.799166053@linuxfoundation.org>
User-Agent: quilt/0.67
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Sean Christopherson

[ Upstream commit c834fd7fc1308a0e0429d203a6c3af528cd902fa ]

Move the CR0/CR3/CR4 shenanigans for EPT without unrestricted guest back
into vmx_set_cr0().  This will allow a future patch to eliminate the
rather gross stuffing of vcpu->arch.cr0 in the paging transition cases
by snapshotting the old CR0.

No functional change intended.

Signed-off-by: Sean Christopherson
Message-Id: <20210713163324.627647-24-seanjc@google.com>
Signed-off-by: Paolo Bonzini
Stable-dep-of: c4abd7352023 ("KVM: VMX: Don't fudge CR0 and CR4 for restricted L2 guest")
Signed-off-by: Sasha Levin
---
 arch/x86/kvm/vmx/vmx.c | 40 +++++++++++++++++-----------------------
 1 file changed, 17 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 574acfa98fa9b..b9abe08c9d590 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3063,27 +3063,6 @@ void ept_save_pdptrs(struct kvm_vcpu *vcpu)
 	kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
 }
 
-static void ept_update_paging_mode_cr0(unsigned long cr0, struct kvm_vcpu *vcpu)
-{
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-
-	if (!kvm_register_is_available(vcpu, VCPU_EXREG_CR3))
-		vmx_cache_reg(vcpu, VCPU_EXREG_CR3);
-	if (!(cr0 & X86_CR0_PG)) {
-		/* From paging/starting to nonpaging */
-		exec_controls_setbit(vmx, CPU_BASED_CR3_LOAD_EXITING |
-				     CPU_BASED_CR3_STORE_EXITING);
-		vcpu->arch.cr0 = cr0;
-		vmx_set_cr4(vcpu, kvm_read_cr4(vcpu));
-	} else if (!is_paging(vcpu)) {
-		/* From nonpaging to paging */
-		exec_controls_clearbit(vmx, CPU_BASED_CR3_LOAD_EXITING |
-				       CPU_BASED_CR3_STORE_EXITING);
-		vcpu->arch.cr0 = cr0;
-		vmx_set_cr4(vcpu, kvm_read_cr4(vcpu));
-	}
-}
-
 void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -3113,8 +3092,23 @@ void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	}
 #endif
 
-	if (enable_ept && !is_unrestricted_guest(vcpu))
-		ept_update_paging_mode_cr0(cr0, vcpu);
+	if (enable_ept && !is_unrestricted_guest(vcpu)) {
+		if (!kvm_register_is_available(vcpu, VCPU_EXREG_CR3))
+			vmx_cache_reg(vcpu, VCPU_EXREG_CR3);
+		if (!(cr0 & X86_CR0_PG)) {
+			/* From paging/starting to nonpaging */
+			exec_controls_setbit(vmx, CPU_BASED_CR3_LOAD_EXITING |
+					     CPU_BASED_CR3_STORE_EXITING);
+			vcpu->arch.cr0 = cr0;
+			vmx_set_cr4(vcpu, kvm_read_cr4(vcpu));
+		} else if (!is_paging(vcpu)) {
+			/* From nonpaging to paging */
+			exec_controls_clearbit(vmx, CPU_BASED_CR3_LOAD_EXITING |
+					       CPU_BASED_CR3_STORE_EXITING);
+			vcpu->arch.cr0 = cr0;
+			vmx_set_cr4(vcpu, kvm_read_cr4(vcpu));
+		}
+	}
 
 	vmcs_writel(CR0_READ_SHADOW, cr0);
 	vmcs_writel(GUEST_CR0, hw_cr0);
-- 
2.40.1