From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 68591101C0
	for ; Mon, 15 May 2023 17:52:02 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C0409C433D2;
	Mon, 15 May 2023 17:52:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1684173122;
	bh=0VpwqI+4BzTNpddx710URyG3UVcR0gewcp7j1IM5LNw=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=D01IoQQ5+5dsFo0DNmP+tSLhbkLx66g5TxZQMmgJaOzW55z0cht7MlQYRbKjKGc3l
	 4wds3LFSg9KqixOwdSsSPjxl2mI0nByYvqsoNcg57nyXbIhRksnVaIeEyhbr/E/Ej3
	 3K49zDUJJgR7779oHW/D+GhVgPjAe6mdmIPSFG+c=
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman , patches@lists.linux.dev,
	Sean Christopherson , Paolo Bonzini ,
	Rishabh Bhatnagar , Allen Pais
Subject: [PATCH 5.10 374/381] KVM: x86: do not set st->preempted when going back to user space
Date: Mon, 15 May 2023 18:30:25 +0200
Message-Id: <20230515161753.838989822@linuxfoundation.org>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230515161736.775969473@linuxfoundation.org>
References: <20230515161736.775969473@linuxfoundation.org>
User-Agent: quilt/0.67
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Rishabh Bhatnagar

From: Paolo Bonzini

commit 54aa83c90198e68eee8b0850c749bc70efb548da upstream.

Similar to the Xen path, only change the vCPU's reported state if the
vCPU was actually preempted.  The reason for KVM's behavior is that
for example optimistic spinning might not be a good idea if the guest
is doing repeated exits to userspace; however, it is confusing and
unlikely to make a difference, because well-tuned guests will hardly
ever exit KVM_RUN in the first place.

Suggested-by: Sean Christopherson
Signed-off-by: Paolo Bonzini
[risbhat@amazon.com: Don't check for xen msr as support is not available and skip the SEV-ES condition]
Signed-off-by: Rishabh Bhatnagar
Tested-by: Allen Pais
Acked-by: Sean Christopherson
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/kvm/x86.c |   18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4139,16 +4139,18 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *
 {
 	int idx;
 
-	if (vcpu->preempted)
+	if (vcpu->preempted) {
 		vcpu->arch.preempted_in_kernel = !kvm_x86_ops.get_cpl(vcpu);
 
-	/*
-	 * kvm_memslots() will be called by
-	 * kvm_write_guest_offset_cached() so take the srcu lock.
-	 */
-	idx = srcu_read_lock(&vcpu->kvm->srcu);
-	kvm_steal_time_set_preempted(vcpu);
-	srcu_read_unlock(&vcpu->kvm->srcu, idx);
+		/*
+		 * Take the srcu lock as memslots will be accessed to check the gfn
+		 * cache generation against the memslots generation.
+		 */
+		idx = srcu_read_lock(&vcpu->kvm->srcu);
+		kvm_steal_time_set_preempted(vcpu);
+		srcu_read_unlock(&vcpu->kvm->srcu, idx);
+	}
+
 	kvm_x86_ops.vcpu_put(vcpu);
 	vcpu->arch.last_host_tsc = rdtsc();
 	/*
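
For reference, assembling the hunk above, kvm_arch_vcpu_put() in the 5.10
tree would read roughly as follows once this patch is applied.  This is a
sketch reconstructed from the diff, not a compilable excerpt on its own,
and the remainder of the function body (which the patch does not touch)
is elided:

void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
{
	int idx;

	/*
	 * Record the preempted state only when the vCPU was actually
	 * preempted, not when it is merely returning to user space
	 * from KVM_RUN.
	 */
	if (vcpu->preempted) {
		vcpu->arch.preempted_in_kernel = !kvm_x86_ops.get_cpl(vcpu);

		/*
		 * Take the srcu lock as memslots will be accessed to check
		 * the gfn cache generation against the memslots generation.
		 */
		idx = srcu_read_lock(&vcpu->kvm->srcu);
		kvm_steal_time_set_preempted(vcpu);
		srcu_read_unlock(&vcpu->kvm->srcu, idx);
	}

	kvm_x86_ops.vcpu_put(vcpu);
	vcpu->arch.last_host_tsc = rdtsc();
	/* ... remainder of the function is unchanged by this patch ... */
}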