From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jesse
Subject: [PATCH]: pointer to vmcs getting lost
Date: Fri, 01 Aug 2008 15:18:52 -0700
Message-ID: <48938BCC.2030402@neuraliq.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
To: kvm@vger.kernel.org
Return-path:
Received: from IP-012-129-246-136.Alorica.com ([12.129.246.136]:53525
	"EHLO mail.neuraliq.com" rhost-flags-OK-FAIL-OK-FAIL)
	by vger.kernel.org with ESMTP id S1751942AbYHAWgv (ORCPT );
	Fri, 1 Aug 2008 18:36:51 -0400
Sender: kvm-owner@vger.kernel.org
List-ID:

Greetings,

I noticed a race condition when running two guests simultaneously and
debugging both guests (on 64-bit Intel CPUs). Periodically I would get
errors from the vmread, vmwrite, or vmresume instructions. Some research
revealed that these errors were caused by having an invalid vmcs loaded.
Further, I found that the vmcs pointer is a per_cpu variable, which I
believe means that any reference to it is invalid after a context switch.
(Corrections appreciated.) This means that the vmcs must be reloaded each
time the process is switched to. The patch below fixed the problem for me.

This patch does three things:

1. Extends the critical section in __vcpu_run to include the handling of
   vmexits, where many of the vmread/writes occur.
2. Performs a vcpu_load after we enter the critical section, and again
   after we return from kvm_resched.
3. Moves the call to kvm_guest_debug_pre into the critical section
   (because it calls vmread/write).

I hope you find this useful. I am not on the list, so please CC me on
replies.
~Jesse Dutton

diff -ruNa kvm-72/kernel/x86.c kvm-72-changed/kernel/x86.c
--- kvm-72/kernel/x86.c	2008-07-27 06:20:10.000000000 -0700
+++ kvm-72-changed/kernel/x86.c	2008-07-31 15:25:25.000000000 -0700
@@ -2845,8 +2845,6 @@
 	vapic_enter(vcpu);
 
 preempted:
-	if (vcpu->guest_debug.enabled)
-		kvm_x86_ops->guest_debug_pre(vcpu);
 
 again:
 	if (vcpu->requests)
@@ -2878,7 +2876,12 @@
 		clear_bit(KVM_REQ_PENDING_TIMER, &vcpu->requests);
 		kvm_inject_pending_timer_irqs(vcpu);
 
+	vcpu_put(vcpu);
 	preempt_disable();
+	vcpu_load(vcpu);
+
+	if (vcpu->guest_debug.enabled)
+		kvm_x86_ops->guest_debug_pre(vcpu);
 
 	kvm_x86_ops->prepare_guest_switch(vcpu);
 	kvm_load_guest_fpu(vcpu);
@@ -2941,7 +2944,6 @@
 	kvm_guest_exit();
 
-	preempt_enable();
 
 	down_read(&vcpu->kvm->slots_lock);
@@ -2960,6 +2962,8 @@
 	r = kvm_x86_ops->handle_exit(kvm_run, vcpu);
 
+	preempt_enable();
+
 	if (r > 0) {
 		if (dm_request_for_irq_injection(vcpu, kvm_run)) {
 			r = -EINTR;
@@ -2974,7 +2978,9 @@
 out:
 	up_read(&vcpu->kvm->slots_lock);
 	if (r > 0) {
+		vcpu_put(vcpu);
 		kvm_resched(vcpu);
+		vcpu_load(vcpu);
 		down_read(&vcpu->kvm->slots_lock);
 		goto preempted;
 	}