stable.vger.kernel.org archive mirror
* [PATCH 1/8] KVM: x86: avoid loading a vCPU after .vm_destroy was called
       [not found] <20220322172449.235575-1-mlevitsk@redhat.com>
@ 2022-03-22 17:24 ` Maxim Levitsky
  2022-03-30  0:27   ` Sean Christopherson
  0 siblings, 1 reply; 4+ messages in thread
From: Maxim Levitsky @ 2022-03-22 17:24 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, Paolo Bonzini, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Ingo Molnar, Suravee Suthikulpanit, x86,
	H. Peter Anvin, Dave Hansen, linux-kernel, Sean Christopherson,
	Vitaly Kuznetsov, Thomas Gleixner, Maxim Levitsky, stable

Loading a vCPU after .vm_destroy was called can cause various
unexpected issues, since the VM is partially destroyed at that point.

For example, when AVIC is enabled, this causes avic_vcpu_load() to
access the physical ID page entry, which was already freed by
.vm_destroy.

Fixes: 8221c1370056 ("svm: Manage vcpu load/unload when enable AVIC")
Cc: stable@vger.kernel.org

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/x86.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d3a9ce07a565..ba920e537ddf 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11759,20 +11759,15 @@ static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
 	vcpu_put(vcpu);
 }
 
-static void kvm_free_vcpus(struct kvm *kvm)
+static void kvm_unload_vcpu_mmus(struct kvm *kvm)
 {
 	unsigned long i;
 	struct kvm_vcpu *vcpu;
 
-	/*
-	 * Unpin any mmu pages first.
-	 */
 	kvm_for_each_vcpu(i, vcpu, kvm) {
 		kvm_clear_async_pf_completion_queue(vcpu);
 		kvm_unload_vcpu_mmu(vcpu);
 	}
-
-	kvm_destroy_vcpus(kvm);
 }
 
 void kvm_arch_sync_events(struct kvm *kvm)
@@ -11878,11 +11873,12 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 		__x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT, 0, 0);
 		mutex_unlock(&kvm->slots_lock);
 	}
+	kvm_unload_vcpu_mmus(kvm);
 	static_call_cond(kvm_x86_vm_destroy)(kvm);
 	kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1));
 	kvm_pic_destroy(kvm);
 	kvm_ioapic_destroy(kvm);
-	kvm_free_vcpus(kvm);
+	kvm_destroy_vcpus(kvm);
 	kvfree(rcu_dereference_check(kvm->arch.apic_map, 1));
 	kfree(srcu_dereference_check(kvm->arch.pmu_event_filter, &kvm->srcu, 1));
 	kvm_mmu_uninit_vm(kvm);
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 4+ messages in thread

* Re: [PATCH 1/8] KVM: x86: avoid loading a vCPU after .vm_destroy was called
  2022-03-22 17:24 ` [PATCH 1/8] KVM: x86: avoid loading a vCPU after .vm_destroy was called Maxim Levitsky
@ 2022-03-30  0:27   ` Sean Christopherson
  2022-03-30 12:07     ` Paolo Bonzini
  0 siblings, 1 reply; 4+ messages in thread
From: Sean Christopherson @ 2022-03-30  0:27 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: kvm, Jim Mattson, Paolo Bonzini, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Ingo Molnar, Suravee Suthikulpanit, x86,
	H. Peter Anvin, Dave Hansen, linux-kernel, Vitaly Kuznetsov,
	Thomas Gleixner, stable

On Tue, Mar 22, 2022, Maxim Levitsky wrote:
> This can cause various unexpected issues, since VM is partially
> destroyed at that point.
> 
> For example when AVIC is enabled, this causes avic_vcpu_load to
> access physical id page entry which is already freed by .vm_destroy.

Hmm, the SEV unbinding of ASIDs should be done after MMU teardown too (which your
patch also does).

> 
> Fixes: 8221c1370056 ("svm: Manage vcpu load/unload when enable AVIC")
> Cc: stable@vger.kernel.org
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  arch/x86/kvm/x86.c | 10 +++-------
>  1 file changed, 3 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index d3a9ce07a565..ba920e537ddf 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -11759,20 +11759,15 @@ static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
>  	vcpu_put(vcpu);
>  }
>  
> -static void kvm_free_vcpus(struct kvm *kvm)
> +static void kvm_unload_vcpu_mmus(struct kvm *kvm)
>  {
>  	unsigned long i;
>  	struct kvm_vcpu *vcpu;
>  
> -	/*
> -	 * Unpin any mmu pages first.
> -	 */
>  	kvm_for_each_vcpu(i, vcpu, kvm) {
>  		kvm_clear_async_pf_completion_queue(vcpu);
>  		kvm_unload_vcpu_mmu(vcpu);
>  	}
> -
> -	kvm_destroy_vcpus(kvm);
>  }
>  
>  void kvm_arch_sync_events(struct kvm *kvm)
> @@ -11878,11 +11873,12 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
>  		__x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT, 0, 0);
>  		mutex_unlock(&kvm->slots_lock);
>  	}
> +	kvm_unload_vcpu_mmus(kvm);
>  	static_call_cond(kvm_x86_vm_destroy)(kvm);
>  	kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1));
>  	kvm_pic_destroy(kvm);
>  	kvm_ioapic_destroy(kvm);
> -	kvm_free_vcpus(kvm);
> +	kvm_destroy_vcpus(kvm);

Rather than split kvm_free_vcpus(), can we instead move the call to svm_vm_destroy()
by adding a second hook, .vm_teardown(), which is needed for TDX?  I.e. keep VMX
where it is by using vm_teardown, but effectively move SVM?

https://lore.kernel.org/all/1fa2d0db387a99352d44247728c5b8ae5f5cab4d.1637799475.git.isaku.yamahata@intel.com

>  	kvfree(rcu_dereference_check(kvm->arch.apic_map, 1));
>  	kfree(srcu_dereference_check(kvm->arch.pmu_event_filter, &kvm->srcu, 1));
>  	kvm_mmu_uninit_vm(kvm);
> -- 
> 2.26.3
> 


* Re: [PATCH 1/8] KVM: x86: avoid loading a vCPU after .vm_destroy was called
  2022-03-30  0:27   ` Sean Christopherson
@ 2022-03-30 12:07     ` Paolo Bonzini
  2022-04-28  5:47       ` Maxim Levitsky
  0 siblings, 1 reply; 4+ messages in thread
From: Paolo Bonzini @ 2022-03-30 12:07 UTC (permalink / raw)
  To: Sean Christopherson, Maxim Levitsky
  Cc: kvm, Jim Mattson, Wanpeng Li, Borislav Petkov, Joerg Roedel,
	Ingo Molnar, Suravee Suthikulpanit, x86, H. Peter Anvin,
	Dave Hansen, linux-kernel, Vitaly Kuznetsov, Thomas Gleixner,
	stable

On 3/30/22 02:27, Sean Christopherson wrote:
> Rather than split kvm_free_vcpus(), can we instead move the call to svm_vm_destroy()
> by adding a second hook, .vm_teardown(), which is needed for TDX?  I.e. keep VMX
> where it is by using vm_teardown, but effectively move SVM?
> 
> https://lore.kernel.org/all/1fa2d0db387a99352d44247728c5b8ae5f5cab4d.1637799475.git.isaku.yamahata@intel.com

I'd rather do that only for the TDX patches.

Paolo



* Re: [PATCH 1/8] KVM: x86: avoid loading a vCPU after .vm_destroy was called
  2022-03-30 12:07     ` Paolo Bonzini
@ 2022-04-28  5:47       ` Maxim Levitsky
  0 siblings, 0 replies; 4+ messages in thread
From: Maxim Levitsky @ 2022-04-28  5:47 UTC (permalink / raw)
  To: Paolo Bonzini, Sean Christopherson
  Cc: kvm, Jim Mattson, Wanpeng Li, Borislav Petkov, Joerg Roedel,
	Ingo Molnar, Suravee Suthikulpanit, x86, H. Peter Anvin,
	Dave Hansen, linux-kernel, Vitaly Kuznetsov, Thomas Gleixner,
	stable

On Wed, 2022-03-30 at 14:07 +0200, Paolo Bonzini wrote:
> On 3/30/22 02:27, Sean Christopherson wrote:
> > Rather than split kvm_free_vcpus(), can we instead move the call to svm_vm_destroy()
> > by adding a second hook, .vm_teardown(), which is needed for TDX?  I.e. keep VMX
> > where it is by using vm_teardown, but effectively move SVM?
> > 
> > https://lore.kernel.org/all/1fa2d0db387a99352d44247728c5b8ae5f5cab4d.1637799475.git.isaku.yamahata@intel.com
> 
> I'd rather do that only for the TDX patches.
> 
> Paolo
> 
Any update on this patch? It looks like it is neither upstream nor in kvm/queue.

Best regards,
	Maxim Levitsky


