* [PATCH] KVM: SVM: unconditionally wake up VCPU on IOMMU interrupt
@ 2017-10-10 10:57 Paolo Bonzini
From: Paolo Bonzini @ 2017-10-10 10:57 UTC
To: linux-kernel, kvm; +Cc: suravee.suthikulpanit, rkrcmar
Checking the mode is unnecessary, and is done without a memory barrier
separating the LAPIC write from the vcpu->mode read; in addition,
kvm_vcpu_wake_up is already doing a check for waiters on the wait queue
that has the same effect.
In practice it's safe because spin_lock has full-barrier semantics on x86,
but don't be too clever.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/svm.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 01e6e8fab5d6..712406f725a2 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1034,15 +1034,12 @@ static int avic_ga_log_notifier(u32 ga_tag)
 	}
 	spin_unlock_irqrestore(&svm_vm_data_hash_lock, flags);
 
-	if (!vcpu)
-		return 0;
-
 	/* Note:
 	 * At this point, the IOMMU should have already set the pending
 	 * bit in the vAPIC backing page. So, we just need to schedule
 	 * in the vcpu.
 	 */
-	if (vcpu->mode == OUTSIDE_GUEST_MODE)
+	if (vcpu)
 		kvm_vcpu_wake_up(vcpu);
 
 	return 0;
--
1.8.3.1
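
For reference, the wake-up helper the commit message relies on, kvm_vcpu_wake_up() in virt/kvm/kvm_main.c, looked roughly like this around the v4.14 timeframe (a paraphrased sketch, not part of this patch; helper names such as kvm_arch_vcpu_wq() and swake_up() changed in later kernels):

bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu)
{
	struct swait_queue_head *wqp;

	/* Per-vCPU wait queue; it only has sleepers while the vCPU is blocked. */
	wqp = kvm_arch_vcpu_wq(vcpu);
	if (swait_active(wqp)) {
		swake_up(wqp);		/* wake the blocked vCPU thread */
		++vcpu->stat.halt_wakeup;
		return true;
	}

	return false;
}

With no sleepers on the queue the call is a cheap no-op, so calling it unconditionally for a vCPU that is still in guest mode costs nothing; that is the "same effect" as the dropped vcpu->mode check, without its memory-ordering requirements.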
* Re: [PATCH] KVM: SVM: unconditionally wake up VCPU on IOMMU interrupt
@ 2017-10-10 19:17 Radim Krčmář
From: Radim Krčmář @ 2017-10-10 19:17 UTC
To: Paolo Bonzini; +Cc: linux-kernel, kvm, suravee.suthikulpanit
2017-10-10 12:57+0200, Paolo Bonzini:
> Checking the mode is unnecessary, and is done without a memory barrier
> separating the LAPIC write from the vcpu->mode read; in addition,
> kvm_vcpu_wake_up is already doing a check for waiters on the wait queue
> that has the same effect.
>
> In practice it's safe because spin_lock has full-barrier semantics on x86,
> but don't be too clever.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>