From: Marcelo Tosatti
Subject: Re: [PATCH 1/1] KVM: Fix potentially recursively get kvm lock
Date: Tue, 12 May 2009 08:55:24 -0300
Message-ID: <20090512115524.GB10901@amt.cnet>
References: <200905121705.53176.sheng.yang@intel.com> <1242120729-2280-1-git-send-email-sheng@linux.intel.com>
To: Sheng Yang
Cc: Avi Kivity, Alex Williamson, kvm@vger.kernel.org
In-Reply-To: <1242120729-2280-1-git-send-email-sheng@linux.intel.com>
List-ID: kvm@vger.kernel.org

On Tue, May 12, 2009 at 05:32:09PM +0800, Sheng Yang wrote:
> kvm_vm_ioctl_deassign_dev_irq() could recursively take kvm->lock,
> because it calls kvm_deassign_irq(), which implicitly holds kvm->lock
> by calling deassign_host_irq().
>
> Fix it by moving kvm_deassign_irq() out of the critical region, and add
> the missing lock for deassign_guest_irq().
>
> Reported-by: Alex Williamson
> Signed-off-by: Sheng Yang
> ---
>  virt/kvm/kvm_main.c |   14 +++++++-------
>  1 files changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 4d00942..3c69655 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -215,6 +215,8 @@ static void kvm_assigned_dev_ack_irq(struct kvm_irq_ack_notifier *kian)
>  static void deassign_guest_irq(struct kvm *kvm,
>  			       struct kvm_assigned_dev_kernel *assigned_dev)
>  {
> +	mutex_lock(&kvm->lock);
> +
>  	kvm_unregister_irq_ack_notifier(&assigned_dev->ack_notifier);
>  	assigned_dev->ack_notifier.gsi = -1;
>
> @@ -222,6 +224,8 @@ static void deassign_guest_irq(struct kvm *kvm,
>  	kvm_free_irq_source_id(kvm, assigned_dev->irq_source_id);
>  	assigned_dev->irq_source_id = -1;
>  	assigned_dev->irq_requested_type &= ~(KVM_DEV_IRQ_GUEST_MASK);
> +
> +	mutex_unlock(&kvm->lock);
>  }
>
>  /* The function implicit hold kvm->lock mutex due to cancel_work_sync() */
> @@ -558,20 +562,16 @@ static int kvm_vm_ioctl_deassign_dev_irq(struct kvm *kvm,
>  					  struct kvm_assigned_irq
>  					  *assigned_irq)
>  {
> -	int r = -ENODEV;
>  	struct kvm_assigned_dev_kernel *match;
>
>  	mutex_lock(&kvm->lock);
> -
>  	match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,
>  				      assigned_irq->assigned_dev_id);
> +	mutex_unlock(&kvm->lock);

The assigned_dev list is protected by kvm->lock, so another ioctl could be
adding to it at the same time you're searching here.

You could either add a separate kvm->assigned_devs_lock to protect
kvm->arch.assigned_dev_head (its users are the ioctls that manipulate it),
or change the IRQ injection to use a separate spinlock, kill the workqueue,
and call kvm_set_irq from the assigned device interrupt handler.