From mboxrd@z Thu Jan  1 00:00:00 1970
From: Chris Wright
Subject: Re: [PATCH 1/1] KVM: Fix potentially recursively get kvm lock
Date: Fri, 22 May 2009 08:36:25 -0700
Message-ID: <20090522153625.GF20823@sequoia.sous-sol.org>
References: <1242120729-2280-1-git-send-email-sheng@linux.intel.com>
	<20090512115524.GB10901@amt.cnet>
	<200905122213.36833.sheng.yang@intel.com>
	<20090512143021.GB12888@amt.cnet>
	<20090512194432.GA19969@amt.cnet>
	<1242164187.4788.4.camel@2710p.home>
	<20090512220908.GA22626@amt.cnet>
	<1242166650.4788.16.camel@2710p.home>
	<20090522150623.GD20823@sequoia.sous-sol.org>
	<1243006469.27733.59.camel@lappy>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Chris Wright, Marcelo Tosatti, "Yang, Sheng", Avi Kivity,
	kvm@vger.kernel.org
To: Alex Williamson
Return-path:
Received: from sous-sol.org ([216.99.217.87]:50624 "EHLO sequoia.sous-sol.org"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S1753207AbZEVPgm (ORCPT ); Fri, 22 May 2009 11:36:42 -0400
Content-Disposition: inline
In-Reply-To: <1243006469.27733.59.camel@lappy>
Sender: kvm-owner@vger.kernel.org
List-ID:

* Alex Williamson (alex.williamson@hp.com) wrote:
> On Fri, 2009-05-22 at 08:06 -0700, Chris Wright wrote:
> > * Alex Williamson (alex.williamson@hp.com) wrote:
> > > On Tue, 2009-05-12 at 19:09 -0300, Marcelo Tosatti wrote:
> > > > KVM: workaround workqueue / deassign_host_irq deadlock
> > > >
> > > > I think I'm running into the following deadlock in the kvm kernel module
> > > > when trying to use device assignment:
> > > >
> > > >       CPU A                                 CPU B
> > > > kvm_vm_ioctl_deassign_dev_irq()
> > > >   mutex_lock(&kvm->lock);                 worker_thread()
> > > >   -> kvm_deassign_irq()                   -> kvm_assigned_dev_interrupt_work_handler()
> > > >      -> deassign_host_irq()                  mutex_lock(&kvm->lock); [blocked]
> > > >         -> cancel_work_sync() [blocked]
> > > >
> > > > Workaround the issue by dropping kvm->lock for cancel_work_sync().
> >
> > Is this still pending?
>
> I haven't seen this particular workaround make it into a tree, however
> Marcelo has been working on a set of patches to properly fix this.  Most
> recent version was sent on 5/20.

Great, thanks.
-chris