From: Matthew Rosato <mjrosato@linux.ibm.com>
To: Jason Gunthorpe <jgg@nvidia.com>,
Sean Christopherson <seanjc@google.com>
Cc: alex.williamson@redhat.com, pbonzini@redhat.com,
cohuck@redhat.com, farman@linux.ibm.com, pmorel@linux.ibm.com,
borntraeger@linux.ibm.com, frankja@linux.ibm.com,
imbrenda@linux.ibm.com, david@redhat.com, akrowiak@linux.ibm.com,
jjherne@linux.ibm.com, pasic@linux.ibm.com,
zhenyuw@linux.intel.com, zhi.a.wang@intel.com,
linux-s390@vger.kernel.org, kvm@vger.kernel.org,
intel-gvt-dev@lists.freedesktop.org,
intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] KVM: async kvm_destroy_vm for vfio devices
Date: Thu, 12 Jan 2023 12:21:17 -0500 [thread overview]
Message-ID: <f7c39317-92a4-520e-8e69-a8606cd40e9a@linux.ibm.com> (raw)
In-Reply-To: <Y8AA8r5MzKQIF8I7@nvidia.com>
On 1/12/23 7:45 AM, Jason Gunthorpe wrote:
> On Wed, Jan 11, 2023 at 08:53:34PM +0000, Sean Christopherson wrote:
>> On Wed, Jan 11, 2023, Jason Gunthorpe wrote:
>>> On Wed, Jan 11, 2023 at 07:54:51PM +0000, Sean Christopherson wrote:
>>>
>>>> Something feels off. If KVM's refcount is 0, then accessing device->group->kvm
>>>> in vfio_device_open() can't happen unless there's a refcounting bug somewhere.
>>>
>>> The problem is in close, not open.
>>
>> The deadlock problem is, yes. My point is that if group_lock needs to be taken
>> when nullifying group->kvm during kvm_vfio_destroy(), then there is also a refcounting
>> problem with respect to open(). If there is no refcounting problem, then nullifying
>> group->kvm during kvm_vfio_destroy() is unnecessary (but again, I doubt this is
>> the case).
>
> IIRC the drivers are supposed to use one of the refcount-not-zero
> increments to counteract this, but I never checked that they do...
>
> Yi is working on a patch to change things so vfio drops the kvm
> pointer when the kvm file closes, not when the reference goes to 0
> to avoid a refcount cycle problem which should also solve that.
>
>> diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
>> index 6e8804fe0095..b3a84d65baa6 100644
>> --- a/drivers/vfio/vfio_main.c
>> +++ b/drivers/vfio/vfio_main.c
>> @@ -772,7 +772,12 @@ static struct file *vfio_device_open(struct vfio_device *device)
>> * reference and release it during close_device.
>> */
>> mutex_lock(&device->group->group_lock);
>> - device->kvm = device->group->kvm;
>> +
>> + if (device->kvm_ops && device->group->kvm) {
>> + ret = device->kvm_ops->get_kvm(device->group->kvm);
>
> At this point I'd rather just use the symbol get stuff like kvm does
> and call the proper functions.
>
So should I work up a v2 that uses symbol_get for kvm_get_kvm_safe and kvm_put_kvm from vfio_main and drops kvm_put_kvm_async? Or will the patch Yi is working on change things in a way that also addresses the deadlock issue?

If a v2 is wanted, it would look something like the following (where vfio_kvm_get does a symbol_get of kvm_get_kvm_safe and vfio_kvm_put does a symbol_get of kvm_put_kvm):
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 5177bb061b17..a49bf1080f0a 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -361,16 +361,22 @@ static int vfio_device_first_open(struct vfio_device *device,
if (ret)
goto err_module_put;
+ if (kvm && !vfio_kvm_get(kvm)) {
+ ret = -ENOENT;
+ goto err_unuse_iommu;
+ }
device->kvm = kvm;
if (device->ops->open_device) {
ret = device->ops->open_device(device);
if (ret)
- goto err_unuse_iommu;
+ goto err_put_kvm;
}
return 0;
-err_unuse_iommu:
+err_put_kvm:
+ vfio_put_kvm(kvm);
device->kvm = NULL;
+err_unuse_iommu:
if (iommufd)
vfio_iommufd_unbind(device);
else
@@ -465,6 +471,9 @@ static int vfio_device_fops_release(struct inode *inode, struct file *filep)
vfio_device_group_close(device);
+ if (device->open_count == 0 && device->group->kvm)
+ vfio_kvm_put(device->group->kvm);
+
vfio_device_put_registration(device);
return 0;